This is an archived page. It's not actively maintained.

Mozmill tests

MozMill is a deprecated test tool and framework that has been superseded by Marionette. The tests are being migrated to Marionette. For Thunderbird MozMill tests, see Thunderbird MozMill Testing.

Mozmill is not just another testing tool inside the automated testing framework provided by Mozilla; it offers possibilities other test suites cannot. Mozmill does not require a "test-enabled" Firefox build: any official build, including releases and nightly builds, works out-of-the-box. You only need to install Mozmill once. After that, you can immediately run Mozmill tests on any local build.

Mozmill tests are written in JavaScript and get executed in the scope of the browser window, which enables them to have access to any part of the UI and also to all available XPCOM components. Using Mozmill's command line client also offers the ability to run tests that require a restart of the application.
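Because tests execute in the browser window's scope, chrome-only APIs such as XPCOM services are directly reachable. As a hypothetical sketch (the test function and the expect call are illustrative; the contract ID and interface are standard Gecko ones), a test could query the application info service like this:

```javascript
// Hypothetical test function (illustrative only): tests run in the
// browser window's scope, so XPCOM services can be queried directly,
// just as browser chrome code would do it.
function testReadAppInfo() {
  var appInfo = Components.classes["@mozilla.org/xre/app-info;1"]
                          .getService(Components.interfaces.nsIXULAppInfo);

  // The expect object comes from the shared assertions module
  // (see "Logging test results" below).
  expect.equal(appInfo.name, "Firefox", "Test runs inside Firefox");
}
```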

Mozmill test automation

Running functional tests with Mozmill in an automated manner is very helpful for mozQA. In the past, all the tests had to be run manually. With a still-increasing number of manual tests, it takes mozQA longer and longer to run all the needed tests against release candidates or nightly builds of Firefox. The way Mozmill operates lets us automate nearly all of those tests and run them on all platforms and across localized builds.

To handle all the work that needs to be done to have a fully automated Mozmill test suite available, the Mozmill Test Automation project has been created. Head over to the project page and see which sub-projects we are working on and how the work is coordinated.

In the following sections, we give tips and tricks on using Mozmill to run our existing Mozmill tests against Firefox, and explain how you can contribute to the project by creating new tests or fixing broken ones. This is everything you need to know to start helping out.

Installing Mozmill

To get Mozmill running on your machine, you only have to download and unzip our complete Mozmill environment. Just make sure that you pick the most recent version for your platform. That is all. Once unzipped, execute the 'run' command. After that, all of the tools are available.

Alternatively, you can use the Python Package Index:

pip install mozmill

If you want to start with Mozmill framework development, have a look at the step-by-step installation instructions on the Mozmill page itself.

The Mozmill-Test repository

Having a central place for storage always makes it easier to distribute existing content to consumers. That is why a distributed version control system is used to manage the test repository and to give access to the current tests and our self-developed shared modules. This repository has already been created and is based on Mercurial.

The test repository

To be able to run Mozmill tests, you have to be familiar with our repository and the tools we are using. Read this section to learn how to clone the repository, run the tests, and contribute by writing or fixing tests.

If you prefer git over Mercurial, read the following details on how to use Mercurial, then head over to the git section where we cover the differences.

Mercurial Installation

Before a copy of the repository can be cloned to the local disk, Mercurial has to be installed by following these instructions.

Configuring Mercurial

With Mercurial installed, the default configuration has to be prepared. All the changes should be made in the default Mercurial resource configuration file. If the file does not exist on your machine, you should create it; then open the file in your preferred editor and update its contents so it includes the configuration information below:

[ui]
username = Your Real Name <your@email.address>
merge = internal:merge (or your-merge-program)

[diff]
git = 1
showfunc = 1
unified = 8

[defaults]
qnew = -U

[extensions]
hgext.color =
hgext.mq =
hgext.transplant =

[hooks]
pretxncommit.whitespace = hg export tip | (! egrep '^\+.*[ \t]$')
pretxnchangegroup.qcheck = ! hg qtop > /dev/null 2>&1

As you can see, a couple of entries have been added. Under the [ui] section, the username should be set to your full name and preferred email address. If you do not want to use the internal merge tool, you can specify your preferred application in the merge line; otherwise leave it set to internal:merge. Within the [diff] section, the output of the diff command is configured; it is suggested to leave those values as they stand. The [extensions] section enables the Mercurial Queues and Transplant extensions, which can be used to manage a patch queue more easily. Last but not least, hooks have been added under the [hooks] section to make sure that no trailing whitespace is introduced and that you don't destroy the local repository by calling "hg pull" while a patch is applied. With those changes, the environment is ready for cloning the Mozmill test repository.

Cloning the test repository

The cloning process is a one-time action. Once you have a copy of the repository on your machine, it can simply be updated (see the next section). Cloning the repository requires only one command, which retrieves all the files from the central repository and saves them to a subfolder of your choice. Change into a folder of your choice before executing the hg clone command:

$ cd %folder%
$ hg clone [subfolder]

Now a copy of the repository can be found under the specified subfolder. If you wish to use the repository name as the name of the subfolder, don't specify that parameter and a copy is saved under mozmill-tests.

Updating the local copy

To stay on the bleeding edge, you have to pull the newest version of the repository regularly. With the command below, all new, changed, and removed files are updated in your local copy. Run it inside the cloned repository (the folder that contains the .hg directory):

$ hg pull -u
Note: Before you run any of the Mozmill tests in Firefox make sure you have the latest revision checked out.

Handling branches

The mozmill-tests repository contains tests for different versions of Firefox. That is necessary because UI elements or their behavior could have been changed between major versions. With only one set of tests and modules in place, the test-run would produce test failures and make the results unreliable.

Instead of using multiple repositories for the different versions of Firefox we handle everything inside the same repository by using named branches. At the moment, the following heads exist in the repository:

default         -> Firefox Nightly
mozilla-aurora  -> Firefox Aurora
mozilla-beta    -> Firefox Beta
mozilla-release -> Firefox Release
mozilla-esr31   -> Firefox 31.0 ESR

When cloning the repository, the default branch is selected automatically. As long as the tests are run against a Nightly build of Firefox, that is fine. However, if you want to run the tests against an older version, the branch has to be switched. To check which branches exist, run the command below; it produces a list of branches with the revision ID of the latest check-in on each.

$ hg branches
default                     4330:2c1e3a8a982e
mozilla-beta                4329:d23d24af886b
mozilla-aurora              4328:bbc166ff702a
mozilla-esr31               4327:99ba85df489a
mozilla-release             4326:9bfa4c5996b3

If you do not know which branch is selected, run:

$ hg branch

If the output is "default", the default branch is currently selected, and the tests will work with Firefox Nightly builds. If another branch is needed, for example because tests have to be run against Firefox builds on the Aurora channel, the following command switches to the mozilla-aurora branch:

$ hg up -C mozilla-aurora
84 files updated, 0 files merged, 1 files removed, 0 files unresolved

The repository and all of its tests are updated to the latest version of the tests in that branch.

Note: In line with the rapid release cycle of Firefox, code merges between the branches happen every six weeks. Our branches have to follow the merge process on the same day. If needed, see these details and step-by-step instructions.

Using Git

If you prefer git over Mercurial, we have a git mirror set up. We do not work with pull requests for mozmill-tests; instead, export a patch and upload it to Bugzilla.

Clone the repository with:

git clone 

The branch names are the same as those mentioned above, with one difference: instead of the default branch used with hg, git uses master.

Preparing a patch

To work on a patch using git, first create a new development branch based on the branch your code should land on. Usually this is the master branch, which does not have to be specified. However, if your patch needs to land on a branch other than master, pass the source branch name:

git checkout -b %branch_name% %source_branch%

Now you can work on the changes. When you are done, make sure you commit the updates. Before committing, check whether any new files have to be added for tracking:

git status
git add %filename%
git commit -m "Bug %bug_id% - %commit_message%. r=%reviewer%"

You can check the latest commit with:

git show

You can create a patch using format-patch.

git format-patch HEAD^

Before uploading the patch to Bugzilla, make the patch Mercurial-compatible by using the git-patch-to-hg-patch script.

Update a patch

If you need to update a patch, make the necessary changes and commit them as described earlier. Then rebase all of the commits against the source branch so they are squashed into one; afterward, git log should only show a single commit for all of your changes.

git commit -m "whatever because it will be lost by the interactive rebase"
git rebase -i %source_branch%  # Mark all commits except the first one with 'f' (which stands for fixup)
git log

Rebase against latest changes to mozmill-tests

If other changes that conflict with your patch have landed in mozmill-tests, you will have to rebase your changes. For this, update the source branch and rebase your changes on top:

git checkout master
git pull origin master
git checkout %work_branch%
git rebase master 

At this point, you might need to resolve conflicts and finish the rebase. Follow the instructions given by git rebase.

Running Mozmill tests

To get familiar with Mozmill test scripts, you can take a look at the Firefox tests that you checked out of the mozmill-test repository.

To run the tests, use the Mozmill command line client with one of the options given below. A fresh profile is automatically created, so the test always runs in a clean environment. Keep in mind, however, that if you run multiple tests inside a folder, all of those tests are executed in the same profile. Besides those normal tests, you can also run restart tests, which are needed, for example, for extension installations.

You can run the mozmill command with the --help option to get a list of available options:

$ mozmill --help
Usage: mozmill [options]

UI Automation tool for Mozilla applications

  --version             show program's version number and exit
  -h, --help            show this help message and exit

  MozRunner options:
    -p PROFILE, --profile=PROFILE
                        The path to the profile to operate on. If none,
                        creates a new profile in temp directory
    -a ADDONS, --addon=ADDONS
                        Addon paths to install. Can be a filepath, a directory
                        containing addons, or a url
    --addon-manifests=ADDON_MANIFESTS
                        An addon manifest to install
    --pref=PREFS        A preference to set. Must be a key-value pair
                        separated by a ':'
    --preferences=FILE  read preferences from a JSON or INI file. For INI, use
                        'file.ini:section' to specify a particular section.
    -b BINARY, --binary=BINARY
                        Binary path.
    --app=APP           Application to use [DEFAULT: firefox]
    --app-arg=APPARGS   provides an argument to the test application
    --debugger=DEBUGGER
                        run under a debugger, e.g. gdb or valgrind
    --debugger-args=DEBUGGER_ARGS
                        arguments to the debugger
    --interactive       run the program interactively
    --info              Print module information

  MozMill options:
    -t TESTS, --test=TESTS
                        Run test
    --timeout=TIMEOUT   Seconds before harness timeout if no communication is
                        taking place
    --restart           Restart the application and reset the profile between
                        each test file
    -m MANIFEST, --manifest=MANIFEST
                        test manifest .ini file
    -D, --debug         debug mode
    --list-tests        List test files that would be run, in order
    --handlers=PATH:CLASS
                        Specify an event handler given a file PATH and the
                        CLASS in the file
    --screenshots-path=PATH
                        Path of directory to use for screenshots
    --disable=HANDLER   Disable a default event handler
    --manual            start the browser without running any tests

  Report options:
    --report=URL        Report the results. Requires URL to results server.
                        Use 'stdout' for stdout.

  Logging options:
    -l LOG_FILE, --log-file=LOG_FILE
                        Log all events to file.
    --console-level=CONSOLE_LEVEL
                        level of console logging (default: INFO)
    --file-level=FILE_LEVEL
                        Level of file logging if --log-file has been specified
                        (default: INFO)
    --format=FORMAT     Format for logging (default: pprint-color)

Four of these options are the ones you will use most:

  • The most important option is -m, which specifies a manifest with references to the tests to be executed.
  • To run a single test or a folder with tests, the -t option can be used.
  • The -b option is useful if the Firefox binary cannot be found automatically; it lets you point to the binary, or to the app bundle on OS X.
  • The --console-level=ERROR option gives you more comprehensive error output in the shell window.

Below you can find some examples specific to our mozmill-tests repository for Firefox.


To start the default Firefox application, execute the given manifest with tests, and close Firefox itself afterward:

$ mozmill -m firefox/tests/functional/testPreferences/manifest.ini

To start the default Firefox application, execute the given test, and close Firefox itself afterward:

$ mozmill -t firefox/tests/functional/testPreferences/testRestoreHomepageToDefault.js

To start the default Firefox application, execute all the tests in the given folder and its subfolders, and close Firefox itself afterward:

$ mozmill -t firefox/tests/functional/testPreferences/

To start the specified version of Firefox (Windows, Linux, or OS X), execute the given test, and close the browser afterward:

$ mozmill -t firefox/tests/functional/testPreferences/testRestoreHomepageToDefault.js -b "c:\firefox 3.5\firefox.exe"   (Windows)
$ mozmill -t firefox/tests/functional/testPreferences/testRestoreHomepageToDefault.js -b "/usr/bin/firefox"             (Linux)
$ mozmill -t firefox/tests/functional/testPreferences/testRestoreHomepageToDefault.js -b "/Applications/"    (Mac OS X)
Note: When using the -b option, the full path to the executable has to be specified on Windows and Linux, while on OS X the application bundle can be used.
Note: On OS X you may not have a default Firefox set, so you will need to specify the binary when running tests.

Restart tests

Restart tests allow you to run tests, like installing an extension, which need a restart of the application to finish. For restart tests, specify a manifest via the -m option or a test folder via the -t option; all the test files in that manifest or folder are run in alphabetical order.

To start the system's default Firefox application, run all the tests in the given manifest by restarting Firefox in between each test, and finally close Firefox, you can use the following command, for example. The same profile is used for all test files inside this folder.

$ mozmill -m firefox/tests/functional/restartTests/testExtensionInstallUninstall/manifest.ini

To start the system's default Firefox application, run the restart tests for all sub folders, and finally close Firefox, a command like the following can be used. The same profile is only used for one subfolder; it's not shared between the different subfolders.

$ mozmill -t firefox/tests/functional/restartTests/

To start the specified version of Firefox, run all the tests in the given folder by restarting Firefox in between each test, and close the browser afterward:

$ mozmill -t firefox/tests/functional/restartTests/testExtensionInstallUninstall/ -b "c:\firefox 3.5\firefox.exe"   (Windows)
$ mozmill -t firefox/tests/functional/restartTests/testExtensionInstallUninstall/ -b "/usr/bin/firefox"             (Linux)
$ mozmill -t firefox/tests/functional/restartTests/testExtensionInstallUninstall/ -b "/Applications/"    (Mac OS X)
Note: When using the -b option the full path to the executable has to be specified on Windows and Linux while on OS X the application bundle is used.

Run via the automation scripts

To run all of our Mozmill tests, you should use our Mozmill automation scripts. They handle all the various kinds of test-runs currently supported. To get an overview, type 'testrun_' in the mozmill-env terminal and press the Tab key twice.

Example to run our functional tests for the given version of Firefox:

$ testrun_functional --report= %path_to_firefox%   (on Mac, point to the .app bundle)

The testrun script automatically clones the remote mozmill-tests repository, selects the correct named branch for the version of Firefox under test, runs all the tests, and reports the results to our Mozmill dashboard.

Note: If you want to use a local version of the tests you can use the --repository option with the path added, which is supported by any of the scripts.

Writing Mozmill tests

Now that you know how to run Mozmill tests, you can help by writing new tests or by fixing existing ones. It's not hard to do, but you have to follow some simple rules so we can guarantee long-living and understandable tests for everyone.

How to start

To make it easier for you to create your first Mozmill tests, we have prepared a couple of template files. They will help you get familiar with the license block, needed test functions, shared modules, and the proper syntax to use when writing tests. You can find these files in your local version of the test repository or online.

Some specific things to pay attention to when creating tests:

  • Please update the name and the email address in the license block.
  • Use a meaningful name for your test function; one which indicates the overall target of the test.
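As a rough sketch of what such a template-based test module looks like (the setupModule, teardownModule, and test* naming follows Mozmill's conventions; the concrete test body is a hypothetical example):

```javascript
// Minimal Mozmill test module sketch. setupModule() and teardownModule()
// run once before and after the test functions in this module; every
// function whose name starts with "test" is executed as a test.
var controller;

function setupModule(aModule) {
  // Grab the controller for the browser window under test.
  aModule.controller = mozmill.getBrowserController();
}

function teardownModule(aModule) {
  // Reset any global state (preferences, tabs, ...) the tests changed.
}

function testBrowserWindowOpen() {
  // Meaningful test names indicate the overall target of the test.
  expect.ok(controller.window, "A browser window has been opened");
}
```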

Logging test results

Results are logged in our tests through either of two verification objects: assert and expect.

These should both be imported into your test module (note that the exact path may differ, depending on which subdirectory your test is in):

var {assert, expect} = require("../../../lib/assertions");

Each object has the same methods, detailed below. If an assert or expect method passes, each of them will log a PASS for that verification and continue. The difference is in what happens when a test fails:

A failure in an expect method will not stop the test, but will log a FAIL to the results system. Any failed result will still cause the test to also be marked as failed overall. Examples of verifications that would usually use expect include color, non-essential item text, and other aspects of state that don't really affect anything else.

expect should be used when failure for that test result will not invalidate the rest of the test.

A failure in an assert method will not only log a FAIL, but stop the test. Examples of verifications that would usually use assert include tab or dialog presence, whether a page has loaded, and other aspects of state that completely block the test if they're not as expected.

assert should be used when failure for that test result will invalidate the rest of the test.

When possible, expect should be used so that the test will continue, both to get partial results and to provide additional context to the failure. Only use assert when continuing on failure doesn't make any sense.

assert / expect methods

ok(aValue, aMessage)

Logs a PASS if aValue is true, and a FAIL if aValue is false. Use this when you have a single true/false value to test. For comparisons between an actual and expected value, see equal() and notEqual() below.

For non-boolean values, true/false operates in terms of JavaScript truth. For example, 0 and null are false; 1 and "foo" are true.
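Plain JavaScript coercion determines the outcome, which can be checked outside Mozmill (illustrative snippet):

```javascript
// JavaScript truthiness as ok() sees it: non-boolean values are coerced,
// so 0, "", null, undefined and NaN count as false, while non-zero
// numbers, non-empty strings and objects count as true.
const truthyValues = [1, "foo", [], {}];
const falsyValues = [0, "", null, undefined, NaN];

const allTruthy = truthyValues.every(Boolean);  // true
const anyTruthy = falsyValues.some(Boolean);    // false

console.log(allTruthy, anyTruthy); // true false
```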

expect.ok(button.getNode().hidden, "Button is hidden");

equal(aValue, aExpected, aMessage)

Logs a PASS if aValue exactly equals aExpected, FAIL otherwise. Use this for comparisons between an actual and expected value.

assert.equal(numTabs, 3, "The correct number of tabs are shown");

notEqual(aValue, aNotExpected, aMessage)

Logs a PASS if aValue exactly equals anything other than aNotExpected, FAIL otherwise. The most common cases for this are checking that something is not 0 or a blank string, or when checking that a text value is changing but the new value isn't predictable. For predictable values, favor an equal() comparison with the new value.

assert.notEqual(newText, oldText, "The text has changed");

match(aString, aRegEx, aMessage)

Logs a PASS if aString matches the regular expression given in aRegEx, FAIL otherwise.

expect.match(captionText, /mozilla/i, "The word 'Mozilla' appears somewhere in the caption");

notMatch(aString, aRegEx, aMessage)

Logs a PASS if aString does not match the regular expression given in aRegEx, FAIL otherwise.

expect.notMatch(captionText, /mozilla/i, "The word 'Mozilla' does not appear in the caption");


pass(aMessage)

Logs an unconditional PASS. This should be used extremely rarely, and only in cases where a fully custom verification structure is needed and none of the other methods make sense to use. It's almost always better to save the result as a boolean and use ok() instead.

expect.pass("If the code got here, this test is passing (for now)");

fail(aMessage)

Logs an unconditional FAIL. This should be used extremely rarely, and only in cases where a fully custom verification structure is needed and none of the other methods make sense to use. It's almost always better to save the result as a boolean and use ok() instead.

expect.fail("If the code got here, this test is failing");

Coding style

There are some coding style rules you should follow when writing new tests or contributing to existing tests. These rules help make the review process as efficient as possible and make it easier for others to read your code.

If that is not enough information, take a look at the existing tests or shared modules in the Mozmill test repository.

Tips and tricks

Sometimes you will run into trouble while creating Mozmill tests. Here are some suggestions that may help you sort out the problems you might run into.

  • Get familiar with the functionality provided by Mozmill and all of our Shared Modules; this will ease the test creation process.
  • Use the Inspector or Recorder to create the skeleton of your test. You have to add additional steps like calls to sleep functions or element checks before the test can be run.
  • If you are using controller.open() to load a web page, controller.waitForPageLoad() has to be called right afterward to prevent the test from continuing before the page finishes loading; calling controller.sleep() is not sufficient.
  • Use the controller's menu API to reach commands which are only available via the main menu. A list of existing IDs for menu items can be found in the file. Due to our localization efforts, always use the IDs of menu items instead of their names.
  • If your test needs exactly one tab open use TabbedBrowsingAPI.closeAllTabs(controller); inside the setupModule() function.
  • If you modify preferences or other global data, make sure to reset those values inside the teardownModule() function. That will clean up the environment for the next Mozmill test.
  • Avoid using any hardcoded strings for the elementslib Lookup() function. Doing so will break Mozmill tests for localized builds. After using the inspector you have to manually remove those attributes (e.g. label or accesskey) from the element string (see the next bullet).
  • If an element can only be referenced by the elementslib Lookup() function please try to remove as many attributes as possible from each hierarchy. That will make the test more readable and can avoid failed lookups when the code in Firefox changes.
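Several of the tips above can be combined in one sketch. The shared-module require() paths and helper names below are assumptions for illustration (they follow the pattern shown for the assertions module; check the repository's lib/ directory for the real names), and the preference used is just an example:

```javascript
// Hypothetical test module applying the tips above. Module paths and
// helper names are illustrative assumptions, not verified API.
function setupModule(aModule) {
  aModule.controller = mozmill.getBrowserController();
  // Tip: if the test needs exactly one open tab, close all others first.
  var tabs = require("../../../lib/tabs");          // assumed path
  tabs.closeAllTabs(aModule.controller);
}

function teardownModule(aModule) {
  // Tip: reset modified preferences and other global data so the next
  // Mozmill test starts from a clean environment.
  var prefs = require("../../../lib/prefs");        // assumed path
  prefs.preferences.clearUserPref("browser.startup.homepage");
}

function testOpenPage() {
  controller.open("http://www.example.com/");
  // Tip: waitForPageLoad() is required after opening a page;
  // controller.sleep() is not a substitute.
  controller.waitForPageLoad();
}
```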

The review process

Before your test can be checked into the mozmill-tests repository, it has to pass the review process. The reviewer has to understand the test and check that everything is done correctly. To make the review as easy as possible, be sure your test script follows the guidelines given above. In addition to checking the syntax and code style of the test, make sure the test runs with the command line client before requesting a review. If questions arise, feel free to ask in #automation or on the automation developer mailing list at any time.

Simplified patch creation

The easiest way to create a patch is by using the hg diff command, bracketed by two other commands. With hg add you tell Mercurial to start tracking your test file; this is needed for your test's content to appear in the diff output. Once the patch has been created, you can use hg rm to safely remove the test from the tracking list. That guarantees no conflicts will occur when you pull a new version into your local copy of the repository.

Imagine you have created a test called testZoomSettings.js which is saved under tests/functional/testLayout/ and you want to create a patch called patch_file:

$ hg add tests/functional/testLayout/testZoomSettings.js
$ hg diff >patch_file
$ hg rm -f tests/functional/testLayout/testZoomSettings.js

After running those commands, you will find the file patch_file in the current folder which can be uploaded as attachment to the bug report.

Advanced patch creation

As you can imagine, it's hard to keep track of all your files when you are working on several tests in parallel, because all those files will be lingering around in your working copy. To prevent that and to keep an overview, you can use the Mercurial Queues (mq) extension.

In the example below, you can see how it works, starting with a new test named testZoomSettings.js:

$ hg qnew zoomsettings                                    (Add a new named patch to the queue of patches)
$ vi tests/functional/testLayout/testZoomSettings.js       (Create your test and apply the template structure)
$ hg add tests/functional/testLayout/testZoomSettings.js  (Start tracking the test file)
$ hg diff                                                 (Create a diff output of the current state)
$ hg qrefresh -m "Commit message (see below)"             (Update the patch by accepting all changes and giving a necessary commit message)
$ vi tests/functional/testLayout/testZoomSettings.js       (Continue to update your test)
$ hg diff                                                 (Create a diff against the last version of your patch)
$ hg qdiff                                                (Create a complete diff against the current version of the repository)
$ hg qrefresh                                             (Refresh the patch with the latest changes)
$ hg qpop                                                 (Pop the patch from the stack)
$ hg qpush                                                (Push the patch back to the stack)
$ hg export tip >patch_file                               (Create a patch based on the current state)

Commit message

Your commit message should follow a standard format of:

"Bug %number% - %Description%. r=%reviewer1%, r=%reviewer2%..."

The description should concisely summarize the changes made. Please do not include the branch name in the description.


$ hg qrefresh -m "Bug 553616 - Fixing testPasteLocationBar.js to use utils clipboard clearing. r=hskupin"


When using the advanced way of creating a patch, all existing patches are located under .hg/patches. Before you ask for a review, check the patch to ensure that it's valid; you can use the online review tool. The only warnings you should get are those indicating that lines are too long. Furthermore, check that the test works as expected: the best approach is to run the test via the appropriate testrun script and report the results to our dashboard. Mention the link to the report in the review request.

Reviews are managed in Bugzilla. Once your new test has been created, file a new bug report (see bug 479720 as an example). Also add the MozTrap testcase IDs for all branches in the first comment of the bug. Finally, attach your patch for the test to the bug. Now you can request review from Henrik Skupin or Andreea Matei. If the bug you're working on has a mentor assigned in the whiteboard field, you can request review from that person.

Note: Initial patches should be created for the default branch of the mozmill-tests repository. Tests for older versions of Firefox will be backported after the test has landed on the default branch.

Landing of patches

Once a patch has been reviewed and is ready for check-in, the reviewer will land the patch immediately or will add the keyword "checkin-needed" if another person has to land the patch. If you are the one who has check-in permissions and you have to land the patch, the following steps should be obeyed:

Preparation: Before you can push any patch to the repository the .hg/hgrc file of the local copy has to be updated so it contains the default-push path, which is usually an ssh connection:

[paths]
default =
default-push = ssh://


  1. Make sure that the correct branch of the mozmill-tests repository has been selected, if not update accordingly.
  2. Run a "hg pull -u" to make sure that no other patches have been pushed since your last pull request.
  3. Download the patch to your local disk and import it via "hg qimport %patch%".
  4. Use "hg qpush" to push the patch to your queue. It will end-up on-top of your local queue. You can check with "hg tip".
  5. Run "hg out" to check that the user, the email address, and the summary have been set correctly.
  6. If the user name is not valid, update the changeset with "hg qrefresh -u '%username% <%email%>'".
  7. If the summary is not valid, update the changeset with "hg qrefresh -m 'Bug %number% - %Description%. r=%reviewer1%, r=%reviewer2%...'".
  8. Run "hg qfinish tip", which removes the patch from your queue and commits the changes.
  9. Finally, push the patch to the public repository with "hg push".
Note: If you want to become a committer, please review our commit policy.

Transplanting a Patch

In some instances, it will be necessary to check in a patch on several branches. Using the transplant extension makes it easy.

1. Update to the target branch

hg pull -u && hg update -C %target_branch%

2. Transplant the source changeset

hg transplant %changeset_ID%

3. Finally, push the change

hg push
NOTE: To use transplant, you need to have the transplant extension added to your .hgrc file

Backing out patches

If a new test fails immediately after its check-in, we will have to back out the responsible changeset. Follow these instructions on how to correctly perform a back-out. As the back-out comment, use "Backed out changeset %id% due to %failure%".

Merging heads

If multiple heads have been created accidentally on a branch, those have to be merged into the original head of the given branch.

$ hg heads                                              # Check if multiple heads per branch exist
$ hg up -C %target_branch%                              # Switch to the target branch
$ hg merge -r %changeset%                               # Merge duplicate changeset
$ hg diff                                               # Check diff of merge and ask for feedback/review if necessary
$ hg commit -m "Merge %changeset% into %target_branch%" # Commit the merge and specify a comment
$ hg push                                               # Push the merge

Managing Mozmill test failures

If you encounter a Mozmill test failure that can be consistently reproduced, you should raise a bug in Bugzilla using one of the following templates:

Ideas on how to investigate failures:

Bug Priorities

We use the following priorities for bugs:

  • P1 - critical failures, constant (regressions)
  • P2 - new tests; the ones with the [qa-needed] whiteboard entry are handled first
  • P3 - intermittent failures, enhancements
  • P4 - rare failures (about once a month)
  • P5 - refactoring, small enhancements

If a bug becomes dependent on another bug, make sure the priority is reflected in the blocking bug. You can also add the [qa-automation-blocked] whiteboard entry so the bug shows up in our blockers list.

Other types of Mozmill tests

Mozmill is also able to automate testing in various other areas. For now we cover areas like: