Mozilla code is covered by a large number of automated unit and regression tests. These tests run constantly on Mozilla's automated testing infrastructure, and developers are expected to make sure their changes do not break the automated test suites.
Mozilla uses several home-grown automated testing frameworks; an overview of each framework, with links to detailed documentation, is available at Mozilla automated testing.
This page lists the steps to run the automated tests.
Run tests from your build
Configuring the build
In order to run most tests, you must have a properly configured build. Platform (Gecko, Toolkit) tests, as well as Firefox-specific tests, are usually run on a Firefox build. The test suite may not account for non-standard build configurations, such as disabling libxul or individual features.
Build Documentation has the general instructions for building Firefox. The default build options are suitable for running the automated tests, and the tests can be run on both debug and release builds. To run tests with leak checking, you must build with --enable-tracerefcnt or --enable-trace-malloc.
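The build options above are normally collected in a mozconfig file. A minimal sketch for a debug build with leak checking; the object-directory path is illustrative, and the leak-checking flag is the one mentioned above:

```shell
# Sketch of a .mozconfig for a test-friendly debug build.
# The obj-debug path is only an example; any object directory name works.
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-debug

ac_add_options --enable-debug
# Needed for running tests with leak checking, per the note above:
ac_add_options --enable-trace-malloc
```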
xpcshell-based tests can be executed by running the xpcshell-test mach command:
$ mach xpcshell-test
This command is self-documenting:
$ mach help xpcshell-test
"compiled code" tests
The following command executes the standalone (aka "compiled-code") tests:
$ make -C $(OBJDIR) check
Note: on the Gecko 1.9.0 branch (Firefox 3.0), the compiled code and xpcshell-based tests are both run using "make check".
If any of the tests fail, you get a message like:
make: *** [check] Error 2
If make exits without an error, all the tests passed.
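This pass/fail convention can be scripted by checking make's exit status. A minimal sketch, in which `false` merely stands in for the real `make -C $(OBJDIR) check` invocation (an assumption for illustration; substitute the real command):

```shell
# Hypothetical wrapper: "false" stands in for "make -C $OBJDIR check",
# which exits non-zero when any test fails.
if false; then
    echo "all tests passed"
else
    echo "some tests failed"
fi
```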
To run the whole Mochitest suite use the following commands:
$ mach mochitest
$ make -C $(OBJDIR) mochitest-ipc-plugins
The tests may take a while to complete (40 minutes on a 2 GHz MacBook as of June 2008), and the browser window must stay focused during the run, otherwise some tests will fail.
See the individual sections on the Mochitest page for more information.
Use the following command to run reftests or crashtests:
$ mach reftest
$ mach crashtest
Note: on the Gecko 1.9.0 branch (Firefox 3.0), crashtests and reftests must be run manually (and require the creation of a separate profile). See the README.
Reftests take about 20 minutes on the configuration listed above.
If the command prints any output (UNEXPECTED FAIL or similar), some reftests have failed.
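One way to check a saved log for such failures is to grep for the UNEXPECTED marker. A sketch using a made-up sample log (the log lines and file paths below are illustrative only; a real log would come from `mach reftest`):

```shell
# Write a small sample log; in practice this output comes from "mach reftest".
cat > /tmp/reftest.log <<'EOF'
REFTEST TEST-PASS | file:///tests/a.html
REFTEST TEST-UNEXPECTED-FAIL | file:///tests/b.html | image comparison
REFTEST TEST-PASS | file:///tests/c.html
EOF

# Count failing lines; a non-zero count means some reftests failed.
grep -c UNEXPECTED /tmp/reftest.log
```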
On Windows, piping the output to 'cat' can cause it to be displayed when it would not otherwise be.
To run only the crashtests in a particular directory, run ./mach crashtest foo/bar/crashtests.list from the top-level source directory.
Browser chrome tests
Browser chrome tests can be run in Firefox using the following command:
$ mach mochitest-browser
Alternatively, they can be run from the object directory:
$ cd <OBJ_DIR>
$ cd _tests/testing/mochitest
$ python runtests.py --browser-chrome --autorun
Talos testing system
The Talos testing system is our framework for running the Firefox performance tests.
Run tests against builds generated from automation (aka Treeherder)
If you are reading this, it is because you want to run tests the same way they run in automation (Treeherder).
TaskCluster vs Buildbot
TaskCluster, the shiny new CI, and Buildbot, the CI that served us for almost a decade, have different ways for you to reproduce a job.
If you want to run the tests on your local host, read the page "How to run Mozharness as a developer". It has instructions for running the tests on a loaner or on your localhost.
Running Mozharness as a developer will get you there most of the time; however, there are tool and library differences that could require you to get a loaner (e.g. Linux hosts have anti-aliasing disabled, which makes a lot of reftests fail, or your host runs much faster and triggers intermittent failures).
If the job you want to reproduce runs on TaskCluster (see TaskCluster jobs on Treeherder), you can get an SSH/VNC-like interface by using the 'One-click loaner' button (read TaskCluster interactive session to learn more).
You can also use Docker to run the jobs locally. Note that your host has different specs and can produce intermittent test failures.
NOTE: In the near future we would like to make this process much smoother.