Browser chrome tests

  • Revision slug: Browser_chrome_tests
  • Revision title: Browser chrome tests
  • Revision id: 286634
  • Created:
  • Creator: ratcliffe_mike
  • Is current revision? No
  • Comment: 1 word added, 1 word removed

Revision Content

The browser chrome test suite is an automated testing framework designed to allow testing of application chrome windows using JavaScript. It currently allows you to run JavaScript code in the same scope as the main Firefox browser window and report results using the same functions as the Mochitest test framework. The browser chrome test suite depends on runtests.py from the Mochitest framework, so it won't work in a build with Mochitest disabled (--disable-mochitest).

Running the browser chrome tests

To run the browser chrome tests, first build Mozilla with your changes; then:

  • In 1.9.1 and later (since {{ Bug(417516) }} was fixed) run the following command from the top-level directory:
    • make -C $(OBJDIR) mochitest-browser-chrome
  • To test on older branches, run Mochitest's runtests.py script passing it the --browser-chrome command line argument:
    • cd $(OBJDIR)/_tests/testing/mochitest
      python runtests.py --browser-chrome

This will launch your build and open a "browser chrome tests" window. The "run all tests" button will start the test run and report the results in the UI and to stdout. There's also an option to output the results to a file, using the same command line parameter used by Mochitest (--log-file=/path/to/file).

You can tell the test harness to run the tests automatically at startup without user interaction by passing the --autorun parameter to runtests.py. This can be used in combination with the --close-when-done parameter to fully automate the tests.
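
For example, on a branch where you invoke runtests.py directly, a fully automated run that also logs the results to a file might look like this (simply combining the flags mentioned above; the log path is just a placeholder):

 python runtests.py --browser-chrome --autorun --close-when-done --log-file=/path/to/file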

The browser chrome test suite respects the optional --test-path argument, which lets you run specific groups of tests. As with Mochitest, the path passed to --test-path is the path to a test or directory within the Mozilla source tree. If the path points to a directory, the tests in that directory and all of its subdirectories will be run.

For example, to run the tests in browser/base/content/test the command would be:

TEST_PATH=browser/base/content/test/ make -C $(OBJDIR) mochitest-browser-chrome
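
On older branches where you run runtests.py directly (from $(OBJDIR)/_tests/testing/mochitest, as above), the same group of tests can presumably be selected with the --test-path argument, along these lines:

 python runtests.py --browser-chrome --test-path=browser/base/content/test/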

Writing browser chrome tests

Browser chrome tests are snippets of JavaScript code that run in the browser window's global scope. A simple test would look like this:

 function test() {
   ok(gBrowser, "gBrowser exists");
   is(gBrowser, getBrowser(), "gBrowser and getBrowser() are the same");
 }

The test() function is invoked by the test harness when the test is run. The test file can contain other functions; they will be ignored unless invoked by test().
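
For example, a helper function only runs because test() calls it (the helper name here is arbitrary):

 function test() {
   checkSelectedBrowser();
 }
 
 // Never called by the harness directly; it only runs because test() calls it.
 function checkSelectedBrowser() {
   ok(gBrowser.selectedBrowser, "the selected browser exists");
 }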

Note: Be careful when naming your functions and variables. Since the test files are executed in the same scope as the browser window, conflicting variable names could cause trouble while running the tests. You should attempt to reduce the side effects of the testing code and "clean up" after yourself, to avoid influencing other tests.

The comparison functions are identical to those supported by Mochitest; see how the comparison functions work in the Mochitest documentation for more details. The EventUtils helper functions are available on the EventUtils object defined in the global scope.
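
For instance, a test can mix the usual comparison functions with an EventUtils helper; the key and event details below are only illustrative:

 function test() {
   // Same comparison helpers as in plain Mochitest.
   isnot(gBrowser.browsers.length, 0, "there is at least one browser");
 
   // EventUtils helpers are reached through the EventUtils object.
   EventUtils.synthesizeKey("VK_ESCAPE", {});
   ok(true, "synthesized a key event without throwing");
 }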

The test file name must be prefixed with "browser_", and must have a file extension of ".js". Files that don't match this pattern will be ignored by the test harness. Using a descriptive file name is strongly encouraged instead of just using a bug number.

You can collect common utilities and helpers in a file called head.js, which must live in the same folder as the browser chrome tests. This file is injected into the test scope of every test in that folder. Note that any code in head.js's top-level scope runs before each test's test() function.
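
A minimal head.js might look like the following; the helper name is arbitrary, and the top-level statement runs before every test() in the folder:

 // head.js -- shared by every browser_*.js test in this folder.
 
 // Top-level code here runs before each test's test() function.
 var gTestTab = null;
 
 // Arbitrary shared helper available to all tests in the folder.
 function openTestTab(aURL) {
   gTestTab = gBrowser.addTab(aURL);
   return gTestTab;
 }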

Asynchronous tests

The test suite also supports asynchronous tests, using the same function names as Mochitest. Call waitForExplicitFinish() from test() if you want to delay reporting a result until after test() has returned. Call finish() once the test is complete. Be aware that the test harness will mark tests that take too long to complete as FAILED (the current timeout is 30 seconds).

 function test() {
   waitForExplicitFinish();
   setTimeout(completeTest, 1000);
 }
 
 function completeTest() {
   ok(true, "Timeout ran");
   finish();
 }

If your test is randomly timing out and you think that's just due to it taking too long, you can extend the timeout. Be aware that this is not a solution; you should investigate why your test is taking so long, since it's most likely due to a bad test design or a performance problem. If you can rewrite the test to make it shorter, split it into smaller tests, or find out why it's taking so long, you should really do that instead!

 function test() {
   // requestLongerTimeout accepts an integer factor that is a multiplier for the default 30 second timeout.
   // So a factor of 2 means: "Wait for at least 60s (2*30s)".
   requestLongerTimeout(2);
   waitForExplicitFinish();
   
   setTimeout(completeTest, 40000);
 }
 
 function completeTest() {
   ok(true, "Timeout did not run");
   finish();
 }

For browser chrome tests, you also have the option of using generators for tests with a lot of asynchronous calls, by defining a function called generatorTest() instead of test(). Your asynchronous callbacks need to call nextStep() to resume the generator:

// Browser chrome example
function generatorTest() {
  // ... load a web page ...
  addEventListener("load", nextStep, false);
  yield;

  // ... run test ...
}
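
As a slightly fuller sketch of the pattern, each asynchronous call just passes nextStep as its callback, and execution resumes at the yield that follows:

function generatorTest() {
  setTimeout(nextStep, 100);
  yield;
  ok(true, "first asynchronous step finished");

  setTimeout(nextStep, 100);
  yield;
  ok(true, "second asynchronous step finished");
}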

Exceptions in tests

Any exceptions thrown within test() will be caught and reported in the test output as a failure. Exceptions thrown outside of test() (e.g., in a timeout or event handler) will not be caught, but will result in a timed-out test if they prevent finish() from being called.
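
One way to keep such an exception from turning into a bare timeout is to catch it yourself in the callback, report it, and always call finish(); a minimal sketch:

 function test() {
   waitForExplicitFinish();
 
   setTimeout(function() {
     // Exceptions thrown in this callback are not caught by the harness, so
     // catch them here and report a failure instead of letting the test time out.
     try {
       is(gBrowser.selectedBrowser.contentDocument.nodeName, "#document",
          "the selected browser has a content document");
     } catch (e) {
       ok(false, "unexpected exception: " + e);
     } finally {
       finish();
     }
   }, 0);
 }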

Cleaning up after yourself

If you need to do special cleanup after running your test, you can register a cleanup function that is guaranteed to run after your test finishes. You can call registerCleanupFunction() at any point in your test, even in head.js if you need to register a cleanup function for all tests in that folder. Note that you can register as many cleanup functions as you like. Cleanup functions are also guaranteed to be called if your test times out, so you can be sure that a timed-out test won't pollute the tests that run after it and cause them to fail.

registerCleanupFunction(function() {
  // Clean up test related stuff here.
});

function test() {
  // Add some test related stuff.
}

When writing tests, design for failure. It is much better to call registerCleanupFunction() than to do the cleanup at the end of your test after it has run successfully, because cleanup functions are always called, no matter what. For instance, if you change a preference, you want to make sure that the preference is always reset so that it doesn't impact the tests that run after yours.
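
As a sketch of that preference example (the preference name is made up; the preference service is obtained through the standard XPCOM contract):

 function test() {
   var prefs = Components.classes["@mozilla.org/preferences-service;1"]
                         .getService(Components.interfaces.nsIPrefBranch);
 
   // Made-up preference name, purely for illustration.
   prefs.setBoolPref("browser.example_feature.enabled", true);
 
   registerCleanupFunction(function() {
     // Runs no matter how the test ends, so the preference never leaks into
     // the tests that run after this one.
     prefs.clearUserPref("browser.example_feature.enabled");
   });
 
   ok(prefs.getBoolPref("browser.example_feature.enabled"),
      "the preference was set for this test");
 }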

Adding a new browser chrome test to the tree

To add a new browser chrome test to the tree, follow the Mochitest instructions, keeping in mind that the browser chrome tests must be copied into _tests/testing/mochitest/browser instead of _tests/testing/mochitest/tests. Using _BROWSER_TEST_FILES rather than _TEST_FILES as the variable name for the list of tests to install is also recommended, to better differentiate the two sets of tests. Also remember that the test file's name must begin with "browser_" for the test to be recognized as a browser chrome test.

{{ languages( { "ja": "Ja/Browser_chrome_tests" } ) }}
