Mochitest

  • Revision slug: Mochitest
  • Revision title: Mochitest
  • Revision id: 9837
  • Created:
  • Creator: Jonathan_Watt
  • Is current revision? No
  • Comment: The python 'runtests' script is the official script, and the perl script is deprecated

Revision Content

Introduction

Mochitest is an automated testing framework built on top of the MochiKit JavaScript libraries. It's just one of the automated regression testing frameworks used by Mozilla. Tests report success or failure to the test harness using JavaScript function calls.

Mochitest's unique strength is that it runs tests written as webpages in a full browser environment where the tests have chrome (elevated) privileges. This allows JavaScript in the tests to do much, much more than it would otherwise be able to do. In addition to the capabilities a script would normally have (e.g. DOM manipulation), scripts can access XPCOM components and services, and even access the browser itself. This allows a script to, say, simulate user input to the browser's user interface, before examining the browser to verify that the input had the intended results.

Mochitest's use of JavaScript function calls to communicate test success or failure can make it unsuitable for certain types of test. Only things that can in some way be tested using JavaScript (with chrome privileges!) can be tested with this framework. Given some creativity, that's actually much more than you might first think, but it's not possible to write Mochitest tests to directly test a non-scripted C++ component, for example.

Try not to use Mochitest

Yes, really. For many things Mochitest is overkill. In general you should always try to use one of the lighter-weight testing frameworks. For example, if you only want to test a single XPCOM component then you should use xpcshell. On the other hand, there are some things that Mochitest cannot do, or isn't designed to do. For example, for visual output tests you should try to use the reftest framework. For more information on the different types of automated testing frameworks see Mozilla automated testing.

Running tests

The Mozilla build machines run Mochitest as part of the build process, so we get to know pretty quickly if someone commits a change to the source code that breaks something. However, it is still a good idea to run Mochitest yourself before you commit any risky new code. You don't want to be the one who wastes everyone's time by breaking the tree if you can help it. :-)

To run Mochitest, first build Mozilla with your changes, then change directory to $(OBJDIR)/_tests/testing/mochitest.

Running the whole test suite

To run the entire Mochitest test suite, call the 'runtests' script without passing it any command line arguments:

python runtests.py

This will open your build with a document containing a "Run Tests" link at the top. To run the tests simply click this link and watch the results being generated. Test pass/fail is reported for each test as it runs and is recorded on the page.

Image:Mochitest.png

Note: you should keep focus on the browser window while the tests are being run, as some may fail otherwise (such as the test for {{template.Bug(330705)}}).

Note: there is also a perl script called runtests.pl, but the python script is the official one; the perl script is deprecated and is expected to be removed.

Running select tests

To run a single test (perhaps a new test you just added) or a subset of the entire Mochitest suite, add a --test-path option pointing to the test or directory of tests that you want to run. For example, to run the first test for bug 123456, call runtests.py like this:

python runtests.py --test-path=dom/src/jsurl/test/test_bug123456-1.html

To run all the jsurl tests automatically, call it like this:

python runtests.py --test-path=dom/src/jsurl/ --autorun

The path specified by --test-path is relative to the $(OBJDIR)/_tests/testing/mochitest/tests directory. If the path is a directory, the tests in that directory and all of its subdirectories will be loaded.

Logging test-run output

The output from a test-run can be sent to the console and/or a file (by default the results are only displayed in the browser). There are several levels of detail to choose from. The levels are DEBUG, INFO, WARNING, ERROR and FATAL, where DEBUG produces the highest detail, and FATAL produces the least.

To log to a file use --log-file=FILE. By default the logging level will be INFO but you can change this using --file-level=LEVEL.

To turn on logging to the console use --console-level=LEVEL.

For example, to log test-run output to the file ~/mochitest.log at DEBUG level detail you would use:

python runtests.py --log-file=~/mochitest.log --file-level=DEBUG

Other 'runtests' options

The 'runtests' script recognizes several other options; use the --help option to get a list. Note that there is separate documentation for the --chrome, --browser-chrome and --a11y options.

Writing tests

A Mochitest test is simply an HTML, XHTML or XUL file that contains some JavaScript to test for some condition(s). (SVG support is on the way.)

Test templates

You can avoid typing out boilerplate by using the {{template.Source("testing/mochitest/gen_template.pl", "gen_template")}} perl script to generate a test template (the directory {{template.Source("testing/mochitest/")}} is included as part of a normal checkout on trunk). This script takes two optional arguments:

  1. -b : a bug number
  2. -type : template type. {html|xhtml|xul}. defaults to html.

For example:

cd mozilla/testing/mochitest/
perl gen_template.pl -b=123456 > path/to/test_bug123456.html
perl gen_template.pl -b=123456 --type=xul > path/to/test_bug123456.xul

Note that Mochitest requires the file name of all tests to begin with the string "test_". See the section below for help on deciding where your tests should go in the tree.

The elements with id 'display' and 'content' in the generated file can be used by your script if you need elements to work with: 'display' is intended for markup that needs to be rendered visibly, while 'content' (which is styled with display: none) serves as a hidden scratch area.

Test functions

Each test must contain some JavaScript that will run and tell Mochitest whether the test has passed or failed. MochiTest.js provides a number of functions that the test can use to communicate pass/fail results to Mochitest. These include:

  • ok(expressionThatShouldBeTrue, "Error message") -- tests a value for truthiness
  • is(thingA, thingB, "Error message") -- compares two values (using ==, not ===)
  • isnot(thingA, thingB, "Error message") -- opposite of is()

See the {{template.Source("testing/mochitest/README.txt", "README")}} for an example of their use.
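
For illustration, here is roughly what a complete, minimal test might look like (a sketch based on the generated template; the exact boilerplate and script paths come from gen_template.pl and may differ):

<!DOCTYPE HTML>
<html>
<head>
  <title>Test for Bug 123456</title>
  <script type="text/javascript" src="/MochiKit/MochiKit.js"></script>
  <script type="text/javascript" src="/tests/SimpleTest/SimpleTest.js"></script>
  <link rel="stylesheet" type="text/css" href="/tests/SimpleTest/test.css"/>
</head>
<body>
<p id="display"></p>
<div id="content" style="display: none"></div>
<script class="testbody" type="text/javascript">
// each call reports one pass/fail result to the harness
ok(document.getElementById("display"), "#display element should exist");
is(1 + 1, 2, "basic arithmetic should hold");
isnot("a", "b", "different strings should not be equal");
</script>
</body>
</html>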

If you want to include a test for something that currently fails, then instead of commenting it out, you should use one of the "todo" equivalents so Tinderbox can notice if it suddenly starts passing:

  • todo(falseButShouldBeTrue, "Error message")
  • todo_is(thingA, thingB, "Error message")
  • todo_isnot(thingA, thingB, "Error message")

Helper functions

Right now all of MochiKit is available (this will change in {{template.Bug(367393)}}); {{template.Bug(367569)}} added sendChar, sendKey, and sendString helpers. These are available in {{template.Source("testing/mochitest/tests/SimpleTest/EventUtils.js")}}.

Adding tests to the tree

Once you've written a new test you need to add it to the Mozilla source tree and tell the build system about it so that the Mozilla tinderboxes will run it automatically.

Choosing a location

New Mochitest tests should go somewhere close to the code they are testing, hopefully in the same module, so that ownership of the test cases is clear. For example, if you create a new test for some HTML feature, you probably want to put the test in {{template.Source("content/html/content/test")}} or {{template.Source("content/html/document/test")}}. If a test directory does not exist near the code you are testing you can add a new test directory as the patch in {{template.Bug(368531)}} demonstrates.

Makefile changes

To tell the build system about your new test you need to add the name of your test file to _TEST_FILES in the test directory's Makefile.in.

If your test spans multiple files, only name the main one "test_...". This is the one that will show up in the list of testcases to run. The other files should have some other name, but must still be added to _TEST_FILES in Makefile.in.
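
For example, the relevant part of such a Makefile.in might look roughly like this (a sketch; bug123456_helper.html is a hypothetical support file, and the libs rule is normally copied verbatim from a nearby test directory):

_TEST_FILES = \
		test_bug123456.html \
		bug123456_helper.html \
		$(NULL)

libs:: $(_TEST_FILES)
	$(INSTALL) $(foreach f,$^,"$f") $(DEPTH)/_tests/testing/mochitest/tests/$(relativesrcdir)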

Keep in mind that if you're adding chrome tests, you'll need to change the Makefile to install the tests in _tests/testing/mochitest/chrome rather than _tests/testing/mochitest/tests.

Building and running your new tests

Before committing a new test, you should check that the Makefile.in changes are correct and that your tests pass as you expect them to. To check your test, first export it to the Mochitest directory by running the command:

make

in the object directory corresponding to the test file's location in the source tree. Now open Mochitest as described above, but this time, instead of clicking on the "Run Tests" link, search for your test and click on it.

FAQ

How do I find an error in the log?

Search for the string "ERROR FAIL" to find unexpected failures. You can also search for "SimpleTest FINISHED" to see the final test summary. This is particularly useful when viewing full Tinderbox logs, since the Mochitest output isn't necessarily at the end of the combined log.
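
For example, if you logged the run to a file as described above, you can search it from the command line:

grep "ERROR FAIL" ~/mochitest.log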

What if my tests have failures in them?

You should still test it. Mochitest provides a todo() function that is identical to ok(), but is expected to fail. We've also added todo_is() and todo_isnot() to match is() and isnot().

What if my tests aren't done when onload fires?

Call SimpleTest.waitForExplicitFinish() before onload fires. Then, when you're done, call SimpleTest.finish().
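
For example (a minimal sketch of the asynchronous pattern, where the timeout stands in for whatever event or callback your test is really waiting on):

SimpleTest.waitForExplicitFinish();

function finishUp() {
  ok(true, "the asynchronous part of the test ran");
  SimpleTest.finish();
}

// stand-in for an event listener, network callback, etc.
setTimeout(finishUp, 0);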

What if I need to change a preference to run my test?

netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
var prefService = Components.classes["@mozilla.org/preferences-service;1"]
                            .getService(Components.interfaces.nsIPrefService);
var domBranch = prefService.getBranch("dom.");
var oldVal = domBranch.getIntPref("max_script_run_time");
domBranch.setIntPref("max_script_run_time", 0);

// do what you need

domBranch.setIntPref("max_script_run_time", oldVal);

Can tests be run under a chrome URL?

Yes, use python runtests.py --chrome. Keep in mind that the xpcshell test harness should be your first choice for XPCOM testing. Only use Mochitest if you need events, browser features, networking, etc.

How can I get around the error "Permission denied to get property XPCComponents.classes"?

Adding the following line to your test file (and each event handler) will allow full XPCOM usage.

netscape.security.PrivilegeManager.enablePrivilege('UniversalXPConnect');

This approach is obviously inconvenient. That's why we're working on the build hacking necessary to copy tests into a chrome directory for testing.

How do I change the HTTP headers or status sent with a file used in a Mochitest?

Create a text file next to the file whose headers you want to modify. The name of the text file should be the name of the file whose headers you're modifying followed by ^headers^. For example, if you have a file foo.jpg, the text file should be named foo.jpg^headers^. (Don't try to actually use the headers file in any other way in the test, because the HTTP server's hidden-file functionality prevents any file ending in exactly one ^ from being served.) Edit the file to contain the headers and/or status you want to set, like so:

HTTP 404 Not Found
Content-Type: text/html
Random-Header-of-Doom: 17

The first line sets the HTTP status and (optionally) a description associated with the file. This line is optional; you don't need it if you're fine with the normal response status and description. Any other lines in the file describe headers that you want to add to the response or overwrite on it (the Content-Type header being the most common one to overwrite). The format follows the conventions of HTTP, except that you don't need to use HTTP line endings and you can't use a header more than once (the last line for a particular header wins). The file may end with at most one blank line to match Unix text file conventions, but the trailing newline isn't strictly necessary.

How do I test issues which only show up when tests are run across domains?

The Mochitest harness runs one web server to serve tests, but through the magic of proxy autoconfig, all test files are available on a variety of different domains and ports. Tests running on any of these servers (with two exceptions for testing privilege escalation functionality) automatically have the ability to request elevated privileges such as UniversalXPConnect. The full list of domains and ports on which tests are served, all of which serve exactly the same content as http://localhost:8888, is:

  • http://localhost:8888
  • http://example.org:80
  • http://test1.example.org:80
  • http://test2.example.org:80
  • http://sub1.test1.example.org:80
  • http://sub1.test2.example.org:80
  • http://sub2.test1.example.org:80
  • http://sub2.test2.example.org:80
  • http://example.org:8000
  • http://test1.example.org:8000
  • http://test2.example.org:8000
  • http://sub1.test1.example.org:8000
  • http://sub1.test2.example.org:8000
  • http://sub2.test1.example.org:8000
  • http://sub2.test2.example.org:8000
  • http://example.com:80
  • http://test1.example.com:80
  • http://test2.example.com:80
  • http://sub1.test1.example.com:80
  • http://sub1.test2.example.com:80
  • http://sub2.test1.example.com:80
  • http://sub2.test2.example.com:80
  • http://sectest1.example.org:80
  • http://sub.sectest2.example.org:80
  • http://sub1.ält.example.org:8000
  • http://sub2.ält.example.org:80
  • http://exämple.test:80
  • http://sub1.exämple.test:80
  • http://παράδειγμα.δοκιμή:80
  • http://sub1.παράδειγμα.δοκιμή:80
  • http://sectest2.example.org:80 (does not have ability to request UniversalXPConnect and friends)
  • http://sub.sectest1.example.org:80 (does not have ability to request UniversalXPConnect and friends)

Unfortunately, there is currently no support for running tests over non-HTTP protocols such as FTP or HTTPS in ways that are useful for cross-domain testing. This limitation will probably be rectified in the future.
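
For illustration, a test can exercise two of these origins at once by loading a document from a second domain (a sketch; file_other_domain.html is a hypothetical helper page installed from the same test directory):

SimpleTest.waitForExplicitFinish();

var iframe = document.createElement("iframe");
// the same content is served here as on the test's own origin
iframe.src = "http://example.com/tests/dom/tests/file_other_domain.html";
iframe.onload = function() {
  ok(true, "document loaded from a second origin");
  SimpleTest.finish();
};
document.getElementById("content").appendChild(iframe);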

How do I write tests that check header values, method types, etc. of HTTP requests?

To write such a test, you simply need to write an SJS (server-side JavaScript) for it. An SJS is simply a JavaScript file with the extension sjs which is loaded in a sandbox; the global property handleRequest defined by the script is then executed with request and response objects, and the script populates the response based on the information in the request.

Here's an example of a simple SJS:

function handleRequest(request, response)
{
  // avoid confusing cache behaviors
  response.setHeader("Cache-Control", "no-cache", false);

  response.setHeader("Content-Type", "text/plain", false);
  response.write("Hello world!");
}

The exact properties of the request and response parameters are defined in the nsIHttpRequestMetadata and nsIHttpResponse interfaces in {{template.Source("netwerk/test/httpserver/nsIHttpServer.idl", "nsIHttpServer.idl")}}. Note carefully: the browser is free to cache responses generated by your script, so if you ever want an SJS to return different data for multiple requests to the same URL, you should add a Cache-Control: no-cache header to the response to prevent the test from accidentally failing if it's manually run multiple times in the same Mochitest session.
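
To actually check request data, handleRequest can read it off the request object and echo it back for the test page to assert on. Here is a sketch (method, hasHeader, and getHeader are defined on nsIHttpRequestMetadata; consult the IDL for the exact signatures):

function handleRequest(request, response)
{
  response.setHeader("Cache-Control", "no-cache", false);
  response.setHeader("Content-Type", "text/plain", false);

  // echo the request method and a header back to the caller,
  // which can then verify them with is()
  var accept = request.hasHeader("Accept")
             ? request.getHeader("Accept")
             : "(no Accept header)";
  response.write(request.method + "|" + accept);
}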

A simple example of an SJS used in reftests is {{template.Source("modules/libpr0n/test/reftest/generic/check-header.sjs", "check-header.sjs")}}.

{{ wiki.languages( { "ja": "ja/Mochitest" } ) }}
