Mochitest

  • Revision slug: Mochitest
  • Revision title: Mochitest
  • Revision id: 9818
  • Created:
  • Creator: Jonathan_Watt
  • Is current revision? No
  • Comment: Remove the FAQ question on running a single test, since this has now been incorporated into the main text as a subsection

Revision Content

Introduction

Mochitest is an automated testing framework built on top of the MochiKit JavaScript libraries. It's just one of the automated regression testing facilities Mozilla developers have at their disposal. Tests report success or failure to the test harness using JavaScript function calls.

Mochitest's unique strength is that it runs tests written as webpages in a full browser environment where the tests have chrome (elevated) privileges. This allows JavaScript in the tests to do much, much more than it would otherwise be able to do. In addition to the capabilities a script would normally have (e.g. DOM manipulation), scripts can access XPCOM components and services, and even access the browser itself. This allows a script to, say, simulate user input to the browser's user interface, before examining the browser to verify that the input had the intended results.

Mochitest's use of JavaScript function calls to communicate test success or failure can make it unsuitable for certain types of test. Only things that can in some way be tested using JavaScript (with chrome privileges!) can be tested with this framework. Given some creativity, that's actually much more than you might first think, but it's not possible to write Mochitest tests to directly test a non-scripted C++ component, for example.

Try not to use Mochitest

Yes, really. For many things Mochitest is overkill. In general you should always try to use one of the lighter-weight testing frameworks. For example, if you only want to test a single XPCOM component then you should use xpcshell. On the other hand, there are some things that Mochitest cannot do, or isn't designed to do. For example, for visual output tests you should try to use the reftest framework. For more information on the different types of automated testing frameworks, see Mozilla automated testing.

Running Mochitest

The Mozilla tinderboxes run Mochitest as part of the build process, so we get to know pretty quickly if someone commits a change to the source code that breaks something. However, it is still a good idea to run Mochitest yourself before you commit new code. You don't want to be the one who wastes everyone's time by breaking the tree when you can help it. :-)

To run Mochitest, first build Mozilla with your changes, then change directory to $(OBJDIR)/_tests/testing/mochitest.

Running all the tests

To run all the Mochitest tests call the 'runtests' script without passing it any command line arguments:

perl runtests.pl

This will open your build with a document containing a "Run Tests" link at the top. To run the tests, simply click this link and watch the results being generated. Pass/fail results are reported for each test as it runs.

You should keep focus on the browser window during the tests, as some may fail otherwise (like the one for {{template.Bug(330705)}}).

TODO: mention there is also a python script called runtests.py. Which is the official script? Are they kept in sync? Is/will one of them be deprecated?

Running an individual test or a small group of tests

To run a single test (perhaps a new test you just added) or a subset of the entire Mochitest suite, add a --test-path option pointing to the test or group of tests that you want to run. For example, to run the first test for {{template.Bug(351633)}}, call runtests.pl like this:

perl runtests.pl --test-path=dom/src/jsurl/test/test_bug351633-1.html

To run all the jsurl tests automatically, call it like this:

perl runtests.pl --test-path=dom/src/jsurl/ --autorun

Writing new Mochitest tests

Use testing/mochitest/gen_template.pl to generate a template. This script takes two optional arguments:

  1. -b : a bug number
  2. -type : template type. {html|xhtml|xul}. defaults to html.

Use one or more of the following functions in the inline script:

  • ok(expectedTrueValue, errorMessage) -- tests a value for truthiness
  • is(thingA, thingB, errorMessage) -- compares two values (using ==, which is a bit loose)
  • isnot(thingA, thingB, errorMessage) -- opposite of is()
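The comparison semantics these functions implement can be sketched in plain JavaScript. The shims below are illustrative only, not the real SimpleTest implementations; they just demonstrate how the arguments are compared:

```javascript
// Illustrative shims approximating the comparison semantics described above.
// NOT the real SimpleTest code; the return value stands in for what the
// harness would log.
function ok(condition, message) {
  return (condition ? "PASS" : "FAIL") + " | " + message;
}
function is(a, b, message) {
  // is() uses loose equality (==), so e.g. true and 1 compare equal
  return ok(a == b, message);
}
function isnot(a, b, message) {
  return ok(a != b, message);
}

console.log(ok(1 === 1, "identity holds"));          // PASS | identity holds
console.log(is(true, 1, "true == 1 (loose)"));       // PASS | true == 1 (loose)
console.log(isnot("abc", "xyz", "strings differ"));  // PASS | strings differ
```

Note in particular that is() passing does not imply strict (===) equality, which is why the text above calls == "a bit loose".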

If your test currently fails, then instead of commenting it out, you should use one of the "todo" equivalents so Tinderbox can notice if it suddenly starts passing:

  • todo(falseButShouldBeTrue, errorMessage)
  • todo_is(thingA, thingB, errorMessage)
  • todo_isnot(thingA, thingB, errorMessage)

Adding new Mochitest tests to the tree

Once you've written a new test you need to add it to the Mozilla source tree and tell the build system about it so that the Mozilla tinderboxes will run it automatically.

New Mochitest tests should go somewhere close to the code they are testing. For example, if you create a new test for some HTML feature, you probably want to put the test in {{template.Source("content/html/content/test")}} or {{template.Source("content/html/document/test")}}. If a test directory does not exist near the code you are testing you can add a new test directory as the patch in {{template.Bug(368531)}} demonstrates.

To tell the build system about your new test you need to add the name of your test file to _TEST_FILES in the test directory's Makefile.in.

If your test spans multiple files, only name the main one "test_...". This is the one that will show up in the list of testcases to run. The other files should have some other name, but must still be added to _TEST_FILES in Makefile.in.
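As a sketch, a hypothetical _TEST_FILES entry might look like the fragment below. The file names are made up for illustration; copy the conventions of the Makefile.in nearest the code you are testing:

```make
# Hypothetical fragment of a test directory's Makefile.in.
# test_bug123456.html is the test page the harness lists and runs;
# bug123456_helper.html is a support file the test loads.
_TEST_FILES = \
	test_bug123456.html \
	bug123456_helper.html \
	$(NULL)
```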

Keep in mind that if you're adding chrome tests, you'll need to change the Makefile to install the tests in _tests/testing/mochitest/chrome rather than _tests/testing/mochitest/tests.

Before committing your new test and the Makefile.in changes, be sure to run Mochitest in an up-to-date trunk build to check that you will not unexpectedly turn the tree orange.

Testing new tests

Before committing a new test you should check that it does actually pass as you expect it to. To check your test, first export it to the Mochitest directory by running the command:

make

in the object directory corresponding to the test file's location in the source tree. Now open Mochitest as described above, but this time, instead of clicking on the "Run Tests" link, search for your test and click on it.

FAQ

How do I get started?

Start by reading the introductory material provided by the article Mochitest, as well as by {{template.Source("testing/mochitest/README.txt", "reading the README")}}.

  • {{template.Source("testing/mochitest")}} should be checked out as part of the normal checkout on trunk.
  • mochitest is built by default (unless --disable-mochitest is specified in mozconfig) and installed into $(OBJDIR)/_tests/testing/mochitest
  • To run tests
    • cd $OBJDIR/_tests/testing/mochitest (Note that the working directory matters for runtests.pl!)
    • perl runtests.pl --autorun
  • The README has information on how to create a new mochitest testcase.

How do the comparison functions work?

ok() tests the truth of the first argument.

ok(true == 1, "this passes");
ok(true === 1, "this fails");

is() compares the first argument to the second using the loose-equality == operator.

is(true, 1, "1 equals true");

Here is a typical test failure:

var foo = "abc";
is(foo, "xyz", "foo should hold the last letters of the alphabet");

Mochitest will log the failure to all log listeners (console, file writer, screen, etc).

 > 001: FAIL | Expected "xyz" got "abc" | foo should hold the last letters of the alphabet

What other helper functions are available?

Right now all of MochiKit is available (this will change in {{template.Bug(367393)}}); {{template.Bug(367569)}} added sendChar, sendKey, and sendString helpers. These are available in {{template.Source("testing/mochitest/tests/SimpleTest/EventUtils.js")}}.

How do I avoid typing boilerplate?

Use the quaint {{template.Source("testing/mochitest/gen_template.pl", "gen_template")}} perl script.

Here's enough to be dangerous:

~/firefox/mozilla> cd testing/mochitest/
~/firefox/mozilla/testing/mochitest> perl gen_template.pl -b=123456 > tests/test_bug123456.html
# Or for a chrome XUL testcase:
~/firefox/mozilla/testing/mochitest> perl gen_template.pl -b=123456 --type=xul > tests/test_bug123456.xul

The elements with id 'content' and 'display' in the generated file can be used by your script if you need elements to mess around with.

Where do the tests go?

They go somewhere near the code they're testing, hopefully in the same module, so that ownership of the test cases is clear.

What do the results look like?

The test runner page shows a table with green and red rows for each test page.

Image:testrunner table.png

Where do the results get logged?

There are several possibilities. Right now, calling the test page (http://localhost:8888/tests/index.html) with parameters can make it log to a file and/or to the console, with varying logging levels (INFO/DEBUG/ERROR). See the top of {{template.Source("testing/mochitest/runtests.pl.in", "runtests.pl.in")}} for details.

How do I find an error in the log?

Search for the string "ERROR FAIL" to find unexpected failures. You can also search for "SimpleTest FINISHED" to see the final test summary. This is particularly useful when viewing full Tinderbox logs, since the Mochitest output isn't necessarily at the end of the combined log.
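For example, against a saved log file (the file name and log contents below are made up for illustration):

```shell
# Create a tiny sample log, then search it the same way you would search a
# real Mochitest or Tinderbox log.
cat > /tmp/sample-mochitest.log <<'EOF'
000: PASS | sanity check
001: ERROR FAIL | Expected "xyz" got "abc" | foo mismatch
SimpleTest FINISHED
EOF

grep "ERROR FAIL" /tmp/sample-mochitest.log          # unexpected failures
grep "SimpleTest FINISHED" /tmp/sample-mochitest.log # final test summary
```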

What if my tests have failures in them?

You still have to test that. Mochitest provides a todo() function that is identical to ok(), but is expected to fail. We've also added todo_is() and todo_isnot() to match is() and isnot().

What if my tests aren't done when onload fires?

Call SimpleTest.waitForExplicitFinish() before onload fires. Then, when you're done, call SimpleTest.finish().
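The call order can be sketched as follows. SimpleTest here is a tiny stub standing in for the real harness object, purely to illustrate the sequence; in a real test the harness provides it:

```javascript
// Minimal stub of SimpleTest, only to show the call order; the real object
// is supplied by the Mochitest harness.
const SimpleTest = {
  waitingForExplicitFinish: false,
  finished: false,
  waitForExplicitFinish() { this.waitingForExplicitFinish = true; },
  finish() { this.finished = true; },
};

// Before onload: tell the harness not to end the test automatically.
SimpleTest.waitForExplicitFinish();

// Later, from whatever asynchronous callback completes the test:
function onAsyncWorkDone() {
  // ... final ok()/is() checks would run here ...
  SimpleTest.finish(); // now the harness may report results
}
onAsyncWorkDone();
```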

What if I need to change a preference to run my test?

netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
var prefService = Components.classes["@mozilla.org/preferences-service;1"]
                            .getService(Components.interfaces.nsIPrefService);
var domBranch = prefService.getBranch("dom.");
var oldVal = domBranch.getIntPref("max_script_run_time");
domBranch.setIntPref("max_script_run_time", 0);

// do what you need

domBranch.setIntPref("max_script_run_time", oldVal);

Can tests be run under a chrome URL?

Yes, use perl runtests.pl --chrome. Keep in mind that the xpcshell test harness should be your first choice for XPCOM testing. Only use Mochitest if you need events, browser features, networking, etc.

How can I get around the error "Permission denied to get property XPCComponents.classes"?

Adding the following line to your test file (and each event handler) will allow full XPCOM usage.

netscape.security.PrivilegeManager.enablePrivilege('UniversalXPConnect');

This approach is obviously inconvenient. That's why we're working on the build hacking necessary to copy tests into a chrome directory for testing.

How do I change the HTTP headers or status sent with a file used in a Mochitest?

Create a text file next to the file whose headers you want to modify. The name of the text file should be the name of the file whose headers you're modifying followed by ^headers^. For example, if you have a file foo.jpg, the text file should be named foo.jpg^headers^. (Don't try to actually use the headers file in any other way in the test, because the HTTP server's hidden-file functionality prevents any file ending in exactly one ^ from being served.) Edit the file to contain the headers and/or status you want to set, like so:

HTTP 404 Not Found
Content-Type: text/html
Random-Header-of-Doom: 17

The first line sets the HTTP status and, optionally, a description associated with the file. This line is optional; you don't need it if you're fine with the normal response status and description. Any other lines in the file describe additional headers to add to or overwrite on the response (most typically the Content-Type header, in the latter case). The format follows the conventions of HTTP, except that you don't need HTTP line endings and you can't use a header more than once (the last line for a particular header wins). The file may end with at most one blank line to match Unix text-file conventions, but the trailing newline isn't strictly necessary.

How do I test issues which only show up when tests are run across domains?

The Mochitest harness runs one web server to serve tests, but through the magic of proxy autoconfig, all test files are available on a variety of different domains and ports. Tests running on any of these servers (with two exceptions for testing privilege escalation functionality) automatically have the ability to request elevated privileges such as UniversalXPConnect. The full list of domains and ports on which tests are served, all of which serve exactly the same content as http://localhost:8888, is:

  • http://localhost:8888
  • http://example.org:80
  • http://test1.example.org:80
  • http://test2.example.org:80
  • http://sub1.test1.example.org:80
  • http://sub1.test2.example.org:80
  • http://sub2.test1.example.org:80
  • http://sub2.test2.example.org:80
  • http://example.org:8000
  • http://test1.example.org:8000
  • http://test2.example.org:8000
  • http://sub1.test1.example.org:8000
  • http://sub1.test2.example.org:8000
  • http://sub2.test1.example.org:8000
  • http://sub2.test2.example.org:8000
  • http://example.com:80
  • http://test1.example.com:80
  • http://test2.example.com:80
  • http://sub1.test1.example.com:80
  • http://sub1.test2.example.com:80
  • http://sub2.test1.example.com:80
  • http://sub2.test2.example.com:80
  • http://sectest1.example.org:80
  • http://sub.sectest2.example.org:80
  • http://sub1.ält.example.org:8000
  • http://sub2.ält.example.org:80
  • http://exämple.test:80
  • http://sub1.exämple.test:80
  • http://παράδειγμα.δοκιμή:80
  • http://sub1.παράδειγμα.δοκιμή:80
  • http://sectest2.example.org:80 (does not have ability to request UniversalXPConnect and friends)
  • http://sub.sectest1.example.org:80 (does not have ability to request UniversalXPConnect and friends)

Unfortunately, there is currently no support for running tests over non-HTTP protocols such as FTP or HTTPS in ways that are useful for cross-domain testing. This limitation will probably be rectified in the future.

How do I write tests that check header values, method types, etc. of HTTP requests?

To write such a test, you need to write an SJS (server-side JavaScript) file. An SJS is a JavaScript file, with the extension .sjs, that is loaded in a sandbox; the global handleRequest function defined by the script is then executed with request and response objects, and the script populates the response based on the information in the request.

Here's an example of a simple SJS:

function handleRequest(request, response)
{
  // avoid confusing cache behaviors
  response.setHeader("Cache-Control", "no-cache", false);

  response.setHeader("Content-Type", "text/plain", false);
  response.write("Hello world!");
}

The exact properties of the request and response parameters are defined in the nsIHttpRequestMetadata and nsIHttpResponse interfaces in {{template.Source("netwerk/test/httpserver/nsIHttpServer.idl", "nsIHttpServer.idl")}}. Note carefully: the browser is free to cache responses generated by your script, so if you ever want an SJS to return different data for multiple requests to the same URL, you should add a Cache-Control: no-cache header to the response to prevent the test from accidentally failing if it's manually run multiple times in the same Mochitest session.

A simple example of an SJS used in reftests is {{template.Source("modules/libpr0n/test/reftest/generic/check-header.sjs", "check-header.sjs")}}.

{{ wiki.languages( { "ja": "ja/Mochitest" } ) }}

Revision Source

<p>
</p>
<h3 name="Introduction"> Introduction </h3>
<p>Mochitest is an automated testing framework built on top of the <a class="external" href="http://mochikit.com/">MochiKit</a> JavaScript libraries. It's just one of the automated regression testing facilities Mozilla developers have at their disposal. Tests report success or failure to the test harness using JavaScript function calls.
</p><p>Mochitest's unique strength is that it runs tests written as webpages in a full browser environment where the tests have chrome (elevated) privileges. This allows JavaScript in the tests to do much, much more than it would otherwise be able to do. In addition to the capabilities a script would normally have (e.g. DOM manipulation), scripts can access XPCOM components and services, and even access the browser itself. This allows a script to, say, simulate user input to the browser's user interface, before examining the browser to verify that the input had the intended results.
</p><p>Mochitest's use of JavaScript function calls to communicate test success or failure can make it unsuitable for certain types of test. Only things that can in some way be tested using JavaScript (with chrome privileges!) can be tested with this framework. Given some creativity, that's actually much more than you might first think, but it's not possible to write Mochitest tests to directly test a non-scripted C++ component, for example.
</p>
<h3 name="Try_not_to_use_Mochitest"> Try not to use Mochitest </h3>
<p>Yes, really. For many things Mochitest is overkill. In general you should always try to use one of the lighter-weight testing frameworks. For example, if you only want to test a single XPCOM component then you should use <a href="en/Writing_xpcshell-based_unit_tests">xpcshell</a>. On the other hand, there are some things that Mochitest cannot do, or isn't designed to do. For example, for visual output tests you should try to use the <a href="en/Creating_reftest-based_unit_tests">reftest</a> framework. For more information on the different types of automated testing frameworks, see <a href="en/Mozilla_automated_testing">Mozilla automated testing</a>.
</p>
<h3 name="Running_Mochitest"> Running Mochitest </h3>
<p>The Mozilla tinderboxes run Mochitest as part of the build process, so we get to know pretty quickly if someone commits a change to the source code that breaks something. However, it is still a good idea to run Mochitest yourself before you commit new code. You don't want to be the one who wastes everyone's time by breaking the tree when you can help it. :-)
</p><p>To run Mochitest, first <a href="en/Build_Documentation">build Mozilla</a> with your changes, then change directory to <code>$(OBJDIR)/_tests/testing/mochitest</code>.
</p>
<h4 name="Running_all_the_tests"> Running all the tests </h4>
<p>To run all the Mochitest tests call the 'runtests' script without passing it any command line arguments:
</p>
<pre class="eval">perl runtests.pl
</pre>
<p>This will open your build with a document containing a "Run Tests" link at the top. To run the tests, simply click this link and watch the results being generated. Pass/fail results are reported for each test as it runs.
</p>
<div class="note">You should keep focus on the browser window during the tests, as some may fail otherwise (like the one for {{template.Bug(330705)}}).</div>
<p>TODO: mention there is also a python script called runtests.py. Which is the official script? Are they kept in sync? Is/will one of them be deprecated?
</p>
<h4 name="Running_an_individual_test_or_a_small_group_of_tests"> Running an individual test or a small group of tests </h4>
<p>To run a single test (perhaps a new test you just added) or a subset of the entire Mochitest suite, add a <code>--test-path</code> option pointing to the test or group of tests that you want to run.  For example, to run the first test for {{template.Bug(351633)}}, call runtests.pl like this:
</p>
<pre class="eval">perl runtests.pl --test-path=dom/src/jsurl/test/test_bug351633-1.html
</pre>
<p>To run all the jsurl tests automatically, call it like this:
</p>
<pre class="eval">perl runtests.pl --test-path=dom/src/jsurl/ --autorun
</pre>
<h3 name="Writing_new_Mochitest_tests"> Writing new Mochitest tests </h3>
<p>Use <code>testing/mochitest/gen_template.pl</code> to generate a template.  This script takes two optional arguments:
</p>
<ol><li>  -b : a bug number
</li><li>  -type : template type. {html|xhtml|xul}. defaults to html.
</li></ol>
<p>Use one or more of the following functions <i>in the inline script</i>:
</p>
<ul><li> ok(expectedTrueValue, errorMessage) -- tests a value for truthiness
</li><li> is(thingA, thingB, errorMessage) -- compares two values (using ==, which is a bit loose)
</li><li> isnot(thingA, thingB, errorMessage) -- opposite of is()
</li></ul>
<p>If your test currently fails, then instead of commenting it out, you should use one of the "todo" equivalents so Tinderbox can notice if it suddenly starts passing:
</p>
<ul><li> todo(falseButShouldBeTrue, errorMessage)
</li><li> todo_is(thingA, thingB, errorMessage)
</li><li> todo_isnot(thingA, thingB, errorMessage)
</li></ul>
<h3 name="Adding_new_Mochitest_tests_to_the_tree"> Adding new Mochitest tests to the tree </h3>
<p>Once you've written a new test you need to add it to the Mozilla source tree and tell the build system about it so that the Mozilla tinderboxes will run it automatically.
</p><p>New Mochitest tests should go somewhere close to the code they are testing. For example, if you create a new test for some HTML feature, you probably want to put the test in {{template.Source("content/html/content/test")}} or {{template.Source("content/html/document/test")}}. If a test directory does not exist near the code you are testing you can add a new test directory as the patch in {{template.Bug(368531)}} demonstrates.
</p><p>To tell the build system about your new test you need to add the name of your test file to <code>_TEST_FILES</code> in the test directory's <code>Makefile.in</code>.
</p><p>If your test spans multiple files, only name the main one "test_...". This is the one that will show up in the list of testcases to run. The other files should have some other name, but must still be added to <code>_TEST_FILES</code> in <code>Makefile.in</code>.
</p><p>Keep in mind that if you're adding chrome tests, you'll need to change the Makefile to install the tests in <code>_tests/testing/mochitest/<b>chrome</b></code> rather than <code>_tests/testing/mochitest/<b>tests</b></code>.
</p><p>Before committing your new test and the Makefile.in changes, be sure to run Mochitest in an up-to-date trunk build to check that you will not unexpectedly turn the tree orange.
</p>
<h3 name="Testing_new_tests"> Testing new tests </h3>
<p>Before committing a new test  you should check that it does actually pass as you expect it to. To check your test, first export it to the Mochitest directory by running the command:
</p>
<pre class="eval">make
</pre>
<p>in the object directory corresponding to the test file's location in the source tree. Now open Mochitest as described above, but this time, instead of clicking on the "Run Tests" link, search for your test and click on it.
</p>
<h3 name="FAQ"> FAQ </h3>
<h4 name="How_do_I_get_started.3F"> How do I get started? </h4>
<p>Start by reading the introductory material provided by the article <a href="en/Mochitest">Mochitest</a>, as well as by {{template.Source("testing/mochitest/README.txt", "reading the README")}}.
</p>
<ul><li> {{template.Source("testing/mochitest")}} should be checked out as part of the normal checkout on trunk.
</li><li> mochitest is built by default (unless <code>--disable-mochitest</code> is specified in mozconfig) and installed into <code>$(OBJDIR)/_tests/testing/mochitest</code>
</li><li> To run tests
<ul><li> <code>cd $OBJDIR/_tests/testing/mochitest</code> (Note that the working directory matters for <code>runtests.pl</code>!)
</li><li> <code>perl runtests.pl --autorun</code>
</li></ul>
</li><li> The README has information on how to create a new mochitest testcase.
</li></ul>
<h4 name="How_do_the_comparison_functions_work.3F"> How do the comparison functions work? </h4>
<p><code>ok()</code> tests the truth of the first argument. 
</p>
<pre class="eval">ok(true == 1, "this passes");
ok(true === 1, "this fails");
</pre>
<p><code>is()</code> compares the first argument to the second using the loose-equality <code>==</code> operator.
</p>
<pre class="eval">is(true, 1, "1 equals true");
</pre>
<p>Here is a typical test failure:
</p>
<pre class="eval">var foo = "abc";
is(foo, "xyz", "foo should hold the last letters of the alphabet");
</pre>
<p>Mochitest will log the failure to all log listeners (console, file writer, screen, etc).
</p>
<pre class="eval"> &gt; 001: FAIL | Expected "xyz" got "abc" | foo should hold the last letters of the alphabet
</pre>
<h4 name="What_other_helper_functions_are_available.3F"> What other helper functions are available? </h4>
<p>Right now all of MochiKit is available (this will change in {{template.Bug(367393)}}); {{template.Bug(367569)}} added <code>sendChar</code>, <code>sendKey</code>, and <code>sendString</code> helpers. These are available in {{template.Source("testing/mochitest/tests/SimpleTest/EventUtils.js")}}.
</p>
<h4 name="How_do_I_avoid_typing_boilerplate.3F"> How do I avoid typing boilerplate? </h4>
<p>Use the quaint {{template.Source("testing/mochitest/gen_template.pl", "gen_template")}} perl script.
</p><p>Here's enough to be dangerous:
</p>
<pre class="eval">~/firefox/mozilla&gt; cd testing/mochitest/
~/firefox/mozilla/testing/mochitest&gt; perl gen_template.pl -b=123456 &gt; tests/test_bug123456.html
# Or for a chrome XUL testcase:
~/firefox/mozilla/testing/mochitest&gt; perl gen_template.pl -b=123456 --type=xul &gt; tests/test_bug123456.xul
</pre>
<p>The elements with id 'content' and 'display' in the generated file can be used by your script if you need elements to mess around with.
</p>
<h4 name="Where_do_the_tests_go.3F"> Where do the tests go?  </h4>
<p>They go somewhere near the code they're testing, hopefully in the same module, so that ownership of the test cases is clear.
</p>
<h4 name="What_do_the_results_look_like.3F"> What do the results look like? </h4>
<p>The test runner page shows a table with green and red rows for each test page.
</p>
<table border="1"><tbody><tr><td><img alt="Image:testrunner table.png" src="File:en/Media_Gallery/Testrunner_table.png"></td></tr></tbody></table>
<h4 name="Where_do_the_results_get_logged.3F"> Where do the results get logged? </h4>
<p>There are several possibilities. Right now, calling the test page (<span class="plain">http://localhost:8888/tests/index.html</span>) with parameters can make it log to a file and/or to the console, with varying logging levels (INFO/DEBUG/ERROR). See the top of {{template.Source("testing/mochitest/runtests.pl.in", "runtests.pl.in")}} for details.
</p>
<h4 name="How_do_I_find_an_error_in_the_log.3F"> How do I find an error in the log? </h4>
<p>Search for the string "ERROR FAIL" to find unexpected failures. You can also search for "SimpleTest FINISHED" to see the final test summary. This is particularly useful when viewing full Tinderbox logs, since the Mochitest output isn't necessarily at the end of the combined log.
</p>
<h4 name="What_if_my_tests_have_failures_in_them.3F"> What if my tests have failures in them? </h4>
<p>You still have to test that. Mochitest provides a <code>todo()</code> function that is identical to <code>ok()</code>, but is expected to fail.  We've also added <code>todo_is()</code> and <code>todo_isnot()</code> to match <code>is()</code> and <code>isnot()</code>.
</p>
<h4 name="What_if_my_tests_aren.27t_done_when_onload_fires.3F"> What if my tests aren't done when onload fires? </h4>
<p>Call <code>SimpleTest.waitForExplicitFinish()</code> before onload fires.  Then, when you're done, call <code>SimpleTest.finish()</code>.
</p>
<h4 name="What_if_I_need_to_change_a_preference_to_run_my_test.3F"> What if I need to change a preference to run my test? </h4>
<pre class="eval">netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
var prefService = Components.classes["@mozilla.org/preferences-service;1"]
                            .getService(Components.interfaces.nsIPrefService);
var domBranch = prefService.getBranch("dom.");
var oldVal = domBranch.getIntPref("max_script_run_time");
domBranch.setIntPref("max_script_run_time", 0);

// do what you need

domBranch.setIntPref("max_script_run_time", oldVal);
</pre>
<h4 name="Can_tests_be_run_under_a_chrome_URL.3F"> Can tests be run under a chrome URL? </h4>
<p>Yes, use <code>perl runtests.pl --chrome</code>. Keep in mind that the <a href="en/Writing_xpcshell-based_unit_tests">xpcshell test harness</a> should be your first choice for XPCOM testing. Only use Mochitest if you need events, browser features, networking, etc.
</p>
<h4 name="How_can_I_get_around_the_error_.22Permission_denied_to_get_property_XPCComponents.classes.22.3F"> How can I get around the error "Permission denied to get property XPCComponents.classes"? </h4>
<p>Adding the following line to your test file (and each event handler) will allow full XPCOM usage.
</p><p><code> netscape.security.PrivilegeManager.enablePrivilege('UniversalXPConnect'); </code>
</p><p>This approach is obviously inconvenient. That's why we're working on the build hacking necessary to copy tests into a chrome directory for testing.
</p>
<h4 name="How_do_I_change_the_HTTP_headers_or_status_sent_with_a_file_used_in_a_Mochitest.3F"> How do I change the HTTP headers or status sent with a file used in a Mochitest? </h4>
<p>Create a text file next to the file whose headers you want to modify.  The name of the text file should be the name of the file whose headers you're modifying followed by <code>^headers^</code>.  For example, if you have a file <code>foo.jpg</code>, the text file should be named <code>foo.jpg^headers^</code>.  (Don't try to actually use the headers file in any other way in the test, because the HTTP server's hidden-file functionality prevents any file ending in exactly one <code>^</code> from being served.)  Edit the file to contain the headers and/or status you want to set, like so:
</p>
<pre class="eval">HTTP 404 Not Found
Content-Type: text/html
Random-Header-of-Doom: 17
</pre>
<p>The first line sets the HTTP status and, optionally, a description associated with the file. This line is optional; you don't need it if you're fine with the normal response status and description. Any other lines in the file describe additional headers to add to or overwrite on the response (most typically the Content-Type header, in the latter case). The format follows the conventions of HTTP, except that you don't need HTTP line endings and you can't use a header more than once (the last line for a particular header wins). The file may end with at most one blank line to match Unix text-file conventions, but the trailing newline isn't strictly necessary.
</p>
<h4 name="How_do_I_test_issues_which_only_show_up_when_tests_are_run_across_domains.3F"> How do I test issues which only show up when tests are run across domains? </h4>
<p>The Mochitest harness runs one web server to serve tests, but through the magic of proxy autoconfig, all test files are available on a variety of different domains and ports.  Tests running on any of these servers (with two exceptions for testing privilege escalation functionality) automatically have the ability to request elevated privileges such as UniversalXPConnect.  The full list of domains and ports on which tests are served, all of which serve exactly the same content as <code><span class="plain">http://localhost:8888</span></code>, is:
</p>
<ul><li> <span class="plain">http://localhost:8888</span>
</li><li> <span class="plain">http://example.org:80</span>
</li><li> <span class="plain">http://test1.example.org:80</span>
</li><li> <span class="plain">http://test2.example.org:80</span>
</li><li> <span class="plain">http://sub1.test1.example.org:80</span>
</li><li> <span class="plain">http://sub1.test2.example.org:80</span>
</li><li> <span class="plain">http://sub2.test1.example.org:80</span>
</li><li> <span class="plain">http://sub2.test2.example.org:80</span>
</li><li> <span class="plain">http://example.org:8000</span>
</li><li> <span class="plain">http://test1.example.org:8000</span>
</li><li> <span class="plain">http://test2.example.org:8000</span>
</li><li> <span class="plain">http://sub1.test1.example.org:8000</span>
</li><li> <span class="plain">http://sub1.test2.example.org:8000</span>
</li><li> <span class="plain">http://sub2.test1.example.org:8000</span>
</li><li> <span class="plain">http://sub2.test2.example.org:8000</span>
</li><li> <span class="plain">http://example.com:80</span>
</li><li> <span class="plain">http://test1.example.com:80</span>
</li><li> <span class="plain">http://test2.example.com:80</span>
</li><li> <span class="plain">http://sub1.test1.example.com:80</span>
</li><li> <span class="plain">http://sub1.test2.example.com:80</span>
</li><li> <span class="plain">http://sub2.test1.example.com:80</span>
</li><li> <span class="plain">http://sub2.test2.example.com:80</span>
</li><li> <span class="plain">http://sectest1.example.org:80</span>
</li><li> <span class="plain">http://sub.sectest2.example.org:80</span>
</li><li> <span class="plain">http://sub1.ält.example.org:8000</span>
</li><li> <span class="plain">http://sub2.ält.example.org:80</span>
</li><li> <span class="plain">http://exämple.test:80</span>
</li><li> <span class="plain">http://sub1.exämple.test:80</span>
</li><li> <span class="plain">http://παράδειγμα.δοκιμή:80</span>
</li><li> <span class="plain">http://sub1.παράδειγμα.δοκιμή:80</span>
</li><li> <span class="plain">http://sectest2.example.org:80</span> (does <b>not</b> have ability to request UniversalXPConnect and friends)
</li><li> <span class="plain">http://sub.sectest1.example.org:80</span> (does <b>not</b> have ability to request UniversalXPConnect and friends)
</li></ul>
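<p>For example, a test page loaded from <code><span class="plain">http://localhost:8888</span></code> can point a frame at one of the other domains and verify that cross-origin restrictions behave as expected.  The following is a minimal sketch, not a complete test: the helper path is hypothetical, while <code>ok()</code> and <code>SimpleTest.finish()</code> are the usual Mochitest assertion and completion functions:
</p>
<pre class="eval">var frame = document.createElement("iframe");
// The same content is served on every domain listed above, so any test
// file can be loaded from a different origin.  (This path is hypothetical.)
frame.src = "http://example.com/tests/test_helper.html";
frame.onload = function() {
  try {
    // Cross-origin DOM access should throw a security exception.
    var title = frame.contentDocument.title;
    ok(false, "expected cross-origin access to throw");
  } catch (e) {
    ok(true, "cross-origin access threw as expected");
  }
  SimpleTest.finish();
};
document.body.appendChild(frame);
</pre>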
<p>Unfortunately, there is currently no support for running tests over non-HTTP protocols such as FTP or HTTPS in ways that are useful for cross-domain testing.  This limitation will probably be rectified in the future.
</p>
<h4 name="How_do_I_write_tests_that_check_header_values.2C_method_types.2C_etc._of_HTTP_requests.3F"> How do I write tests that check header values, method types, etc. of HTTP requests? </h4>
<p>To write such a test, you need to write an SJS (server-side JavaScript) for it.  An SJS is simply a JavaScript file with the extension <code>sjs</code> which is loaded in a sandbox; the global <code>handleRequest</code> function defined by the script is then called with request and response objects, and the script populates the response based on the information in the request.
</p><p>Here's an example of a simple SJS:
</p>
<pre class="eval">function handleRequest(request, response)
{
  // avoid confusing cache behaviors
  response.setHeader("Cache-Control", "no-cache", false);

  response.setHeader("Content-Type", "text/plain", false);
  response.write("Hello world!");
}
</pre>
<p>The exact properties of the request and response parameters are defined in the <code>nsIHttpRequestMetadata</code> and <code>nsIHttpResponse</code> interfaces in <code>{{template.Source("netwerk/test/httpserver/nsIHttpServer.idl", "nsIHttpServer.idl")}}</code>.  Note carefully: the browser is free to cache responses generated by your script, so if you ever want an SJS to return different data for multiple requests to the same URL, you should add a <code>Cache-Control: no-cache</code> header to the response to prevent the test from accidentally failing if it's manually run multiple times in the same Mochitest session.
</p><p>A simple example of an SJS used in reftests is <code>{{template.Source("modules/libpr0n/test/reftest/generic/check-header.sjs", "check-header.sjs")}}</code>.
</p>
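<p>As a slightly fuller sketch, an SJS can inspect the request's method and headers and echo what it finds back in the response body, so the test page can make a request (e.g. with <code>XMLHttpRequest</code>) and assert on what the server actually received.  The header name <code>X-Custom-Header</code> below is just an example:
</p>

```javascript
function handleRequest(request, response)
{
  // Prevent the browser from caching the response between requests.
  response.setHeader("Cache-Control", "no-cache", false);
  response.setHeader("Content-Type", "text/plain", false);

  // Echo the request method and an interesting header back to the test,
  // which can then assert on the response body.
  var body = request.method;
  if (request.hasHeader("X-Custom-Header"))
    body += " " + request.getHeader("X-Custom-Header");
  response.write(body);
}
```

<p>A POST request carrying <code>X-Custom-Header: 42</code> would get back the body <code>POST 42</code>.
</p>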
<div class="noinclude">
</div>
{{ wiki.languages( { "ja": "ja/Mochitest" } ) }}