Creating JavaScript tests


About the JavaScript tests

The JavaScript test suite has a long history dating back to 1997. As a result, it is quite different from the other test suites available for Mozilla projects and from more modern test suites. Originally the suite was written solely for use with the JavaScript reference implementation shell program, covering the first edition of the ECMAScript standard and the Netscape-specific language features of JavaScript 1.1 and 1.2. Later, additional tests were added for the second and third editions of the ECMAScript standard and for later versions of JavaScript. The approaches used in developing new tests changed over time while the older tests kept their older approaches. Later still, the ability to run the tests directly in the browser was grafted onto the existing approaches. If the test suite seems to be a hodge-podge of code, that is because historical precedent has been the major driver in determining what approaches new tests have used.

Organization of the JavaScript tests

The JavaScript tests are located in the trunk of the mozilla.org CVS repository and the Mercurial repository at mozilla/js/tests.

The tests are organized into suites by sub-directory:

  ecma      tests for ECMAScript 1 (ECMA-262 1st edition)
  ecma_2    tests for ECMAScript 2 (ECMA-262 2nd edition)
  ecma_3    tests for ECMAScript 3 (ECMA-262 3rd edition)
  ecma_5    tests for ECMAScript 5 (ECMA-262 5th edition)
  e4x       tests for ECMAScript for XML (ECMA-357)
  js1_1     tests for JavaScript 1.1
  js1_2     tests for JavaScript 1.2
  js1_3     tests for JavaScript 1.3
  js1_4     tests for JavaScript 1.4
  js1_5     tests for JavaScript 1.5
  js1_6     tests for JavaScript 1.6
  js1_7     tests for JavaScript 1.7
  js1_8     tests for JavaScript 1.8
  js1_8_1   tests for JavaScript 1.8.1

Within each suite, the tests are organized into sub-suites. These sub-suites are typically related to the specification feature governing the behavior being tested.

To choose the suite where a test should be located, first decide if the test is governed directly by one of the standards (ECMA-262 or E4X) or is more related to a specific version of mozilla.org's JavaScript implementation.

If the test is governed by the ECMA-262 standard, place it under the ecma_3 directory in the appropriate sub-suite directory.

If the test is governed by the E4X standard, place it in the e4x directory in the appropriate sub-suite directory.

If the test is not directly related to a standard's specification of behavior and is instead governed by the language version, place it in the appropriate suite for that language version. If the test does not require JavaScript 1.6 or higher, place it in the js1_5 suite by default.

You will need to use your own judgement combined with the specifications in order to determine the sub-suite where a new test should be located. If you are not sure where the test should be located, please ask a JavaScript hacker or Bob Clary <bob at bclary.com>.

Some tests are required to be located in specific sub-suites.

  sub-suite       description
  extensions      tests which use mozilla.org-only extensions to the ECMAScript language that are not typically supported by other implementations; an example would be a test which uses the __proto__ property.
  decompilation   tests of mozilla.org's decompilation of JavaScript objects into source code.
  Regress         tests for regressions or for other issues which do not have a clear choice of sub-suite.
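
For example, a test that belongs in the extensions sub-suite might check the behavior of the mozilla-specific __proto__ property. A minimal sketch, using the reportCompare harness function described under "Choosing the comparison function" below:

var expect = true;
var actual = ({}).__proto__ === Object.prototype;
reportCompare(expect, actual, '__proto__ of an object literal is Object.prototype');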

Finding needed tests

If you are creating a test for a new language feature, work with the developer and with Eric (Sheppy) Shepherd <eshepherd at mozilla.com> to determine the expected behavior of the feature and determine the code required to test each aspect of the new feature. Typically, new features are worked on in bugs, so you can use the basic bugzilla approach outlined next.

If you are creating a test for a bug, the simplest case is when there is already a testcase described or attached to the bug. The bugzilla keyword testcase is used to flag bugs which already contain example code on which to base a test. The bugzilla flag in-testsuite is used to keep track of which test cases need to be developed and added to the test suite.

  flag            description
  in-testsuite    No determination of test status has been made.
  in-testsuite?   A test has been requested.
  in-testsuite+   A test has been added to CVS/Mercurial.
  in-testsuite-   A test is not possible using the suite.

Normally, JavaScript tests are not added until the bug has been fixed. This helps reduce test case churn for situations where the new behavior has not been completely defined and helps in reducing the need for tracking known failures in the tests.

You can use the in-testsuite flag to query bugzilla for tests which need to be added. A good starting point is a query for fixed bugs filed since 2005 which do not have tests already.

Creating the test case file

Test files in the JavaScript tests are text files with the extension .js.

Once you have the basic code to be used to perform the test, and have determined where in the tree the test should be located, copy the template.js file (e.g. {{ Source("js/tests/js1_5/template.js") }}) from the suite's directory into the appropriate sub-suite directory. By convention, test files are named for the specification section where the behavior is defined or for the bug number of the issue being fixed. If there is a possibility of more than one test, append a dash and a two-digit sequence number to the file name.

For example, if you are adding the third test for section 15.4.4.1 of the ECMAScript standard, you would copy the file {{ Source("js/tests/ecma_3/template.js") }} to the file named ecma_3/Array/15.4.4.1-03.js.

The historical convention for tests which are not directly related to a specification section is to use regress as the prefix of the file name, followed by the bug number and an optional sequence number.

For example, if you are adding the third test for bug 322135 relating to ECMAScript arrays, then you would copy the file js/tests/ecma_3/template.js to the file named {{ Source("js/tests/ecma_3/Array/regress-322135-03.js") }}.

JavaScript test files can include more than one test per file. If a set of tests can be run without interfering with each other, you can include them in a single file.
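
For instance, a single file can exercise two independent cases and report each one separately. A minimal sketch:

var summary = 'Array length basics';

var expect = 3;
var actual = [1, 2, 3].length;
reportCompare(expect, actual, summary + ': array literal length');

expect = 5;
actual = new Array(5).length;
reportCompare(expect, actual, summary + ': Array constructor length');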

Tests which should always be separated into different files are tests which modify the JavaScript object model (e.g. redefining prototypes), and tests which crash, assert, or time out (e.g. fail to complete within 8 minutes).

NOTE: Older tests which were developed prior to the introduction of exceptions used a different scheme to handle tests which terminated in the shell due to uncaught errors. Tests which expected an error were named with an -n suffix, which jsDriver.pl interpreted as meaning that a test which failed due to an uncaught error actually passed. THIS IS NOT RECOMMENDED FOR NEW TESTS. Whenever possible, you should write tests where all errors are caught and test results are reported by one of the recommended comparison functions described below.
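
For example, rather than relying on the deprecated -n scheme, a test which expects an error can catch it and report the result normally. A minimal sketch:

var expect = 'SyntaxError';
var actual = 'No Error';
try {
  eval('var = 1;');   // deliberately malformed code
}
catch(ex) {
  actual = ex.name;
}
reportCompare(expect, actual, 'malformed var declaration throws SyntaxError');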

Customizing the test

To customize the newly created test file:

  1. Record the contributor of the test. Give primary attribution for the test to the person who initially demonstrated the bug through a test case. Add additional contributors as needed.
  2. Modify the value of the gTestfile variable to contain the name of the test file (e.g. 15.4.4.1-03.js).
  3. Modify the value of the BUGNUMBER variable to contain the bug number for the test.
  4. Enter a summary describing the test. Note that it is important that you not include the strings 'Assertion failure:' or ': out of memory' in your summary or in the output from your test.
  5. Insert your test code into the body of the test() function prior to the call to the comparison function (e.g. reportCompare). In some cases, it may be necessary to place the test code outside of a function. If that is the case, simply remove the call to and definition of the function test() and place your test code at the global scope.

NOTE: If at all possible, you should write new tests so that any possible exception is caught and the exception printed. This eliminates the variability in behavior between the shell and browser as well as between the different platforms.
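
Putting these steps together, a customized test file might look roughly like the following. This is only a sketch based on the conventions described above; the exact contents of template.js vary between suites, the contributor shown is hypothetical, and helpers such as printBugNumber and printStatus are assumed to be provided by the suite's shell.js/browser.js harness.

// Contributor: Jane Doe <jane at example.com> (hypothetical)

var gTestfile = 'regress-322135-03.js';
var BUGNUMBER = 322135;
var summary = 'Describe the behavior being tested';
var actual = '';
var expect = '';

test();

function test()
{
  printBugNumber(BUGNUMBER);
  printStatus(summary);

  try
  {
    // Test code goes here. Catch any exception so that the result is
    // reported the same way in the shell and in the browser.
    expect = 'expected value';
    actual = 'expected value';
  }
  catch(ex)
  {
    actual = ex + '';
  }

  reportCompare(expect, actual, summary);
}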

Handling shell or browser specific features

The JavaScript tests can be run in the JavaScript shell or in the browser. Some tests make use of features that are only available in the shell or only in the browser. In addition, some tests may make use of Mozilla-only features which would cause other browsers such as Opera, Safari or Internet Explorer to fail if they attempted to execute the test. In these cases, be sure to use object detection to make sure that the specific feature the test requires is supported in the execution environment. For example, if the test uses document.write, you could write the test so that the shell would not attempt to execute document.write(). Currently, there is no way to report that a test was skipped. The convention for skipped tests is to force them to pass.

if (typeof document == 'undefined' || typeof document.write == 'undefined') {
  // The shell has no document object at all, so guard against that first.
  print('This test is only supported in a browser that supports document.write');
  expect = actual = 'Test skipped';
}
else {
  // exercise document.write here
}
reportCompare(expect, actual, description);

Choosing the comparison function

reportCompare

reportCompare(expected, actual, description) is used to test if an actual value (a value computed by the test) is equal to the expected value. If necessary, convert your values to strings before passing them to reportCompare. For example, if you were testing addition of 1 and 2, you might write:

expected = 3;
actual   = 1 + 2;
reportCompare(expected, actual, '3==1+2');
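
When the values being compared are not primitives (for example arrays or objects), converting both to strings first avoids spurious failures caused by comparing object identity. A small sketch:

expected = [1, 2, 3] + '';
actual   = [1, 2].concat(3) + '';
reportCompare(expected, actual, '[1,2].concat(3) produces [1,2,3]');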

reportMatch

reportMatch(expectedRegExp, actual, description) is used to test if an actual value is matched by an expected regular expression. This comparison is used where the actual value may vary within a set pattern, and also to allow tests to be shared between the C implementation of the JavaScript engine (SpiderMonkey) and the Java implementation (Rhino), which differ in their error messages, or when an error message has changed between branches. For example, a test which recurses to death can report InternalError: too much recursion on the 1.8 branch while reporting InternalError: script stack space quota is exhausted on the 1.9 branch. To handle this you might write:

actual   = 'No Error';
expected = /InternalError: (script stack space quota is exhausted|too much recursion)/;
try {
  f = function() { f(); };
  f();   // call the function so the unbounded recursion actually occurs
}
catch(ex) {
  actual = ex + '';
  print('Caught exception ' + ex);
}
reportMatch(expected, actual, 'recursion to death');

compareSource

compareSource(expected, actual, description) is used to test if the decompilation of a JavaScript object (conversion to source code) matches an expected value. Note that tests which use compareSource should be located in the decompilation sub-suite of a suite. For example, to test the decompilation of a simple function you could write:

var f  = (function () { return 1; });
expect = 'function () { return 1; }';
actual = f + '';
compareSource(expect, actual, 'decompile simple function');

Handling abnormal test terminations

Some tests can terminate abnormally even though the test has technically passed. Earlier we discussed the deprecated approach of using the -n naming scheme to identify tests whose PASSED/FAILED status is flipped by the post-test processing code in jsDriver.pl and post-process-logs.pl. A different approach is to use the expectExitCode(exitcode) function, which outputs a string

--- NOTE: IN THIS TESTCASE, WE EXPECT EXIT CODE <exitcode> ---

that tells the post-processing scripts jsDriver.pl or post-process-logs.pl that the test passes if the shell or browser terminates with that exit code. Multiple calls to expectExitCode will tell the post-processing scripts that the test actually passed if any of the exit codes are found when the test terminates.
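
For example, a test which deliberately lets an exception go uncaught could declare the exit codes it considers a pass. A hedged sketch (exit code 3 is the shell's exit code for an uncaught exception, as described in the next paragraph, and 0 covers the browser case):

// Tell the post-processing scripts which exit codes mean this test passed.
expectExitCode(0);
expectExitCode(3);

// This exception is intentionally left uncaught: the shell will exit
// with code 3, while the browser will continue and exit normally.
throw new Error('intentional uncaught exception');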

This approach has limited use, however. In the JavaScript shell, an uncaught exception or out of memory error will terminate the shell with an exit code of 3. However, an uncaught error or exception will not cause the browser to terminate with a non-zero exit code. To make the situation even more complex, newer C++ compilers will abort the browser with a typical exit code of 5 by throwing a C++ exception when an out of memory error occurs. Simply testing the exit code does not allow you to distinguish among the variety of causes a particular abnormal exit may have.

In addition, some tests pass as long as they do not crash; however, they may not terminate unless killed by the test driver.

A modification will soon be made to the JavaScript tests to allow an arbitrary string to be output which will be used to post process the test logs to better determine if a test has passed regardless of its exit code.

Performance testing

It is not possible to test all performance related issues using the JavaScript tests. In particular, it is not possible to test absolute timing values for a test, due to the varied hardware and platforms upon which the JavaScript tests will be executed. It may be the case that a particular bug improves an operation from 200ms to 50ms on your machine, but that will not be true in general. Tests which measure absolute times belong in other test frameworks such as Talos or Dromaeo.

It is possible to test ratios of times to some extent, although it can be tricky to write a test which will not be affected by the host machine's performance.

It is possible to test the polynomial time dependency of a test using the BigO function.

Testing polynomial time behavior

To test the polynomial time dependency, follow these steps:

  1. Create a test file as described above.
  2. Add a global variable var data = {X: [], Y: []} which you will use to record the sizes and times for executing a test function.
  3. Create a test function which takes a size argument and which times performing the operations of that size. Each size and time interval should be stored in the data object. Note that in order to reduce the possibility that a garbage collection will affect the timing of your test, you should call the gc() function after completing the timing of each size.
  4. Create a loop which will call your test function for a range of sizes.
  5. Calculate the order of the timing data you have collected by calling BigO(data).
  6. Perform a reportCompare comparison of the calculated order against the expected order.

For example, to test if the "Big O" time dependency of adding a character to a string is less than quadratic you might do something like:

var data = {X: [], Y:[]};

for (var size = 1000; size < 10000; size += 1000)
{
  appendchar(size);
}

var order = BigO(data);

reportCompare(true, order < 2, 'append character BigO < 2');

function appendchar(size)
{
  var i;
  var s = '';

  var start = new Date();
  for (i = 0; i < size; i++)
  {
    s += 'c';
  }
  var stop  = new Date();
  gc();
  
  data.X.push(size);
  data.Y.push(stop - start);
}

Note: The range of sizes and the increment between tests of different sizes can have an important effect on the validity of the test. You should strive to keep the minimum size above a certain value so that the minimum times are not too close to zero.

Testing your test

You must be sure to test your new test locally before checking it into the tree. You should test your new test on each of the supported platforms: Linux, Mac OS X, and Windows, using optimized and debug builds of the JavaScript shell and Firefox. It is recommended that you first test the new test using an unpatched shell and browser so that you are guaranteed that the test case reproduces the issues being fixed, such as crashes or assertions. This will also allow you to record the known failures for previous branches. Then you should test your new test with a patched shell and browser to make sure that the behavior your test expects is what the shell or browser actually does.

Checking in completed tests

Handling non-security sensitive tests

Once the test is complete and has been tested on all three main platforms (Linux, Mac OS X, and Windows), the test should be checked in to both the CVS trunk and the Mercurial mozilla-central repository. Commit messages should contain a reference to the primary contributor and the bug number.

Handling security sensitive tests

Security-sensitive tests should not be checked into CVS or Mercurial until they have been made public. Instead, attach the test to the bug as an attachment so that others can download it into their local test tree.

in-testsuite bugzilla flag

Once the test has been checked in or attached to the bug, flip the in-testsuite flag to + and copy the revision or changeset information in the bug to record that the bug has a test in the suite.
