
    Measuring Code Coverage on Firefox

    What is Code Coverage?

    Code coverage is essentially about measuring how often certain lines are hit, branches are taken, or conditions are met in a program, given some tests that you run on it. There are different types of coverage metrics (see also the Wikipedia entry), but when we speak of code coverage here, we usually mean line and branch coverage, which is only concerned with hit counts for lines and branches.
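
    To make this concrete, here is a minimal, self-contained sketch of how line and branch hit counts are produced with GCC and gcov. This toy example is entirely separate from the Firefox build; the file name and program are made up for illustration.

    # Compile a tiny program with coverage instrumentation, run it,
    # and inspect the per-line and per-branch hit counts.
    printf 'int main(int argc, char **argv) { return argc > 1 ? 1 : 0; }\n' > toy.c
    gcc --coverage -c toy.c      # writes the notes file toy.gcno next to toy.o
    gcc --coverage -o toy toy.o  # link against the coverage runtime
    ./toy                        # writes the data file toy.gcda on exit
    gcov -b toy.c                # produces toy.c.gcov with line and branch hit counts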

    What Code Coverage tells us, and what it doesn't

    This question is not easy to answer comprehensively, but there are two very important things to keep in mind about what code coverage can, and cannot, tell us:

    • If a certain branch of code is not hit at all while running your tests, then you will never be able to find a bug in that particular piece of code using these tests.
    • If a certain branch of code is executed (even very often), this still doesn't tell you about the quality of your test. It could well be that a test exercises the code but does not actually check that the code performs correctly.

    In conclusion, we can use code coverage to find areas that need (more) tests, but we cannot use it to confirm that certain areas are well tested.

    C/C++ Code Coverage on Firefox

    There are several ways to get C/C++ coverage information for mozilla-central, including regularly updated coverage reports and creating your own coverage builds. The next sections describe the available options.

    Coverage Reports for Automated Tests

    There are weekly coverage reports available for the automated tests that the try server runs on mozilla-central. These reports include:

    • Tests run by `make check` (e.g. jit-test, compiled tests)
    • All Mochitests (mochitest 1-5, mochitest-other, mochitest-browser-chrome)
    • XPCShellTest
    • Reftest
    • Crashtest
    • JSReftest

    This is particularly interesting for developers who would like to know about the test coverage of the area they are working in, especially when contributing new code. It also makes sense to check coverage for existing code: if a certain part is poorly covered, more automated tests will help detect regressions earlier.

    Creating your own Coverage Build

    On Linux and Mac OS X it is straightforward to generate a gcov build using GCC. Adding the following lines to your .mozconfig file should be sufficient:

    # Enable code coverage
    export CFLAGS="-fprofile-arcs -ftest-coverage"
    export CXXFLAGS="-fprofile-arcs -ftest-coverage"
    export LDFLAGS="-fprofile-arcs -ftest-coverage -lgcov"
    Note: On Mac OS X, builds are typically done with Clang, which did not support gcov until recently. I have not yet tested whether, and how well, Clang's gcov support works by now, so if you try this, please update this section with the results.
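
    If you want to check whether your local Clang accepts the gcov-style instrumentation flags before attempting a full build, a quick smoke test along these lines should tell you (the file name is just an example):

    # Compile and run a trivial program with coverage instrumentation under Clang.
    printf 'int main(void) { return 0; }\n' > covcheck.c
    clang --coverage -c covcheck.c            # should write covcheck.gcno next to covcheck.o
    clang --coverage -o covcheck covcheck.o   # link against the coverage runtime
    ./covcheck                                # should write covcheck.gcda on exit
    ls covcheck.gcno covcheck.gcda            # if both files exist, gcov-style coverage works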

    You can then create your build as usual. Once the build is complete, you can run any tests or tools you like, and the coverage data is written to special files. To view and process this data, we recommend lcov, a tool for managing and visualizing gcov results:

    # Change to the objdir of your build
    cd objdir
    
    # See the warning below for why this might be required
    find . -name jchuff.gcda -delete
    
    # This collects (-c) all the coverage information from the current directory (-d .) and writes it to coverage.info
    lcov -c -d . -o coverage.info
    
    # This creates HTML data from the coverage.info file we just created, and writes HTML files to the coverage/ subdirectory.
    genhtml -o coverage coverage.info

    Once you have created HTML output from the coverage data, you can easily view it by pointing your browser to the output directory. The lcov tool also allows you to reset all coverage data for multiple runs, combine several coverage files, and perform various other tasks; for more information, see man lcov and man genhtml.
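
    For example, resetting the counters before a new run and merging the data from two runs look roughly like this (the .info file names are just placeholders):

    # Reset (zero) all coverage counters under the objdir before a new run
    lcov -z -d .

    # Merge the coverage data from two separate runs into one file
    lcov -a run1.info -a run2.info -o combined.info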

    Warning: The lcov tool often seems to hang when processing coverage information from jchuff.gcda. I suspect there is some inefficient part of the code that is exercised heavily just by this file. In order to create coverage information, it might be necessary to delete this file and do without its coverage data.

    Obtaining Coverage Data from a Try Push

    Generating coverage data from a Try push is a bit more complicated, mainly because we need to somehow download the .gcno and .gcda files from Try. For the .gcno files, which are created during the build, it's easy to include them in the build tarball that can be downloaded later. However, the requested tests are run separately (even on separate test slaves), with nothing that can be downloaded except the test log. As a workaround, we can include the coverage data in the respective test logs (it's a hack, but it works ^_^).

    The necessary patch and toolchain are maintained in a GitHub repository, consisting of:

    • m-c.patch - The patch that must be applied to mozilla-central before pushing. It makes the necessary changes to collect all the data on the Try server.
    • collect-try-results.py - This script obtains the tarball containing the .gcno files, as well as all test logs from the Try server.
    • unpack-gcda.py - This script must be called on every test log downloaded. It extracts the .gcno and .gcda data and creates files in the .info format.
    • map-headers.py - While the .info files created by unpack-gcda.py are already usable, this script tries to map header names from dist/ back to their original filenames in the tree, so that coverage for header files is attributed to the right place.

    The following diagram visualizes how the scripts work and interact with each other:


    This is also exactly the process that is automated to generate the weekly coverage data that is linked in one of the sections above.
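
    As a rough sketch, running the toolchain locally might look like the following. The exact arguments and log file names are assumptions for illustration only, so check the repository's documentation for the real invocation:

    # Illustrative sequence only; argument names and paths are assumed,
    # not taken from the repository documentation.

    # 1. Fetch the build tarball (containing the .gcno files) and all test logs
    python collect-try-results.py <try-revision> try-logs/

    # 2. Extract the coverage data embedded in each test log into .info files
    for log in try-logs/*.log; do
        python unpack-gcda.py "$log"
    done

    # 3. Map header paths under dist/ back to their original locations in the tree
    python map-headers.py try-logs/*.info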
