
    How Mozilla's build system works


    This document is targeted at Mozilla developers who need to work on Mozilla's build system. It explains the basic concepts and terminology of the build system, and how to do common tasks such as compiling components and creating jar files.

    This document is not intended for people who just want to build Mozilla. For that, see the Build Documentation.

    For many people, knowing how to type mach build to build the tree is sufficient to work with the source tree. However, for those seeking more, the rabbit hole goes very deep.


    When you type mach build to build the tree, there are 3 high-level phases that occur within the build system:

    1. System detection and validation.
    2. Preparation of the build backend.
    3. Invocation of the build backend.

    Phase 1: configure

    Phase 1 centers around the configure script. The configure script is a bash shell script. The file is generated from a file called configure.in, which is written in M4 and processed using Autoconf 2.13 to create the final configure script. You don't have to worry about how you obtain a configure file: the build system does this for you.

    The primary job of configure is to determine characteristics of the system and compiler, apply options passed into it, and validate that everything looks OK to build. The primary output of the configure script is an executable file in the object directory called config.status. configure also produces some additional files (such as autoconf.mk). However, the most important file in terms of architecture is config.status.

    The existence of a config.status file may be familiar to those who have worked with Autoconf before. However, Mozilla's config.status is different from almost any other config.status you've ever seen: it's written in Python! Instead of having our configure script produce a shell script, we have it generate Python.

    Now is as good a time as any to mention that Python is prevalent in our build system. If we need to write code for the build system, we do it in Python. That's just how we roll.

    config.status contains 2 parts: data structures representing the output of configure and a command-line interface for preparing/configuring/generating an appropriate build backend. (A build backend is merely a tool used to build the tree - like GNU Make or Tup). These data structures essentially describe the current state of the system and what the existing build configuration looks like. For example, it defines which compiler to use, how to invoke it, which application features are enabled, etc. You are encouraged to open up config.status to have a look for yourself!
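    As an illustration only, the data half of a Python-flavoured config.status might carry structures shaped like the sketch below. The real generated file is much larger, and the names and values here are hypothetical:

```python
# Hypothetical sketch of the kind of data config.status carries.
# The real generated file is larger; names and values here are
# illustrative only, not read from an actual build.

# Values substituted in by configure: which compiler, which flags, etc.
substs = {
    "CC": "clang",
    "CXX": "clang++",
    "MOZ_APP_NAME": "firefox",
}

# Preprocessor defines describing enabled features.
defines = {
    "MOZ_DEBUG": 1,
}

def describe():
    """Summarize the configuration, the way a backend consumer might."""
    return "%s built with %s" % (substs["MOZ_APP_NAME"], substs["CXX"])

print(describe())
```

    Because the output is Python data rather than shell text, downstream tools can import and query it directly instead of re-parsing it.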

    Once we have emitted a config.status file, we pass into the realm of phase 2.

    Phase 2: Build Backend Preparation and the Build Definition

    Once configure has determined what the current build configuration is, we need to apply this to the source tree so we can actually build.

    What essentially happens is that the automatically-produced config.status Python script is executed as soon as configure has generated it. config.status is charged with the task of telling a tool how to build the tree. To do this, config.status must first scan the build system definition.

    The build system definition consists of various moz.build files in the tree. There is roughly one moz.build file per directory or per set of related directories. Each moz.build file defines how its part of the build config works. For example, it says "I want these C++ files compiled" or "look for additional information in these directories." config.status starts with the main moz.build file and then recurses into all referenced moz.build files and directories. As the moz.build files are read, data structures describing the overall build system definition are emitted. These data structures are then read by a build backend generator, which converts them into files, function calls, etc. In the case of a make backend, the generator writes out Makefiles.

    When config.status runs, you'll see the following output:

    Reticulating splines...
    Finished reading 1096 files into 1276 descriptors in 2.40s
    Backend executed in 2.39s
    2188 total backend files. 0 created; 1 updated; 2187 unchanged
    Total wall time: 5.03s; CPU time: 3.79s; Efficiency: 75%

    What this is saying is that a total of 1096 files were read. Altogether, 1276 data structures describing the build configuration were derived from them. It took 2.40s wall time just to read these files and produce the data structures. The 1276 data structures were fed into the build backend, which then determined it had to manage 2188 files derived from those data structures. Most of them already existed and didn't need to be changed. However, 1 was updated as a result of the new configuration. The whole process took 5.03s, although only 3.79s of that was CPU time. That means we spent roughly 25% of the time waiting on I/O.
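    The efficiency figure in that output is simply CPU time divided by wall-clock time, which is easy to verify from the numbers above:

```python
# Efficiency as reported in the config.status output above:
# CPU time divided by wall-clock time.
wall_time = 5.03  # seconds (total wall time)
cpu_time = 3.79   # seconds (CPU time)

efficiency = cpu_time / wall_time
print("Efficiency: %d%%" % round(efficiency * 100))
```

    Anything below 100% means the process spent part of its wall time blocked, typically on disk I/O.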

    Phase 3: Invocation of the Build Backend

    When most people think of the build system, they think of phase 3. This is where we take all the code in the tree and produce Firefox or whatever application you are creating. Phase 3 effectively takes whatever was generated by phase 2 and runs it. Since the dawn of Mozilla, this has been make consuming Makefiles. However, with the transition to moz.build files, you may soon see non-Make build backends, such as Tup or Visual Studio.

    When building the tree, most of the time is spent in phase 3. This is when header files are installed, C++ files are compiled, files are preprocessed, etc.

    Recursive Make Backend

    The recursive make backend is the tried and true backend used to build the tree. It's what's been used since the dawn of Mozilla. Essentially, there are Makefiles in each directory. make starts processing the Makefile in the root directory and then recursively descends into child directories until it's done.

    If only it were that simple.

    The recursive make backend divides the source tree into tiers. A tier is a grouping of related directories containing Makefiles of their own. For example, there is a tier for the Netscape Portable Runtime (nspr), one for the JavaScript engine, one for the core Gecko platform, one for the XUL app being built, etc.

    The main moz.build file defines the tiers and the directories in them. In reality, the main moz.build file includes other files such as /toolkit/toolkit.mozbuild, which define the tiers. They do this via the add_tier_dir() function.

    At build time, the tiers are traversed in the order they are defined. Typically, the traversal order looks something like base, nspr, nss, js, platform, app.

    Each tier consists of 3 sub-tiers: export, libs, and tools. These roughly correspond to the actions of pre-build, main-build, and post-build, although that is poor naming because all 3 are really part of the build. export is used to do things like copy headers into place. libs is reserved for most of the work, like compiling C++ source files. tools is used for installing tests and other support tools.

    When make is invoked, it starts at the export sub-tier of the first tier, and traverses all the directories in that tier. Then, it does the same thing for the libs sub-tier. Then the tools sub-tier. It then moves on to the next tier. And so forth until there are no tiers remaining.
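    The traversal just described amounts to a set of nested loops. Here is a schematic sketch in Python; the tier names and directory lists are illustrative examples, not the real tier definitions:

```python
# Schematic of recursive make's tier traversal: for each tier, run every
# sub-tier across all of that tier's directories before moving on to the
# next tier. Tier and directory names below are illustrative only.
tiers = {
    "nspr": ["nsprpub"],
    "js": ["js/src"],
    "platform": ["xpcom", "netwerk", "dom", "layout"],
}
subtiers = ["export", "libs", "tools"]

def traversal_order(tiers, subtiers):
    """Return the (tier, sub-tier, directory) steps in build order."""
    steps = []
    for tier, dirs in tiers.items():
        for subtier in subtiers:  # export, then libs, then tools
            for d in dirs:
                steps.append((tier, subtier, d))
    return steps

for step in traversal_order(tiers, subtiers):
    print(step)
```

    Note that every directory in a tier finishes its export sub-tier before any directory in that tier starts libs, which is why headers are guaranteed to be in place before compilation begins.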

    To view information about the tiers, you can execute the following special make targets:

    Command Effect
    make echo-tiers Show the final list of tiers.
    make echo-dirs Show the list of non-static source directories to iterate over, as determined by the tier list.
    make echo-variable-STATIC_DIRS Show the list of static source directories to iterate over, as determined by the tier list.

    moz.build Files

    moz.build files are how each part of the source tree defines how it is integrated with the build system. You can think of each moz.build file as a data structure telling the build system what to do.

    During build backend generation, all moz.build files relevant to the current build configuration are read and converted into files and actions used to build the tree (such as Makefiles). In this section, we'll talk about how moz.build files actually work.

    An individual moz.build file is actually a Python script. However, they are unlike most Python scripts you will ever see. The execution environment is highly controlled so that moz.build files can only perform a limited set of operations. Essentially, moz.build files are limited to performing the following actions:

    1. Calling functions that are explicitly made available to the environment.
    2. Assigning to a well-defined set of variables whose names are UPPERCASE.
    3. Creating new variables whose names are not UPPERCASE (this includes defining functions).

    It's worth calling out what moz.build files cannot do:

    • Import modules.
    • Open files.
    • Use the print statement or function.
    • Reference many of Python's built-in/global functions (they are not made available to the execution environment).
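    A toy illustration of this sandboxing idea is to execute the script with a controlled global namespace (no builtins, only whitelisted helper functions) and then keep only the UPPERCASE assignments. This is not Mozilla's actual implementation, and the add_dirs helper below is invented for the example:

```python
# Toy sandbox: execute a moz.build-like script with no builtins and only
# whitelisted helpers, then collect the UPPERCASE variables. This is an
# illustration of the idea only, not Mozilla's real sandbox code.

def run_sandboxed(source):
    exported = {}

    def add_dirs_helper(*dirs):  # hypothetical exported function
        exported.setdefault("EXTRA_DIRS", []).extend(dirs)

    # Empty __builtins__ blocks imports, open(), print, etc.
    env = {"__builtins__": {}, "add_dirs": add_dirs_helper}
    exec(source, env)

    # Keep UPPERCASE variables; lowercase names stay private to the script.
    result = {k: v for k, v in env.items() if k.isupper()}
    result.update(exported)
    return result

script = """
DIRS = ['public', 'src']
helper_value = 'not exported'   # lowercase: private to this script
CPP_SOURCES = ['module.cpp']
add_dirs('tests')
"""

print(run_sandboxed(script))
```

    The two export mechanisms in the sketch mirror actions #1 and #2 from the list above: calling an exported function, and assigning to an UPPERCASE variable.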

    The most important actions of moz.build files are #1 and #2 from the above list. These are how the execution of a moz.build file tells the build system what to do. For example, you can assign to the DIRS list to define which directories to traverse into when looking for additional moz.build files.

    The output of the execution of an individual moz.build file is a Python dictionary. This dictionary contains the UPPERCASE variables directly assigned to, as well as special variables indirectly assigned to by calling functions exported to the execution environment. When we said you can think of moz.build files as data structures, this is what we were referring to.

    UPPERCASE Variables and Functions

    The set of special symbols available to moz.build files is centrally defined and is under the purview of the build config module. To view the variables and functions available in your checkout of the tree, run the following:

    mach mozbuild-reference

    Or, you can view the raw source under /python/mozbuild/mozbuild/frontend/.

    How Processing Works

    For most people, it is sufficient to just know that moz.build files are Python scripts that are executed and emit Python dicts describing the build config. If you insist on knowing more, this section is for you.

    All the code for reading moz.build files lives under /python/mozbuild/mozbuild/frontend/. mozbuild is the name of our Python package that contains most of the code defining how the build system works. Yes, it's easy to confuse with moz.build files. Sorry about that. sandbox.py contains code for a generic Python sandbox; this is the code used to restrict the environment moz.build files are executed under. reader.py contains the code that defines the actual sandbox (the MozbuildSandbox class) and the code for traversing a tree of moz.build files (by following DIRS and TIERS variables). The latter is the BuildReader class. A BuildReader is instantiated with a configuration and then is told to read the source tree. It emits a stream of MozbuildSandbox instances corresponding to the executed moz.build files.

    The stream of MozbuildSandbox instances produced by the BuildReader is typically fed into the TreeMetadataEmitter class from emitter.py. The role of TreeMetadataEmitter is to convert the low-level MozbuildSandbox dictionaries into higher-level, function-specific data structures. These data structures are the classes defined in data.py. Each class defines a specific aspect of the build system, such as directories to traverse, C++ files to compile, etc. The output of TreeMetadataEmitter is a stream of instances of these classes.
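    Schematically, the emitter's job looks like the sketch below. The class and field names here are invented for illustration; the real classes live in the mozbuild package:

```python
# Sketch of converting low-level sandbox dicts into typed objects, in the
# spirit of TreeMetadataEmitter. Class and field names are invented.
from dataclasses import dataclass

@dataclass
class DirectoryTraversal:
    dirs: list

@dataclass
class Sources:
    cpp_files: list

def emit(sandbox_dicts):
    """Yield typed objects derived from raw moz.build-style dicts."""
    for d in sandbox_dicts:
        if "DIRS" in d:
            yield DirectoryTraversal(dirs=d["DIRS"])
        if "CPP_SOURCES" in d:
            yield Sources(cpp_files=d["CPP_SOURCES"])

raw = [{"DIRS": ["dom", "layout"]}, {"CPP_SOURCES": ["nsFoo.cpp"]}]
for obj in emit(raw):
    print(obj)
```

    The point of the typed layer is that downstream consumers can dispatch on object type rather than inspecting raw dictionary keys.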

    The stream of build-system-describing class instances emitted from TreeMetadataEmitter is then fed into a build backend. A build backend is simply an instance of a child class of BuildBackend (in the mozbuild.backend package now, not mozbuild.frontend). The child class implements methods for processing individual class instances as well as common hook points, such as when processing has finished. See recursivemake.py for an implementation of a BuildBackend.
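    In outline, a backend is just a class that is handed each object in turn, plus a hook that fires when the stream is exhausted. A minimal sketch follows; the method names are illustrative, not a statement of the real API:

```python
# Minimal sketch of the backend pattern: consume each emitted object,
# then receive a hook when the stream is done. Method names here are
# illustrative, not necessarily the real BuildBackend API.
class CountingBackend:
    def __init__(self):
        self.seen = []

    def consume_object(self, obj):
        # A real backend would write Makefiles, project files, etc. here.
        self.seen.append(obj)

    def consume_finished(self):
        # Hook point: all objects have been processed.
        return "processed %d objects" % len(self.seen)

backend = CountingBackend()
for obj in ["DirectoryTraversal", "Sources"]:
    backend.consume_object(obj)
print(backend.consume_finished())
```

    A non-building consumer, such as the line counter or Clang compilation database mentioned below, would follow exactly this shape while doing something other than writing build files.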

    While we call the base class BuildBackend, the class doesn't need to be concerned with building at all. If you wanted to create a consumer that performed a line count of all C++ files or generated a Clang compilation database, for example, this would be an acceptable use of a BuildBackend.

    Technically, we don't need to feed TreeMetadataEmitter's output into a BuildBackend: it's possible to create your own consumer. However, BuildBackend provides a common framework from which to author consumers. Along the same vein, you don't need to use TreeMetadataEmitter to consume MozbuildSandbox instances. Nor do you need to use BuildReader to traverse the files! This is just the default framework we've established for our build system.

    Legacy Content


    Makefile basics

    Makefiles can be quite complicated, but Mozilla provides a number of built-in rules that should enable most Makefiles to be quite simple. Complete documentation for make is beyond the scope of this document, but is available in the GNU make documentation.

    One concept you will need to be familiar with is variables in make. Variables are defined by the syntax VARIABLE = VALUE, and the value of a variable is referenced by writing $(VARIABLE). All variables are strings.

    All Makefile.in files in Mozilla have the same basic format:

    DEPTH           = ../../../..
    topsrcdir       = @top_srcdir@
    srcdir          = @srcdir@
    VPATH           = @srcdir@
    include $(DEPTH)/config/autoconf.mk

    # ... Main body of Makefile goes here ...
    include $(topsrcdir)/config/rules.mk
    # ... Additional rules go here ...
    • The DEPTH variable should be set to the relative path from your Makefile to the toplevel Mozilla directory.
    • topsrcdir is substituted in by configure, and points to the toplevel mozilla directory.
    • srcdir is also substituted in by configure, and points to the source directory for the current directory. In source tree builds, this will simply point to "." (the current directory).
    • VPATH is a list of directories where make will look for prerequisites (i.e. source files).

    One other frequently used variable not specific to a particular build target is DIRS. DIRS is a list of subdirectories of the current directory to recursively build in. Subdirectories are traversed after their parent directories. For example, you could have:

    DIRS = \
      public \
      resources \
      src \
      $(NULL)

    This example demonstrates another concept: continuation lines. A backslash as the last character on a line allows the variable definition to be continued on the next line. The extra whitespace is compressed. The terminating $(NULL) is a convention for consistency; it allows you to add and remove lines without worrying about whether the last line has an ending backslash or not.

    Makefile examples

    Building libraries

    There are three main types of libraries that are built in Mozilla:

    • Components are shared libraries (except in static builds) which are installed to dist/bin/components. Components are not linked against by any other library.
    • Non-component shared libraries include libraries such as libxpcom and libmozjs. These libraries are installed to dist/bin and are linked against. You will probably not need to create a new library of this type.
    • Static libraries are often used as intermediate steps to building a shared library, if there are source files from several directories that are part of the shared library. Static libraries may also be linked into an executable.

    Non-component shared libraries

    A non-component shared library is useful when there is common code that several components need to share, and sharing it through XPCOM is not appropriate or not possible. As an example, let's look at a portion of the Makefile for libmsgbaseutil, which is linked against by all of the mailnews components:

     DEPTH           = ../../..
     topsrcdir       = @top_srcdir@
     srcdir          = @srcdir@
     VPATH           = @srcdir@
     include $(DEPTH)/config/autoconf.mk
     MODULE          = msgbaseutil
     LIBRARY_NAME    = msgbaseutil
     SHORT_LIBNAME   = msgbsutl

    Notice that the only real change from the component example above is that IS_COMPONENT is not set. When this is not set, a shared library will be created and installed to dist/bin.

    Static libraries

    As mentioned above, static libraries are most commonly used as intermediate steps to building a larger library (usually a component). This allows you to spread out the source files in multiple subdirectories. Static libraries may also be linked into an executable. As an example, here is a portion of the Makefile from layout/base/src:

     DEPTH           = ../../..
     topsrcdir       = @top_srcdir@
     srcdir          = @srcdir@
     VPATH           = @srcdir@
     include $(DEPTH)/config/autoconf.mk
     MODULE          = layout
     LIBRARY_NAME    = gkbase_s
     # REQUIRES and CPPSRCS omitted here for brevity #
     # we don't want the shared lib, but we want to force the creation of a static lib.
     FORCE_STATIC_LIB = 1
     include $(topsrcdir)/config/rules.mk

    The key here is setting FORCE_STATIC_LIB = 1. This creates libgkbase_s.a (on Unix) and gkbase_s.lib (on Windows), and copies it to dist/lib. Now, let's take a look at how to link several static libraries together to create a component:

     DEPTH           = ../..
     topsrcdir       = @top_srcdir@
     srcdir          = @srcdir@
     VPATH           = @srcdir@
     include $(DEPTH)/config/autoconf.mk
     MODULE          = layout
     LIBRARY_NAME    = gklayout
     IS_COMPONENT    = 1
     MODULE_NAME     = nsLayoutModule
     CPPSRCS         = nsLayoutModule.cpp
     SHARED_LIBRARY_LIBS = \
                     $(DIST)/lib/$(LIB_PREFIX)gkhtmlbase_s.$(LIB_SUFFIX) \
                     $(DIST)/lib/$(LIB_PREFIX)gkhtmldoc_s.$(LIB_SUFFIX) \
                     $(DIST)/lib/$(LIB_PREFIX)gkhtmlforms_s.$(LIB_SUFFIX) \
                     $(DIST)/lib/$(LIB_PREFIX)gkhtmlstyle_s.$(LIB_SUFFIX) \
                     $(DIST)/lib/$(LIB_PREFIX)gkhtmltable_s.$(LIB_SUFFIX) \
                     $(DIST)/lib/$(LIB_PREFIX)gkxulbase_s.$(LIB_SUFFIX) \
                     $(DIST)/lib/$(LIB_PREFIX)gkbase_s.$(LIB_SUFFIX) \
                     $(DIST)/lib/$(LIB_PREFIX)gkconshared_s.$(LIB_SUFFIX) \
                     $(DIST)/lib/$(LIB_PREFIX)gkxultree_s.$(LIB_SUFFIX) \
                     $(DIST)/lib/$(LIB_PREFIX)gkxulgrid_s.$(LIB_SUFFIX) \
                     $(NULL)
     include $(topsrcdir)/config/rules.mk

    SHARED_LIBRARY_LIBS is set to a list of static libraries which should be linked into this shared library. Note the use of LIB_PREFIX and LIB_SUFFIX to make this work on all platforms.
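    The reason for LIB_PREFIX and LIB_SUFFIX is that static library filenames follow different conventions per platform. A hypothetical helper makes the mapping concrete (the mappings below are the conventional values, not read from the real build config):

```python
# Illustration of why LIB_PREFIX/LIB_SUFFIX exist: static library
# filenames differ across platforms. These mappings are conventional
# values for illustration, not read from the real build config.
CONVENTIONS = {
    "unix": ("lib", "a"),    # e.g. libgkbase_s.a
    "windows": ("", "lib"),  # e.g. gkbase_s.lib
}

def static_lib_filename(name, platform):
    """Build a platform-appropriate static library filename."""
    prefix, suffix = CONVENTIONS[platform]
    return "%s%s.%s" % (prefix, name, suffix)

print(static_lib_filename("gkbase_s", "unix"))
print(static_lib_filename("gkbase_s", "windows"))
```

    Writing $(LIB_PREFIX)gkbase_s.$(LIB_SUFFIX) in the Makefile lets a single rule expand to the right filename on every platform.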

    Building jar files

    Jar files are used for packaging chrome files (XUL, JavaScript, and CSS). For more information on Jar packaging, you can read this document. Here we will only cover how to set up a Makefile to package jars. Here is an example:

     DEPTH           = ../../../..
     topsrcdir       = @top_srcdir@
     srcdir          = @srcdir@
     VPATH           = @srcdir@
     include $(DEPTH)/config/autoconf.mk
     include $(topsrcdir)/config/rules.mk

    That's right, there are no extra variables to define. If a jar.mn file exists in the same directory as this Makefile, it will automatically be processed. Although the common practice is to have a resources directory that contains the jar.mn and chrome files, you may also put a jar.mn in a directory that creates a library, and it will be processed.

    See the glossary of Makefile variables for information about specific variables and how to use them.

    Original Document Information

    • Author: Brian Ryner
    • Copyright Information: Portions of this content are © 1998–2006 by individual contributors; content available under a Creative Commons license


    Document Tags and Contributors

    Last modified by: jimblandy