
Tuning pageload can be beneficial if you know what you're doing. Firefox (and all Mozilla products and projects that do page loading) ships with what are considered the best settings for most cases. This document explains which preferences to tweak to affect your pageload time. To change them, use about:config or edit the prefs.js file in your profile.
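For reference, entries in prefs.js (or a user.js file in the same profile directory) are `user_pref()` calls, one per preference. A minimal sketch of the syntax, using two of the preferences covered in this document (the values shown are placeholders to illustrate the format, not recommendations):

```javascript
// prefs.js / user.js in your Firefox profile directory.
// Each entry is one user_pref(name, value) call; values here are
// placeholders showing the syntax, not recommended settings.
user_pref("content.notify.ontimer", true);         // boolean preference
user_pref("content.max.tokenizing.time", 360000);  // integer preference (microseconds)
```

Changes made in about:config take effect in the running browser; prefs.js is read at startup.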

To best understand what the following preferences affect, there are a few things you should know.

  • The data flows in Gecko as follows: network -> necko -> parser -> content sink -> content model -> rendering model -> layout -> painting.
nglayout.initialpaint.delay
This preference specifies a delay, in milliseconds, after the data from the server has started coming in. During this delay, the incoming page is not painted unless it finishes loading before the delay expires. The idea here is twofold.
  • This reduces ugly visual jitter as the new page comes in by not starting painting till after we have a bunch of the data.
  • This makes overall page load time shorter by not doing extra repaints very early on.
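The paint-suppression delay described above can be adjusted as sketched below. The preference name and values are my assumptions about this behavior; verify them in about:config before relying on them:

```javascript
// Assumed name of the initial-paint delay preference, in milliseconds.
// 0 paints as soon as data arrives (more repaints, more visual jitter);
// a larger value trades early visual feedback for fewer repaints and a
// shorter overall load.
user_pref("nglayout.initialpaint.delay", 0);
```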
content.interrupt.parsing
When this preference is true, the content sink can tell the parser to stop for now and return to the event loop, which allows layout and painting to happen. If the parser gets a large chunk of data, it will try to parse it all, building the corresponding content model. Since layout and painting happen asynchronously, there is no layout or painting while the parser is working. So this preference is used to increase responsiveness, especially on cached loads, where data reaches the parser in large chunks.
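A sketch of toggling parser interruption (the preference name matches the behavior described above but is my assumption; check about:config):

```javascript
// true  -> the content sink may interrupt the parser so layout and
//          painting can run during the load (more responsive)
// false -> the parser works through each chunk uninterrupted (shorter
//          total load time, less responsiveness)
user_pref("content.interrupt.parsing", true);
```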
content.max.tokenizing.time
Controls how often the sink interrupts the parser. The parser is interrupted at least every content.max.tokenizing.time microseconds, if it can be interrupted at all; bug 76722 may have more details on this part.
content.switch.threshold
Determines how often we switch content sink modes. There are two modes: in mode A we interrupt the parser every content.max.tokenizing.time microseconds; in mode B we interrupt it every 3000 microseconds. Every content.switch.threshold microseconds, we decide whether to be in mode A or mode B, based on whether there were any user events on the relevant widget in the last content.switch.threshold microseconds. That is, if the user is moving the mouse or typing in that window, we'll be more responsive; if there is no user activity, we aim for less parser interruption and lower overall load time at the cost of responsiveness.
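The two timing preferences above can be tuned together. This sketch biases toward lower total load time; the values are illustrative, not the shipped defaults:

```javascript
// Maximum time the parser may run before the sink interrupts it
// (mode A), in microseconds. Larger -> fewer interruptions, shorter
// total load, less responsiveness.
user_pref("content.max.tokenizing.time", 360000);
// How often (microseconds) to re-decide between mode A (the value
// above) and mode B (interrupt every 3000 microseconds), based on
// recent user activity on the widget.
user_pref("content.switch.threshold", 750000);
```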
content.notify.ontimer, content.notify.backoffcount
Control the information flow from the content sink to the rendering model. In particular, the way things work right now is that the parser and content sink construct the DOM; every so often, the content sink lets the rendering model constructor (nsCSSFrameConstructor) know that there are new DOM nodes. The reason for this is that nsCSSFrameConstructor is most efficient when doing a bunch of work at once instead of constructing rendering objects for one DOM node at a time. Specifically:
  • content.notify.ontimer controls whether the frame constructor is notified off a timer at all.
  • content.notify.backoffcount controls how many times that happens for a given page (the default is arbitrarily many times). Once the backoff count is reached, there is no more rendering model construction until the whole page is parsed.
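A hedged sketch of these notification preferences; the backoff value is an assumption on my part (the text above only says the default is "arbitrarily many times"), so verify it in about:config:

```javascript
// Notify nsCSSFrameConstructor of new DOM nodes off a timer.
user_pref("content.notify.ontimer", true);
// How many timer notifications to allow per page; once reached, frame
// construction waits for the full parse. A negative value meaning
// "unlimited" is my assumption, not confirmed by this document.
user_pref("content.notify.backoffcount", -1);
```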
content.maxtextrun
Controls the maximum length of the text in a single text node. If you have more than content.maxtextrun characters of text in a row, we'll create multiple text nodes for it. This is an optimization designed to keep the handling of long text runs from being O(N^2). It's also really a bug per the DOM spec, and we should stop doing it...
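A sketch of adjusting the text-run limit; the default shown is my assumption, so check the actual value in about:config:

```javascript
// Maximum number of characters placed into one text node before the
// sink splits the run into multiple text nodes. The value 8191 as the
// default is an assumption; verify in about:config.
user_pref("content.maxtextrun", 8191);
```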


Contributors to this page: Dolphinling, Dria, NickolayBot, Ppuryear, Waldo, Callek, Bzbarsky
Last updated by: Dolphinling