Performance


mozStorage uses sqlite as the database backend. It generally performs well for a small embedded database, but many common usage patterns can make database operations slow.

Transactions

There is overhead associated with each transaction. When you execute a SQL statement in isolation, an implicit transaction is created around that statement. When a transaction is committed, sqlite updates its journal, which requires syncing data to disk. This operation is extremely slow. Therefore, if you are executing many statements in a row, you will get a significant performance improvement by wrapping them in a single transaction.
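
For example, a batch of inserts can be wrapped like this (a minimal sketch; the table and column names are hypothetical):

BEGIN TRANSACTION;
-- Without the surrounding transaction, each INSERT below would commit
-- (and sync the journal to disk) individually.
INSERT INTO items (name) VALUES ('first');
INSERT INTO items (name) VALUES ('second');
INSERT INTO items (name) VALUES ('third');
COMMIT; -- one journal sync covers all three statements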

If you are not using the advanced caching discussed below, the database cache in memory is cleared at the end of each transaction. This is another reason to use transactions, even if you are only executing read operations.

The asynchronous writes discussed below remove most of the immediate penalty of a commit, so you will not notice the problem as much. However, there is still overhead, and using a transaction will still be faster. One major problem is that the queue of file operations will get very long if there are many transactions. Some operations require walking this queue to see which operations are still pending, and they will be slower. If the user shuts down the browser soon after you do this type of operation, you can delay shutdown (possibly for many tens of seconds for large numbers of transactions and slow disks), making it look like the browser is hung.

Queries

Careful reordering of a SQL statement, or creating the proper indices, can often improve performance. See the sqlite optimizer overview (http://www.sqlite.org/optoverview.html) on the sqlite web site for information on how sqlite uses indices and executes statements.

You might also want to try the "explain" feature on your statements to see if they are using the indices you expect. Type "explain" followed by your statement to see the plan. For example: explain select * from moz_history; The results are hard to understand, but you should be able to see whether indices are being used. For a simpler, higher-level overview, use "explain query plan". For example:

sqlite> explain query plan select * from moz_historyvisit v join moz_history h
        on v.page_id = h.id where v.visit_date > 1000000000;

0|0|TABLE moz_historyvisit AS v WITH INDEX moz_historyvisit_dateindex
1|1|TABLE moz_history AS h USING PRIMARY KEY

This tells us that it will first look up in moz_historyvisit using an index, and will then look up in moz_history using the primary key. Both of these are "good" because they use indices and primary keys, which are fast.

sqlite> explain query plan select * from moz_historyvisit where session = 12;

0|0|TABLE moz_historyvisit

In this example, you can see that it is not using an index, so this query would be slow.
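
If this query matters for performance, adding an index on the session column would avoid the full table scan (a sketch; this index does not exist in the schema above, and its name is made up for illustration):

CREATE INDEX moz_historyvisit_sessionindex ON moz_historyvisit (session);

After creating it, the query plan should show something like 0|0|TABLE moz_historyvisit WITH INDEX moz_historyvisit_sessionindex rather than a bare table scan.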

You can download the command line tool from the sqlite download page (http://www.sqlite.org/download.html). Be sure you have a version of the command line tool that is at least as recent as the one Mozilla uses. As of April 10, 2006, Mozilla uses sqlite 3.3.4, but precompiled command line tools this recent are not available for all platforms. An older tool will fail with errors such as "database is encrypted" because it cannot recognize the newer file format. You may want to check the SQLITE_VERSION definition in {{template.Source("db/sqlite3/src/sqlite3.h")}} for the current version if you are having problems.

Caching

Sqlite has a cache of database pages in memory. It keeps pages associated with the current transaction so it can roll them back, and it also keeps recently used pages to speed up subsequent operations.

By default, it only keeps the pages in memory during a transaction (if you don't explicitly open a transaction, one will be opened for you enclosing each individual statement). At the end of a transaction, the cache is flushed. If you subsequently begin a new transaction, the pages you need will be re-read from disk (or hopefully the OS cache). This makes even simple operations block on OS file reads, which can be prohibitive on some systems with bad disk caches or networked drives.

You can control the size of the memory cache using the cache_size pragma. This value controls the number of pages of the file that can be kept in memory at once. The page size can be set using the page_size pragma before any operations have been done on the file. You can see an example of setting the maximum cache size to be a percentage of memory in nsNavHistory::InitDB in {{template.Source("browser/components/places/src/nsNavHistory.cpp")}}.
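
For example, a connection might be configured like this right after opening the database (an illustrative sketch; these values are not Mozilla's actual settings):

PRAGMA page_size = 4096;   -- must be issued before anything is written to a new file
PRAGMA cache_size = 2000;  -- keep up to 2000 pages (~8MB at 4096 bytes/page) in memory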

Keeping the cache between transactions

If your application uses many small transactions, you can get a significant performance improvement by keeping the cache live between transactions. This is done by opening an extra "dummy" connection to the same database with a special flag set so that the two connections share a cache. The dummy connection holds a perpetually open transaction, which keeps the cache locked in memory. Since the cache is shared with the main connection, it never expires.

The dummy transaction must be one that locks a page in memory. A simple BEGIN TRANSACTION statement doesn't do this because sqlite takes locks lazily. Therefore, you must execute a statement that modifies data. It might be tempting to run a statement on the sqlite_master table, which contains the information on the tables and indices in the database. However, if your application is initializing the database for the first time, this table will be empty and the cache won't be locked. nsNavHistory::StartDummyStatement creates a dummy table with a single row in it for this purpose.
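
In SQL terms, the dummy connection's work looks roughly like this (a sketch of the approach described above; the table name is hypothetical, and the transaction must be left open by the application rather than committed):

CREATE TABLE dummy_table (id INTEGER PRIMARY KEY); -- only on first initialization
BEGIN TRANSACTION;
INSERT OR REPLACE INTO dummy_table VALUES (1); -- modifies data, so a page is actually locked
-- ...do not COMMIT; leaving the transaction open keeps the shared cache alive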

It is important to note that when a statement is open, the database schema cannot be modified. This means that when the dummy transaction is running, you cannot create or modify any tables or indices, or vacuum the database. You will have to stop the dummy transaction, do the schema-modifying operation, and restart it.

Priming the cache

On startup, the cache is empty and pages are brought in on demand. Because the pages are read in essentially random order, and the default page size is only 1K, many disk seeks are required, and many operations will be slow at startup.

Mozilla has added the Preload() function to mozStorageConnection to load data into the cache in bulk. Because the data is loaded from disk in one contiguous chunk, there are no disk seeks, and performance can improve even though more data may be read into memory than is strictly needed.

This function must be called after the pager is open. This means that you must have done at least one read or write on the connection and still have an open transaction (this might be the dummy transaction discussed above). It loads data from disk up to the maximum size of the cache you have configured or the size of the file, whichever is smaller.

It reads this data starting from the beginning of the file and reads the pages in order. If your database is much larger than the cache, this may not work very well because no pages from the end of the file are brought into memory. Functionality may be added in the future to preload the pages that were in the cache at the end of the previous run.

Disk writes

Sqlite provides the basic ACID rules of database theory:

  • Atomicity: each transaction is atomic and cannot be "partially" committed.
  • Consistency: the database won't get corrupted.
  • Isolation: multiple transactions do not affect each other.
  • Durability: once a commit has gone through, the data is guaranteed to be committed.

The problem is that these requirements make some operations, especially commits, very slow. For each commit, sqlite does two disk syncs among many other file operations (see the "Atomic Commit" section of http://www.sqlite.org/php2004/slides-all.html for more information on how this works). These disk syncs are very slow and limit the speed of a commit to the rotational speed of the mechanical disk.

For the browser history, this overhead is unacceptably high. On some systems, the cost of committing a new page to the history database was as high as downloading the entire page (from a fast nearby page load test server) and rendering the page to the screen. As a result, Mozilla has implemented a lazy sync system.

Lazy writing

Mozilla has relaxed the ACID requirements in order to speed up commits. In particular, we have dropped durability. This means that when a commit returns, you are not guaranteed that the commit has gone through. If the power goes out right away, that commit may (or may not) be lost. However, we still support the other (ACI) requirements. This means that the database will not get corrupted. If the power goes out immediately after a commit, the transaction will behave as if it had been rolled back: the database will still be in a consistent state.

Higher commit performance is achieved by writing to the database from a separate thread (see {{template.Source("storage/src/mozStorageAsyncIO.cpp")}}, which is associated with the storage service in {{template.Source("storage/src/mozStorageService.cpp")}}). The main database thread does everything exactly as it did before. However, we have overridden the file operations, and everything comes through the AsyncIO module. This file is based on test_async.c (http://www.sqlite.org/cvstrac/rlog?f=sqlite/src/test_async.c) from the sqlite distribution.

The AsyncIO module packages writes into messages and puts them on the write thread's message queue. The write thread waits for messages and processes them as fast as it can. This means that writes, locking, and, most importantly, disk syncs block only the AsyncIO thread. Reads are done synchronously, taking into account unwritten data still in the buffer.

Shutdown

If you are doing many writes, the AsyncIO thread will fall behind. Ideally, the application will give this thread enough time to flush before exiting. If there are still items in the write queue at shutdown, the storage service will block until all data has been written. It then goes into single-threaded mode, where all operations are synchronous. This enables other services to still use the database after the storage service has received the shutdown message.

Durable transactions

There is currently no way to ensure durability for particularly important transactions where speed is less of an issue. A flush command to guarantee data has been written to disk may be added in the future.

Vacuuming and zero-fill

Sqlite has a VACUUM command to reclaim unused space in the database. Sqlite works like a memory manager or a file system: when data is deleted, the associated bytes are marked as free but are not removed from the file. This means that the file will not shrink, and some deleted data may still be visible in it. The way to work around this is to run the VACUUM command to remove this space.

Vacuuming is very slow. The vacuum command is essentially the same as the command line sqlite3 olddb .dump | sqlite3 newdb; mv newdb olddb. On some networked drives, vacuuming a 10MB database has been timed at over one minute. Therefore, you should avoid vacuuming whenever possible.

Some items in databases are privacy sensitive, such as deleted history items. Users expect that deleting items from their history will remove all traces of them from the database. As a result, Mozilla enables the SQLITE_SECURE_DELETE preprocessor flag in {{template.Source("db/sqlite3/src/Makefile.in")}}. This flag causes deleted items to be overwritten with 0s on disk. This eliminates the need to vacuum except to reclaim disk space, and makes many operations much faster.

Zero-filling can have significant performance overhead in some situations. For example, the history service used to delete many database items at shutdown when expiring old history items. This operation is not necessarily slow in itself, but writing 0s to disk in an "ACI" database still is. This made shutdown very slow because the AsyncIO thread would block it ({{template.Bug(328598)}}); shutdown times of more than 30 seconds were seen. As a result, that bug introduced incremental history expiration, eliminating the need to write many 0s to disk at shutdown.

Unfortunately, this behavior cannot be controlled on a per-transaction or per-connection basis. Some operations will benefit, while others will be hurt.
