There is overhead associated with each transaction. When you execute a SQL statement in isolation, an implicit transaction is created around that statement. When a transaction is committed, sqlite does journaling, which requires syncing data to disk. This operation is extremely slow. Therefore, if you are executing many statements in a row, you will get significant performance improvements by wrapping them in a single transaction.
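As a sketch of the idea, using Python's sqlite3 module as a stand-in for the Storage API (the table and column names here are made up for illustration): all the inserts share one explicit transaction, so the journal is synced to disk only once instead of once per statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT)")

# One explicit transaction around all the inserts: a single commit,
# so the disk sync happens once rather than 1000 times.
with conn:  # commits on success, rolls back on exception
    conn.executemany(
        "INSERT INTO items (value) VALUES (?)",
        [("row %d" % i,) for i in range(1000)],
    )

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 1000
```

With autocommit (no surrounding transaction), each INSERT would pay the full commit cost on its own.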
The asynchronous writes discussed below remove most of the immediate penalty of a commit, so you will not notice the problem as much. However, there is still overhead, and using a transaction will still be faster. One major problem is that the queue of pending file operations can grow very long when there are many transactions. Some operations must walk this queue to see which operations are pending, and those become slower as the queue grows. If the user shuts down the browser soon after you do this type of operation, you can delay shutdown (possibly by many tens of seconds for large numbers of transactions and slow disks), making it look like the browser is hung.
Careful reordering of the SQL statement, or creating the proper indices, can often improve performance. See the sqlite optimizer overview on the sqlite web site for information on how sqlite uses indices and executes statements.
You might also want to try the "explain" feature on your statements to see whether they are using the indices you expect. Type "explain" followed by your statement to see the plan. For example:

explain select * from moz_history;

The results are hard to understand, but you should be able to see whether it is using indices. A command that gives you a higher-level overview is "explain query plan". For example:

sqlite> explain query plan select * from moz_historyvisit v join moz_history h on v.page_id = h.id where v.visit_date > 1000000000;
0|0|TABLE moz_historyvisit AS v WITH INDEX moz_historyvisit_dateindex
1|1|TABLE moz_history AS h USING PRIMARY KEY
This tells us that it will first look up in moz_historyvisit using an index, and will then look up in moz_history using the primary key. Both of these are "good" because they use indices and primary keys, which are fast.
sqlite> explain query plan select * from moz_historyvisit where session = 12;
0|0|TABLE moz_historyvisit
In this example, you can see that it is not using an index, so this query would be slow.
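The same check can be done programmatically. This sketch uses Python's sqlite3 module and an invented table modeled on moz_historyvisit; note that the exact wording of the plan output varies between sqlite versions (newer versions print lines like "SEARCH ... USING INDEX ..." rather than the "0|0|TABLE ..." format shown above), but an index lookup versus a full scan is still easy to tell apart.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits "
             "(id INTEGER PRIMARY KEY, visit_date INTEGER, session INTEGER)")
conn.execute("CREATE INDEX visits_dateindex ON visits (visit_date)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is a
    # human-readable description of one step of the plan.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

indexed = plan("SELECT * FROM visits WHERE visit_date > 1000000000")
unindexed = plan("SELECT * FROM visits WHERE session = 12")
print(indexed)    # mentions the visits_dateindex index
print(unindexed)  # a full table scan: no index mentioned
```

A query whose plan shows only a scan, like the session lookup here, is a candidate for a new index.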
You can download the command line tool from the sqlite download page. Be sure your copy of the command line tool is at least as recent as the version Mozilla uses. As of April 10, 2006, Mozilla uses sqlite 3.3.4, but a precompiled command line tool of that version is not available for all platforms. An older tool may not recognize the newer file format and will fail with errors that say "database is encrypted". If you are having problems, check the SQLITE_VERSION definition in db/sqlite3/src/sqlite3.h for the current version.
Sqlite has a cache of database pages in memory. It keeps pages associated with the current transaction so it can roll them back, and it also keeps recently used ones so it can run faster.
In the past, sqlite flushed the cache at the end of each transaction (by default). To keep the cache warm between transactions, you needed a second connection holding open a 'dummy' (and schema-changing) transaction. That is no longer necessary.
You can control the size of the memory cache using the cache_size pragma. This value controls the number of pages of the file that can be kept in memory at once. The page size can be set using the page_size pragma before any operations have been done on the file. You can see an example of setting the maximum cache size to be a percentage of memory in nsNavHistory::InitDB.
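A minimal sketch of the two pragmas, again using Python's sqlite3 module (the specific values 8192 and 2000 are arbitrary choices for illustration). Note that page_size only takes effect if it is issued before anything has been written to the database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# page_size must be set before the first table is created;
# after that, the database's page size is fixed.
conn.execute("PRAGMA page_size = 8192")
# cache_size is the number of pages kept in memory at once.
conn.execute("PRAGMA cache_size = 2000")
conn.execute("CREATE TABLE t (x)")

page_size = conn.execute("PRAGMA page_size").fetchone()[0]
cache_size = conn.execute("PRAGMA cache_size").fetchone()[0]
print(page_size, cache_size)  # 8192 2000
```

The memory ceiling for the cache is roughly page_size * cache_size, which is why code like nsNavHistory::InitDB can derive cache_size from a fraction of physical memory.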
Sharing the cache between connections
By default, Storage enables sqlite's shared-cache mode, which makes multiple connections to the same database share the same cache. The cache is threadsafe. All you need to do is use Services.storage.openDatabase(file). Some features (virtual tables, full-text indexes) are not compatible with the shared cache; in that case, use Services.storage.openUnsharedDatabase(file), which does not share the cache.
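Outside of the Storage API, the same mode can be requested directly from sqlite. This sketch uses Python's sqlite3 module with a "cache=shared" URI (a named in-memory database here, so the sharing is easy to observe in one process); the database name "shared_demo" is made up for illustration.

```python
import sqlite3

# Two connections opened with the same cache=shared URI use one
# shared cache; for a named in-memory database they also see the
# same data, which makes the sharing observable.
uri = "file:shared_demo?mode=memory&cache=shared"
conn_a = sqlite3.connect(uri, uri=True)
conn_b = sqlite3.connect(uri, uri=True)

conn_a.execute("CREATE TABLE notes (body TEXT)")
conn_a.execute("INSERT INTO notes VALUES ('hello')")
conn_a.commit()

# The second connection reads what the first one wrote.
body = conn_b.execute("SELECT body FROM notes").fetchone()[0]
print(body)  # hello
```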
Sqlite provides the basic ACID rules of database theory:
- Atomicity: each transaction is atomic and cannot be "partially" committed.
- Consistency: the database won't get corrupted.
- Isolation: multiple transactions do not affect each other.
- Durability: once a commit has gone through, the data is guaranteed to be committed.
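Atomicity in particular is easy to demonstrate. In this sketch (Python's sqlite3 module; the accounts table is invented for illustration), a two-step transfer fails halfway through, and the rollback leaves the database exactly as it was, never in a "partially committed" state.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('a', 100), ('b', 0)")
conn.commit()

try:
    with conn:  # rolls the transaction back if the block raises
        conn.execute("UPDATE accounts SET balance = balance - 100 "
                     "WHERE name = 'a'")
        # Simulate a crash before the matching credit is applied.
        raise RuntimeError("simulated failure")
except RuntimeError:
    pass

# The debit was rolled back: no money was lost or half-moved.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'a': 100, 'b': 0}
```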
The problem is that these guarantees make some operations, especially commits, very slow. For each commit, sqlite performs two disk syncs, among many other file operations (see the "Atomic Commit" section of http://www.sqlite.org/php2004/slides-all.html for more information on how this works). These disk syncs are very slow and limit the speed of a commit to the rotational speed of the disk.
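sqlite exposes this syncing behavior through the synchronous pragma, which trades durability for speed; this is a general sqlite knob, not something the text above prescribes, and turning it off means a crash or power loss can lose or corrupt committed data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# 2 = FULL (sync on every commit), 1 = NORMAL, 0 = OFF.
# OFF skips the disk syncs entirely: commits become fast,
# but durability (the "D" in ACID) is sacrificed.
conn.execute("PRAGMA synchronous = OFF")
mode = conn.execute("PRAGMA synchronous").fetchone()[0]
print(mode)  # 0
```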
Vacuuming and zero-fill
Sqlite has a VACUUM command to compress unused space from the database. Sqlite works like a memory manager or a file system. When data is deleted, the associated bytes are marked as free but are not removed from the file. This means that the file will not shrink, and some data may still be visible in the file. The way to work around this is to run the VACUUM command to remove this space.
Vacuuming is very slow. The vacuum command is essentially equivalent to the command line:

sqlite3 olddb .dump | sqlite3 newdb; mv newdb olddb

On some networked drives, vacuuming a 10MB database has been timed at over one minute. Therefore, you should avoid vacuuming whenever possible.
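The "freed but not removed" behavior can be observed via the freelist_count pragma, which reports how many pages are marked free inside the file. A sketch using Python's sqlite3 module (the table and sizes are made up for illustration):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE blobs (data BLOB)")
conn.executemany("INSERT INTO blobs VALUES (?)",
                 [(b"x" * 1024,) for _ in range(500)])
conn.commit()

conn.execute("DELETE FROM blobs")
conn.commit()
# Deleted pages go on the freelist; the file does not shrink.
free_before = conn.execute("PRAGMA freelist_count").fetchone()[0]

conn.execute("VACUUM")  # rebuilds the file, dropping the free pages
free_after = conn.execute("PRAGMA freelist_count").fetchone()[0]
print(free_before > 0, free_after)  # True 0
```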
Some items in databases are privacy sensitive, such as deleted history items. Users expect that deleting items from their history will remove the traces of those items from the database. As a result, Mozilla enables the SQLITE_SECURE_DELETE preprocessor flag in db/sqlite3/src/Makefile.in. This flag causes deleted items to be overwritten with zeros on disk. This eliminates the need to vacuum except to reclaim disk space, and makes many operations much faster.
Zero-filling can have significant performance overhead in some situations. For example, the history service used to delete many database items at shutdown when expiring old history. The deletion itself is not necessarily slow, but writing zeros to disk in an "ACI" database still is. This made shutdown very slow because the AsyncIO thread would block shutdown (bug 328598); shutdown times of more than 30 seconds were seen. As a result, that bug introduced incremental history expiration, eliminating the need to write many zeros to disk at shutdown.
Unfortunately, secure deletion cannot be controlled on a per-transaction or per-connection basis: some operations will benefit from it, while others will be hurt.