This article documents future directions in functionality, design, and coding practices for SpiderMonkey. It can be read as something like an "ideal future state" for the engine. That means the code as it is today won't match this document, and that's OK. Whenever practical, new code and changes should move the code closer to the ideal future. And of course, this picture of the future will keep evolving as things change and we learn more.
Generational and compacting GC
We are converting SpiderMonkey to use generational garbage collection (GGC) with exact rooting. This requires pervasive changes to the internals and the APIs. We expect GGC to yield big reductions in nontrivial (>1 ms) GC pauses and in heap usage, so we're willing to change everything to get it done. We also want to implement compacting GC for further reductions in heap usage.
The big change is the new rooting mechanism. Currently, we conservatively scan the stack for GC roots, which isn't a good fit for a moving GC. In the new scheme, the internals use an exact internal rooting API. Final decisions about the external API haven't been made yet, but it will need to change for moving GC.
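To make the idea concrete, here is a toy analog of exact rooting in portable C++. The names (RootList, Rooted, moveObject) are invented for illustration and are not the real SpiderMonkey API: the point is only that each rooted pointer registers its own slot, so a moving collector can find and update every reference precisely, instead of conservatively scanning the stack for things that look like pointers.

```cpp
#include <cassert>
#include <vector>

struct Object { int value; };

// The set of all rooted slots: addresses of every live rooted pointer.
struct RootList {
    std::vector<Object**> slots;
};

// An exact root: registers the address of its pointer on construction
// and unregisters it on destruction (LIFO, like stack-allocated roots).
class Rooted {
  public:
    Rooted(RootList& roots, Object* obj) : roots_(roots), ptr_(obj) {
        roots_.slots.push_back(&ptr_);
    }
    ~Rooted() { roots_.slots.pop_back(); }
    Object* get() const { return ptr_; }

  private:
    RootList& roots_;
    Object* ptr_;
};

// A pretend moving/compacting step: the object relocates, and the
// collector fixes up every registered root so no stale pointer survives.
Object* moveObject(RootList& roots, Object* from) {
    Object* to = new Object(*from);
    for (Object** slot : roots.slots) {
        if (*slot == from) *slot = to;
    }
    delete from;
    return to;
}
```

The LIFO construction/destruction of Rooted mirrors the stack discipline of real rooting classes; after moveObject runs, every Rooted still points at the live copy.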
Parallelism
Most machines that run SpiderMonkey have multiple cores, so parallelism is almost certainly part of our future. It will take some experimentation to build the right set of parallel facilities. During that experimentation, we must take care to apply parallelism with conceptual clarity and discipline.
River Trail (bug 801869 and others) is an API for parallel programming in JS, based on a ParallelArray type and a set of operations, including map and fold. The parallel operations take JS functions as callbacks and run them in parallel, using SIMD operations (by compiling the callbacks in a special mode), threads, or both. In the SpiderMonkey River Trail threading model:
- The basic threading service is the thread pool. The thread pool has a fixed thread count and provides very simple multithreaded (MT) execution, with no complicated scheduling. See vm/ThreadPool.h.
- The ParallelArray MT implementation uses the higher-level fork-join facility. The fork-join facility is designed for tasks that can be split into N chunks (where the system has N execution threads: N-1 in the thread pool plus the main thread) and runs the chunks directly on the thread pool.
- Tasks run on multiple threads must not touch the runtime. They should only touch thread-local data.
- TODO: explain what memory those tasks will use for JS values.
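The fork-join model above can be sketched in portable C++. The function forkJoin here is hypothetical and far simpler than the real vm/ThreadPool.h facility: the work is split into N chunks, N-1 chunks run on worker threads, and the last chunk runs on the calling (main) thread, which then joins the workers as a barrier.

```cpp
#include <functional>
#include <thread>
#include <vector>

// Hypothetical fork-join sketch: split work into numThreads chunks,
// run chunks 0..numThreads-2 on worker threads and the final chunk on
// the calling (main) thread, then wait for all workers to finish.
void forkJoin(size_t numThreads, const std::function<void(size_t)>& chunk) {
    std::vector<std::thread> workers;
    for (size_t i = 0; i + 1 < numThreads; i++)
        workers.emplace_back(chunk, i);  // workers take chunks 0..N-2
    chunk(numThreads - 1);               // main thread runs the last chunk
    for (std::thread& t : workers)
        t.join();                        // barrier: all chunks done on return
}
```

Note that each chunk writes only to its own slot of the output, mirroring the rule above that parallel tasks touch only thread-local data and never the runtime.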
API
The SpiderMonkey API is C++.
For the near future, we do not intend to provide a stable API, because too many things need to change for projects like GGC. We will also someday want a more modern C++ API. Going forward, the API will likely be semi-stable, but there is no guarantee.
There is not yet a clear direction on which files contain the API or on what demarcates the official API from Gecko backdoors. More discussion is needed.
Practical coding bits
Experimental language features
New language features (ES6+) and APIs often get added to SpiderMonkey while the spec is still being drafted, or before the feature has stabilized. This is good, because it helps the spec work. For these features:
- Do not add vendor prefixes. See http://hsivonen.iki.fi/vendor-prefixes/ for an extended argument.
- Disable the feature before it reaches the Beta channel. Nightly and Aurora are the right channels for experimental work.
NSPR
The plan is to remove SpiderMonkey's dependency on NSPR. The main issue is that SpiderMonkey will then need another way to do multithreading.
JS_THREADSAFE is deprecated. The only reason we still have it is so that the shells can build without NSPR. Once the NSPR dependency is removed, JS_THREADSAFE should not be used.