JSON.stringify is a core JavaScript function for serializing data. Its performance directly affects common operations across the web, from serializing data for a network request to saving data to localStorage. A faster JSON.stringify translates to quicker page interactions and more responsive applications. That’s why we’re excited to share that a recent engineering effort has made JSON.stringify in V8 more than twice as fast. This post breaks down the technical optimizations that made this improvement possible.
A Side-Effect-Free Fast Path
The foundation of this optimization is a new fast path built on a simple premise: if we can guarantee that serializing an object will not trigger any side effects, we can use a much faster, specialized implementation. A "side effect" in this context is anything that breaks the simple, streamlined traversal of an object.
This includes not only the obvious cases like executing user-defined code during serialization, but also more subtle internal operations that might trigger a garbage collection cycle. For more details on what exactly can cause side effects and how you can avoid them, see Limitations.
As long as V8 can determine that serialization will be free from these effects, it can stay on this highly optimized path. This allows it to bypass many of the expensive checks and defensive logic required by the general-purpose serializer, resulting in a significant speedup for the most common types of JavaScript objects that represent plain data.
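To make this concrete, here is a sketch of the kinds of inputs involved. The plain-data object below triggers no user code when serialized, while the others execute user-defined code (a toJSON method, a getter, a replacer), which is exactly the kind of side effect described above. Precisely which inputs qualify for the fast path is a V8-internal detail; see Limitations.

```js
// Plain data: only own enumerable data properties holding primitives, arrays
// and nested plain objects. Serializing this runs no user code, so it is the
// kind of input a side-effect-free fast path can target.
const plainData = {
  id: 42,
  name: 'widget',
  tags: ['a', 'b'],
  nested: { price: 9.99, inStock: true },
};
JSON.stringify(plainData);

// These inputs all execute user-defined code during serialization, which is
// exactly the kind of side effect that forces the general-purpose path:
const withToJSON = { value: 1, toJSON() { return { value: 2 }; } };
const withGetter = { get value() { return Math.random(); } };
JSON.stringify(withToJSON);                    // calls toJSON()
JSON.stringify(withGetter);                    // runs the getter
JSON.stringify(plainData, (key, val) => val);  // replacer runs for every property
```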
Furthermore, the new fast path is iterative, in contrast to the recursive general-purpose serializer. This architectural choice eliminates the need for stack overflow checks, lets us quickly resume after an encoding change, and also allows developers to serialize significantly deeper nested object graphs than was previously possible.
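The difference is easy to see with a deeply nested object. A recursive serializer is bounded by the call stack, while an iterative one is bounded only by memory. The depth used below is an arbitrary illustration; the exact limit of a recursive serializer depends on stack size and engine version.

```js
// Build a deeply nested chain of objects iteratively.
let deep = {};
let current = deep;
for (let i = 0; i < 100_000; i++) {
  current.child = {};
  current = current.child;
}

try {
  JSON.stringify(deep);
  console.log('serialized successfully');
} catch (e) {
  // A recursive serializer may throw a RangeError at this depth.
  console.log('failed:', e.message);
}
```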
Handling Different String Representations
Strings in V8 can be represented with either one-byte or two-byte characters. If a string contains only ASCII characters, it is stored as a one-byte string in V8, using one byte per character. However, if a string contains even a single character outside that range, all characters of the string use a two-byte representation, essentially doubling the memory usage.
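As a rough illustration (the byte counts below consider only the character data and ignore per-string object overhead):

```js
// Both strings have the same number of characters, but they are represented
// differently inside the engine:
const oneByte = 'hello';   // fits in a one-byte representation: 5 characters ≈ 5 bytes
const twoByte = 'hell☃';   // '☃' (U+2603) cannot be stored in one byte, so every
                           // character uses two bytes: 5 characters ≈ 10 bytes
// The character data roughly doubles, even though only one character changed.
```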
To avoid the constant branching and type checks of a unified implementation, the entire stringifier is now templatized on the character type. This means we compile two distinct, specialized versions of the serializer: one completely optimized for one-byte strings and another for two-byte strings. This has an impact on binary size, but we think the increased performance is definitely worth it.
The implementation handles mixed encodings efficiently. During serialization, we already have to inspect each string's instance type to detect representations the fast path can't handle, such as ConsString, whose flattening might trigger a GC and therefore requires a fallback to the slow path. This necessary check also reveals whether a string uses one-byte or two-byte encoding.
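For example, a string built by concatenation may be represented internally as a ConsString, a rope-like structure that defers copying the character data. Whether a particular concatenation actually produces one is an engine-internal detail, so the snippet below is only illustrative.

```js
const base = 'a'.repeat(1000);
// Concatenating long strings in V8 typically produces a ConsString rather
// than copying the characters immediately.
const concatenated = base + base;

// Serializing such a string may require flattening it into a contiguous
// buffer. That allocation can trigger GC, which is why ConsString is one of
// the cases that falls back to the general-purpose slow path.
JSON.stringify({ value: concatenated });
```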