Do browsers parse JavaScript on every page load?

These are the details that I’ve been able to dig up. It’s worth noting first that although JavaScript is usually thought of as interpreted and run on a VM, this isn’t really the case with modern engines, which tend to compile the source directly into machine code (with the exception of IE).


Chrome : V8 Engine

V8 has a compilation cache. This stores compiled JavaScript using a hash of the source for up to 5 garbage collections. This means that two identical pieces of source code will share a cache entry in memory regardless of how they were included. This cache is not cleared when pages are reloaded.

Source
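To picture what that means in practice, here is a hypothetical sketch (the source string and the way it is injected are mine, not from the Chrome team): both script elements carry byte-for-byte identical source, so under the behaviour described above the second inclusion should be served from the compilation cache rather than recompiled.

```js
// Hypothetical illustration of V8's source-hash keyed compilation cache.
// Both injected scripts contain identical source text, so per the behaviour
// described above the second one can reuse the cached compilation.
const source = 'window.greet = function () { return "hello"; };';

for (let i = 0; i < 2; i++) {
  const el = document.createElement('script');
  el.textContent = source;        // identical source text each time
  document.head.appendChild(el);  // compiled once, cache entry reused
}
```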


Update – 19/03/2015

The Chrome team have released details about their new techniques for JavaScript streaming and caching.

  1. Script Streaming

Script streaming optimizes the parsing of JavaScript files. […]

Starting in version 41, Chrome parses async and deferred scripts on a separate thread as soon as the download has begun. This means that parsing can complete just milliseconds after the download has finished, and results in pages loading as much as 10% faster.
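For context, the scripts this applies to are those marked async or defer. A minimal sketch of loading such a script from JavaScript (the URL is hypothetical):

```js
// Hypothetical example: async (and deferred) scripts are the ones Chrome 41+
// can parse on a background thread while the download is still in progress.
const el = document.createElement('script');
el.src = '/static/app.bundle.js'; // hypothetical URL
el.async = true;                  // eligible for the streaming parse
document.head.appendChild(el);
```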

  2. Code caching

Normally, the V8 engine compiles the page’s JavaScript on every visit, turning it into instructions that a processor understands. This compiled code is then discarded once a user navigates away from the page as compiled code is highly dependent on the state and context of the machine at compilation time.

Chrome 42 introduces an advanced technique of storing a local copy of the compiled code, so that when the user returns to the page the downloading, parsing, and compiling steps can all be skipped. Across all page loads, this allows Chrome to avoid about 40% of compile time and saves precious battery on mobile devices.
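There is no page-level API for this in the browser, but V8’s code cache can be poked at from Node.js, assuming a Node version that exposes it through the vm module. A rough sketch, not a description of what Chrome itself does internally:

```js
// Rough sketch of V8's code cache via Node's vm module (assumed available).
const vm = require('vm');

const source = 'Math.pow(2, 10);';

// First compile: ask V8 to produce a serialized copy of the compiled code.
const first = new vm.Script(source, { produceCachedData: true });
const cache = first.cachedData; // Buffer holding the code cache

// Later "visit": hand the cached data back so compilation can be skipped.
const second = new vm.Script(source, { cachedData: cache });
console.log('cache rejected?', second.cachedDataRejected); // false if reused
```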


Opera : Carakan Engine

In practice this means that whenever a script program is about to be
compiled, whose source code is identical to that of some other program
that was recently compiled, we reuse the previous output from the
compiler and skip the compilation step entirely. This cache is quite
effective in typical browsing scenarios where one loads page after
page from the same site, such as different news articles from a news
service, since each page often loads the same, sometimes very large,
script library.

Therefore compiled JavaScript is cached across page loads; two requests for the same script will not result in recompilation.

Source


Firefox : SpiderMonkey Engine

SpiderMonkey uses Nanojit, a JIT compiler, as its native back-end. The process of compiling the machine code can be seen here. In short, it appears to recompile scripts as they are loaded. However, a closer look at the internals of Nanojit shows that jstracer, the higher-level monitor used to track compilation, can transition through three stages, providing a benefit to Nanojit:

The trace monitor’s initial state is monitoring. This means that
spidermonkey is interpreting bytecode. Every time spidermonkey
interprets a backward-jump bytecode, the monitor makes note of the
number of times the jump-target program-counter (PC) value has been
jumped-to. This number is called the hit count for the PC. If the hit
count of a particular PC reaches a threshold value, the target is
considered hot.

When the monitor decides a target PC is hot, it looks in a hashtable
of fragments to see if there is a fragment holding native code for
that target PC. If it finds such a fragment, it transitions to
executing mode. Otherwise it transitions to recording mode.

This means that for hot fragments of code the native code is cached, so it will not need to be recompiled.
It is not made clear whether these cached native sections are retained between page refreshes, but I would assume that they are. If anyone can find supporting evidence for this, then excellent.
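To make the monitoring → recording → executing cycle concrete, here is a hypothetical loop of the kind the trace monitor would flag: the backward jump at the end of each iteration bumps the hit count for the loop header, and once that count crosses the threshold the loop body is compiled to, and then executed from, cached native code.

```js
// Hypothetical hot loop: each iteration ends in a backward jump to the loop
// header, so the trace monitor's hit count for that PC climbs quickly.
// Once it passes the "hot" threshold, jstracer records a trace and Nanojit
// compiles it to native code, which is reused on subsequent iterations.
function sumSquares(n) {
  let total = 0;
  for (let i = 0; i < n; i++) {   // backward jump target: this loop header
    total += i * i;
  }
  return total;
}

sumSquares(1000000); // enough iterations to comfortably exceed the threshold
```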

EDIT:
It’s been pointed out that Mozilla developer Boris Zbarsky has stated that Gecko does not yet cache compiled scripts (taken from this SO answer).


Safari : JavaScriptCore/SquirrelFish Engine

I think that the best answer for this implementation has already been given by someone else.

We don’t currently cache the bytecode (or the native code). It is an
option we have considered, however, currently, code generation is a
trivial portion of JS execution time (< 2%), so we’re not pursuing
this at the moment.

This was written by Maciej Stachowiak, the lead developer of Safari, so I think we can take it to be accurate.

I was unable to find any other information but you can read more about the speed improvements of the latest SquirrelFish Extreme engine here, or browse the source code here if you’re feeling adventurous.


IE : Chakra Engine

There is currently no information regarding IE9’s JavaScript engine (Chakra) in this area. If anyone knows anything, please comment.

This is quite unofficial, but for IE’s older engine implementations, Eric Lippert (a Microsoft developer of JScript) states in a blog reply here that:

JScript Classic acts like a compiled language in the sense that before any JScript Classic program runs, we fully syntax check the code, generate a full parse tree, and generate a bytecode. We then run the bytecode through a bytecode interpreter. In that sense, JScript is every bit as “compiled” as Java. The difference is that JScript does not allow you to persist or examine our proprietary bytecode. Also, the bytecode is much higher-level than the JVM bytecode — the JScript Classic bytecode language is little more than a linearization of the parse tree, whereas the JVM bytecode is clearly intended to operate on a low-level stack machine.

This suggests that the bytecode does not persist in any way and thus is not cached.
