It does if you ignore the overhead of JIT compilation itself. However, my understanding is that writing a JIT implementation that outperforms a good interpreter is surprisingly difficult: you need a lot of complicated logic for tracking hotspots and applying the JIT judiciously so that short-running scripts don't pay more in compilation than they gain.
It's the instruction dispatch overhead that's the real unavoidable problem. LuaJIT, for example, uses a bunch of tricks to minimize it in the bytecode VM, and it's significantly faster than the standard Lua VM but still far, far slower than basic JIT compilation.
LuaJIT is one of the most sophisticated dynamic language JITs out there, so it's hardly evidence that a simple implementation of a JIT will perform better than a good bytecode interpreter.
The problem is less acute for server side apps because the programs run for a long time, so that the initial compilation overhead is insignificant. However, there's a reason that you need a JIT to make Ruby fast rather than an ahead of time compiler. Ruby has so few compile-time guarantees that you need to do a lot of dynamic specialization to get really significant performance improvements. So compilation might still be triggered even after a script has been running for a long time.
I'd add that PyPy, which is also very sophisticated, is often not much faster than CPython, and in fact is slower for some types of code. Writing good JIT-based implementations for dynamic languages is really a tough problem. See e.g. the following post for some explanation of why:
Yes.
> LuaJIT is one of the most sophisticated dynamic language JITs out there, so it's hardly evidence that a simple implementation of a JIT will perform better than a good bytecode interpreter.
I meant that even a basic JIT can offer the same speedup as LuaJIT's interpreter, and a lot more work went into the latter.
> The problem is less acute for server side apps because the programs run for a long time, so that the initial compilation overhead is insignificant. However, there's a reason that you need a JIT to make Ruby fast rather than an ahead of time compiler. Ruby has so few compile-time guarantees that you need to do a lot of dynamic specialization to get really significant performance improvements. So compilation might still be triggered even after a script has been running for a long time.
The initial results of MJIT, from simply removing the instruction dispatch overhead and doing some basic optimizations, are a 30-230% performance increase on a small but real-world benchmark. No type specialization or speculative optimization required.
> I'd add that PyPy, which is also very sophisticated, is often not much faster than CPython, and in fact is slower for some types of code. Writing good JIT-based implementations for dynamic languages is really a tough problem. See e.g. the following post for some explanation of why:
Most of the discussion about PyPy is completely irrelevant to MJIT. PyPy isn't a method JIT; it's a meta-tracing JIT that traces the execution of the interpreter itself and compiles the hot traced paths. That approach works even worse at optimizing Ruby code, as Topaz shows.