But it is AOT! That means we could at least have some widely adopted and supported AOT solution.
The performance of Truffle is amazing when it works, but I think the compatibility issue is going to hamstring it for a while.
There was a JS engine performance war before Chrome's V8 came around, from JavaScriptCore to a whole bunch of monkeys [1] from Mozilla. Google's V8 just made the competition super heated, and every one of them was working around the clock trying to outcompete the others on the latest benchmarks. (That was the dark era when browsers only cared about JS benchmark scores and nothing in the real world.)
It would not be an exaggeration to say the man-hours poured into JS JITs exceed those spent on Perl's, Ruby's, and PHP's VMs combined. That is why I have often said Ruby is the only top-10 language that gets little to no funding and backing from FAANG. So both YJIT and Sorbet are much-needed contributions from Stripe and Shopify, along with help from GitHub and GitLab (hopefully :) ).
The VMs that get more resources than JS would be the JVM and .NET. And the JVM is a monster on its own: easily a multi-billion-dollar investment over all these years. Or there's something built with far fewer man-hours and resources, like LuaJIT. But then Mike Pall is a superhuman.
[2] https://github.com/sagemathinc/JSage/tree/main/packages/jpyt...
I think one could radically change the way Python objects work internally, and have the C foreign function interface (FFI) wrap every object passed to a C extension in an API/ABI-preserving facade (which itself would wrap any objects returned from its methods). However, this would probably greatly slow down C extensions, which are often the performance-critical sections of Python applications. It's also possible that portions of the C extension API expose enough details of object internals to make such facades herculean to implement. (I've only written some small, simple C extensions and am not very familiar with the API.)
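As a rough illustration of the facade idea, here's a pure-Python sketch only; the real shim would sit at the C API/ABI boundary, and `LegacyFacade`/`Point` are made-up names:

```python
class LegacyFacade:
    """Hypothetical wrapper exposing an old object interface over new
    internals. Objects returned from wrapped methods are wrapped too,
    so extension-style callers never touch the new layout directly."""

    def __init__(self, inner):
        object.__setattr__(self, "_inner", inner)

    def __getattr__(self, name):
        value = getattr(object.__getattribute__(self, "_inner"), name)
        if callable(value):
            def wrapped(*args, **kwargs):
                result = value(*args, **kwargs)
                # Recursively wrap any object result before handing it back
                return LegacyFacade(result) if hasattr(result, "__dict__") else result
            return wrapped
        return value


class Point:  # stand-in for an object whose internals were "radically changed"
    def __init__(self, x, y):
        self.x, self.y = x, y

    def moved(self, dx, dy):
        return Point(self.x + dx, self.y + dy)


p = LegacyFacade(Point(1, 2))
q = p.moved(3, 4)                  # returns another facade, not a raw Point
print(type(q).__name__, q.x, q.y)  # LegacyFacade 4 6
```

Note that every attribute access pays an extra dynamic indirection here, which is exactly the overhead worry for performance-critical extensions.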
V8 didn't have to deal with API/ABI compatibility with any preexisting C extensions that may have made too many abstraction-violating assumptions about how objects and the VM worked.
Breaking too many important C extensions would almost certainly send Python the way of Perl 6.
Edit: as an aside, a big difficulty with JS is that objects can have their prototype changed arbitrarily at runtime. Even with metaclass programming in Python, the class of an object almost never changes after creation (reassigning __class__ is possible but rare and heavily restricted), making it much easier to cache/memoize dynamic method dispatch. On the other hand, a high-performance implementation of Python's bound methods requires a bit more flow analysis than you need in JS. In Python, if you write f = x.y, f is a "bound method" (a closure that ensures x is passed as "self" to y). It's expensive to create a closure for each and every method invocation, so a high-performance implementation would need to do a bit of static analysis to identify which method look-ups are used purely for invocation, and which need to create the closure because the method itself is passed around or stored in a variable.
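A quick sketch of the bound-method cost being described (plain CPython semantics, nothing hypothetical here):

```python
class Greeter:
    def __init__(self, name):
        self.name = name

    def greet(self):
        return f"hello, {self.name}"

g = Greeter("world")

# Attribute access materializes a fresh bound-method object each time,
# carrying g along as "self":
f = g.greet
print(f())                   # hello, world
print(g.greet is g.greet)    # False: two accesses, two distinct objects

# A plain call g.greet() conceptually allocates the same intermediate
# object and throws it away, which is why an optimizing implementation
# wants static analysis to spot call-only look-ups and skip the allocation.
print(g.greet())             # hello, world
```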
Other languages struggle in this regard. Comparatively, I imagine far fewer developers work full-time on Ruby/Python, not to mention the budget constraints on hiring and retaining talent.
Source: Computer Languages Shootout
There are two main reasons for that.
1. For a good decade, from ~2005 to ~2015, Ruby was among the most often used tech stacks at startups, and any performance work on Ruby (of which the two major fronts were GC and JIT compilation) was perceived as extremely impactful and attractive.
2. Ruby is actually one of the most dynamic and complex programming languages out there, if not the most, and thus one of the most challenging to build a runtime for and optimize. It became a de-facto benchmark for JIT research (alongside the more conventional Java). Over the years many vendors invested in Ruby compilers to push their underlying VM technology: Microsoft sponsored IronRuby to improve their .NET runtime, Sun sponsored JRuby, and Oracle sponsors TruffleRuby.
As for progress, the biggest roadblock to Ruby JIT adoption has always been Rails. Rails uses a lot of Ruby features and pushes the language pretty far. Thus, you can't run Rails without your Ruby implementation being very complete and very MRI-compatible (MRI is the default Ruby implementation).
Plus, Rails uses Ruby in a way that defeats virtually all best practices for producing JIT-friendly code. Thus, there's no JIT compiler that offers any performance improvement for Rails apps, and in truth there might never be one. In fact, YJIT is exciting because it's the first JIT compiler that seems to offer some speedups for at least a few Rails benchmarks. People follow it closely, because these speedups might be a fluke, and as the compiler becomes more compliant, they may disappear (that has happened in the past with some JITs).
Other people tackle this problem by switching away from Rails to other frameworks. AFAIK Stripe themselves don't use Rails in most of their code, so they might benefit a lot from this work even without big improvements on common Ruby benchmarks (the language shootout, TechEmpower, the Discourse benchmarks).
They're not. YJIT really is 100% compatible with the regular MRI interpreter, already runs a small % of production traffic at Shopify, and fully passes the gigantic test suite of Shopify's 10+-year-old monolith as well as GitHub's test suite.
This is not a fluke, that's what you get by building a JIT directly inside MRI rather than starting from scratch. It's harder and slower, but you get full compatibility from day 1.
Rails uses Ruby in a way that defeats virtually all best practices for producing JIT-friendly code.
I've seen this mentioned a lot, and certainly the history of compiled Ruby + Rails benchmarks indicates this is true. However, I've never quite understood: why exactly is this the case?
Is it just the sheer size of Rails? Or is it (ab)using Ruby in weird ways?
edit:
This is the more or less definitive answer, I guess, though it's a bit over my head!
https://k0kubun.medium.com/ruby-3-jit-can-make-rails-faster-...
I’m giving a talk about the history of compiling Ruby at RubyConf.
- 2 Rubinius attempts (GNU Lightning and LLVM)
- MacRuby
- JRuby (do you count InDy separately?)
- IronRuby (do you count DLR separately? They didn't start with DLR, as far as I recall)
- MagLev - I think they hoped GemStone would JIT users' Ruby code with bits of the interpreter.
- TruffleRuby
- RubyOMR
- Vladimir's MJIT
- Koichi's MJIT
- YJIT
Do we count HotRuby and Opal? - both compile to JS.
I'm probably missing a few.
EDIT: mRuby, duh
Even for PHP the new VMs and speed improvements might be too late.