Also, David Beazley has a great way of making difficult concepts approachable.
Why do Ruby and PyPy look similar in this benchmark?
GIL: The global interpreter lock, basically a single lock that every thread running in Python or Ruby must frequently acquire, so only one thread executes interpreter code at a time, which causes problems in multithreaded applications.
Priority inversion: I don't think I'm qualified to explain this clearly, so I'll defer to wikipedia: https://en.wikipedia.org/wiki/Priority_inversion
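To make the GIL point concrete, here is a minimal toy benchmark (my own sketch, not Beazley's code): a CPU-bound countdown gains nothing from running in two threads on CPython, because the threads take turns holding the GIL.

```python
# Toy demonstration of the GIL: two threads running a CPU-bound
# function take roughly as long as running it twice sequentially,
# because only the thread holding the GIL executes bytecode.
import threading
import time

def countdown(n):
    while n > 0:
        n -= 1

N = 5_000_000

# Run the workload twice, back to back, on one thread.
start = time.perf_counter()
countdown(N)
countdown(N)
sequential = time.perf_counter() - start

# Run the same two workloads "in parallel" on two threads.
start = time.perf_counter()
t1 = threading.Thread(target=countdown, args=(N,))
t2 = threading.Thread(target=countdown, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s  threaded: {threaded:.2f}s")
```

On CPython the threaded time is typically no better than the sequential one (and in Beazley's talks, often worse on multicore machines, due to GIL contention).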
The later slide, where he showed PyPy performing terribly on a benchmark that the other two implementations handled fine, was about an unrelated problem.
His point was that whereas he was able to dig into the Ruby implementation and figure out why it was running slow, the complexity of PyPy's implementation made it much harder to find out why it was not performing as it should.
This ties back to his VW-versus-Porsche analogy: he could often fix his VW with a pocket knife, which would be very difficult to do with a Porsche.
http://blip.tv/rupy-strongly-dynamic-conference/david-beazly...
Purpose-wise, Cython is for optimizing pieces of code; RPython is for writing interpreters. RPython is quite a bit faster than Cython, but it's also much more restricted, making it largely unusable for "just" speeding up pieces of code (you can't have PyObject equivalents).
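For context, typical Cython use looks something like this hypothetical snippet (function name and body are my own illustration): you take one hot function, add C type declarations to its locals, and the rest of the program keeps calling it with ordinary Python objects — exactly the kind of mixing that RPython's restrictions rule out.

```cython
# Hypothetical Cython example: speed up one hot loop by declaring
# C types, while callers still pass and receive normal Python ints.
def pairwise_sum(int n):
    cdef long total = 0
    cdef int i
    for i in range(n):
        total += i * (i + 1)   # runs as a plain C loop after compilation
    return total               # converted back to a Python int for the caller
```

RPython, by contrast, translates a whole (restricted) Python program ahead of time, so you can't sprinkle it into an existing codebase the way you can with Cython.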