If you assume two completely separate implementations, with an #ifdef every 10 lines, so that atomics and locking only exist in the --disable-gil build, then there is no slowdown for the --enable-gil build.
I don't think that is entirely the case though!
If the --disable-gil build becomes the default in the future, then peer pressure and packaging discipline will force everyone onto it. Then you have the OBVIOUS slowdown of atomic reference counting and of locking in other places.
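Aside (my sketch, not from the thread): you can check which build you're actually running, which matters if this ever does become the default. Assumes Python 3.13+ for `sys._is_gil_enabled()`; older versions fall through to the safe answer:

```python
import sys
import sysconfig

# "Py_GIL_DISABLED" is 1 in free-threaded (--disable-gil) builds;
# on regular builds it is 0 or absent (None).
free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

# sys._is_gil_enabled() exists on 3.13+. A free-threaded build can still
# run with the GIL turned back on (e.g. PYTHON_GIL=1), so check at runtime.
if hasattr(sys, "_is_gil_enabled"):
    gil_active = sys._is_gil_enabled()
else:
    gil_active = True  # pre-3.13 builds always have the GIL

print(f"free-threaded build: {free_threaded_build}, GIL active: {gil_active}")
```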
The advertised figures were around 20%, which would supposedly be offset by minor speedups in other areas. But if you compare against, say, Python 3.8, the slowdowns are still there, i.e. not offset by anything. Further down, on the second page of this discussion, the submitter of this blog post reports measurements of 30-40%.
Actual benchmarks of Python tend to be suppressed or downvoted, so they don't make the first page. The Java HotSpot VM had a similar policy that forbade benchmarks.
^ worth a read: the OP responds in the thread.
tldr, literally what I said:
> It also makes everything slower (arguable where that ends up, currently significantly slower) overall.
longer version:
If there was no reason for it to be slower, it would not be slower.
...but, implementing this stuff is hard.
Doing a zero cost implementation is really hard.
It is slower.
Where it ends up eventually is still a 'hm... we'll see'.
To be fair, they didn't lead the article here with:
> Right now there is a significant single-threaded performance cost. Somewhere from 30-50%.
They should have, because now people have a misguided idea of what this wip release is, and that's not ideal: if you install it, you'll find it's slow as balls, and that's not the message they were trying to put out with this release. This release was about being technically correct.
...but, it is slow as balls right now, and I'm not making that shit up. Try it yourself.
/shrug