Agreed - this is mostly due to age. Mersenne Twister appearing in so many places is terrible, since it has bad mixing properties (especially around 0, but the same thing shows up elsewhere), is error-prone to seed as a result, and is huge and slow.
I doubt default C++ rand() ever uses MT - it's far too slow. MT appeared in Boost, then in the standard library (and, like all C++ stuff, a decade after it was obsolete, unfortunately).
The xo*-style ones are decent, but PCG seems to outperform them at just about everything someone needs from a fast PRNG with good statistical properties.
>The JavaScript one decent (only because it generates floats using it).
I can almost guarantee they make the usual errors trying to convert to float. :)
Converting PRNG output to float is a notoriously error-prone minefield, so if a language hands you a float directly, you can almost certainly assume the value is not as uniform as it should be.
The first problem many make is that the underlying source had better be uniform as integers, and I've run across many language implementations that were not. PRNGs with period 2^N-1 are a common source of error compared to those with period 2^N. The PRNG should also have other nice properties, such as k-equidistribution for high enough k; many more fail that.
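To make the 2^N-1 issue concrete, here's a small sketch I'd add (not from V8; xorshift64 is just a convenient example of the class of generator I mean). An xorshift-style generator permutes the 2^64-1 nonzero states and outputs the state directly, so the value 0 can never be produced - the integer output is already non-uniform before any float conversion happens:

```cpp
#include <cstdint>

// Marsaglia's xorshift64: full period 2^64 - 1 over the NONZERO states.
// Because the output IS the state, and 0 is a fixed point of the map
// (unreachable from any valid nonzero seed), the output 0 never occurs.
// Any float built from this output inherits that slight non-uniformity.
static uint64_t xorshift64(uint64_t& s) {
    s ^= s << 13;
    s ^= s >> 7;
    s ^= s << 17;
    return s;
}
```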
EDIT: ha ha - called it. Here are the JavaScript (V8) RNG functions [1]. They at least admit the double in [0,1) is not uniform :) They start off just as I claimed they would, using a 2^N-1 period, push this non-uniform value into the mantissa as raw bits (guaranteeing non-uniformity for anything downstream), then elsewhere in the code they take this at-most-53-bit mantissa, multiply it by a 64-bit value, and use the result as if it were truly 64 bits. What a mess. This is the state of most libraries when I inspect them. This kind of flaw shows up when you run large numerical simulations (weather, nuke testing, giant finance, physics/planetary sims...) and the underlying bias corrupts your results. It's hard to test for events occurring once in trillions when the underlying code is this flaky.
Then, and here is the great part: they quite often simply divide this by 2^N as a float, which means many possible floating-point values are never produced, and the ones that are certainly don't appear with the frequency one needs. Representable doubles are denser near zero - [1/4,1/2) contains as many doubles as all of [1/2,1), and likewise for each halving below it - so the division leaves out huge numbers of possible values. A float has 24 bits of significand precision, a double 53. Then a person often multiplies this [0,1) float back into an integer range, and suddenly you've lost tons of the properties you wanted: uniform distribution, all values equally likely, etc.
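The uneven spacing is easy to verify with std::nextafter; this is a sketch I've added to illustrate (the helper name is mine):

```cpp
#include <cmath>

// Distance from x to the next representable double above it (one "ulp").
// Doubles thin out as the exponent grows: the gap just above 0.75 (in the
// binade [1/2,1)) is exactly twice the gap just above 0.25 (in [1/4,1/2)).
// Meanwhile, dividing a 32-bit integer by 2^32 can only land on multiples
// of 2^-32, skipping the vast majority of representable doubles in [0,1).
static double ulp_above(double x) {
    return std::nextafter(x, x + 1.0) - x;
}
```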
So the float version is nearly always a bad choice.
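For reference, a conversion that avoids these pitfalls is short. This is a minimal sketch assuming a good 64-bit uniform source (splitmix64 is only a stand-in here, and the function names are mine):

```cpp
#include <cstdint>

// splitmix64: a small generator with full 2^64 period and uniform 64-bit
// output, used here only as a stand-in for the library's core PRNG.
static uint64_t splitmix64(uint64_t& state) {
    uint64_t z = (state += 0x9E3779B97F4A7C15ULL);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return z ^ (z >> 31);
}

// Keep the top 53 bits (a double's significand precision) and scale by
// 2^-53. All 2^53 outputs are exactly representable and equally likely;
// nothing is stuffed into the mantissa bit pattern directly.
static double unit_double(uint64_t bits) {
    return (bits >> 11) * 0x1.0p-53;  // 0x1.0p-53 == 2^-53
}
```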
Whenever I have to provide sources of (P)RNG for a library, I always include a few that do the common tasks so people don't roll their own: uniform 32- and 64-bit (if needed), uniform(M) for [0,M), uniform for [A,B), a proper float for [A,B) (since the "get [0,1) then scale" approach loses values), a proper float [0,1) and [0,1] (which are different things), etc., in the hope that people looking for random numbers use these. They save a lot of issues.
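A sketch of two of those helpers, under my own names (these aren't from any particular library; `Gen` is any callable returning uniform 64-bit integers, i.e. the library's core PRNG):

```cpp
#include <cstdint>

// uniform(M): unbiased integer in [0, M), M > 0. Plain `next() % M` is
// biased whenever M doesn't divide 2^64; rejecting the small sliver of
// values below 2^64 mod M removes that bias.
template <class Gen>
uint64_t uniform_below(uint64_t m, Gen&& next) {
    uint64_t threshold = -m % m;  // == 2^64 mod m (unsigned wraparound)
    for (;;) {
        uint64_t x = next();
        if (x >= threshold) return x % m;
    }
}

// A proper [0,1): top 53 bits scaled by 2^-53, every output exactly
// representable and equally likely. The closed interval [0,1] is a
// genuinely different distribution and needs its own function.
template <class Gen>
double uniform01(Gen&& next) {
    return (next() >> 11) * 0x1.0p-53;
}
```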
[1] https://github.com/v8/v8/blob/main/src/base/utils/random-num...