#include <algorithm>
#include <array>
#include <functional>  // for std::ref
#include <random>

std::mt19937 InitializeRng() {
    std::array<unsigned int, 624> seed_data;
    std::random_device dev;
    std::generate_n(seed_data.data(), seed_data.size(), std::ref(dev));
    std::seed_seq seq(std::begin(seed_data), std::end(seed_data));
    return std::mt19937(seq);
}
This fills the seed_seq with 19968 bits of random data, which covers the 19937 bits of Mersenne Twister internal state. Note that 19968 bits is overkill; something like 128 or 256 bits would probably be enough for practical purposes. But I believe there is no real need to limit the amount of data extracted from a random source: modern operating systems are pretty good at generating large amounts of random data quickly. If this is a concern anyway, just change 624 to 4/8/16/32 for 128/256/512/1024 bits of entropy. In practice, I don't think you'll notice a difference in either randomness or initialization speed.
edit: also, if performance is a concern, consider changing mt19937 to mt19937_64, the 64-bit variant of mt19937. It is incompatible (generates a different number stream), but it is almost twice as fast on 64-bit platforms (i.e. most platforms today), since each call produces 64 bits instead of 32.
There are several high-quality alternatives that people use (not cryptographically secure, but they pass all the statistical tests you throw at them).
It's not a bug in mt19937 itself; it's that std::random_device (or the libc randomness behind it) behaves differently across environments. That makes cross-platform tests flaky even when the logic is rock solid:
std::random_device rd;                         // may behave differently per platform
std::mt19937 gen(rd());                        // seed depends on rd's output
std::uniform_int_distribution<> dist(1, 100);
int random_number = dist(gen);                 // can differ on Linux vs Windows even with identical code