> It fails for any program that calls chroot(2) and chroots to a location that hasn't had /dev/urandom constructed, which is... almost every program that sandboxes by calling chroot(2).
Yeah. Those programs fail if they try to do anything with a file system they weren't set up for. Running external commands via the shell? Better have /bin/sh. Running executables? Better have the libraries set up. Resolving hostnames? Better have /etc/resolv.conf, nsswitch.conf, and the NSS libraries in place. Probably want a /dev/tty. /dev/null. /proc mounted. /sys mounted. Or, if you are really a super bare-bones app which knows it doesn't need any of that stuff, you pre-read /dev/urandom (or keep an open fd to it) before you chroot(2), or you use the system call.
None of that means /dev/urandom is unreliable.
> You've probably never used ulimits or login groups then.
Sure I have. But I have never run anything that needed more than 32 open files at a time. If your limits are pathologically low, you can come up with any scenario where trivial programs fail to work. But such a discussion isn't productive.
> [bunch of nonsense]
Returning an error code seems to be a fine option if your library is asked to perform a task and it cannot. I see nothing in LibreSSL which indicates that this is a bad idea.
What IS a bad idea is falling back to nonsense like getuid() + getpid() + time() instead of failing. If you can't access a random number generator and you need a random number (especially if you are a cryptography library), return an error code.
But that's orthogonal to this discussion (why did you bring up LibreSSL?)
> /dev/urandom is not a high-availability randomness source
Well, by your definition nothing is high-availability, so everything is unreliable. I guess you are right in your world, but that makes your argument meaningless.
A system call is better, but /dev/urandom is fine.