That's not quite true. The problem is more insidious than that.
Think of the memory requirements of the fork() system call. It clones the current process, making an entire copy of it. Sure, there's plenty of copy-on-write optimisation going on, but if you want guaranteed, confirmed memory, you need a backing store for that new process, because the child has every right to modify all of its process memory.
So if a 4GB process calls fork(), you will suddenly need to reserve 4GB of RAM or new swap space for it. Or if you can't allocate that, you will have to make the fork() fail.
This can be terrible for users, since most often a process fork()s only to immediately exec() a much smaller program. And it seems nonsensical for fork() to fail with ENOMEM when there appears to be plenty of free memory left. But if you want memory to be strictly accounted for, that's what you have to do.
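To make that failure mode concrete, here's a minimal POSIX sketch of the fork()-then-exec() pattern. The function name spawn_and_wait is mine, not a standard API; the point is that under strict accounting, the place you'd see ENOMEM is fork()'s return value, before the child ever runs.

```c
#include <errno.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork, then immediately exec a (typically tiny) program in the child.
 * Returns the child's exit status, or -1 if something failed.
 * "prog" is looked up on $PATH via execlp(). */
int spawn_and_wait(const char *prog) {
    pid_t pid = fork();
    if (pid == -1) {
        /* Under strict memory accounting, a large parent can fail
         * right here with ENOMEM, even though the child was only
         * ever going to exec a small image. */
        if (errno == ENOMEM)
            fprintf(stderr, "fork: not enough memory to duplicate process\n");
        else
            perror("fork");
        return -1;
    }
    if (pid == 0) {
        /* Child: replace the cloned (possibly huge) image at once. */
        execlp(prog, prog, (char *)NULL);
        _exit(127); /* only reached if exec itself failed */
    }
    int status;
    if (waitpid(pid, &status, 0) == -1)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

With strict accounting, checking fork()'s return for -1/ENOMEM at the call site, as above, is the whole error-handling story: the failure surfaces synchronously, at the system call.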
The alternative, which most distributions use by default, is to optimistically overcommit memory: let fork() and other allocation calls succeed. But then you run the risk of the child process dying at some arbitrary point in the future, when it first touches a page of its own 'confirmed' memory and the OS can't actually back it. So memory failures are no longer discovered at system call points, and there's no return code to spot the out-of-memory condition. The OS can't even block the process until memory is available, because there's no guarantee that memory ever will become available.
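On Linux specifically, this trade-off is a tunable; the knob below is real, though whether strict mode is appropriate depends on the workload. A sketch of an /etc/sysctl.conf fragment opting into strict accounting:

```
# Linux overcommit policy (hypothetical host):
#   0 = heuristic overcommit (common default)
#   1 = always overcommit
#   2 = strict accounting: allocations beyond swap + ratio% of RAM fail
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
```

With mode 2, you're back to the first model: fork() in a huge process can fail up front with ENOMEM, rather than some process being killed later when the optimism runs out.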