In Linux (in sane configurations), allocations are just preorders.
EDIT: I can't reply below due to rate limiting:
I'd argue that overcommit just makes the difference between allocation and backing very stark.
Your memory IS in fact allocated in the process's VMA; it's just that the anonymous pages cannot necessarily be backed.
This differs, obviously, in other OSes, as pointed out. It also differs if you turn overcommit off, but since so much in Linux assumes it, your system will soon break if you try.
An example of a split-the-difference approach is macOS, which AFAIU implements overcommit but also dynamically instantiates swap so that overcommit-induced OOM killing won't occur until your disk is full.
Also, it's worth mentioning that on all these systems process limits (see, e.g., setrlimit(2)) can still result in malloc returning NULL.
Not sure what you mean by this - I don't think Windows has overcommit in any form, whether opt-in or opt-out. What it does have is virtual address space reservation, but that's separate from commitment; reserved virtual memory is not backed by any page, no matter how much free RAM you have, until you explicitly tell the system to commit physical memory to it.
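The reserve/commit split described above looks roughly like this (a Windows-only sketch, not compilable on POSIX): reservation consumes only address space, and pages become usable only after an explicit commit.

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* Reserve 1 GiB of address space: no pages are backed and no
       commit charge is taken yet. Accessing it would fault. */
    void *base = VirtualAlloc(NULL, (SIZE_T)1 << 30,
                              MEM_RESERVE, PAGE_NOACCESS);
    if (!base) { fprintf(stderr, "reserve failed\n"); return 1; }

    /* Commit the first 64 KiB: only now is the system commit limit
       charged for these pages, and only now is access legal. */
    void *chunk = VirtualAlloc(base, 64 * 1024,
                               MEM_COMMIT, PAGE_READWRITE);
    if (!chunk) { fprintf(stderr, "commit failed\n"); return 1; }

    ((unsigned char *)chunk)[0] = 42;  /* fine: committed */
    /* Touching an address deep in the reserved-but-uncommitted
       region would raise an access violation. */

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}
```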
In fact I'm not even sure 'opt-in' overcommit is possible in principle, because by opting in you jeopardize the integrity of other applications, which likely did not opt in.
Are failures when accessing the allocated pointer due to overcommit substantially different from failures due to ECC errors or other hardware faults, with regard to what is specified in the C standard?
(FWIW I don't particularly like overcommit-by-default either)
I agree, reliance on overcommit has resulted in stability problems in Linux. But IME stability problems aren't induced by disabling overcommit; they're induced by disabling swap. The stability problems occur precisely because, by relying on magical heuristics to save the day, we end up with an overall MM architecture that reacts extremely poorly under memory pressure. Whether or not overcommit is enabled, physical memory is a limited resource, and when Linux can't relieve physical memory pressure by dumping pages to disk, bad things happen, especially under heavy I/O load (e.g. the buffer cache can grab pages faster than the OOM killer can free them).
Linux overcommitting memory, and especially Chrome/Firefox being big fat memory hogs, are the problem. In fact, every application that doesn't cope with malloc running out of memory, or that assumes everybody has multiple gigs of memory to spare, should "reevaluate".