With overcommit, memory can be allocated well beyond physical RAM, so by the time an allocation actually fails the system is already in a bad state; at that point your program really should crash and return its resources.
Embedded systems have fewer resources, and some have no virtual memory at all, so the situation there is different. But unless you know better, the best practice is still not to check the return from allocators. Running out of memory in a program intended for an embedded platform should be considered a bug.
Of course, there is no reason not to route all your allocation through a wrapper function that does check and abort on failure. I think the point was that surviving malloc failures is a dubious approach: either go all-in, or, if it's a long-running service, provide a configurable max memory cap and assume that much will be available.
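A minimal sketch of such a wrapper, in the style of the conventional `xmalloc` idiom (the name and message here are illustrative, not from any particular codebase): check once, in one place, and abort with a clear error instead of scattering NULL checks everywhere.

```c
#include <stdio.h>
#include <stdlib.h>

/* Check every allocation in one place: on failure, print a clear
 * message and abort instead of returning NULL to the caller. */
static void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL) {
        fprintf(stderr, "fatal: out of memory (requested %zu bytes)\n", n);
        abort();
    }
    return p;
}
```

Callers then use `xmalloc` exactly like `malloc`, but never see a NULL return; the whole "what do we do on failure" question is answered once, centrally.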
I'll take the clear error message.
In the general case, however, if allocating 100 bytes fails, reporting that error is also likely to fail. An actual memory allocation failure on a modern computer running a modern OS is a very rare and very bad situation. It's rarely recoverable.
It's not wrong to handle allocation failures, but in the vast majority of cases the effort isn't worth it. You can write code for it if you want; have fun.
And just to be completely clear, I am ONLY talking about calls to malloc, new, realloc, etc. NOT to OS pools or anything like that. Obviously, if you allocate a 4 MB buffer for something (or the OS does for you), you expect that you might run out. This is ONLY in regards to calls to lower-level heap allocators.
I don't think you'll find any experienced programmer recommending that you always check the return from malloc. That's completely absurd. There are always exceptions to the rule, however.
I call BS on this. First of all, it's not the 100 byte allocation that is likely to fail; chances are it's going to be bigger than 100 bytes and the 100 byte allocation will succeed. (Though that is not 100% either.) Second, the thing you're going to do in response to an allocation failure? You're going to unwind the stack, which will probably lead to some temporary buffers being freed. That already gets you more space to work with. (It's also untrue that you can't report errors without allocating memory but that's a whole other story...)
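On the last point, reporting an error without allocating memory is straightforward in principle. One illustrative sketch (the function name is mine, and `write(2)` is POSIX): the message text is static and `write` goes straight to the file descriptor, so nothing here touches the heap.

```c
#include <string.h>
#include <unistd.h>

/* Report an out-of-memory condition without allocating anything:
 * the prefix is a static string and write(2) does not use the heap. */
static void report_oom(const char *what)
{
    static const char prefix[] = "out of memory: ";
    (void)write(STDERR_FILENO, prefix, sizeof prefix - 1);
    (void)write(STDERR_FILENO, what, strlen(what));
    (void)write(STDERR_FILENO, "\n", 1);
}
```

The same trick extends to logging: reserve the buffer (or the whole error path's memory) up front, and the failure path never needs a successful allocation.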
I suspected when I wrote in this thread that I'd see some handwavy nonsense about how it's impossible to cleanly recover from OOM, but the fact is I've witnessed it happening. I think some people would just rather tear down the entire process than have to think about handling errors, and they make up these falsehoods about how there's no way to do it in order to self-justify... Although, when I think back to a time when I shared your attitudes, I think the real problem was that I hadn't yet seen it done well.
The space I work in deals with phones and tablets, as well as other embedded systems (TVs, set-top boxes, etc.) that tend to run things people think of as "modern" (recentish Linux kernels, userlands based on Android or centered around WebKit), while having serious limits on memory and storage. My desktop calendar application uses more memory than we have available on some of these systems.
In these environments, it is essential to either avoid any possibility of memory exhaustion, or have ways to gracefully deal with the inevitable. This is often quite easy in theory -- several megabytes of memory might be used by a cached data structure that can easily be re-loaded or re-downloaded at the cost of a short wait when the user backs out of whatever screen they're in.
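The "droppable cache" approach above can be sketched in a few lines. This is a hypothetical illustration, assuming a single process-wide cache: on allocation failure, free the cache and retry once, accepting that the cached data will have to be re-loaded or re-downloaded later.

```c
#include <stdlib.h>

/* Hypothetical droppable cache: several MB of data that can be
 * re-loaded later, so it is safe to discard under memory pressure. */
static void  *cache = NULL;
static size_t cache_size = 0;

static void drop_cache(void)
{
    free(cache);
    cache = NULL;
    cache_size = 0;
}

/* Allocate n bytes; if that fails and the cache is populated,
 * drop the cache and retry once before giving up. */
static void *alloc_or_drop_cache(size_t n)
{
    void *p = malloc(n);
    if (p == NULL && cache != NULL) {
        drop_cache();      /* reclaim the droppable data */
        p = malloc(n);     /* one retry with the cache gone */
    }
    return p;              /* may still be NULL: caller decides */
}
```

The design choice is that the cache is the one thing in the process that is explicitly allowed to vanish, which turns "out of memory" from a fatal condition into a short wait the next time the user backs out of their current screen.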
But one of the consequences of this cavalier attitude to memory allocation is that even on these constrained systems, platform owners have mandated that apps sit atop an ever-growing stack of shit that makes it all but impossible for developers to effectively understand and manage memory usage.