For many years it has been standard practice to test that you actually get the memory you ask for, but it has all been largely a waste of time. Operating systems get in on the act before you have a chance to do anything about it.
We try to write code that behaves well - or most of us do. One particular catastrophe that we have all been schooled in avoiding is running out of memory. A C/C++ programmer uses the malloc function to allocate memory. The function usually returns a pointer to the memory requested, but if there isn't enough memory it returns NULL. So we have all dutifully been writing code that checks malloc's return value before using it.
The problem is that malloc almost never returns NULL, even when there is no memory available. In short, all that error-handling code is wasted code.
The point is that operating systems are in the business of allocating memory and they monitor the entire global system. Your program running out of memory is a minor consideration compared to the risk to the entire machine - the operating system and all of the programs it is looking after. The answer to the problem is the OOM - Out Of Memory - killer. This is a monitor process that checks to see if an application is about to use more memory than the system has. If so, it kills the process and hence frees up memory to keep the whole thing going.
OOM killers generally use heuristics to work out which processes to kill along with the one that actually precipitated the crisis. Usually memory-hungry programs and low-priority programs are selected, but it is difficult to predict the collateral damage from an OOM killer. This unpredictability is itself claimed to be a disadvantage of the approach: the heuristic isn't designed to be fair in any sense.