Quoting the Linux manual page malloc(3):

    By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer. In case Linux is employed under circumstances where it would be less desirable to suddenly lose some randomly picked processes, and moreover the kernel version is sufficiently recent, one can switch off this overcommitting behavior using a command like:

        # echo 2 > /proc/sys/vm/overcommit_memory
You ought to check for a NULL return, especially on 32-bit systems, since the process address space can be exhausted long before the RAM is: on 32-bit Linux, for example, a user process typically has only 2G-3G of usable address space, while the machine may hold over 4G of total RAM. On 64-bit systems checking the malloc return code may seem pointless, but it is still good practice and makes your program more portable. And remember: dereferencing a null pointer is certain to kill your process, whereas some swapping might not hurt much compared to that.
If malloc happens to return NULL when one tries to allocate only a small amount of memory, one must be cautious when trying to recover from the error condition: any subsequent malloc can fail too, until enough memory becomes available again.
The default C++ operator new is often a wrapper over the same allocation mechanisms employed by malloc().