== Overcommitting virtual memory is perfectly sensible

Every so often people get quite grumpy about the fact that some systems allow you to allocate more virtual memory than the system can possibly provide if you actually try to use it all. Some of these people go so far as to say that this overcommitting is only being allowed because of all of those sloppy, lazy programmers who can't be bothered to properly manage memory, and so ask for (far) more memory than they'll actually use.

I'm afraid that these people are living in a dream world. I don't mean that in [[a pragmatic sense ../linux/NonStrictOvercommitDefault]]; I mean that in the sense that they are ignoring the fact that we pervasively trade memory for speed all through computing, from programming practices all the way down to CPUs. Over-use and over-allocation of memory is pervasive, and once you put enough of that together what you get is virtual memory that is allocated but never used. It's impossible to be comprehensive because once you start looking wasted space is everywhere, but here are some examples.

Essentially every mainstream memory allocator rounds up memory allocation sizes or otherwise wastes space. [[Slab allocation https://en.wikipedia.org/wiki/Slab_allocation]] leaves space unused at the end of pages, for example, and can round the size you ask for up to a commonly used size to avoid having too many slabs. Most user-level allocators request space from the operating system in relatively large chunks in order to amortize the cost of expensive operating system calls, rather than go back to the OS every time they need another page's worth of memory, and very few of them do anything like release a single free page's worth of space back to the OS, again because of the overhead.

(There used to be memory allocators that essentially never returned space to the operating system. These days it's more common to give free space back eventually if enough of it accumulates in one spot, but on Unix that's because how we get memory from the operating system has changed in ways that make it more convenient to free it up again.)

It's extremely common for data structures like hash tables to not be sized minimally and to be resized in significant size jumps. We accept that a hash table normally needs a certain amount of empty space in order to keep performing well for insertions, and when entries are deleted from a large hash we often delay resizing the hash table down, even though we're once again wasting memory. Sometimes this is explicitly coded as part of the application; at other times it is hidden in standard language runtimes or generic support libraries.

Speaking of things in standard language runtimes, delayed garbage collection is of course a great way to have wasted memory and to have more memory allocated from the operating system than you actually need. Yet garbage collection is pervasively seen as a valid programming technique in large part because we've accepted a tradeoff of higher memory 'use' in exchange for faster and more reliable program development, although it can also be faster under some circumstances.

And finally there's object alignment in memory, which generally goes right down to the CPU to some degree. Yes, it's (only) bytes, but again we've shown that we're willing to trade memory for speed, and there are ripple effects as everything grows just a little bit bigger and fits just a little bit less well into memory pages and so on.
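As a small illustration of the allocator rounding mentioned above, here's a quick C sketch that asks _malloc()_ for various sizes and then asks how much space it was actually given. It assumes glibc, whose _malloc_usable_size()_ reports the real size set aside for an allocation; other allocators have their own equivalents and will round differently:

  /* Ask malloc() for odd sizes and see how much space we really got.
     malloc_usable_size() is a glibc extension, declared in <malloc.h>. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <malloc.h>

  int main(void)
  {
      size_t sizes[] = { 1, 13, 100, 1000 };
      for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
          void *p = malloc(sizes[i]);
          if (p == NULL)
              continue;
          printf("asked for %4zu bytes, usable size is %4zu bytes\n",
                 sizes[i], malloc_usable_size(p));
          free(p);
      }
      return 0;
  }

The exact amount of rounding depends on the allocator and the platform, but getting back more space than you asked for is the normal case.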
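The hash table point shows up in the shape of a typical sizing policy. This is only a sketch of one common approach (grow by doubling once the table is about three-quarters full, and don't bother shrinking), not the exact policy of any particular runtime or library:

  /* A sketch of a common hash table sizing policy: double the capacity
     when the table gets about 3/4 full, so resizes happen in big jumps
     and the freshly grown table is mostly empty space. */
  #include <stdio.h>
  #include <stddef.h>

  struct table {
      size_t capacity;   /* slots allocated */
      size_t used;       /* slots occupied */
  };

  /* What capacity the table should have after one more insertion. */
  static size_t capacity_for_insert(const struct table *t)
  {
      if (t->capacity == 0)
          return 8;                          /* the initial size is a jump too */
      if ((t->used + 1) * 4 > t->capacity * 3)
          return t->capacity * 2;            /* double; keep ~1/4 empty */
      return t->capacity;
  }

  int main(void)
  {
      struct table t = { 0, 0 };
      for (size_t n = 0; n < 100; n++) {
          size_t want = capacity_for_insert(&t);
          if (want != t.capacity) {
              printf("at %zu entries: capacity %zu -> %zu\n",
                     t.used, t.capacity, want);
              t.capacity = want;
          }
          t.used++;
      }
      return 0;
  }

Every one of those capacity jumps deliberately creates a table that is mostly empty, because that empty space is what keeps insertions and lookups fast.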
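And the alignment point is easy to see directly. On a typical 64-bit ABI the struct below occupies 24 bytes even though its fields only need 13; the rest is padding the compiler inserts so that each field stays naturally aligned (the exact numbers depend on your platform and compiler):

  /* Show padding inserted for field alignment on a typical 64-bit ABI. */
  #include <stdio.h>
  #include <stdint.h>
  #include <stddef.h>

  struct sample {
      char     tag;     /* 1 byte, usually followed by 7 bytes of padding */
      uint64_t id;      /* 8 bytes, wants 8-byte alignment */
      int32_t  count;   /* 4 bytes, usually followed by 4 bytes of tail padding */
  };

  int main(void)
  {
      printf("field bytes: %zu\n",
             sizeof(char) + sizeof(uint64_t) + sizeof(int32_t));
      printf("sizeof(struct sample): %zu\n", sizeof(struct sample));
      printf("offset of id: %zu, offset of count: %zu\n",
             offsetof(struct sample, id), offsetof(struct sample, count));
      return 0;
  }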
There are environments where people very carefully watch every little scrap of space and waste as little of it as possible, but they are not mainstream ones. In mainstream computing, trading memory for speed is a common and widely accepted thing; the only question is generally how much memory is worth trading for how much speed.

=== Sidebar: Memory waste and APIs

On Unix, one of the tradeoffs that has been made from the very start is that there are some simple, general, and flexible (and powerful) APIs that strictly speaking may commit the kernel to supplying massive amounts of memory. I am talking here primarily of [[_fork()_ ../unix/ForkVsOvercommit]] and especially the _fork()_/_exec()_ model for starting new processes. I maintain that [[_fork()_ is a good API ../unix/ForkGoodAPI]] even though it very strongly pushes you towards a non-strict handling of memory overcommit. In this it illustrates that there can be a tradeoff between memory and power even at the API level.
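To make the sidebar's point concrete, here is the classic pattern in miniature. If the parent has a large address space when it calls _fork()_, a kernel doing strict accounting has to assume the child might write to its entire copy of that address space, even though in this pattern the child immediately throws the copy away with _exec()_. (This is just a sketch; _/bin/echo_ is an arbitrary command to run.)

  /* The fork()/exec() pattern: the child nominally gets a copy of the
     parent's whole address space, but touches almost none of it before
     replacing itself with another program. */
  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      /* Imagine the parent has gigabytes of data live at this point. */
      pid_t pid = fork();
      if (pid < 0) {
          perror("fork");          /* strict overcommit accounting can fail here */
          return 1;
      }
      if (pid == 0) {
          /* Child: immediately discard the copied address space. */
          execl("/bin/echo", "echo", "hello from the child", (char *)NULL);
          perror("execl");         /* only reached if exec failed */
          _exit(127);
      }
      /* Parent: wait for the child to finish. */
      waitpid(pid, NULL, 0);
      return 0;
  }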