fork() versus strict virtual memory overcommit handling
Unix's fork() is in many ways a wonderful API, but people who want strict virtual memory overcommit handling find it deeply problematic. This is because, as I've written before, fork() is probably the leading way on Unix systems to (theoretically) allocate a lot of memory that you will never use. The more allocated but unused memory you have, the stupider strict overcommit gets (cf); it increasingly denies allocations purely for accounting reasons, rather than because of any danger that the system will actually run out of RAM. The corollary is that it's hard to argue that strict overcommit should be the default if systems routinely have significant amounts of allocated but unused memory.
(Why fork() is a good API is another entry. The short version is that fork() is a kernel API, not necessarily a user one.)
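As a concrete illustration (a sketch of my own, not taken from any particular program), consider a process with a large, dirty private heap that forks a child which immediately exec()s something small. Under strict accounting, the fork() itself has to be charged for a full copy-on-write duplicate of that heap, so it can fail even though the child will never write to those pages:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/wait.h>

  #define BIG (512UL * 1024 * 1024)    /* 512 MB of private, dirty memory */

  int main(void) {
      char *heap = malloc(BIG);
      if (heap == NULL) {
          perror("malloc");
          return 1;
      }
      memset(heap, 1, BIG);            /* make the pages real, not just reserved */

      pid_t pid = fork();              /* strict accounting charges roughly BIG more here */
      if (pid < 0) {
          perror("fork");              /* can fail purely for accounting reasons */
          return 1;
      }
      if (pid == 0) {
          /* the child never touches the big heap; it just exec()s */
          execlp("true", "true", (char *)NULL);
          _exit(127);
      }
      waitpid(pid, NULL, 0);
      return 0;
  }

Nothing here is a mistake on the parent's part; it's just fork() being fork(), which is exactly the problem strict accounting has with it.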
It's possible to argue that many instances of unused memory are bad programming practices (or mistakes), and so can at least in theory be discounted when advocating for strict overcommit. This argument is much harder to make with fork(). Straightforward use of fork() followed by exec() can be replaced by APIs like the much more complicated posix_spawn(), but there are plenty of other uses of fork() that cannot be (even some uses of fork-then-exec, since posix_spawn() can't do things like change process permissions).
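As a hedged sketch of the sort of fork-then-exec that can't move to posix_spawn(), here is a child that changes its credentials before the exec(). The program name and the uid/gid are made-up illustrations; the point is that the standard posix_spawn() attributes have no equivalent for the setgid()/setuid() step:

  #include <stdio.h>
  #include <unistd.h>
  #include <sys/types.h>
  #include <sys/wait.h>

  /* Run 'prog' as the given (hypothetical) unprivileged uid/gid. */
  int run_unprivileged(const char *prog, uid_t uid, gid_t gid) {
      pid_t pid = fork();
      if (pid < 0)
          return -1;
      if (pid == 0) {
          /* only in a child of our own can we change process permissions
             before the exec(); posix_spawn() has no attribute for this */
          if (setgid(gid) != 0 || setuid(uid) != 0)
              _exit(126);
          execlp(prog, prog, (char *)NULL);
          _exit(127);
      }
      int status;
      waitpid(pid, &status, 0);
      return status;
  }

  int main(void) {
      /* 65534 is a stand-in for a 'nobody'-style account; adjust to taste */
      int status = run_unprivileged("id", (uid_t)65534, (gid_t)65534);
      printf("child status: %d\n", status);
      return 0;
  }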
(In the extreme, the arguments of the strict overcommit crowd then boil down to 'well, fork() complicates our life too much, so you shouldn't be allowed to use it'. This may sound harsh, but it's really what it means to say that historic and natural uses of fork() are now bad practice, at least without a really good reason why.)
PS: vfork() is a hack. Really.
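(To show the hack, here is a minimal sketch of the one pattern vfork() supports: the child borrows the parent's address space, the parent is suspended, and the child can safely do essentially nothing except exec or _exit. It sidesteps the accounting problem above precisely by not even pretending to duplicate the parent's memory.)

  #include <stdio.h>
  #include <unistd.h>
  #include <sys/wait.h>

  int main(void) {
      pid_t pid = vfork();             /* no copy of the parent, not even on paper */
      if (pid < 0) {
          perror("vfork");
          return 1;
      }
      if (pid == 0) {
          /* the child shares the parent's address space and the parent is
             suspended; exec or _exit are about all that is safe to do here */
          execlp("true", "true", (char *)NULL);
          _exit(127);
      }
      waitpid(pid, NULL, 0);
      return 0;
  }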