Limiting a process's memory usage on Linux
Due to recent events I have become interested in this issue, so I have been poking around and doing some experiments. Unfortunately, while Linux has a bewildering variety of memory-related per-process resource limits that you can set, most of them either don't work or don't do you any good.
What you have, in theory and practice:

- ulimit -m, the maximum RSS, doesn't do anything; the kernel maintains the number but never seems to use it for anything.
- ulimit -d, the maximum data segment size, is effectively useless, since it only affects memory that the program obtains through brk(2)/sbrk(2). These days those calls aren't used very much; GNU libc does most of its memory allocation using mmap(), especially for big blocks of memory (see the sketch just after this list).
- ulimit -v, the maximum size of the address space, works, but it affects all mmap()s, even of things that will never require swap space, such as mmap()ing a big file.
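As a minimal sketch of the ulimit -d problem (not from the original entry): the program below caps RLIMIT_DATA, which is what ulimit -d sets, at 16 MB and then malloc()s 256 MB. Because glibc serves a request that large with mmap() rather than brk(), the allocation succeeds anyway on the kernels this entry is about. (On kernels since roughly Linux 4.7, RLIMIT_DATA may also count some mmap() allocations, so the outcome can differ there.)

```c
/*
 * Sketch: RLIMIT_DATA (ulimit -d) does not constrain a large malloc(),
 * because glibc satisfies big requests with mmap() rather than brk().
 * Behaviour may differ on kernels from about 4.7 onwards, where
 * RLIMIT_DATA can also cover some mmap() allocations.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

int main(void)
{
    /* Limit the data segment to 16 MB. */
    struct rlimit rl = { .rlim_cur = 16 * 1024 * 1024,
                         .rlim_max = 16 * 1024 * 1024 };
    if (setrlimit(RLIMIT_DATA, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* Ask for 256 MB, far over the data-segment limit. */
    size_t want = 256UL * 1024 * 1024;
    void *p = malloc(want);
    if (p == NULL) {
        printf("malloc of %zu bytes failed under RLIMIT_DATA\n", want);
    } else {
        /* Touch the memory so it is really allocated, not just reserved. */
        memset(p, 0, want);
        printf("malloc of %zu bytes succeeded despite RLIMIT_DATA\n", want);
        free(p);
    }
    return 0;
}
```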
What I really want is something that can effectively limit a process's
'committed address space' (to use the term that /proc/meminfo
and the
kernel documentation on swap overcommit use). I don't care if a process
wants to mmap()
a 50 gigabyte file, but I care a lot if it wants 50G
of anonymous, unbacked address space, because the latter is what can
drive the system into an out-of-memory state.
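To make the distinction concrete, here is a small sketch (again, mine, not from the entry) that watches Committed_AS in /proc/meminfo while making both kinds of mapping. A big anonymous private mapping raises it, because the kernel may eventually have to find real memory or swap for every page; a shared mapping of an existing file barely moves it, because the file itself is the backing store. The file path is just a placeholder and the exact numbers depend on the kernel's overcommit accounting.

```c
/* Sketch: what 'committed address space' (Committed_AS) actually counts. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Return Committed_AS in kilobytes, or -1 on failure. */
static long committed_as_kb(void)
{
    FILE *fp = fopen("/proc/meminfo", "r");
    if (fp == NULL)
        return -1;
    char line[256];
    long kb = -1;
    while (fgets(line, sizeof line, fp) != NULL) {
        if (sscanf(line, "Committed_AS: %ld kB", &kb) == 1)
            break;
    }
    fclose(fp);
    return kb;
}

int main(int argc, char **argv)
{
    /* Hypothetical placeholder path; pass any large existing file instead. */
    const char *path = argc > 1 ? argv[1] : "/tmp/bigfile";
    size_t len = 512UL * 1024 * 1024;   /* 512 MB */

    long before = committed_as_kb();

    /* Anonymous, unbacked memory: this is what commit accounting tracks. */
    void *anon = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    printf("after anonymous mmap: Committed_AS grew by %ld kB\n",
           committed_as_kb() - before);

    /* File-backed shared mapping: the file provides the backing store. */
    int fd = open(path, O_RDONLY);
    if (fd >= 0) {
        struct stat st;
        fstat(fd, &st);
        long mid = committed_as_kb();
        void *file = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        printf("after file-backed mmap: Committed_AS grew by %ld kB\n",
               committed_as_kb() - mid);
        if (file != MAP_FAILED)
            munmap(file, st.st_size);
        close(fd);
    }

    if (anon != MAP_FAILED)
        munmap(anon, len);
    return 0;
}
```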
Unfortunately I can imagine entirely legitimate reasons to want to
mmap()
huge files (especially huge sparse files) on a 64-bit machine,
so any limit on the total process address space on our compute servers
will have to be a soft limit.
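One plausible mechanical reading of 'soft limit' here is the rlim_cur/rlim_max split that setrlimit() already provides: a wrapper sets a modest soft RLIMIT_AS by default, and a process that genuinely needs a huge address space can raise its own soft limit back up to the hard limit without any special privileges. A sketch of that idea, under those assumptions:

```c
/*
 * Sketch: the rlim_cur/rlim_max split as one form of 'soft' limit on
 * total address space (RLIMIT_AS, what ulimit -v sets).
 */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    getrlimit(RLIMIT_AS, &rl);

    /* Default to a modest soft limit of 1 GB, keeping the hard limit. */
    rlim_t soft = 1UL * 1024 * 1024 * 1024;
    if (rl.rlim_max != RLIM_INFINITY && soft > rl.rlim_max)
        soft = rl.rlim_max;
    rl.rlim_cur = soft;
    setrlimit(RLIMIT_AS, &rl);

    /* ... later, when the program really needs the room (say, to mmap()
     * a huge sparse file), it can undo the soft limit itself: */
    getrlimit(RLIMIT_AS, &rl);
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_AS, &rl) == 0)
        printf("soft RLIMIT_AS raised back to the hard limit\n");
    return 0;
}
```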
Since the Linux kernel already tracks committed address space for the whole system, it's possible that extending this accounting to a per-process limit would not be too much work. (The likely fly in the ointment is that memory regions can be shared between processes, which complicates the accounting and raises the question of what to do when one process modifies a virtual memory area in a way that is legal for it but pushes another process sharing the VMA over its limit.)