== Using cgroups to limit something's RAM consumption (a practical guide)

Suppose, not entirely hypothetically, that you periodically rebuild Firefox from source and that you've discovered that [[parts of this build process kill your machine https://twitter.com/thatcks/status/344132548332490753]] due to memory (over)use. You would like to fix this by somehow choking off the total amount of RAM that the whole Firefox build process uses. The relatively simple tool to use for this is a [[cgroup http://en.wikipedia.org/wiki/Cgroups]].

There is probably lots of documentation on the full details of cgroups floating around. This is a practical guide instead.

First we need a cgroup that will actually apply memory limits. Ad-hoc cgroups are created on the fly with _cgcreate_ (which is run as _root_):

.pn prewrap on

> cgcreate -t cks -a cks -g memory,cpu,blkio:confine

I'm doing some overkill here; in theory we only need to limit memory usage. But who cares. As I found out, it's important to specify both _-t_ and _-a_ here; _-a_ lets us set limits, _-t_ lets us actually put something into the new cgroup.

The easiest way to set limits is by writing to files in _/sys/fs/cgroup/~~controller~~/~~path~~_. Here we have controllers for memory, cpu, and blkio, and our path under each of them is _confine_, so:

> cd /sys/fs/cgroup
> # This is 3 GB
> echo 3221225472 >memory/confine/memory.limit_in_bytes
>
> # only gets half of contended CPU and disk bandwidth
> # (in theory)
> echo 512 >cpu/confine/cpu.shares
> echo 500 >blkio/confine/blkio.weight

(Since we gave ourselves permissions with _-a_, we can set all of these limits directly without being _root_.)

What parameters controllers take is established partly by poking around in _/sys/fs/cgroup_, partly by experimentation, partly by Internet searches, and sometimes from [[the official kernel documentation https://www.kernel.org/doc/Documentation/cgroups/]].
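All of the setup steps so far can be collected into one script that you run once (as _root_, since _cgcreate_ needs it). This is just a sketch of the commands from above; the guard around the privileged part, and computing 3 GiB instead of hand-typing 3221225472, are my own additions:

```shell
#!/bin/sh
# One-time setup sketch for the 'confine' cgroup, assuming the
# libcgroup/cgroup-tools package (which provides cgcreate) is installed.
# The limits are the ones used in this entry.

# 3 GiB in bytes, computed instead of hand-typing 3221225472.
MEMLIMIT=$((3 * 1024 * 1024 * 1024))

if [ "$(id -u)" -eq 0 ] && command -v cgcreate >/dev/null 2>&1; then
    # -a lets user 'cks' set limits; -t lets 'cks' put processes in.
    cgcreate -t cks -a cks -g memory,cpu,blkio:confine

    echo "$MEMLIMIT" >/sys/fs/cgroup/memory/confine/memory.limit_in_bytes
    # In theory, half of contended CPU and disk bandwidth.
    echo 512 >/sys/fs/cgroup/cpu/confine/cpu.shares
    echo 500 >/sys/fs/cgroup/blkio/confine/blkio.weight
else
    echo "not root or cgcreate missing; would limit memory to $MEMLIMIT bytes"
fi
```

(Once the cgroup exists, only the _cgcreate_ step needs _root_; the limit files can be rewritten later as yourself, per the _-a_ note above.)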
Where limits exist (and work) they may have side effects; for example, limiting total RAM here is going to force a memory-hungry program to swap, using up a bunch of disk IO bandwidth.

(If you want this cgroup and its settings to persist over reboots you can make a suitable entry in _/etc/cgconfig.conf_. On Fedora you may also need to make sure that the _cgconfig_ service is enabled.)

Finally we need to actually run our _make_ or whatever so that it is put into our new '_confine_' cgroup and it and its children have their total RAM usage limited the way we want. This is done on the fly with _cgexec_ (run as ourselves):

> cgexec -g memory,cpu,blkio:confine --sticky make

You don't need _--sticky_ in various common situations, for example if you're not running the cgroups automatic classification daemon. But I don't think it does any harm to supply it, and anyway you may well want to wrap this magic command up in a script so you don't have to remember it.

You can check that _cgexec_ is properly putting things into cgroups by looking at _/proc/~~pid~~/cgroup_ to see what cgroups a suitable process is part of. In this case you would expect to see _memory:/confine_ among the list.

Testing whether your actual cgroup controller settings are working and doing what you want is beyond the scope of this entry. The good news is [[that this seems to work for me https://twitter.com/thatcks/status/410807326426144769]]. My Firefox build process has been significantly tamed.

(I've looked at [[fair share scheduling with cgroups CGroupsPerUser]] before, which certainly helped here. People have written all of this information down in bits and pieces and partial explanations and Stack Overflow answers and so on, but since I put it together I want to write it down all in one place for later use. (I'm sure there'll be later use.))
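Since the entry suggests wrapping the _cgexec_ incantation in a script, here is a minimal sketch of what that might look like. The script name (call it _confine-run_) and the _in_confine_ helper are my inventions; the _cgexec_ command and the _/proc/~~pid~~/cgroup_ check are the ones from this entry:

```shell
#!/bin/sh
# Hypothetical 'confine-run' wrapper: run any command inside the
# 'confine' cgroup so you don't have to remember the cgexec magic.
#   usage: confine-run make [args...]

# Report (via exit status) whether process $1 is in the 'confine'
# memory cgroup, going by its /proc/<pid>/cgroup listing, which
# should contain a 'memory:/confine' entry.
in_confine() {
    grep -q 'memory:/confine' "/proc/$1/cgroup" 2>/dev/null
}

if [ "$#" -gt 0 ] && command -v cgexec >/dev/null 2>&1; then
    # Hand the whole command line to cgexec inside 'confine'.
    exec cgexec -g memory,cpu,blkio:confine --sticky "$@"
fi
echo "usage: confine-run command [args...] (and cgexec must be installed)" >&2
```

(Afterwards you can run '_in_confine $$_' in a confined shell, or eyeball _/proc/~~pid~~/cgroup_ directly, to confirm that processes really landed in the cgroup.)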