A small drawback of 64-bit machines

September 10, 2007

It used to be that on a large-memory 32-bit compute server, no single process could run away and exhaust all of the machine's memory. On an eight or sixteen gigabyte machine, processes ran into the roughly 3 gigabyte limit on per-process virtual address space well before they could run the machine itself into the ground.

(On a large enough machine you could survive a couple of such processes.)

This is no longer true on 64-bit large-memory compute servers, as I noticed today; it is now possible for a single runaway process to take even a 32 gigabyte machine into an out-of-memory situation. I am now a bit nervous about what the kernel's OOM handling will do to us, since these are shared machines that can be running jobs for several people at once.

(Adding more swap space is probably not the solution.)

I have to say that the kernel OOM log messages are a beautiful case of messages being logged for developers instead of sysadmins. As a sysadmin, I would like a list of the top few processes by OOM score, with information like their start time, total memory usage, and their recent growth in memory usage if that information is available.

(And on machines with lots of CPUs, the kernel OOM messages get rather verbose. I hate to think what they will be like on our 16-core machine.)


Comments on this page:

From 130.217.250.13 at 2007-09-11 20:59:22:

There's always ulimits. Having a low soft limit and a high (but not machine-killing) hard limit seems to work well.

And on machines that aren't supposed to be doing any real long-term processing, a ulimit of 24 hours on CPU time seems to be a great way to get rid of hung Firefoxes and other dodgy processes.
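
(As a rough sketch of the sort of limits meant here, assuming bash's ulimit builtin and limits set in a system-wide login script; the actual numbers are only illustrative:

    # soft limit: 2 GB of virtual memory (ulimit -v is in kilobytes),
    # which a user can raise themselves if they need more
    ulimit -S -v 2097152
    # hard limit: 16 GB, well short of the machine's total RAM
    ulimit -H -v 16777216
    # kill anything that accumulates more than 24 hours of CPU time
    ulimit -t 86400

)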

By cks at 2007-09-13 00:36:04:

We've found out the hard way that killing things that have just used up a bunch of CPU time on multiuser machines can be dangerous, because people do do moderately crazy things like run month-long Xvnc sessions that they use to control long-running jobs on compute servers.

From 72.143.180.234 at 2007-09-15 10:43:19:

If you figure this one out, I'd definitely like to know. Our Altix kept going into swap-thrashing until it died because of what I will generously call end-user mistakes.

I finally "fixed" it by removing all swap.

(This is a 64-processor, 192 GB machine, and yep, the OOM output is very noisy.)

MikeP

By cks at 2007-09-19 08:58:04:

We have only a little swap on our compute servers (comparatively speaking), which may be saving us from some ugly fates. It's possible that some of our problems will be solved by turning on strict overcommit mode in the VM system (the vm.overcommit_memory sysctl), although it's not clear what the overcommit ratio (vm.overcommit_ratio) should be; my guess is a value that lets the system commit about as much total memory as there is RAM in the machine.
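
(As a rough sketch of the setting I mean, assuming the usual Linux behavior where the strict-mode commit limit is swap plus RAM times vm.overcommit_ratio percent; the exact ratio here is only a guess:

    # in /etc/sysctl.conf
    # strict overcommit: refuse allocations past the commit limit
    vm.overcommit_memory = 2
    # commit limit = swap + RAM * (overcommit_ratio / 100); with only a
    # little swap, a ratio in the 90s keeps it at roughly the RAM size
    vm.overcommit_ratio = 95

then load it with 'sysctl -p'.)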

By cks at 2007-10-17 22:43:33:

We wound up turning on strict overcommit mode on the compute servers; our experiences to date are in OvercommitExperience.
