2009-12-04
Learning from Unicorn: the accept() thundering herd non-problem
For a long time, one of my acquired articles of faith has been that you needed to worry about and avoid the thundering herd accept() problem. (This is the problem where, if you have a bunch of processes all waiting in accept() and a single connection comes in, many kernels will wake all of the processes up only to have most of them immediately go back to sleep.)
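To make the setup concrete, here is a minimal preforking sketch of the situation (my own illustration, not Unicorn's code): a master opens one listening socket and forks workers that all block in accept() on it. On a kernel susceptible to the thundering herd, every idle worker is woken for each incoming connection.

```python
import os
import socket

NUM_WORKERS = 4  # hypothetical count; in practice roughly one per CPU

def worker(listener):
    # Every idle worker sleeps in accept() on the same shared socket;
    # this is exactly the spot where a thundering herd wakeup happens.
    while True:
        conn, _addr = listener.accept()
        conn.sendall(b"hello from pid %d\n" % os.getpid())
        conn.close()

def main():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", 8080))  # arbitrary port for illustration
    listener.listen(128)
    # Fork after listen(): the children inherit the listening socket.
    for _ in range(NUM_WORKERS):
        if os.fork() == 0:
            worker(listener)
            os._exit(0)
    os.waitpid(-1, 0)

if __name__ == "__main__":
    main()
```

The important detail is that all workers share the one listening socket they inherited across fork(), rather than each having their own.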
One of the things that Unicorn (via Ryan Tomayko and others) has taught me about preforking servers is that the thundering herd problem doesn't matter (under sane circumstances), because it is only a problem when the system is idle, and when the system is idle you generally don't care about the extra overhead. When the system is busy, almost all of your worker processes are busy working, not sitting in accept(); the busier your system, the more you care about the extra potential overhead, but the fewer workers are sitting in accept() and so the less overhead there actually is. At the limit, your system is so loaded that there is always a connection waiting for accept() and you have no herd at all, no matter how many worker processes you have.
(And this assumes that your kernel is susceptible to the thundering herd problem at all. The kernel is perfectly capable of only waking one process per pending connection, instead of waking them all and letting the scheduler pick a winner.)
Now, this does depend on how many workers you have and how many of them wind up idle. However, there are two things that mitigate this. First, you generally don't want many more workers than you have processors, so on most systems you won't have many processes in the first place. Second, modern systems already have relatively low overheads for a full thundering herd situation.
Sidebar: some numbers on the overhead
You might ask how low is relatively low. I did a quick test, and on a reasonably modern but not spectacular 64-bit dual core Linux machine, my Python test harness ran at most only 26 microseconds slower per accept() call when there were 256 waiting processes. On older, slower FreeBSD hardware I also have access to, the overhead for 256 waiting processes rose to a whole 62 microseconds.
The test harness has a server program that forks N copies, each of which accept()s and then immediately closes the resulting socket, and a client program that repeatedly creates a socket, connect()s to the server, and then does a recv() on the socket (which will return when the server closes it). I timed how long the client program took for various numbers of server processes and worked from there.
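For reference, a harness of the sort described could look like this; the function names and details here are my reconstruction under the description above, not the original code.

```python
import os
import socket
import time

def run_servers(listener, nworkers):
    """Fork nworkers processes that accept() and immediately close,
    so when idle they are all asleep in accept() on the shared socket."""
    pids = []
    for _ in range(nworkers):
        pid = os.fork()
        if pid == 0:
            while True:
                conn, _addr = listener.accept()
                conn.close()
        pids.append(pid)
    return pids

def time_client(addr, rounds):
    """Repeatedly connect, recv() until the server closes the socket,
    and return the average wall-clock seconds per connection."""
    start = time.monotonic()
    for _ in range(rounds):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(addr)
        s.recv(1)   # returns b"" once the server end is closed
        s.close()
    return (time.monotonic() - start) / rounds
```

Comparing the per-connection time from time_client() with, say, 1 worker versus 256 workers gives you the per-accept() overhead of the herd.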