A thought on web server capacity measurement
The traditional way of measuring how much load your web server can take is to use a tool such as ab to see how many requests a second you can handle and how quickly they get handled. People throw around numbers such as 'our web server serves pages in 0.03 seconds under a load of 50 simultaneous requests'.
(Sometimes they just say 0.03 seconds, without mentioning important information like what sort of pages, on what sort of system, and with how many simultaneous requests.)
In a way this is misleading, because real-world load doesn't generally behave like that. In the real world, you don't have a certain number of simultaneous requests; instead you get a certain number of new requests every second, even if old requests haven't yet been dealt with.
In other words, the really interesting question is how many requests a second your website can handle, not how many it can handle at once. (Although how many it can handle at once is part of what determines how well it deals with load surges.)
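To make the distinction concrete: with the example numbers above, 50 simultaneous requests each served in 0.03 seconds works out to roughly 1,700 requests a second in steady state. A load generator that behaves more like real-world arrivals is 'open loop': it issues new requests on a fixed schedule whether or not earlier ones have finished. Here is a minimal Python sketch of one, assuming a hypothetical server at localhost and made-up numbers for the rate and duration; it's an illustration, not a real benchmarking tool.

    import threading
    import time
    import urllib.request

    URL = "http://localhost/"   # hypothetical server under test
    RATE = 100                  # new requests per second, issued regardless of backlog
    DURATION = 10               # seconds of generated load

    lock = threading.Lock()
    latencies = []

    def fetch():
        start = time.monotonic()
        try:
            urllib.request.urlopen(URL, timeout=30).read()
        except OSError:
            pass                # a real tool would count errors separately
        with lock:
            latencies.append(time.monotonic() - start)

    issued = 0
    deadline = time.monotonic() + DURATION
    while time.monotonic() < deadline:
        # start a new request on schedule, even if old ones are still in flight
        threading.Thread(target=fetch, daemon=True).start()
        issued += 1
        time.sleep(1.0 / RATE)

    time.sleep(5)               # give stragglers a chance to finish
    with lock:
        done = len(latencies)
    print("issued %d requests, %d completed" % (issued, done))

If the server can't keep up, the number of requests in flight simply keeps growing, which is roughly what a real overload looks like. ab never does this; it keeps a fixed pool of however many simultaneous connections you asked for, and only starts a new request when an old one finishes.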
The question I find myself mulling over is whether this makes any practical difference most of the time. In a sense I think ab represents a worst case, but only up to a point, and working out where that point is seems a bit complicated. ab does at least give you a requests-per-second figure, so if you're using it, the best approach is probably to try a fairly large number of simultaneous connections and see whether the requests/second number still looks good.
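Concretely, that means running ab at a series of concurrency levels and comparing the requests-per-second line it reports. Here's a rough sketch of how you might automate that in Python; the URL and the particular concurrency levels are just placeholder assumptions.

    import re
    import subprocess

    URL = "http://localhost/"   # hypothetical server under test
    REQUESTS = 10000            # total requests per run (ab's -n option)

    for concurrency in (10, 50, 100, 200, 500):
        result = subprocess.run(["ab", "-n", str(REQUESTS), "-c", str(concurrency), URL],
                                capture_output=True, text=True)
        # ab reports a line like "Requests per second:    1234.56 [#/sec] (mean)"
        m = re.search(r"Requests per second:\s+([\d.]+)", result.stdout)
        print("-c %4d: %s requests/second" % (concurrency, m.group(1) if m else "??"))

If the requests/second figure stays more or less flat as the concurrency goes up (even though each individual request gets slower), that figure is probably close to your real capacity; if it starts falling, you've found the point where piling on more simultaneous requests actively hurts.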
The one thing such tools clearly can't do is tell you what happens when requests start arriving faster than you can process them and you go into an overload situation. Web servers can do a lot of different things under overload: they can go into a death spiral, dragging the entire machine down with them; they can start refusing connections; they can try to show visitors an error message. It's not necessarily predictable in advance, and what your web server actually does will determine how gracefully you handle sudden load surges.