== What limits the number of concurrent connections to a server

Suppose that you have a socket-based server/service of some sort and you would like to know something about its load limits. An easy but artificial limit is how many true concurrent connections your server can support before would-be clients (whatever they are) start getting connection errors.

The obvious but wrong answer is 'the number of worker processes (or threads) that you have'. This is what I automatically believed for years (with the result that I generally set very high limits on worker processes for things that I configured). In fact it turns out that there are two different answers for two different situations.

If the initial arrival of all of these concurrent connections is distributed over enough time that your worker processes have a little bit of time to run code, in particular if they have enough time to get around to running _accept()_, the limit on the number of concurrent connections is the number of workers plus your socket's _listen(2)_ backlog, [[whatever that is in reality ListenBacklogMeaning]]. ~~You don't need a lot of workers in order to 'handle' a lot of concurrent connections; you just need a big enough _listen(2)_ backlog~~. If you're running into this, don't configure more workers; just increase the listen backlog. The time to configure more workers is if there's more work that can be done in parallel, ie if your CPU or disks or whatever are not already saturated. (This doesn't apply to situations where the worker processes are basically just ways to wait on slow external events such as DNS lookups.)

If you really have N clients connecting to you concurrently at the exact same moment, the real safe limit is only the _listen(2)_ backlog. This is because if all of the clients connect fast enough, they will overwhelm the kernel-level backlog before your code gets a chance to _accept()_ some of them and open up more backlog room. It follows that if you really care about this, you should configure your _listen(2)_ backlog as high as possible. (There's a small sketch of this limit in action at the end of this entry, after the sidebar.)

Of course this doesn't mean that you can actually service all of those concurrent connections very fast, or even at all. If your worker processes are slow enough, the clients may time out on you before a worker actually _accept()_s their connection. However, this will be a user-level protocol timeout; the clients should not normally experience _connect()_ timeouts, because the kernel will have completed their connection well before you actually call _accept()_.

In my view it follows that your software should default to using very large _listen()_ backlogs unless you have a strong reason to do otherwise. My previous habit of picking a small random number out of a hat was, in retrospect, a bad mistake (and leaves me with a bunch of code to slowly audit and fix up).

(Since I've been making this mistake for years, this is clearly something I need to write down so I grind it into my head once and for all.)

=== Sidebar: why concurrent connections is artificial

I call concurrent connections an artificial limit because in the real world you generally don't have a pool of N clients repeatedly connecting to you over and over, but instead a constant flood of new clients connecting (and sometimes reconnecting) at what you can simplify down to a fixed N-per-second arrival rate. 100 new connections every second is not the same thing at all as 100 concurrent connections; the former is much more demanding.
How many arrivals per second you can handle is fundamentally set by how fast (in parallel and in aggregate) you can service clients. If you can service 200 connections per second, you can stand up to an arrival rate of 100 per second; if you can handle only 50 connections a second, no feasible concurrent connections limit will let you handle a sustained surge of 100 connections per second. (In this situation your connection backlog builds up at 50 connections a second. In ten seconds you have a 500-connection backlog, assuming that none of them time out. In a minute your backlog is up to 3,000 connections, assuming both that your backlog can go this high and that the clients haven't started timing out. Both assumptions are probably false in practice.)
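As a back-of-the-envelope check on the sidebar's arithmetic, here's a tiny, purely illustrative Python sketch (none of it comes from real server code). It models a sustained overload with the numbers above, 100 connections arriving per second against a service rate of 50 per second, under the optimistic assumptions that nothing times out and nothing caps the backlog.

    # Toy model of a sustained overload: connections arrive faster than
    # they can be serviced, so the pending backlog grows without bound.
    ARRIVALS_PER_SEC = 100   # the arrival rate from the sidebar
    SERVICED_PER_SEC = 50    # what we can actually handle

    def backlog_after(seconds):
        # Assumes no client ever times out and nothing caps the backlog;
        # as noted above, both assumptions are probably false in practice.
        return (ARRIVALS_PER_SEC - SERVICED_PER_SEC) * seconds

    for t in (10, 60):
        print(t, "seconds in:", backlog_after(t), "connections waiting")

This prints 500 for ten seconds and 3,000 for a minute, matching the numbers above; the point is that no listen backlog, however large, rescues you from a service rate that is below the arrival rate.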
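Going back to the main part of this entry, here is a minimal, purely illustrative Python sketch of the 'workers plus _listen(2)_ backlog' limit. The server below never calls _accept()_, which stands in for workers that are all busy, so the only thing soaking up connections is the kernel's listen backlog. Exactly how many _connect()_s succeed before things stall is kernel dependent (Linux usually allows slightly more than the nominal backlog), and on Linux the number you pass to _listen()_ is silently capped by the net.core.somaxconn sysctl, which is part of why asking for a very large backlog is harmless.

    import socket

    BACKLOG = 5    # deliberately tiny so the demo hits the limit quickly

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(BACKLOG)
    # We never call srv.accept(); this stands in for a server whose
    # workers are all busy and haven't gotten around to accept() yet.

    clients = []
    try:
        while True:
            # The kernel completes each connection and parks it in the
            # accept queue until the backlog fills up, at which point new
            # connection attempts hang and eventually time out here.
            clients.append(socket.create_connection(srv.getsockname(),
                                                    timeout=2))
    except OSError as exc:
        print("connect() stopped working after", len(clients),
              "connections:", exc)

If you run this, the count it prints will be a bit over BACKLOG rather than zero, even though nothing is accepting; the kernel is completing connections on our behalf, which is also why real clients don't normally see _connect()_ timeouts just because your workers are slow to get around to _accept()_.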