The uncertain question of how much RAM our servers need

November 29, 2022

We're at the tail end of upgrading a lot of our servers to Ubuntu 22.04, where 'upgrade' means 'build a new version on different hardware and swap it into place' (which has sped up our server turnover a bit). Since we're building up new hardware, one of the questions has been whether we should put more memory in any of these servers or whether the current baseline of 8 GBytes is enough.

Even with a metrics system, this isn't a straightforward question to answer, and in fact we haven't tried to do any sort of systematic assessment. A lot of the time we've shoved extra memory into servers as a precaution more than anything else; at our scale, it's not particularly expensive to overshoot on a few servers, while if we undershoot we'll have to take the server down to add more RAM. The initial deployment is the one time when the choice of RAM amount is basically free.

(At least on Linux, there are various memory statistics that will give you some idea of usage at the user level, but they can be hard to interpret. More generally, they don't tell you how much memory you may want under unusual or extraordinary conditions. For some servers you may care much more about having them stay up under high load than about minimizing RAM under normal load.)
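As a concrete illustration, here's a minimal sketch of pulling a few of those statistics out of /proc/meminfo. MemAvailable is the kernel's own estimate of what could be given to new workloads without swapping, and is usually more honest than MemFree alone, but even it says nothing about what you'd want under unusual load:

    #!/usr/bin/env python3
    # Minimal sketch: read a few fields from /proc/meminfo for a rough
    # user-level picture of memory use. Values in /proc/meminfo are in kB.
    def meminfo():
        fields = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                fields[key] = int(value.split()[0])  # drop the ' kB' suffix
        return fields

    m = meminfo()
    # MemAvailable is the kernel's estimate of memory available to new
    # workloads without swapping; MemFree alone understates this badly
    # because it ignores reclaimable page cache.
    print(f"total: {m['MemTotal'] // 1024} MiB")
    print(f"available (kernel estimate): {m['MemAvailable'] // 1024} MiB")
    print(f"page cache + buffers: {(m['Cached'] + m['Buffers']) // 1024} MiB")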

On the one hand, in their previous incarnations many of these servers spent a lot of time with less than 8 GBytes of RAM; our past generations of servers had much lower baseline amounts of memory. On the other hand, new versions of software seem to keep growing their memory usage, and moving to Ubuntu 22.04 means new versions of everything. Plus, a lack of memory may have been hampering performance in ways that weren't entirely obvious, for example by forcing too little disk cache. On the third hand, there's a part of me that still feels that 8 GBytes of RAM is a huge amount that normal servers shouldn't need.

(One reason I'm thinking of this now is that I'm about to deploy a 22.04 version of our main web server, which will now only serve plain pages (hopefully with the more efficient Apache event MPM instead of the prefork one) and run a few user CGIs every now and then. Does this need more than the baseline 8 GBytes of RAM? Probably not, but then we could have a crush of CGI usage one day, if someone writes a memory-heavy CGI and it gets linked to from somewhere popular.)
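To put a rough shape on that worry, here's a back of the envelope sketch; every number in it is a made-up assumption for illustration (a hypothetical MaxRequestWorkers setting and a guessed per-CGI peak RSS), not a measurement of our server:

    # Back of the envelope worst case for a crush of CGI requests.
    # All numbers are assumptions for illustration, not measurements.
    max_workers = 150   # hypothetical Apache MaxRequestWorkers setting
    cgi_rss_mb = 40     # assumed peak RSS of one memory-heavy CGI process
    base_mb = 1024      # assumed floor for the OS, Apache, and so on

    worst_case_mb = base_mb + max_workers * cgi_rss_mb
    print(f"worst case: {worst_case_mb / 1024:.1f} GiB")  # ~6.9 GiB here

With these particular guesses a fully saturated worker pool still fits under 8 GBytes, but the margin disappears quickly as the per-CGI number grows.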

PS: My impression is that people who use containers probably have a much better handle on their workload's memory usage than we do, because I believe that modern container systems want you to configure memory limits reasonably accurately for each container. Presumably people have developed ways of testing their containers to determine their memory usage under load and so on. For us this is too much work, especially since we only have a few reasonable options for RAM sizes (in our current generation, basically 8 GB, 16 GB, and 32 GB, with 64 GB a possibility if we really need it).
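If I understand it right, one way such testing can work is to run the workload and then read the peak memory figure that cgroup v2 tracks for its cgroup. Here's a minimal sketch, assuming cgroup v2 and a kernel recent enough to expose memory.peak (Linux 5.19 or later); the cgroup path below is a made-up example:

    # Minimal sketch: read a container's current and peak memory use from
    # cgroup v2. Assumes cgroup v2 and a kernel that exposes memory.peak
    # (Linux 5.19+); the cgroup path is a hypothetical example.
    cgroup = "/sys/fs/cgroup/system.slice/docker-abc123.scope"

    def read_mb(name):
        # These files hold a single byte count as plain text.
        with open(f"{cgroup}/{name}") as f:
            return int(f.read()) // (1024 * 1024)

    print(f"current: {read_mb('memory.current')} MiB")
    print(f"peak:    {read_mb('memory.peak')} MiB")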


Comments on this page:

My benchmark is 1GB per gigahertz per core, except for database servers where I basically max out the chassis. Thus a modern 4GHz 8-core/16-thread server would get at least 32GB to keep CPU and RAM balanced. Perhaps you could do with fewer servers holding more RAM and run at higher CPU utilization.

By Barry at 2022-12-01 20:24:59:

I had trouble getting past "8 GBytes of RAM is a huge amount". Some phones have that!


