
2023-11-14

Amanda and deciding between server and client compression

We use Amanda for our backups; Amanda delegates the actual creation of backup blobs (tar archives or what have you) and their restoration to other programs like tar (although it can be clever about dealing with them). One of the things that Amanda can do with these blobs is compress them. This compression can be done with different compressors, and it can be performed either on the Amanda backup server or on the client that is being backed up (provided that you have the necessary programs on each of them). We back up most of our filesystems uncompressed, but we have a single big filesystem that we compress; it's almost all text, so it compresses very well (probably especially the bits of email that are encoded in base64).
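
(For illustration, and not our actual configuration: in Amanda this choice is made per dumptype in amanda.conf, with a 'compress' setting that names both the side and the level. A minimal sketch, with made-up dumptype names:)

    define dumptype comp-on-server {
        program "GNUTAR"
        compress server fast
    }
    define dumptype comp-on-client {
        program "GNUTAR"
        compress client best
        # a custom compressor can also be named, for example:
        #   compress client custom
        #   client_custom_compress "/usr/bin/zstd"
    }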

When we started compressing the backups of this filesystem, we did it on the Amanda server for an assortment of reasons (including that the filesystem then lived on one of our shared fileservers, which at the time we started this was actually one of our first-generation Solaris 10 fileservers). Recently we switched Amanda's compression to being done on the client instead, and doing so has subtly improved our backup system, due to some of the tradeoffs involved. Specifically, switching to client compression has improved how fast we can restore things from this backup, which is now limited basically by the speed of the HDDs we have our Amanda backups on.

In isolation, the absolute speed of compressing or decompressing a single thing is limited by CPU performance, generally single-core CPU performance. During backups (and also during restores), you may not be operating in isolation; there are often other processes running, and you might even be compressing several different backup streams at once on either the server or the client. Our current Amanda backup servers each have a single Intel Pentium D1508, which has a maximum turbo speed of 2.6 GHz and a base speed of 2.2 GHz. By contrast, our current /var/mail server has a Xeon E-2226G, with a single-core turbo speed of 4.7 GHz and a base speed of 3.4 GHz. So one obvious consideration in deciding whether to do compression on the server or the client is which one will be able to do it faster, given both the raw CPU speeds and how loaded the CPU may be at the time.
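
(If you want a rough feel for the raw numbers on each side, one simple approach is to time single-stream compression and decompression on both machines with whatever compressor Amanda will use. This is only a sketch; 'sample.tar' stands in for a representative chunk of your actual backup data, not a real file of ours.)

    # run this on both the Amanda server and the client
    time gzip -c sample.tar >sample.tar.gz     # single-core compression speed
    time gzip -dc sample.tar.gz >/dev/null     # single-core decompression speed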

The CPUs on our backup servers were fast enough that the time it took to back up and compress this filesystem wasn't a problem. But that's because we have a lot of freedom with how long our backups take, as long as they're fast enough (they start in the late evening and just need to finish by morning; these days they only take a few hours in total).

However, things are different during restores, especially selective restores. In an Amanda restore of only a few things from a compressed backup, Amanda will spend most of its time reading through your compressed archive, which means decompressing it in order to find and extract what you asked for. The faster it can do this, the better, and you may well want restores to finish as fast as possible (we certainly do here). By moving decompression of the backups from the Amanda server (with a slow CPU) to the Amanda client (with a fast CPU), we changed the bottleneck from how fast the Amanda server could decompress things (which was not too fast) to how fast it could read data off the HDDs.

(As a side effect we reduced the amount of data flowing over the network during both restores and backups, since we're now sending the compressed backup back and forth instead of the uncompressed one. In some environments this might be important all on its own; in our environment, both systems have 10G-T and are not network limited for backups and restores.)

Beyond speeding up restores of filesystems with compressed backups, there are some other considerations for where you might want to do compression (mostly focused on backups). First, CPU performance is only an issue if compression is the limiting factor, i.e. if you can both feed it data from the filesystem fast enough and write out its compressed output at full speed. If your bottleneck is elsewhere, even a slow CPU may be fast enough to keep up on backups. If you're compressing the backups of multiple filesystems, you probably care about how many cores (or CPUs) you have and where you have them. If you have fifty filesystems from fifty different backup clients to compress, you're probably going to want to do that on the clients, because you probably don't have that many cores on your backup server.
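
(As a sketch of where the server-side parallelism comes from, and not our settings: how many dumps Amanda runs at once, and so roughly how many compression streams the server might be running simultaneously when it does the compressing, is governed by amanda.conf parameters along these lines; the values here are purely illustrative.)

    inparallel 4    # how many dumpers run at once overall
    maxdumps 1      # how many simultaneous dumps per client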

If you have network bandwidth limits, compressing (and decompressing) on the client reduces the amount of data transferred between it and the server. If the client CPU is slow, this will also naturally further throttle the bandwidth used (although it won't change the total amount of data transferred).

As far as I know, Amanda does all compression on the fly before anything is written to the Amanda holding disk or to 'tape', so if creating the backups, sending them over the network, and compressing them (not necessarily in that order) are all fast enough, where the compression is done doesn't reduce the bandwidth you want for your holding disk. Just as with network bandwidth, slow compression (on either the client or the server) may naturally reduce bandwidth demands on the holding disk.

sysadmin/AmandaServerVsClientCompression written at 23:04:37

