The cause of our slow Amanda backups and our workaround

September 10, 2014

A while back I wrote about the challenges in diagnosing slow (Amanda) backups. It's time for a followup entry on that, because we found what I can call 'the problem' and along with it a workaround. To start with, I need to talk about how we had configured our Amanda clients.

In order to back up our fileservers in a sensible amount of time, we run multiple backups on each of them at once. We don't try to do anything sophisticated to balance the load across multiple disks, both because this is hard in our environment (especially given limited Amanda features) and because we've never seen much evidence that reducing overlaps speeds things up; instead we just have Amanda run three backups at once on each fileserver ('maxdumps 3' in the Amanda configuration). For historical reasons we were also using Amanda's 'auth bsd' style of authentication and communication.
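(For concreteness, both of these settings live in a dumptype in the server's amanda.conf. A minimal sketch of what ours looked like; the dumptype name and program choice here are illustrative, not our actual configuration:

```
define dumptype example-fs {
    program "GNUTAR"
    auth "bsd"      # the old UDP-based style; one shared amandad per client
    maxdumps 3      # run up to three dumps at once on each client
}
```

'maxdumps' can also be set globally in amanda.conf instead of per dumptype.)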

As I kind of mentioned in passing in my entry on Amanda data flows, 'auth bsd' communication causes all concurrent backup activity to flow through a single master amandad process. It turned out that this was our bottleneck. When a single amandad process was sending more than one filesystem backup back to the Amanda server at a time, things slowed down drastically and we experienced our problem. When an amandad process was only handling a single backup, things went fine.

We tested and demonstrated this in two ways. The first was that we dropped one fileserver down to one dump at a time, after which it ran fine. The more convincing test was to use SIGSTOP and SIGCONT to pause and then resume backups on the fly on a server running multiple backups at once. This demonstrated that network bandwidth usage jumped drastically when we paused two of the three backups and tanked almost immediately when we allowed more than one to run at once. It was very dramatic. Further work with a DTrace script provided convincing evidence that it was the amandad process itself that was the locus of the problem and it wasn't that, eg, tar reads slowed down drastically if more than one tar was running at once.
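(The pause/resume trick needs nothing more than standard signals, which is what makes it such a convenient live experiment. Here's a minimal sketch of the mechanism using a throwaway sleep process instead of one of the actual backup processes; in our tests the targets were the running dumps' processes, found with ps or pgrep:

```shell
#!/bin/sh
# Pause and resume a process with SIGSTOP and SIGCONT.
# A throwaway 'sleep' stands in for a real backup process here.
sleep 60 &
pid=$!

kill -STOP "$pid"       # pause it; SIGSTOP cannot be caught or ignored
sleep 1
echo "while paused: $(ps -o state= -p "$pid")"   # 'T' means stopped

kill -CONT "$pid"       # resume it where it left off
sleep 1
echo "after resume: $(ps -o state= -p "$pid")"

kill "$pid"             # clean up
```

Because SIGSTOP is uncatchable, the paused process does no work and no IO at all until SIGCONT arrives, so any change in network throughput while it's stopped is attributable to the remaining running backups.)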

Our workaround was to switch to Amanda's 'auth bsdtcp' style of communication. Although I initially misunderstood what it does, it turns out that this causes each concurrent backup to use a separate amandad process and this made everything work fine for us; performance is now up to the level where we're saturating the backup server disks instead of the network. Well, mostly. It turns out that our first-generation ZFS fileservers probably also have the slow backup problem. Unfortunately they're running a much older Amanda version and I'm not sure we'll try to switch them to 'auth bsdtcp' since they're on the way out anyways.
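(The server-side part of the switch is a one-word change in the dumptype; a sketch, again with an illustrative dumptype name:

```
define dumptype example-fs {
    program "GNUTAR"
    auth "bsdtcp"   # TCP-based; a separate amandad per connection
    maxdumps 3
}
```

The client side also has to change to match, with amandad launched from inetd or equivalent as a TCP service with bsdtcp authentication; the details depend on your Amanda version and how you start amandad.)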

I call this a workaround instead of a solution because in theory a single central amandad process handling all backup streams shouldn't be a problem. It clearly is one in our environment for some reason, so it would be better to understand why and whether it can be fixed.

(As it happens I have a theory for why this is happening, but it's long enough and technical enough that it needs another entry. The short version is that I think the amandad code is doing something wrong with its socket handling.)
