2014-09-10
The cause of our slow Amanda backups and our workaround
A while back I wrote about the challenges in diagnosing slow (Amanda) backups. It's time for a followup entry on that, because we found what I'll call 'the problem' and, along with it, a workaround. To start with, I need to talk about how we had configured our Amanda clients.
In order to back up our fileservers in a sensible amount of time, we run multiple backups on each of them at once. We don't really try to do anything sophisticated to balance the load across multiple disks, both because this is hard in our environment (especially given limited Amanda features) and because we've never seen much evidence that reducing overlaps was useful in speeding things up; instead we just have Amanda run three backups at once on each fileserver ('maxdumps 3' in the Amanda configuration). For historical reasons we were also using Amanda's 'auth bsd' style of authentication and communication.
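For concreteness, here's a rough sketch of what the relevant pieces looked like in our amanda.conf (the dumptype name is made up for illustration; 'maxdumps' and 'auth' are the real options):

    define dumptype example-tar {   # hypothetical dumptype name
        program "GNUTAR"
        auth "bsd"        # what we were using, for historical reasons
        maxdumps 3        # run three backups at once on each client
        # (other options omitted)
    }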
As I kind of mentioned in passing in my entry on Amanda data flows, 'auth bsd' communication causes all concurrent backup activity to flow through a single master amandad process. It turned out that this was our bottleneck. When we had a single amandad process handling sending all backups back to the Amanda server and it was running more than one filesystem backup at a time, things slowed down drastically and we experienced our problem. When an amandad process was only handling a single backup, things went fine.
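To make the shape of this concrete, here's a deliberately simplified Python sketch (this is not Amanda's actual code) of one process forwarding several backup streams over a single select() loop, the way a lone amandad has to when it's handling multiple dumps at once. The important property of this structure is that the streams share fate; anything that blocks or stalls on one of them stalls all of them:

    # Simplified illustration, not Amanda's code: one process
    # multiplexing several backup streams back to the server.
    import os, select

    def forward_streams(streams):
        # streams: list of (source_fd, dest_socket) pairs, one per dump
        while streams:
            ready, _, _ = select.select([s for s, _ in streams], [], [])
            for src, dst in list(streams):
                if src not in ready:
                    continue
                data = os.read(src, 32 * 1024)
                if not data:
                    streams.remove((src, dst))   # this dump is finished
                    continue
                # sendall() can block on a slow or full socket; while
                # it does, every other stream here is stuck waiting.
                dst.sendall(data)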
We tested and demonstrated this in two ways. The first was that we dropped one fileserver down to one dump at a time, after which it ran fine. The more convincing test was to use SIGSTOP and SIGCONT to pause and then resume backups on the fly on a server running multiple backups at once. This demonstrated that network bandwidth usage jumped drastically when we paused two out of the three backups and tanked almost immediately when we allowed more than one to run at once. It was very dramatic.
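This sort of pausing is easy to do by hand. Here's a minimal Python sketch of the mechanism (run it with the PIDs of some of the dump processes on the client):

    # Pause the given processes, wait a minute while you watch network
    # bandwidth, then let them continue where they left off.
    import os, signal, sys, time

    pids = [int(a) for a in sys.argv[1:]]
    for pid in pids:
        os.kill(pid, signal.SIGSTOP)    # freeze the process
    time.sleep(60)
    for pid in pids:
        os.kill(pid, signal.SIGCONT)    # resume it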
Further work with a DTrace script provided convincing evidence that it was the amandad process itself that was the locus of the problem and it wasn't that, eg, tar reads slowed down drastically if more than one tar was running at once.
Our workaround was to switch to Amanda's 'auth bsdtcp' style of communication. Although I initially misunderstood what it does, it turns out that this causes each concurrent backup to use a separate amandad process, and this made everything work fine for us; performance is now up to the level where we're saturating the backup server disks instead of the network.
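The switch itself is a small configuration change on both ends: the server side is the 'auth' setting in the dumptype, and on the client amandad has to be started with a matching '-auth=bsdtcp' argument. As a hedged sketch, a typical xinetd stanza for this looks something like the following (the paths and the user vary from system to system):

    service amanda
    {
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = backup
        server      = /usr/libexec/amanda/amandad
        server_args = -auth=bsdtcp amdump
    }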
Well, mostly. It turns out that our first-generation ZFS fileservers probably also have the slow backup problem. Unfortunately they're running a much older Amanda version and I'm not sure we'll try to switch them to 'auth bsdtcp' since they're on the way out anyways.
I call this a workaround instead of a solution because in theory a single central amandad process handling all backup streams shouldn't be a problem. It clearly is in our environment for some reason, so it would be better to understand why and whether it can be fixed.
(As it happens I have a theory for why this is happening, but it's long enough and technical enough that it needs another entry. The short version is that I think the amandad code is doing something wrong with its socket handling.)
Does init actually need to do daemon supervision?
Sure, init has historically done some sort of daemon supervision (or at least starting and stopping them) and I listed it as one of init's jobs. But does it actually need to do this? This is really two questions and thus two answers.
Init itself, PID 1, clearly does not have to be the process that does daemon supervision. We have a clear proof of this in Solaris, where SMF moves daemon supervision to a separate set of processes. SMF is not a good init system but its failures are failures of execution, not of its fundamental design; it does work, it's just annoying.
Whether the init system as a whole needs to do daemon supervision is a much more philosophical question and thus harder to answer. However, I believe that on the whole the init system is the right place for this. The pragmatics of why are simple: the init system is responsible for booting and shutting down the system, and doing this almost always needs at least some daemons to be started or stopped in addition to more scripted steps like filesystem checks. This means that at least part of daemon supervision (for what I called infrastructure daemons when I talked about init's jobs) is quite tightly entwined with booting. And since your init system must handle infrastructure daemons it might as well handle all daemons.
(In theory you could define an API for communication between the init system and a separate daemon supervision system in order to handle this. In practice, until this API is generally adopted your init system is tightly coupled with whatever starts and stops infrastructure daemons for it, ie you won't be able to swap one infrastructure daemon supervision system for another and whichever one your init system needs might as well be considered part of the init system itself.)
I feel that the pragmatic argument is also the core of a more philosophical one. There is no clear break between infrastructure daemons and service daemons (and in fact what category a daemon falls into can vary from system to system), which makes it artificial to have two separate daemon supervision systems. If you want to split the job of an init system apart at all, the 'right' split is between the minimal job of PID 1 and the twin jobs of booting the system and supervising daemons.
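To illustrate what I mean by the minimal job of PID 1, here's a toy Python sketch (real inits are written in C, and '/etc/rc' here is just a stand-in for 'whatever boots the system', not a claim about any particular Unix). All it does is start the boot process and then collect orphaned children forever:

    # Toy sketch of PID 1's irreducible duties.
    import os, time

    if os.fork() == 0:
        # Child: hand off to whatever boots the system.
        os.execv("/etc/rc", ["/etc/rc"])

    while True:                  # parent: we are (notionally) PID 1
        try:
            os.wait()            # reap any child that has exited
        except ChildProcessError:
            time.sleep(1)        # no children right now; idle briefly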
(This whole thing was inspired by an earlier entry being linked to by this slashdot comment, and then a reply to said comment arguing that the role of init is separate from a daemon manager. As you can see, I don't believe that it is on Unix in practice.)
Sidebar: PID 1 and booting the system
This deserves its own entry to follow all of the threads, but the simple version for now: in a Unix system with (only) standard APIs, the only way to guarantee that a process winds up as PID 1 is for the kernel to start it as such. The easiest way to arrange for this is for said process to be the first process started so that PID 1 is the first unused PID. This naturally leads into PID 1 being responsible for booting the system, because if it wasn't the kernel would have to also start another process to do this (and there would have to be a decision about what the process is called and so on).
This story is increasingly false in modern Unix environments, which do various amounts of magic setup before starting the final real init, but there you have it.