An awkward confession and what we should do about it
I have an awkward confession.
At this point, we have been running
Ubuntu machines for at least nine years or so, starting with Ubuntu
6.06 and moving forward from there. In all of that time, one of the
things I haven't done (and I don't think we've done generally) is
really dive in and learn about Debian packaging and package management.
Oh sure, we can fiddle around with
apt-get and a number of other
superficial things, we've built modified preseeded install environments,
and I've learned enough to modify existing Debian packages and
rebuild them. But that's all. That leaves vast oceans of both dpkg
and APT usage that we have barely touched, plus all of the additional
tools and scripts around the Debian package ecosystem (some of which
have been mentioned here by commentators).
I don't have a good explanation for why this has happened, and in particular why I haven't dug into Debian packaging (because diving into things is one of the things that I do). I can put together theories (including me not being entirely fond of Ubuntu even from the start), but it's all just speculation and, if I'm honest, post-facto excuses and rationalization.
But whatever the reason, it is definitely embarrassing and, in the long run, harmful.
There are clearly things in the whole Debian package ecology that
would improve our lives if we knew them (for example, I only recently
learned about apt-get upgrade's --with-new-pkgs option). Yet what I can
only describe as my stubborn refusal to dig into Debian packaging
is keeping me from this stuff. I need to fix that. I don't necessarily
need to know all of the advanced stuff (I may never create a Debian
package from scratch), but I should at least understand the big
picture and the details that matter to us.
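(For the record, what --with-new-pkgs does: by default 'apt-get upgrade' refuses to install packages that aren't already present, so any upgrade that grows a new dependency gets held back. The option lifts that restriction, and there's an equivalent APT configuration setting if you want it to be the permanent behavior. The file name here is my own arbitrary choice:

```
// Equivalent of running 'apt-get upgrade --with-new-pkgs';
// put this in a file under /etc/apt/apt.conf.d/ (any name works)
APT::Get::Upgrade-Allow-New "true";
```

)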
(It should not be the case that I still know much more about RPM and yum/dnf than I do about the Debian equivalents.)
My goal is to be not necessarily an expert but at least honestly knowledgeable, both about the practical nuts-and-bolts operation of the system and about how everything works conceptually (including such perennial hot topics for me as the principles of the debconf system).
With all of that said, I have to admit that as yet I haven't figured out where I should start reading. Debian has a lot of documentation, but in the past my experience has been that much of it assumes a certain amount of initial context that I don't have yet. Possibly I should start by just reading through all of the APT and dpkg related manpages, trying to sort everything out, and keeping notes about things that I don't understand. Then I can seek further information.
(As is traditional on Wandering Thoughts, I'm writing this partly to spur myself into action.)
Why I don't think upgrading servers would save us much power
The other consideration for upgrading low-utilization servers is power (and thus cooling) efficiency. [...]
Although I haven't gone out and metered this with our current server fleet, my impression is that we wouldn't save very much here, if anything. Of course my impression may be wrong, so it's time to go exploring.
Almost all of our potentially upgradeable servers are basic entry level 1U servers (all except the one or two remaining Dell 2950s). My strong impression is that, like 1U server prices and hard disk prices, 1U server power usage has basically stayed flat for quite a while now. What's changed instead is how much computation you get for your N watts.
(To a large extent this isn't too surprising, since a lot of the power usage is driven by the CPUs and at least Intel has been on fixed power budgets for their CPUs for some time. Of course, as time goes on you can get more performance from the lower powered CPUs instead of needing the high-power ones, assuming they're offered in servers you're interested in.)
Am I right about this? Well, the evidence is mixed. Our primary server generations today are SunFire X2100s and X2200s versus Dell R210 IIs and Dell R310 IIs. The SunFires are so old that it's hard to find power usage information online, but various sources seem to suggest that they idle at around 88 watts (and back in 2006 I measured one booting as using 130 watts). A Dell R210 II is claimed to idle at a measured 43.7 watts, which would be about half the power of a SunFire. On the other hand, an R310 seems to idle at almost 80 watts (per here, but that may not match our configuration), very close to the SunFire power at idle.
(I care about power at idle because most of our servers are idling most of the time.)
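As a back-of-the-envelope check on how much a one-for-one swap is worth, here is the arithmetic using the idle figures above (88 watts for a SunFire, 43.7 watts for an R210 II; both are the rough numbers from the text, not measurements of our machines):

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_kwh(watts):
    """kWh consumed in a year at a constant power draw."""
    return watts * HOURS_PER_YEAR / 1000

sunfire_idle = 88.0   # watts, rough figure from online sources
r210_ii_idle = 43.7   # watts, claimed measured idle

saving = annual_kwh(sunfire_idle) - annual_kwh(r210_ii_idle)
print(round(saving))  # roughly 388 kWh per year per replaced server
```

At common electricity prices that works out to something on the order of a few tens of dollars a year per server, which is consistent with my impression that a straight replacement program doesn't save us much.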
All of this assumes a one for one server replacement program, where we change an old server into a new, perhaps more power efficient server. Of course if you really want to save power, you need to consolidate servers. You can do this through virtualization or containers, or you can just start co-locating multiple services on one server; in either case, you'll win by converting some number of idle servers (all with their individual baseline idle power draws) into one almost idle server with a power draw hopefully only somewhat higher than a single old server.
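The consolidation arithmetic is what makes this genuinely attractive. As a sketch with illustrative numbers (five old servers at the ~88 watt SunFire idle figure, and an assumed 110 watt idle for the one consolidated host; that consolidated figure is purely my guess, not a measurement):

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_kwh(watts):
    """kWh consumed in a year at a constant power draw."""
    return watts * HOURS_PER_YEAR / 1000

old_servers = 5
old_idle_w = 88.0            # per-server idle, from the SunFire figure
consolidated_idle_w = 110.0  # hypothetical draw for one busier host

before_w = old_servers * old_idle_w   # 440 W of mostly-idle servers
after_w = consolidated_idle_w         # 110 W for the combined host

print(round(annual_kwh(before_w - after_w)))  # ~2891 kWh per year saved
```

Even with the consolidated host drawing noticeably more than any single old server, eliminating four baseline idle draws dwarfs what a one-for-one replacement gets you.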
Could we do this? Yes, to some extent, but it would take a sea change in our opinions about service isolation and management. Right now we put different services on different physical servers both to isolate failures and to make our lives easier by decoupling things like server OS upgrade cycles for different services. Re-combining services (regardless of the means) would require changing this, although things like containers could somewhat deal with the server OS upgrade cycle issue.