Wandering Thoughts archives

2012-02-05

My view on what will kill 'traditional' system administration

Phil Hollenback recently wrote DevOps Is Here Whether You Like It Or Not, in which he argues that traditional system administration is dying. While I sort of agree with him about the death, I don't think it's necessarily happening for the reasons that Phil points to.

Fundamentally, there has always been a divide between small systems and large systems. Large systems have had to automate, and when that automation involved applications, it involved the developers; small systems did not have to automate, and often do not, because the costs of automation are larger than the costs of doing everything by hand. Moving to virtualization doesn't change that (at least for my sort of system administration, which has always had very little to do with shoving actual physical hardware around); if you have only a few virtualized servers and services, you can perfectly well keep running them by hand, and it will probably be easier than learning Chef, Puppet, or CFEngine and then setting up an installation.

(If you're future-proofing your career you want to learn Chef or Puppet anyway, so go ahead and use them even in a small environment.)

There are two things that I think will change that, and Phil points to one of them. Heroku is not just a virtualization provider; they are what I'll call a deployment provider, where if you write your application to their API you can simply push it to them without having to configure servers directly. We've seen deployment providers before (eg Google App Engine), but what distinguishes Heroku is how unconstrained and garden-variety your API choices are. You don't need to write to special APIs to build a Heroku-ready application; in many cases, if you build an application in a sensible way it's automatically Heroku-ready. This is very enticing to developers because (among other things) it avoids lock-in; if Heroku sucks for you, you can easily take your application elsewhere.

(This has historically not been true of other deployment providers, which makes writing things to, say, the Google App Engine API a very big decision that you have to commit to very early on.)

Deployment providers like Heroku remove traditional system administration entirely. There are no systems or services to configure, and the developers are deeply involved in deployment because a non-developer can't really take a random application and deploy it for the developers. If there is an operations group, it's one that worries about higher-level issues such as production environment performance and how to control the movement of code from development to production.

The other thing is general work to reduce the amount of knowledge you need to set up a Chef- or Puppet-based environment with certain canned configurations. Right now my impression is that we're still at the stage where someone with experience has to write the initial recipe to configure all N of your servers correctly, and you might as well call that person a sysadmin (ie, they understand Apache config files, package installation on Ubuntu, etc). However, it's quite possible that this is going to change over time to the point where we'll see programs shipped with Chef or Puppet recipes to install them in standard setups. At that point you won't need any special knowledge to go from, say, writing a Django-based application to installing it on the virtualization environment of your choice. This really will be the end of developers needing conventional sysadmins in order to get stuff done.
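(To make the 'canned recipe' idea concrete, here is a minimal, hypothetical sketch in plain Python rather than real Chef or Puppet syntax: a declarative list of what a Django application might need on an Ubuntu-style server, plus a script that makes it so. The package and service names are illustrative assumptions, not anything actually shipped today.)

    # Hypothetical 'canned recipe' sketch for a Django application, written in
    # plain Python instead of Chef or Puppet. Package and service names are
    # assumptions for illustration only.
    import subprocess

    PACKAGES = ["apache2", "libapache2-mod-wsgi", "python-django"]
    SERVICES = ["apache2"]

    def ensure_packages(packages):
        # 'apt-get install' skips packages that are already installed, so
        # rerunning the recipe is harmless.
        subprocess.check_call(["apt-get", "install", "-y"] + packages)

    def ensure_services(services):
        # Restart each service so it picks up newly installed modules and
        # configuration.
        for svc in services:
            subprocess.check_call(["service", svc, "restart"])

    if __name__ == "__main__":
        ensure_packages(PACKAGES)
        ensure_services(SERVICES)

(A real Chef or Puppet recipe declares resources instead of running commands directly, but the point is the same: the knowledge of which packages and services an application needs gets captured once and shipped along with the application.)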

The general issue of the amount of hardware in a small business (and virtualizing that hardware) ties into a larger question of how much hardware the business of the future is going to need or want, but that's a different entry. I will just observe that the number of servers you need for a given amount of functionality has been steadily shrinking for years.

Sidebar: what virtualization does change now

I think that plain virtualization does mark a sea change today in one way: it moves sysadmins away from a model of upgrading OSes to a model of recreating their customizations on top of a new version of the OS. Possibly it also moves them away from upgrading software versions in general, toward 'build a new install with new software versions from scratch, then configure'.

This is partly because the common virtualization model is 'provide base OS version X image, you customize from there' and partly because most virtualization makes it easy to build new server instances. It's much easier to start a new Ubuntu 12.04 image than it is to find a spare server to use as your 12.04 version of whatever.

(Note that virtualization may not make it any easier to replace your Ubuntu 10.04 server with a new 12.04 server; there are a host of low level practical issues that you can still run into unless you already have a sophisticated management environment built up.)

I don't think that this is a huge change for system administration, partly because this is pretty much how we've been doing things here for years. We basically never upgrade servers in place; we always build new servers from scratch. Among other things, it's much cleaner and more reproducible that way.

sysadmin/WhatWillKillSysadmin written at 23:56:02

Link: Filenames.WTF

In Filenames.WTF, Daniel Rutter first runs down the reasons why paying attention to file extensions is ridiculous, and then the reasons why it's still the best solution we have to the problem. Spoiler: it's because people have spent decades creating file formats that suck.

(Via philliph on Twitter.)

links/OnFileExtentsions written at 19:27:58

What five years of PC technology changed for me

This fall I got a new home machine, just a bit over five years after I got my previous one. It happens that I saved the invoice for my five-year-old machine, so I dug it out today to compare what five years of progress in PC technology did and didn't change for me.

First off, five years of progress got me much better prices. My recent home machine cost me only about 60% of what my old home machine did. By itself, this is pretty impressive. Apart from that, running down the major components:

  • CPU: AMD dual core versus much faster Intel quad core. The Intel CPU was cheaper but not by a substantial amount; I think the AMD was probably closer to the high end at the time. I don't know what the benchmark results are, but I got a substantial performance improvement.

  • RAM: This is perhaps the most striking change on a purely numerical level; in 2006 I got 2 GB of RAM for more than twice what 16 GB of RAM cost me in 2011. Even in 2006, 2 GB was clearly economizing (I remember debating with myself over 2 GB versus the extra money for 4 GB and deciding that 2 GB should be good enough). In 2011, 16 GB is as much as the motherboard will take with current DIMM densities.

    In short, desktop RAM has become stupid cheap.

    (One index of the change is that in 2006, the 2 GB of RAM cost more than the CPU and was the most expensive single component. In 2011, the 16 GB cost only a bit over half as much as the CPU.)

  • Motherboard: the modern era features more SATA, less IDE, more USB, and not even one external serial port. Motherboards are unexciting. Even in 2006 the motherboard had onboard sound and gigabit Ethernet. The 2011 motherboard probably has better onboard sound, but in practice this doesn't matter to me; my sound needs are modest.

    (The 2006 motherboard was a bit cheaper than the 2011 motherboard, but neither were particularly expensive or advanced ones.)

  • Hard drives changed only moderately at one level; in 2006 I got 320 GB drives for somewhat over twice what 2011's 500 GB drives cost me. In 2011, 500 GB drives are nowhere near state of the art; in 2006, 320 GB drives were not that far out of it.

    (This was before the floods in Thailand drove hard drive prices up.)

    On another level, they changed a lot. The 320 GB hard drives of 2006 were my only storage. The 500 GB drives of 2011 are only for the operating system; my data lives on a pair of 1.5 TB drives (that I had upgraded to some time ago). 500 GB is way overkill for the OS, but there's no real point in using drives that are any smaller; it's not like I'd have saved any significant amount of money.

  • Video card: an ATI X800 GT versus an ATI HD 5450 with double the memory for less than a third of the price. Tom's Hardware theoretically puts these two cards in almost the same performance category, although I'm not sure that's really true. In practice, what happened between 2006 and 2011 is that graphics cards shifted to the point where a basic passively cooled card was clearly more than good enough for what I was doing, even for driving dual displays digitally.

    (I don't yet have dual displays at home, but I do at work and my work machine uses the same card. In fact, my work machine is now a clone of my home machine, just as it was in 2006.)

  • Optical drives: in 2006 a DVD burner cost about four times what it did in 2011, and I thought I would listen to CDs enough to justify having a separate CD/DVD reader (rather than put wear and tear on an expensive burner).

    (I was wrong; my CD listening had already dropped off a cliff in early 2006 and never recovered. I still kind of miss that sometimes.)

  • Power supply: in 2006 I didn't trust the power supply that came with the case to really be a good, solid one that delivered enough power, so I bought a separate one as well. In 2011 I couldn't find any reason to worry about it, so I didn't; the power supply you get with a decent quiet case these days is going to be quite good, more than you need (for a PC like the kind I build), and efficient.

In 2006, the most expensive components were the RAM, the CPU, the two hard drives together, and then the video card. In 2011, the most expensive components were the CPU, the motherboard, and the case (more or less tying with the RAM). Another way to put it is that in 2011, the video card, the DVD burner, the hard drives, and pretty much the RAM were all what I considered trivial expenses in the overall machine.

tech/FiveYearsPCChanges written at 02:28:22

