How old our servers are (as of 2016)
I was recently asked how old our servers are. This is a good question because, as noted, universities are cheap; as a result we probably run servers much longer than many people do. In fact we basically wind up running servers into the ground, and we're actually doing that to some extent right now.
The first necessary disclaimer is that my group here only handles general departmental infrastructure on the non-undergraduate side of things (this split has its roots way back in history and is another story entirely). Also, while we try to have some reasonably powerful machines for people who need compute servers, we don't have the budget to have modern, big, up to date ones. Most professors who need heavy duty computation for their research programs buy machines themselves out of grant funding, and my understanding is that they turn over such compute servers much faster than we do (sometimes they then donate the old servers to us).
Over my time here, we've effectively had three main generations of general servers. When I started in 2006, we were in the middle of a Dell PowerEdge era; we had primarily 750s, 1650s, and 2950s. Most of these are now two generations out of production, but we still have some in-production 2950s, as well as some 1950s that were passed on to us later.
(It's worth mentioning that basically all of these Dells have suffered from capacitor plague on their add-on disk controller boards. They've survived in production here only because one of my co-workers is the kind of person who can and will unsolder bad capacitors to replace them with good ones.)
Starting in 2007 and running over the next couple of years we switched to SunFire X2100s and X2200s as our servers of choice, continuing more or less through when Oracle discontinued them. A number of these machines are still in production and not really planned for replacement soon (and we have a few totally unused ones for reasons beyond the scope of this entry). Since they're all at least five or so years old we kind of would like to turn them over to new hardware, but we don't feel any big rush right now.
(Many of these remaining SunFire based servers will probably get rolled over to new hardware once Ubuntu 16.04 comes out and we start rebuilding our remaining 12.04 based machines on 16.04. Generally OS upgrades are where we change hardware, since we're (re)building the machine anyways, and we like to not deploy a newly rebuilt server on ancient hardware that has already been running for N years.)
Our most recent server generation is Dell R210 IIs and R310 IIs (which, yes, are now sufficiently outdated that they're no longer for sale). This is what we consider our current server hardware and what we're using when we upgrade existing servers or deploy new ones. We still have a reasonable supply of unused and spare ones for new deployments, so we're not looking to figure out a fourth server generation yet; however, we'll probably need to in no more than a couple of years.
(We buy servers in batches when we have the money, instead of buying them one or two at a time on an 'as needed' basis.)
In terms of general lifetime, I'd say that we expect to get at least five years out of our servers and often we wind up getting more, sometimes substantially more (some of our 2950s have probably been in production for ten years). Server hardware seems to have been pretty reliable for us so this stuff mostly keeps running and running, and our CPU needs are usually relatively modest so old servers aren't a bottleneck there. Unlike with our disks, our old servers have not been suffering from increasing mortality over time; we just don't feel really happy running production servers on six or eight or ten year old hardware when we have a choice.
(Where the old servers actually tend to suffer is their RAM capacity; how much RAM we want to put in servers has been rising steadily.)
Some of our servers do need computing power, such as our departmental compute servers. There, the old servers look increasingly embarrassing, but there isn't much we can do about it until (and unless) we get the budget to buy relatively expensive new ones. In general, keeping up with compute servers is always going to be expensive, because the state of the art in CPUs, memory, and now GPUs keeps turning over relatively rapidly. In the meantime, our view is that we're at least providing some general compute resources to our professors, graduate students, and researchers; it's probably better than nothing, and people do use the servers.
PS: Our fileservers and their backends were originally SunFires in the first generation fileservers (in fact they kicked off our SunFire server era), but they're now a completely different set of hardware. I like the hardware they use a lot, but it's currently too expensive to become our generic commodity 1U server model.
Sidebar: What we do with our old servers
The short answer right now is 'put them on the shelf'. Some of them get used as scratch machines when we just want to build a test server. I'm not sure what we'll do with them in the long run, especially the SunFire machines (which have nice ILOMs that I'm going to miss). We recently passed a number of our out of service SunFires on to the department's undergraduate computing people, who needed some more servers.
Wayland and graphics card uncertainty
Yesterday I wrote about several points of general technology churn that make me reluctant to think about a new PC. But as it happens there's an additional Linux specific issue that I worry about, and that's Wayland. More exactly, it's what Wayland is likely to require from graphics cards.
I use a very old fashioned X based environment, which means that all I need is old fashioned basic X. I think I run some things that like to see OpenGL, but probably not very many of them, and there's basic OpenGL support in even basic X these days. This has left me relatively indifferent to graphics cards and graphics card support levels; even what was a low end card at the time was (and is) good enough for my desktop.
I would like to keep on using my basic X environment for the lifetime of my next machine, but the forces behind Wayland are marching on with sufficient force that I don't think I can assume that any more. People are really trying to ship Wayland based desktops within the next couple of years on major Linux distributions (in particular, on Fedora), and once that happens I suspect that my sort of X environment will only have a few more years of viable life before toolkits and major programs basically stop supporting it.
(Oh, they'll technically 'support' X. But probably no one will be actively maintaining the code and so steadily increasing numbers of pieces will break.)
At that point, switching to Wayland will be non-optional for me (even if it results in upending my environment and makes me unhappy). In turn that means I'll need a graphics system that can handle Wayland well. Wayland is a modern system, so as far as I know it really wants things like hardware compositing, hardware alpha blending, and so on. Using Wayland without OpenGL-level hardware acceleration may be possible, but it's not likely to be pleasant.
What sort of graphics card (or integrated graphics) does this call for? I have no real idea, and I'm not sure anyone knows yet what you'll want to have for a good Wayland experience. That uncertainty makes me want to postpone buying a graphics card until we know more, which will probably need some brave Linux distribution to enable Wayland by default so that a lot of people run it.
(Of course I may be overthinking this as part of mostly not wanting to replace my current machine.)