Wandering Thoughts archives

2021-08-27

What my first Linux was, and its context

Over on Twitter, there was a meme going around about your first Linux. As I sometimes do, I rose to the bait:

My first Linux was a version of Red Hat in 1999, as we refreshed some undergraduate labs from SGI Indys and X terminals to x86 machines running Linux. Why Red Hat? It's what the university book store had in stock.

We were moving from the SGI/X terminal setup to x86 Linux because in 1999 no Unix workstation vendor could give us competitive hardware any more. General x86s just crushed the cost/performance of Sun, SGI, and the rest for basic workstations. (Sun's basic x86 workstation was extra sad.)

I was using Unix well before my first Linux, as you can tell from the history of my SGI Indy (which I kept running until 2006, and might have kept running longer in different circumstances). But 1999 was when we needed to refresh the hardware in the undergraduate labs that I was then involved in, and when we went to evaluate the various hardware on offer, nothing could beat the low cost and solid performance of general x86 hardware (we wound up with branded PCs from DEC). This was my real introduction to the relentless march of inexpensive x86 machines and one of the things that informs my view that PCs can be Unix workstations.

(For reasons out of scope for this entry, we knew that you could get good results from Unix on x86 hardware, but we hadn't known that you could get them with relatively entry-level PCs with things like IDE drives.)

The university was on the Internet, so we certainly could have downloaded whatever Linux distribution was then available, gotten it onto some media, and installed it. But Red Hat was available on CD-ROM in the university book store just down the street from my cubicle, which meant that we didn't have to wrestle with downloading what was then a big file and figuring out how to write it to media, and we took the fact that the university book store carried it as a certain marker of quality. It all worked out fine, and it probably would have worked out just as well with Debian or other distributions. By 1999, Linux was a solid choice in general.

(In 1999, I only had a few GB total of storage on my SGI Indy and I'm not sure that it would have been easy to deal with a 500 MB+ ISO image. Nor can I remember if I had a CD burner on that machine at the time.)

Both my first generation of x86 PC hardware and my first Red Hat install are long gone, but parts of the install live on in my office workstation, which was first installed in 2006 as a successor and significant duplicate of that first machine and its Red Hat installation.

(I keep Linux installs much longer than I keep my hardware. More exactly, I try to never reinstall them, because reinstalling is a pain. I did it once at home, and never at work since my forced migration in 2006.)

linux/MyFirstLinux written at 22:29:30

Using our metrics system when I test systems before deployment

Years ago I wrote that I should document my test plans for our systems and their results, and I've somewhat managed to actually do that (and the documentation has been used later, for example). Recently it struck me that our metrics system has a role to play in this.

To start with, if I add my test system to our metrics system (even with a hack), it will faithfully capture all sorts of performance information for the machine over the test period. This information isn't necessarily as fine-grained as I could gather (it doesn't go down to second-by-second data), but it's far broader and more comprehensive than what I would gather by hand. If I have questions about some aspect of the system's performance when I write up the test plan results, it's quite likely that I can get answers for them on the spot by looking in Prometheus (without having to re-run tests while keeping an eye on the metric I've realized is interesting).

(As a corollary of this, looking at metrics provides an opportunity to see if anything is glaringly wrong, such as a surprisingly slow disk.)
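
To make this concrete, here is a minimal sketch of what temporarily adding a test machine to a Prometheus scrape configuration might look like. The job name, host name, and label are all made up for illustration, and this assumes the test host is running the usual node_exporter:

    # Hypothetical scrape job for machines under test; all names are illustrative.
    scrape_configs:
      - job_name: 'testing-hosts'
        static_configs:
          - targets: ['testsrv1.example.org:9100']
            labels:
              # Mark these series so they're easy to find (and prune) later.
              testing: 'yes'

Giving test hosts a distinguishing label (or at least a consistent job name) is one way to make their metrics easy to find, or to exclude, later on.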

In addition, if I'm testing a new replacement for an existing server, having metrics from both systems gives me some opportunity to compare the performance of the two systems. This comparison will always be somewhat artificial (the test system is not under real load, and I may have to do some artificial things to the production system as part of testing), but it can at least tell me about relatively obvious things, and it's easy to look at graphs and make comparisons.
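
As an illustration of what such a comparison might look like, here is a rough PromQL sketch that graphs disk read bandwidth on both the production server and its would-be replacement side by side. The instance names are invented, and this assumes node_exporter's standard disk metrics:

    # Read bandwidth on the old production server and the test replacement,
    # averaged over five minutes (the instance names are hypothetical).
    rate(node_disk_read_bytes_total{instance=~"oldsrv:9100|testsrv1:9100"}[5m])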

Our current setup keeps metrics for as long as possible (and doesn't downsample them, which I maintain is a good thing). To the extent that we can keep on doing this, having metrics from the servers when I was testing them will let us compare their performance in testing to their performance when they (or some version of them) are in production. This might turn up anomalies, and generally I'd expect it to teach us about what to look for in the next round of testing.

To get all of this, it's not enough to just add test systems to our metrics setup (although that's a necessary prerequisite). I'll also need to document things so we can find them later in the metrics system. At a minimum I'll need the name used for the test system and the dates it was in testing while being monitored. Ideally I'll also have information on the dates and times when I ran various tests, so I don't have to stare at graphs of metrics and reverse engineer what I was doing at the time. A certain amount of this is information that I should already be capturing in my notes, but I should be more systematic about recording timestamps from 'date' and so on.
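
One low-effort way to be systematic about this is to bracket each test run with 'date' and append everything to a notes file; a rough sketch, where the test command and file name are placeholders:

    # Record start and end timestamps along with the test's output.
    { echo "=== sequential read test on testsrv1"; date; ./run-read-test; date; } 2>&1 |
        tee -a ~/notes/testsrv1-testing.log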

sysadmin/TestingAndMetricsSystem written at 00:04:32

