2011-10-26
How to fail at useful cryptography: bad error messages
I'm in the process of upgrading my office workstation from Fedora 14 to Fedora 15 (since I've already piloted Fedora 15 on my home machine). I'm doing it with a yum upgrade, since that's the sane way to do it if you don't want to be without your machine for hours. And I got the following (fatal) error:
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID 069c8460: NOKEY
The GPG keys listed for the "Fedora 15 - x86_64" repository are already installed but they are not correct for this package.
Check that the correct key URLs are configured for this repository.
Now, notice some important omissions from this error message. It doesn't tell me which package is having problems. It doesn't identify the bad key that the package is signed by (at least not in any clear way; it may be key ID 069c8460 but it's hard to tell). It doesn't identify what it thinks are the GPG keys of the repository. It doesn't even really tell me what's wrong, not directly.
(See below for what I think was actually wrong.)
Here is today's pop quiz: what is even a well-intentioned person going to do when they are presented with an error message like this?
The answer is that they are going to do a web search for the error message to see if someone has already hit this and found a solution, and then they are going to shrug and turn off GPG package verification. Because for everyone except people who are very familiar with yum's insides, what this error message really translates to is 'something went wrong with the innards of yum GPG package verification, you're out of luck', and in that situation all they can do is either give up entirely or override the broken innards. And very few people are going to give up on what they wanted to do.
This is a terrible result. An unusable error message has just convinced people to entirely disable your protective cryptography, even in the face of potentially extreme danger. The reality is that useful cryptography requires useful, clear error messages. If you want people to pay attention to your cryptography when something is wrong, you must explain as clearly as possible what is wrong and give them the best tools possible for diagnosing the failure. If people can't clearly see what's wrong and how to at least investigate further, they have no real choice but to override your crypto and lose any safety you might have been providing them.
(You also want to give people narrowly scoped ways to work around the problem. For example, yum only gives me the option to turn off GPG package verification entirely; it has no visible '--skip-bad-signed' option.)
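To make the stakes concrete, the blunt workarounds that a web search will steer people towards look like this; both the per-run option and the permanent repo setting are real yum features:
# yum --nogpgcheck upgrade
or, permanently, setting gpgcheck=0 in the relevant /etc/yum.repos.d/*.repo file. Both of these turn off signature verification for everything, good packages and bad alike.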
Sidebar: what was wrong
For the benefit of anyone doing a web search on this error message, I believe that there are two possible causes for it.
In my case, I think that the error message was wrong and I had not in fact imported the Fedora 15 GPG keys (although I thought I had; I think I accidentally imported the Fedora 16 keys instead). I don't know why yum thought I had. I can't be entirely sure of my diagnosis, but key ID 069c8460 is the Fedora 15 key and the problem went away after I (re)imported it, although I also did one other thing in the meantime.
(You can tell which key ID is which on either the Fedora keys page, where the key ID is the bit after the '/' in the pub line, or by inspecting the file name in the key's URL on the Fedora yum upgrade page.)
Web searches suggest that the other case is a package labeled as being for Fedora 15 that's signed with the key of another Fedora release. Apparently this happens sometimes and yum normally forbids this, probably for good reasons.
2011-10-22
Fedora 15 versus me
To perhaps the relief of some people, I am no longer running Fedora 8 on my home machine; I've now pretty much finished migrating to Fedora 15 on my new hardware. I did the actual migration the way I'd planned; I installed Fedora 15 from scratch on the new machine's system disks, transplanted the old machine's mirrored disks into the new machine as additional data disks, and then mounted my home directory filesystem and so on.
(It's a bit annoying that Fedora 16 is so close to release, since I'm just going to get to upgrade my machine in a month or so. I did consider installing a Fedora 16 Beta instead of Fedora 15, but in the end I decided that I didn't want to take the risk of problems during a Beta to release upgrade. And, you know, it's a beta.)
That 'and so on' handwaves a lot. The tedious and time consuming portions of this migration were reconfiguring the more or less stock Fedora 15 install to work in my peculiar ways and updating my usual environment to work on Fedora 15. The latter was vastly helped by the fact that Fedora 15 is quite close to Fedora 14 (as far as my environment is concerned), which I'm running at work, so I could simply copy a lot of stuff from my work setup.
In the process of doing all of this I have some random notes:
- if pressed, I can actually work in Gnome 3 and get things done. It's merely annoying (but not sufficiently annoying to get me to set up XFCE for strictly temporary usage).
- Gnome 3 really doesn't expect you to have more than a small handful
of local users. The gdm login widget really doesn't like it (and
in Gnome 3.0, excluding users from being shown in the login window
is apparently an unimplemented feature).
- I gave SELinux a try, I really did, but in the end I once again turned
it off at about the time I couldn't start X as myself because
something in my home directory had the wrong SELinux permissions
(or no SELinux permissions, since I've run Fedora 8 for years
with it turned off). I'm sure that I could have fixed all of the
issues with enough work, but it just wasn't worth it and I was
in a hurry.
- since Fedora 15 has migrated to Gnome 3, the standard Gnome 2
volume control applet has vanished. I didn't find an alternate
volume control applet that I liked; in the end I just copied
the Fedora 14 binary from my work machine and fortunately it worked.
(This is one of the drawbacks of Gnome 3; alternate environments can no longer use Gnome 3 widgets, because they're very tightly tied to the Gnome 3 shell.)
- I'd like to say that while I wasn't looking Liferea turned from a
nice little feed reader into a buggy crash-prone monster, but
that's not really true. First, I had advance warning from things
that Pete Zaitcev had written,
and second if I'm being honest I always patched Liferea a bit
myself and did other peculiar things (like running a 1.2 version
well after it became obsolete even in Fedora 8).
Update: I unfairly maligned Liferea here; it turns out the crashes weren't its fault.
- I have acquired a grim hate of NetworkManager, or at least of trying
to mix NM with any by-hand network configuration. As it happens my
home machine needs a lot of by-hand network setup, so in the end
I had to turn NM off entirely. I'm happy to report that this is
not too difficult to do with systemd.
(The bit that drove me over the edge was how when NM was running but not managing my DSL link, applications like Firefox and Liferea started up thinking that the system was offline. I can see the chain of logic that gets the system to that point, but it's wrong.)
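For the record, turning NM off amounted to something like the following sketch; the 'network' service here is the old initscripts-based setup, which on Fedora 15 is still managed with chkconfig:
# systemctl stop NetworkManager.service
# systemctl disable NetworkManager.service
# chkconfig network on
After that, the by-hand configuration in /etc/sysconfig/network-scripts is what brings up interfaces.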
I don't want to say anything about systemd right now, because it's new to me and I just haven't had the time to look into it and get familiar with it. My current off the cuff reactions are from a position of ignorance.
Okay, I'll say one thing: even allowing for better hardware than my old machine, it feels like my new machine boots startlingly fast (although I think that part of this is the wall of text that flows past in a text mode boot).
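(I believe Fedora 15's systemd ships the systemd-analyze tool, which can turn that feeling into numbers; plain 'systemd-analyze' reports how long the boot took and 'systemd-analyze blame' reports which services ate the time:
# systemd-analyze
# systemd-analyze blame
I haven't dug into its output seriously yet.)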
- it feels kind of liberating to know that I've thrown out a lot
of random strange customizations in the move, since I installed
Fedora 15 from scratch (on the new system drives). I've copied
some customizations from the old system filesystems in order
to get things the way I want them, but it's a much more minimal
set than I used to have.
(If I'd had lots of spare time to do the migration I would have kept careful notes and then set up Puppet or Chef to apply all of my customizations automatically, but not this time.)
- I've taken the advice given in comments here and simplified my clutter of system filesystems down to /boot, swap, and /, with /usr and /var rolled into the root filesystem. It's really not worth being more picky any more and it's nice not to have to worry about one piece running out of space when there's lots left elsewhere (and I made the root filesystem 100 GB just to really overkill the issue).
(Unfortunately I didn't think to make / be LVM on top of RAID-1, which would have let me use LVM snapshots.)
(My jumble of user filesystems continues unabated, since I'm using them as-is; I literally just brought up the software RAID arrays, activated LVM, and started adding things to /etc/fstab.)
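(To be concrete about that LVM on RAID-1 regret, what I'd have wanted looks roughly like the following; the device names and sizes here are made up for illustration:
# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
# pvcreate /dev/md2
# vgcreate sysvg /dev/md2
# lvcreate -L 100G -n root sysvg
# mkfs.ext4 /dev/sysvg/root
With / as an LVM logical volume like this, LVM snapshots become possible.)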
I have some things to say about the experience of having a new, modern machine, but that's a subject for another entry; this one is already long and rambling enough.
2011-10-20
The kind of computer usage I think Gnome 3 is targeting
Back in a comment on Gnome3Out I mentioned feeling that the Gnome 3 developers are targeting people who use their computer in a significantly different way than I do. Because I've been using Gnome 3 again recently, I feel like trying to explain that today.
While Gnome 3 has a number of interface differences from Gnome 2 and other desktop environments, two of them stand out very prominently (at least to me). First, a great many interface elements are built around the assumption that you only want one window for any particular program. Working with multiple instances of the terminal or Firefox is awkward for reasons that go well beyond having to shift-click the appropriate icon to start the second one.
(For instance, once you have all instances minimized, clicking on the app's icon will only un-minimize one window; clicking on the icon again will re-minimize that window instead of un-minimizing the next one.)
Second, Gnome 3 fairly strongly wants you to keep all of the windows you're using on the screen (in some workspace). Minimization is not all that accessible, and Gnome 3 then works quite hard to conceal minimized windows. In Gnome 2, minimization is accessible as a button and by clicking on a window's taskbar entry, and minimized windows remain accessible in the taskbar. In Gnome 3 there is no taskbar; minimized windows are mostly inaccessible (especially if you have more than one window for an application) until you reveal all windows and then sort out which ones you want. The net effect is that it is significantly more work to flip back and forth between a set of windows in Gnome 3, especially if you have several windows from the same application.
As anyone who's looked at my desktop can tell, my style of working uses a lot of windows from the same applications (prominently terminals and browser windows), many of them minimized until I need them again. The same holds true on my laptop, where I do less but I have less screen real estate to do it in so I'm constantly shuffling various windows in and out of visibility. This style is a terrible match for Gnome 3, which really doesn't want me to work that way. To stereotype a bit, my working style is multi-tasking while Gnome 3 is quite a single-tasking environment. My view is that Gnome 3 is built for people who run one thing at a time, or at most no more things at once than all fit on their screen or screens (and they quit out of applications when they switch from one to the other).
From a certain perspective this makes user interface sense. Managing the hidden state of minimized windows is a cognitive burden and things like the taskbar are visual clutter (if you don't have minimized windows). People who don't really multitask are probably decently well served by the Gnome 3 interface, especially if they don't switch back and forth between applications very often. The single-window application unification helps by avoiding confusion if people forget that they already have a copy of the application running but minimized and we've already posited that these people multi-task and multi-window only rarely.
(I know people like this; they already run only one copy of every major application that they use, often without minimizing them.)
(I'm aware that I may be ascribing more coherence and planning to the Gnome 3 design process than it actually had. Allow me my potential charming illusions.)
2011-10-15
Why I say Fedora 15 could get my machine's Ethernet's name right
Fedora 15's consistent network device naming
is done by biosdevname; its documentation implies that this is done
using information that you can also see directly with dmidecode and
biosdecode. Today I'm going to walk through why I think there's
enough information in my machine's BIOS information to get it right
(as I mentioned yesterday when I complained
about this).
SMBIOS/DMI information comes in several different types. The biosdevname manpage implies that what it pays attention to is information on actual physical slots (DMI type 9) and onboard devices (specifically DMI type 41). Both of these report information that includes a 'bus address' for each thing they are reporting on. On my machine what I see is:
# dmidecode -t 41
[...]
Handle 0x0063, DMI type 41, 11 bytes
Onboard Device
Reference Designation: Onboard LAN
Type: Ethernet
Status: Enabled
Type Instance: 1
Bus Address: 0000:00:19.0
# dmidecode -t 9 | fgrep 'Bus Address'
Bus Address: 0000:00:01.0
Bus Address: 0000:00:1c.3
Bus Address: 0000:00:1c.4
Bus Address: 0000:00:1c.6
Bus Address: 0000:00:1c.7
Bus Address: 0000:00:1d.0
Bus Address: 0000:00:01.0
(The two slots with the same bus address have the designations PCIEX16_1 and PCIEX16_2.)
So the DMI information believes that there is an onboard Ethernet and as we expect there's nothing with that 'bus address' in the list of physical slots.
DMI bus addresses for PCI(E) devices appear to be in the form of
'segment group:PCI bus:device.function'. To relate this to lspci
output, you drop the segment group; my uninformed impression is that
only complicated large server machines have complex enough PCI bus
topologies to have non-zero segment group numbers. lspci says that the
sole Ethernet device is in slot 07:00.0, sitting behind a PCI bridge
at 00:1c.5 (according to 'lspci -t', although I don't know how it
determines that), and there is no slot 00:19.
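As a side note, you can check a specific PCI address directly with lspci's -s option. On my machine the first of these prints nothing at all while the second shows the Ethernet controller (output elided here):
# lspci -s 00:19.0
# lspci -s 07:00.0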
The most informative thing biosdecode can tell us is the PCI interrupt
routing table (aka PIR). This table doesn't have anything for 00:19,
but it does mention 07:00:
# biosdecode | fgrep 07:00
Slot Entry 14: ID 07:00, on-board
So in summary: the SMBIOS claims that this machine has an onboard Ethernet at an otherwise unknown PCI slot name, the only actual PCI-visible Ethernet is not on or behind a SMBIOS-listed physical slot, and the BIOS's interrupt routing claims that the Ethernet is onboard.
A program that trusts the SMBIOS (or requires that its claims validate) will not declare the Ethernet to be an onboard one, because it can't prove that. But proof is the wrong standard for consistent device naming in the face of the contradictory evidence we have here; what you want is the most likely and most consistent interpretation. I maintain that the preponderance of evidence from all of this is that there is indeed an onboard Ethernet and it is at 07:00.0.
(If you had multiple Ethernet devices you might want to be more cautious, depending on what you could determine about the other ones, but we don't here.)
On a side note, I looked at the dmidecode and biosdecode output
on a SunFire X2100 (which has four onboard Ethernet ports). None of
them were listed in dmidecode but all four of them were correctly
identified as onboard devices in the PCI interrupt routing table
from biosdecode. My overall conclusion from this is that the PIR
is much more reliable than the DMI information, probably because
things actually care about the PIR's information and malfunction
if it isn't sufficiently correct.
Sidebar: sources of more information
The SMBIOS specification is reasonably readable, although it naturally assumes you understand things like PCI (which I don't really). DMI is the older standard, which was apparently replaced by SMBIOS (per here).
The problem with Fedora 15's consistent network device naming
Lately I've been doing a bunch of things with a real installed Fedora 15 machine (previous experimentation has mostly been with a Live USB stick used temporarily on various machines). One result of this is that I've actually noticed the name that Fedora 15 chose to give my Ethernet as a result of its switch to consistent network device naming.
As it happens, my Fedora machine's Ethernet is now called p5p1. There
are at least two problems with this name.
The first one is that it's wrong. Since this is an onboard Ethernet
port, it should have an emN name (either em0 or em1 depending
on which bit of documentation I read). It doesn't get the right name
because biosdevname trusts the BIOS's information about what things
are and aren't onboard devices. Yes, really, despite plenty of
experience that trusting PC BIOSes for anything (or at least anything
not required to boot Windows) mostly serves as a source of comedy gold
(as long as you're at a safe distance).
The second problem is that this is a terrible name. Practically its sole virtue is that it's unique. It's not predictable (not without detailed information about the machine's bus topology and a good understanding of PCI), it's not very meaningful, and it's certainly not memorable. It's the kind of name that can only be sensibly used by programs that get it directly from the system's configuration in one way or another.
(There are a number of problems with slot based names, but one of them is that another machine with a different motherboard (but also onboard Ethernet) might well have a completely different name because its Ethernet controller is hooked up to the PCI(E) bus in a different 'slot' and a machine with a real network card (and no onboard network) will have yet another name.)
What this means is that while the idea of consistent network device
naming may in theory be good, the tools for achieving it are not yet
mature. Right now it certainly looks as if biosdevname is terminally
naive and will frequently give wrong answers (I would guess that it gets
things wrong more often than it gets them right).
(As it happens I think that there is enough information present in the
output of biosdecode to deduce that the machine's Ethernet must be an
onboard device, so it's clear that biosdevname should be able to do
better. Possibly it needs to be taught about all of the common ways
that BIOS creators get this information wrong and given heuristics for
figuring out what the real situation is likely to be.)
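(If you just want your old eth0-style names back, Fedora 15 does let you opt out of the whole scheme by booting with biosdevname=0 on the kernel command line. I believe you can then pin a name to a MAC address with a traditional udev rule in /etc/udev/rules.d/70-persistent-net.rules, something like this (the MAC address here is obviously a placeholder):
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"
)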
2011-10-14
Power consumption numbers for my 2011 home machine
It's been about five years since I did my last set of power consumption numbers for a desktop. Since I now actually have a new machine, that makes it time for a new set of numbers. I've tried to make the measurements as comparable with the old numbers as possible, but it's not quite an exact match.
The new machine has a CPU with a nominal 95 watt TDP and an efficient 650 watt power supply (vast overkill, but it came with the case). I don't think there are any particularly power-hungry components apart from that, especially as I don't think I managed to stress the graphics card at all.
Watt readings taken from our (consumer) power meter, as in the past:
| powered off | 0 watts | |
| in the BIOS's hardware monitor screen | 76 watts | |
| running memtest86+ | 86 watts | |
| idle in Linux, with or without the screen blanked | 54 watts | |
| streaming writes to one disk | 62-66 watts | |
| streaming writes to two disks | 73 watts | |
| one core busy with a simple CPU soaker | 88 watts | (+34 watts from idle) |
| two cores busy | 107 watts | (+19 watts) |
| three cores busy | 130 watts | (+23 watts) |
| all four cores busy | 154 watts | (+24 watts) |
| four cores busy plus streaming writes to both disks | 160 watts | |
| four cores busy with a CPU soaker plus an OpenGL program (glglobe) doing things | 151 watts | |
(I tested the power draw when the machine was powered off because the old machine drew three watts at that point. And it's not a difference between power meters; I'm actually using the same power meter I used five years ago.)
The glglobe test was my attempt to find an OpenGL program that would make the machine's graphics card actually do anything meaningful (and thus draw as much power as it could, since I've seen this make a clear difference). I suspect that the program wasn't making the card work very hard, plus there is the oddity that the power draw actually dropped compared to the four CPU soakers alone. (And yes, this wasn't a measurement artifact; when I quit glglobe, the power draw immediately jumped up.)
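(For the record, a CPU soaker needs nothing fancy; something along the lines of this shell loop, started once per core, will do:
# while :; do :; done &
I'm not claiming this is the most rigorous load, but it reliably pins a core at 100%.)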
All of this was under Fedora 15 with Gnome 3. I didn't attempt to optimize or change any power profiles from the default setup.
Overall I'm happy to see that under full load the new machine draws no more power than the old machine, and it idles at significantly less power than the old one did (54 watts versus 98 watts). The last time I did this check was in an era when machines were drawing more and more power with every generation, so this is a pleasant change.
(Admittedly I think that five years ago may have been the tail end of that era, when everything started slowing down and people began to care about issues like machine noise.)
PS: if you know a better OpenGL stress test that's packaged for Fedora, let me know.
2011-10-09
My view of the state of graphics cards for Linux (in fall 2011)
I sort of alluded to the state of graphics cards for Linux in passing in my hardware spec list, so I want to write it down explicitly at more length (if only so that I can point to this later).
There are at the moment three main sources of decent PC graphics: Intel, nVidia, and ATI (now part of AMD). ATI and nVidia make both graphics cards and integrated graphics chipsets; Intel only makes integrated graphics (very integrated these days, as they are now part of the CPU die).
With one exception Intel has been a good to excellent open source supporter for graphics drivers, with well known X developers being paid to work on drivers and general X improvements, good relations with upstream, and so on. They seem to be perhaps the most genuinely open source friendly company in graphics today. The two drawbacks of Intel are that their performance is more or less entry level and that they only do integrated graphics.
(The open sourcing exception is the infamous 'Poulsbo' chipset, aka 'GMA 500'. I'm not going to try to summarize the situation there, but let's just say that it was and is not really open source friendly. See here for one reference.)
ATI is fairly open source friendly and the drivers for their hardware are developed with their support and with documentation and code that they contribute. However, my impression is that they are not putting the kind of support and resources into it that Intel is (perhaps partly because they have their own binary driver effort to work on). As a result, not all features of their hardware (especially recent hardware) are supported in the open source drivers and most of the driver work is done by outside people on uncertain 'as available' schedules.
(See here, here, and here for information.)
nVidia is not friendly to open source at all. What open source drivers exist for nVidia hardware are basically all reverse engineered; nVidia's own open source code contributions appear to contain deliberately obfuscated (and unmaintainable) code. My impression is that most of the advanced features of nVidia cards are generally not supported in open source drivers at all.
(See a Phoronix story from 2010 on this, as well as the nouveau driver wiki.)
There probably are nVidia cards where the open source drivers have decent support for all of the graphics things that I care about. However, I simply refuse to buy graphics hardware from a company that is as hostile to open source as nVidia is (at least unless I have no choice). This leaves me with a choice of ATI or Intel. I have somewhat of a bias towards Intel, but ATI is the only one of the two that will actually sell me separate graphics cards (and I can get higher performing, more featureful ones if I want). Hence the ATI based card in my planned new home machine.
How things work with either ATI or nVidia if you use their closed source binary drivers is irrelevant to me; I don't use binary drivers for graphics cards, for any number of reasons. A large part of it is that I don't need features and (potential) performance badly enough to trade system stability, freedom of choice, ease of upgrades, and so on for them. As a result, I have no idea whether ATI or nVidia works or performs better when using their binary drivers.
(More references for open source video driver stuff are back in an earlier entry on video cards.)
Sidebar: the real drawback of integrated graphics (so far)
The real drawback of integrated graphics for me is not the performance; it is that support for them is up to the whims of the specific motherboard you're looking at and even the chipset it uses. This adds a significant extra constraint when selecting a motherboard that is not there when you look only at graphics cards.
Thus, my current attitude towards integrated graphics is that I'm going to ignore them when planning machines. I'll pick motherboards and other parts based on the assumption that I'll need a graphics card, and if it turns out that my motherboard choice also has usable integrated graphics, well, I win modestly. However, I don't expect this to happen very often, as most motherboards don't seem to bring out the integrated graphics even when they're present and supported by the underlying chipset.
(From one perspective this is not too surprising. Many of these motherboards are aimed at people who will never consider using the integrated graphics, which makes the extra stuff for bringing out the integrated video a pure cost with no benefit (if nothing else, you need connectors and they take up space). Motherboard makers are very good at trimming costs.)
2011-10-08
My new Linux machine for fall 2011 (planned)
My current home machine turned five years old a couple of weeks ago, so it seems like about time for me to get serious about putting together a new one. It helps that I've been mulling this over for some time; it also helps that new software is making my current machine feel increasingly slow, which certainly adds motivation. As a result of this I was explaining my hardware choices to someone online, so I figure that I might as well recycle what I said as an entry.
I haven't actually ordered the machine from the local clone builder so my parts list here is theoretically not yet final (and I can't be sure that everything is ideal for Linux until I've actually tried it), but I've basically settled on the following build:
- Intel Core i5 2500
- This strikes me as the current relatively decent
sweet spot for a CPU that I want to be using for five years (the
i5 2400 might be equally good but going to the i5 2500 is only
$15 more from the local clone shop, so why not). I don't overclock
so the K-suffix CPUs are irrelevant to me. I could get the i7
2600 but I'd basically be paying $100 more for 2 MB of L3 cache,
which hardly seems worth it.
(Yes I'd get HyperThreading too, but that may be an antifeature.)
I've historically used AMD CPUs but Intel seems to be the current CPU top dog, especially if you care about thermal efficiency as well as raw CPU power.
- Asus P8P67 LE motherboard
- Asus is my default motherboard vendor
of choice (and I've had a string of good luck with them). The LE
variant of their Intel P67 chipset based P8P67 series has everything
that I want and pretty much nothing that I don't; in particular
I get two PS/2 connectors and external SATA.
(I also get IDE and a PCI slot if I really need either for some old piece of hardware.)
- 4x 4GB DDR3-1600 MHz RAM
- This may be absurd but it's inexpensive,
and getting 16 GB now means that I should be able to forget about RAM
issues for the next five years (or more, since the rate of PC improvement
seems to be slowing down). I am doing overkill this time around
because back in 2006 when I put together my current home machine I
opted for 'only' 2 GB on the grounds that it should be fine (and it
seemed like an absurdly large amount of memory). Three or four years
later, that turned out to be wrong.
A 1600 MHz speed seems to be the top of what's generally available and inexpensive, and various sources suggest that it doesn't matter too much anyways.
(I believe that it's actually cheaper than DDR3-1333 RAM would be.)
It amuses me that my home machine will have more memory than our multiuser login machines (they have 8 GB).
- A fanless Radeon HD 5450 based graphics card
- As an old fashioned
Linux user I have very low graphics needs, so my choice was simple;
I wanted a fanless ATI card that was old enough to have decent
Linux open source support. This is what my clone shop currently
has.
(Hopefully I am reading the entrails of the Radeon documentation correctly about what Radeon chipsets have decent support.)
(For scale, I have a Radeon X300 in my office machine and find it perfectly fine. I would like to find a fanless dual-DVI card someday, but I'm not holding my breath.)
PS: I care about open source driver support because I don't run binary drivers; the performance and features of hardware under the ATI or nVidia binary drivers are irrelevant to me. As far as I know, ATI is still your best choice for open source supported graphics cards.
- A SATA LG DVD-RW optical drive
- I definitely want a DVD reader and
the extra cost for a DVD writer is trivial, even if I haven't burned
any DVDs at home over the past five years that I've had a DVD writer
in my current home machine.
- Two 500 GB Seagate 7200 RPM SATA drives
- I'll be transplanting my
current system's mirrored drives into the new system, but they'll
be used purely for the user data I have on them, not the currently
installed OS. Over time I've come to the
conclusion that I really want the OS on different physical drives
than my data so that I can disconnect my data drives and (re)install
the OS without having to worry that it (or I) will screw up
something and affect user filesystems.
I'm mirroring the system drives (even without user data) because I have no desire to spend any time reinstalling the OS after a disk failure if I don't have to. And disk failures always happen sooner or later.
I pick Seagate disks by reflex. Perhaps I should rethink that, but all HD manufacturers seem to be horrible in their own way (and time), and Seagates have been good to me so far.
(500 GB is absurdly large but apparently the drive vendors are all getting rid of smaller sizes, especially inexpensive 7200 RPM small sizes. Lately, even when we can get smaller SATA disks there's basically no point once you look at the relative costs involved. I think I will do something extravagant with the space, like keeping a local archive of all packages ever installed on my machine.)
- Antec Sonata IV case (with power supply)
- I'm not terribly enthused
about this case but it's the quiet case that the local clone
builder carries and it does not appear to be particularly bad.
From what I've read online I'd kind of prefer an Antec Sonata
Plus 550 (which is apparently the linear descendant of my current
Antec P150 case, which I am quite happy with), but Antec has
stopped making those. So it goes in the PC world.
(Reusing my current case has a number of issues, including requiring me to build the new machine myself.)
The clone builder assures me that I do not need an aftermarket CPU cooler. I have agreed with that since it saves me researching them.
(In general I have consciously narrowed my choices in order to actually reach the point where I can make them. For example, maybe there are better motherboards from vendors other than Asus but I have made no attempt to find them because I could spend days on that alone. The clone builder's stock parts list has been very helpful in this.)
By modern standards this is a relatively expensive machine (the current quote from the local clone builder we like is around $900) and there are a number of ways to make it cheaper (eg less RAM or a lower cost CPU). However, since I expect to be using this machine for at least as long as the five years that my current home machine has lasted, I think that it's acceptable and I'm not inclined to shave a few dollars here and there in ways that may degrade its long-term usefulness.
Somewhat contrary to what I've written before, my current plan is to install Fedora 15 on this machine. The state of Gnome 3 is irrelevant for one of my primary workstations (since my environment is entirely custom) and I might as well start out with a relatively current base (since on past evidence I am going to wind up neglecting it sooner or later).
Once I have this machine and get it working well, I guess that my next step is figuring out a replacement for my 'going to be five years old soon' LCD display (a Dell 1907FP). That will be an exciting morass, especially if I want to try to decide between a 1920x1080 widescreen display and something that's closer to my current display (assuming that there are decent non-widescreen LCDs left any more).
Sidebar: why not SSD(s)
Fundamentally for two reasons. First, I don't expect the system disks to be particularly active. The thing to accelerate would be my user filesystems, but I can't fit all of my data into current SSDs so I would still need my current mirrored data disks as overflow space. The end result of this is that I'd wind up with at least five disks in the machine (one system disk, two SSDs for my active data, and two HDs for less active data, and yes I would insist on mirroring SSDs that held my own data). This seems like expensive overkill given that I at least think that what I do is not disk intensive.
The second reason is, well, 16 GB of RAM. I am pretty sure that my working set of programs and frequently accessed files will fit easily in 16 GB, which means that a lot of things won't go to disk at all.