Mythology about Unix workstations
Talking of Unix workstations, there's some mythology about them that seems to go around, or at least that may be going around, and that I feel like preemptively shooting down.
First, people who think that 1990s era Unix workstations were marvels of performance and features that have yet to be surpassed either have a very selective memory, were using very high end hardware from SGI, or never really used those workstations. I have used everything from Sun 3/50s onwards, and I can assure you that a modern PC that costs $500 smokes each and every one in terms of speed and features.
In fact, as I alluded to in passing in my original entry, old workstation hardware was actually rather terrible. It was not bad for the time (sometimes it was quite good), but it was not very good on an absolute scale and it was tolerable only because the software was equally limited so as not to exceed the hardware's capabilities. Let us not idolize the old days lest we be forced to live in them again, thanks.
The other piece of mythology is the idea that Unix workstation hardware was at least a marvel of niceness and good design compared to the hodgepodge and hacks of the current PC architecture. I am pretty sure that this was historically false; I certainly remember a whole stream of Usenix papers about what could basically be called 'the secret life of your hardware', where a number of kernel hackers wrote up bitter descriptions of exactly how bad various pieces of hardware were, such as Ethernet driver chipsets. Graphics were not exempt from this; for example, at the start of the 1990s, some DEC people wrote an entire paean about the advantages of an extremely simple framebuffer because its 2D performance beat the heck out of most of the then-current more complex graphics chipsets.
(Before you snort in disbelief at this, note that it was an 8-bit framebuffer. That was considered mainstream or even advanced at the start of the 1990s, since at least you got 256 colours.)
I don't think that this should surprise anyone. People make design mistakes at the start of anything, because it takes time for them to figure out what really works and what just looks good on paper, and the Unix workstation era happened in the early days of people making (commodity) chipsets for most of the hardware capabilities that we now take for granted.
(The less said about various workstation vendor predecessors to SCSI the better, especially in the server space. I still remember our early 1990s decision to pass over this new, low-performing 'SCSI' stuff in favour of an advanced, fast IPI disk interface on our new Sun 4 server. This being a university, that server stayed in production long enough for our laughter to become rather hollow.)