2013-05-05
Unix is not necessarily Unixy
As I've written about before, in some quarters there is a habit of saying that everything added to Unix needs to be 'Unixy'. One of the many problems with this is that a number of aspects of Unix itself are not 'Unixy'. I don't mean that in a theoretical way, where we debate about whether a particular API or approach is really 'Unixy'. I mean that in a concrete sense, in that Bell Labs, generally regarded as the home of Unix and the people who understand its essential nature best, built various things differently than mainline Unix. In some cases they did this after mainline Unix had established something, which is a clear sign that they felt that other Unix developers had gotten it wrong.
(In the end their vision of the right way to do things was so extreme that they started over from scratch so they didn't have to worry about backwards compatibility. The result of that was Plan 9.)
The easiest place to see this is in the approach that Bell Labs took to networking. Unfortunately I don't believe that manual pages from post-V7 Research Unix are online, but the next best thing is the networking manual pages for Plan 9 (which has essentially the same interface, from what I understand). Plan 9 networking is completely different from the BSD sockets API that is now the Unix standard; it is in large part much higher level. You can read about it in the Plan 9 dial(2) manpage, and a version of this interface without the Plan 9 bits has resurfaced in the Go net package's Dial() and Listen() APIs.
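(To make this concrete, here is a small sketch of what that interface looks like in Go; the little echo exchange is purely my own illustration. Notice that networks and addresses are just strings, and there is no socket(), bind(), or sockaddr anywhere in sight.)

    // A minimal sketch of the Go net package's Plan 9 style interface.
    // The echo service is an invented example, not from the original text.
    package main

    import (
        "fmt"
        "io"
        "log"
        "net"
    )

    func main() {
        // Listen("tcp", ...) stands in for the whole BSD
        // socket()/bind()/listen() sequence.
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            log.Fatal(err)
        }
        defer ln.Close()

        go func() {
            conn, err := ln.Accept()
            if err != nil {
                return
            }
            defer conn.Close()
            io.Copy(conn, conn) // echo whatever the client sends
        }()

        // Dial("tcp", ...) similarly hides socket() and connect().
        conn, err := net.Dial("tcp", ln.Addr().String())
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        fmt.Fprintln(conn, "hello")
        buf := make([]byte, 64)
        n, _ := conn.Read(buf)
        fmt.Printf("echoed back: %s", buf[:n])
    }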
You can certainly argue that these APIs are fundamentally not comparable to the BSD sockets API because they're on a different level (the BSD sockets API is a kernel API, while most of the Plan 9 API is implemented in library code). But in a sense this is beside the point, which is that the Plan 9 API is how Bell Labs thought programs should do networking.
(You can also argue that the Plan 9 API is insufficient in practice and that programs need and want more control over networking than it offers. I'm sympathetic to this argument but it does open up a can of worms about when one should discount the Bell Labs view on 'what is Unix' and what can replace it.)
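(As an illustration of what 'more control' means in practice: Go itself eventually grew an escape hatch here, the Control hook on net.Dialer (a later addition to Go, and just my choice of example). It hands you the raw file descriptor so you can set BSD-level socket options that the string-based interface can't express. A sketch, Unix-only, with keepalive as an arbitrary example option:)

    // A sketch of reaching below Go's high-level Dial() for socket-level
    // control. The Control hook is a real net.Dialer feature; the specific
    // option set here is an arbitrary illustration.
    package main

    import (
        "log"
        "net"
        "syscall"
    )

    func main() {
        d := net.Dialer{
            // Control runs after the socket is created but before
            // connect(), exposing exactly the BSD-level knobs that
            // the Plan 9 style interface hides.
            Control: func(network, address string, c syscall.RawConn) error {
                var sockErr error
                err := c.Control(func(fd uintptr) {
                    sockErr = syscall.SetsockoptInt(int(fd),
                        syscall.SOL_SOCKET, syscall.SO_KEEPALIVE, 1)
                })
                if err != nil {
                    return err
                }
                return sockErr
            },
        }
        conn, err := d.Dial("tcp", "example.com:80")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        log.Println("connected with custom socket options:", conn.RemoteAddr())
    }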
The original vision of RISC was that it would be pervasive
In the middle of an excellent comment gently correcting my ignorance, a commentator on yesterday's entry wrote:
[RISCs requiring people to recompile programs] has some truth to it, but I would disagree that it was a big bet of RISC. Rather, it was a function of the market niche that classic RISC was confined to: Customers that paid tens or hundreds of thousands of dollars for a high-performance RISC machine would be willing to recompile their code to eke out the best possible performance.
I have to disagree with this because I don't think it matches up with the actual history of what I've been calling performance RISC. To put it one way, RISC was not originally intended to be just for high performance computing; in the beginning and for a fairly long time, RISC was intended to be pervasive. This is part of why in 1992 Andrew Tanenbaum could seriously say (and have people agree with him) that x86 would die out and RISC would become pervasive (cf). He did not mean 'pervasive in HPC'; he meant pervasive in general, across at least the broad range of machines used to run Unix.
The early vision of (performance) RISC was that it would supplant the current CISC architectures just as those 16- and 32-bit CISC architectures had themselves supplanted earlier 8-bit ones. The RISC pioneers may not have been thinking about 'PC' class machines (although Acorn gave it a serious try), but they were certainly thinking about and targeting garden variety Unix workstations and servers. And even in 1990, everyone knew and understood that most Unix servers were not HPC machines and that they spent their time doing much more prosaic things. To really be successful and meaningful, RISC needed to be good for those machines at least as much as it needed to be good for the uncommon HPC server.
(Everyone also understood that these machines cost a lot less than all-out, everything-for-speed HPC servers. DEC, MIPS, Sun, and so on sold plenty of lower end servers and workstations, so they were well aware of this. I would guess that by volume, far more RISC machines were lower end machines than were high end ones through at least 1995 or so.)
RISC certainly did wind up fenced into the high performance computing market niche in the relatively long run, but that was because it failed outside that niche (run over by the march of the cheap x86 machines everywhere else). The HPC niche was not the original intention and had it been, RISC would have been much less exciting and interesting for everyone.
(And in this general market it was empirically not the case that most people were running code that was compiled specifically for their CPU's generation of optimizations, scheduling, and so on.)