2010-01-27
AT&T's mad unbundling and the damage it did to Unix
Once upon a time, AT&T had Unix, and Unix was big (by the standards of the time); it contained all sorts of things that made it useful. AT&T decided that it wanted to make as much money as possible from Unix, so it had a brilliant idea: it would sell a bunch of pieces of 'Unix' separately, as additional-cost extras. After all, Unix systems were primarily used to run canned vendor-developed software (or so AT&T no doubt maintained), so a lot of pieces weren't strictly necessary and thus could be split off, and you could even tell the customers that they were saving money by only buying what they needed.
This created AT&T's mad unbundling of Unix components, where they separated out previously all-included things like the C compiler, troff and associated commands (as the 'Documenter's Workbench'), and so on. Pretty soon the baseline version of (AT&T) Unix was significantly stripped down compared to its predecessors.
(AT&T also repeated this most every time that they made substantial improvements to commands, so you had 'new awk', improved versions of ksh, and so on, all of which cost more money.)
This had both immediate and long-term effects. The immediate effect was to create annoyingly crippled versions of Unix, ones where you could not do things like compile your own programs or even read third-party manpages, and to divert a great deal of people's effort into getting around these limitations. Naturally, people used to BSD Unix (which included C compilers, troff, and so on) hated these environments.
(The height of such workarounds is Henry Spencer's awf, an nroff clone written in old-style awk.)
The long-term effect was to make Unix, all Unixes, less attractive and less useful at a crucial time for Unix in general. By maximizing its theoretical income, AT&T determinedly minimized (AT&T) Unix's appeal. The C compiler situation was an especially clever own goal, and it was saved from being a complete disaster only by the existence of the FSF's GNU C Compiler and the accompanying great effort to bootstrap GCC on as many Unixes as possible and make precompiled binaries available. Without GCC, I really think that Unix would have done much worse in many places, and important Unix augmentations like Perl would certainly have spread significantly more slowly.
(Unix vendors could have crippled the GCC effort by not shipping the header files for the standard library; fortunately, they by and large did not. Note that this was mostly in the pre-Internet era, when many people could not just download precompiled binaries for Perl and so on from some Internet site, even assuming that they would or could have installed lots of precompiled binaries from random third parties.)
Similar effects happened with the improved commands that AT&T developed; because they cost extra, they spread through the Unix world only very slowly (if at all), and they might almost as well not have been written at all. Certainly their improvements did nothing to make Unix more attractive, because for most people they weren't part of their Unix, just as troff wasn't.
2010-01-17
I do not like Unix's fossilization
I like Unix, but I'm not entirely happy with the modern world of Unix, and not for the reason you might expect. Instead of disliking the changes from the old days, I find it distressing to see Unix slowly fossilize.
Unix should be picking up good new ideas. It should be adopting better ways of writing shell scripts, adding more little programs, and all of that. Yes, things like seq and (GNU) stat and (GNU) date and time are not in the Posix specification and all that, but they're useful and I think they're in the Unix spirit. That they are so strongly resisted makes me sad.
(I'll grumpily note that traditional Unix is really short of good ways for shell scripts to extract various sorts of information about the things around them in an easily usable form. Yes, you can sort of parse 'ls -l' output and the like, but you shouldn't have to; ls -l is designed for human consumption, not for shell scripts.)
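As a small illustration of what I mean, here is a sketch of getting a file's size in a shell script, first by scraping 'ls -l' and then by asking GNU stat directly (this assumes GNU stat with its '%s' and '%Y' format specifiers; the file name is just an example):

    #!/bin/sh
    f=/etc/passwd

    # Scraping 'ls -l': the size is usually the fifth field, but this
    # layout is meant for people and can shift between systems and locales.
    size=$(ls -l "$f" | awk '{print $5}')

    # Asking GNU stat for exactly the fields we want, in a script-friendly form.
    size=$(stat -c '%s' "$f")
    mtime=$(stat -c '%Y' "$f")   # modification time, in seconds since the epoch

    echo "$f: $size bytes, last modified at $mtime"

The second form is both less fragile and more honest about what the script actually wants.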
You can argue about whether these new ideas are strongly resisted,
but I think that they are. Linux distributions adopt them fairly
widely, but then they don't tend to migrate to FreeBSD, OpenBSD, and
so on, and Solaris and AIX are of course more or less completely
hopeless; most of the people running Solaris now don't want any
changes, and Sun is happy to oblige. On the
other hand, OpenBSD has been fairly successful at introducing commands
to make shell scripting more secure and getting them widely adopted (eg,
mktemp), so maybe there is some hope.
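To illustrate, here is a minimal sketch of the mktemp idiom in a shell script (the template name is just an example):

    #!/bin/sh
    # Create a scratch file safely instead of using a predictable name
    # like /tmp/myscript.$$, and remove it when the script exits.
    tmpfile=$(mktemp "${TMPDIR:-/tmp}/example.XXXXXX") || exit 1
    trap 'rm -f "$tmpfile"' EXIT

    sort /etc/group >"$tmpfile"
    # ... do the rest of the work with "$tmpfile" ...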
(One argument is that many of the things that aren't propagating are GNU tools, which have both licensing and cultural issues as roadblocks. OpenBSD code comes with licensing that is easier for everyone to adopt, and people who care about this sort of thing probably tend to feel culturally closer to it.)
2010-01-08
A brief and jaundiced history of Unix packaging systems
In the beginning (in the days of V7 and BSD Unix), Unix systems came as a great big tarball or the equivalent that included everything and you just unpacked it onto your machine. If there were problems, people passed around new versions of various bits of source in various ways; you got some, you put them on your system, you recompiled things, and so on.
Shortly after Unix vendors started selling Unix, they discovered that they needed some actual mechanism to deliver bugfixes and updates to their customers in some form smaller than an entire OS distribution. These came to be known as 'patches', and vendors built various programs for handling them. Because these were the old days, there was a very strong desire to make these patches as small as possible.
Shortly after AT&T started selling Unix, they decided that they wanted
to make more money by charging extra for various 'optional' bits,
like the C compiler or troff. This required a mechanism to split
the previously monolithic blob of the OS up into multiple pieces, ie
'packages'. Other Unix vendors soon followed, even if they were selling
BSD, especially as Unix systems accreted more and more pieces that fewer
and fewer people were interested in.
However peculiar it seems in today's world, Unix vendors never merged their patching systems and their packaging systems, partly because packages were still fairly big and in the late 1980s and early 1990s people still cared a fair bit about updates being small. Significant OS updates (eg going from X.0 to X.1) were delivered as new packages and might well require a system reinstall, but small ones continued to be delivered as patches. Vendors built increasingly complex and baroque systems for doing each job.
Free Unixes, and especially Linux distributions, started from scratch in the mid-to-late 1990s in a very different environment, without any of this accreted history. With minimal manpower available, they built packaging systems because they had to and then simply delivered updates by giving people new versions of the entire package (size efficiency be damned). Because updates were delivered as new versions of packages, these packaging systems naturally grew features for handling package upgrades.
(Disclaimer: jaundiced views of history are not necessarily entirely correct.)