2019-11-20
In the old days, we didn't use multiple Unixes by choice (mostly)
One of the possible reactions to the fading out of multi-architecture Unix environments is to lament the modern hegemony of 64-bit x86 Linux and yearn for the good old days of multiple Unixes and heterogeneous Unix environments. However, my view is that this is false nostalgia. Back in the days, most people did not work on or run multiple Unixes and multiple architectures because they wanted to; they did it because they had to. In fact, sensible places usually tried hard to be Unix monocultures (a Sun SPARC monoculture, for example), because that made your life much easier.
The reason that there was a flourishing bunch of Unixes back in the days and people often had so many of them was simple; there was no architecture or hardware standard, so every hardware vendor had their own hardware-specific Unix. If you wanted that vendor's hardware you pretty much had to take their Unix, and if you wanted their Unix you definitely had to take their hardware (much as with Apple today). Unless you and everyone else in your organization could stick to a single Unix and a single sort of hardware, you had to have a multi-Unix environment. Even if you stuck with a single vendor and their Unix, you could still wind up with multiple architectures as the vendor went through an architecture transition. Sometimes the vendor also put you through a Unix transition, for example when DEC changed from Ultrix to OSF/1, or Sun from SunOS to Solaris.
(There could be all sorts of reasons that you 'wanted' a vendor's hardware or Unix, including that they were offering you the best price on Unix servers at the moment or that some software you really needed ran best or only on their Unix or their hardware. And needless to say, different groups within your organization could have different needs, different budgets, or different salespeople and so wind up with different Unixes. Universities were especially prone to this back in the days, and were also prone to keeping old hardware (and its old or different Unix) running for as long as possible.)
Once there was a common hardware standard in the form of x86 PC hardware, the march towards a Unix monoculture on that hardware was probably inevitable. Unixes are just not that different from each other (more or less by design), and there are real benefits to eliminating those remaining differences in your environment by just picking one. For example, you only have to build and have around one set of architecture and ABI dependent files, remember one way of doing things and administering your systems, and so on.
The fading out of multi-'architecture' Unix environments
When I started in the Unix world, it was relatively common to have overall Unix environments where you had multiple binary architectures and user home directories that were shared between machines with different architectures. Sometimes you had multiple architectures because you were in the process of an architecture transition from one Unix vendor (for example, from Motorola 68K based Sun 3s to SPARC based Sun 4s, with perhaps a SunOS version jump in the process). Sometimes you had multiple architectures because you'd bought systems from different vendors; perhaps you still had some old Vaxes that you were nursing along for one reason or another but all your new machines were from Sun.
All of this pushed both Unix systems and user home directories in a certain direction, one where it was sensible to have both /usr/share and /usr/lib, or /csri/share and /csri/lib, the latter of which was magically arranged to be a symlink to the right architecture for your current machine. For user home directories, you needed to somehow separate out personal binaries for architecture A from the same binaries for architecture B; usually this was done with separate per-architecture directories plus having your .profile adjust your $PATH to reflect the current machine's architecture.
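A minimal sketch of that old .profile trick, assuming hypothetical per-architecture directories named $HOME/bin.&lt;os&gt;-&lt;machine&gt; (the naming scheme here is my own illustration, not a standard; real sites each had their own convention):

```shell
# Pick an identifier for this machine's architecture from uname,
# then front-load the matching personal bin directory onto $PATH.
# e.g. Linux-x86_64, SunOS-sun4u (naming convention is hypothetical)
ARCH="$(uname -s)-$(uname -m)"
PATH="$HOME/bin.$ARCH:$PATH"
export PATH
```

With NFS-shared home directories, the same .profile ran on every machine you logged into, so each login automatically picked up the binaries built for that machine's architecture.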
Those days have been fading out for some time. People's Unix environments have become increasingly single-architecture over the years, especially as far as user home directories are concerned. Partly this is just that there is much less diversity of Unixes and Unix vendors, and partly this is because cross machine shared home directories have gone out of style in general (along with NFS). The last gasp of a multi-architecture environment here was when we were still running both 32-bit and 64-bit Linux machines with NFS-mounted user home directories, but we got rid of our last 32-bit Linux machines more than half a decade ago.
(We had Solaris and then OmniOS machines, and we still have OpenBSD ones, but neither are used by users or have shared home directories.)
A certain amount of modern software and systems still sort of believe in a multi-architecture Unix environment, and so will do things like automatically install compiled libraries with an architecture-dependent name (I was recently pleased to discover that Python's pip package installer does this). However, an increasing amount doesn't unless you go out of your way. For example, both Rust's cargo and Go's go command install their compiled binaries into a fixed directory by default, which only works if your home directory isn't shared between architectures. In practice, this is fine, or at least fine enough that both projects have been doing this for some time. And it's certainly more convenient to just have a $HOME/go/bin and a $HOME/.cargo/bin than to have ones with longer and more obscure names involving snippets like linux-x86_64.
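One place you can still see an architecture-dependent name in practice, assuming python3 is on your $PATH: CPython names compiled extension modules with a suffix that embeds the Python version, ABI, and platform, which is why pip-installed compiled libraries can coexist per architecture.

```shell
# Ask CPython for the file suffix it uses for compiled extension
# modules; the suffix encodes the ABI and the platform.
python3 -c 'import sysconfig; print(sysconfig.get_config_var("EXT_SUFFIX"))'
# on 64-bit Linux this prints something like:
#   .cpython-312-x86_64-linux-gnu.so
```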
(By 'architecture' here I mean the overall ABI, which depends on both the machine architecture itself and the Unix you're running on it. In the old days there tended to be one Unix per architecture in practice, but these days there's only a few machine architectures left in common use, so a major point of ABI difference is which Unix you're using.)