One way to set up local programs in a multi-architecture Unix environment

April 10, 2025

Back in the old days, it used to be reasonably routine to have 'multi-architecture' Unix environments with shared files (where 'architecture' was a combination of the processor architecture and the Unix variant). The multi-architecture days have faded out, and as they fade, so has information about how people made this work with things like local binaries.

In the modern era of large local disks and build farms, the default approach is probably to simply build complete copies of '/local' for each architecture and then distribute the results around somehow. In the old days, people were a lot more interested in reducing disk usage by sharing common elements, which led to things like NFS-mounting your entire '/local' and made life more tricky. There were likely many solutions to this, but the one I learned at the university as a young sprout worked like the following.

The canonical paths everyone used and had in their $PATH were things like /local/bin, /local/lib, /local/man, and /local/share. However, you didn't (NFS) mount /local; instead, you NFS mounted /local/mnt (which was sort of an arbitrary name, as we'll see). In /local/mnt there were 'share' and 'man' directories, and also a per-architecture directory for every architecture you supported, with names like 'solaris-sparc' or 'solaris-x86'. These per-architecture directories contained 'bin', 'lib', 'sbin', and so on subdirectories.

(These directories contained all of the locally installed programs, all jumbled together, which did have certain drawbacks that became more and more apparent as you added more programs.)
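Concretely, the tree exported as /local/mnt might have looked something like this, sketched here in a throwaway scratch directory (the architecture names follow the examples above; nothing here is the exact original layout):

```shell
#!/bin/sh
# Sketch only: recreate the shape of the NFS-exported /local/mnt tree in
# a scratch directory standing in for the real export.
set -e
export_root=/tmp/local-export-demo
rm -rf "$export_root"

# Shared, architecture-independent trees.
mkdir -p "$export_root/share" "$export_root/man"

# One subtree per supported architecture.
for arch in solaris-sparc solaris-x86; do
    for d in bin sbin lib; do
        mkdir -p "$export_root/$arch/$d"
    done
done

find "$export_root" -type d | sort
```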

Each machine had a /local directory on its root filesystem that contained /local/mnt, symlinks from /local/share and /local/man to 'mnt/share' and 'mnt/man', and then symlinks for the rest of the directories that went to 'mnt/<arch>/bin' (or sbin or lib). Then everyone mounted /local/mnt on, well, /local/mnt. Since /local and its contents were local to the machine, you could have different symlinks on each machine that used the appropriate architecture (and you could even have built them on boot if you really wanted to, although in practice they were created when the machine was installed).
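Setting up those per-machine symlinks could look roughly like this, again sketched in a scratch directory rather than a real root filesystem ("$root" stands in for /, and 'solaris-sparc' is one of the example architecture names from above):

```shell
#!/bin/sh
# Sketch only: build the per-machine /local layout in a scratch directory.
set -e
root=/tmp/local-demo
arch=solaris-sparc   # in practice, this machine's architecture
rm -rf "$root"

# The machine-local /local directory, with the NFS mount point inside it.
mkdir -p "$root/local/mnt"

# Shared, architecture-independent trees point into the mount.
ln -s mnt/share "$root/local/share"
ln -s mnt/man   "$root/local/man"

# Architecture-dependent trees point at this machine's subdirectory.
for d in bin sbin lib; do
    ln -s "mnt/$arch/$d" "$root/local/$d"
done

ls -l "$root/local"
```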

When you built software for this environment, you told it that its prefix was /local and let it install itself (on a suitable build server) using /local/bin, /local/lib, /local/share, and so on as the canonical paths. You had to build (and install) software repeatedly, once for each architecture, and it was on the software (and you) to make sure that /local/share/<whatever> really was the same from architecture to architecture. System administrators used to get grumpy when people accidentally put architecture-dependent things in their 'share' areas, but software was generally pretty good about this in the days when it mattered.
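As a sketch of that flow with a toy stand-in for a real package (the 'hello' program, the scratch /tmp prefix, and everything else here are invented; a real build would be the usual './configure --prefix=/local && make && make install' on a build server whose own /local symlinks already point at its architecture's tree):

```shell
#!/bin/sh
# Sketch only: simulate installing a toy package with /local as its
# prefix, using a scratch directory in place of a build server's /local.
set -e
prefix=/tmp/local-build-demo/local   # would be plain /local in real life
rm -rf /tmp/local-build-demo

# Stand-in for what "make install" would create under the prefix.
mkdir -p "$prefix/bin" "$prefix/share/hello"

cat > "$prefix/bin/hello" <<'EOF'
#!/bin/sh
# A program with its /local paths baked in as the canonical locations.
cat /tmp/local-build-demo/local/share/hello/greeting
EOF
chmod +x "$prefix/bin/hello"

# Architecture-independent data goes under share/; on the real layout the
# /local/share symlink would route this into the common mnt/share tree.
echo 'hello from /local' > "$prefix/share/hello/greeting"

"$prefix/bin/hello"   # prints "hello from /local"
```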

(In some variants of this scheme, the mount points were a bit different because the shared stuff came from one NFS server and the architecture-dependent parts from another, or the latter might even be local if your machine was the only instance of its particular architecture.)

There were much more complicated schemes that various places did (often universities), including ones that put each separate program or software system into its own directory tree and then glued things together in various ways. Interested parties can go through LISA proceedings from the 1980s and early 1990s.


Comments on this page:

Goodness, I did a lot of this for my clients in finance, with multiple different architectures and OSes. And of course not everyone fully agreed on how to do this (and /opt arrived late in the day to challenge /local). And launch scripts and uname and NFS automount played a big part in this. Separate dev and prod environments doubled everything up...

By Remy at 2025-04-11 14:46:30:

This entry made me remember the time we had multiple Sun machine architectures and SunOS / Solaris versions. Variable substitution to the rescue! The automount maps contained a lot of $ARCH, $OSNAME and $OSVERS variables. Good times :-)
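For example, a setup like Remy describes might have used map entries along these lines (the server name and export paths are invented; $OSNAME, $OSVERS, and $ARCH are the Solaris automounter's built-in map variables):

```
# /etc/auto_master (hypothetical):
/-          auto_direct

# auto_direct entry; each client substitutes its own values, so a
# Solaris 2.6 SPARC machine mounts a different tree than an x86 one:
/local/mnt  fileserver:/export/local/$OSNAME-$OSVERS/$ARCH
```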

By Nobody in particular at 2025-04-12 09:11:19:

I lived in multi-arch and multi-OS environments at a couple universities in those years. There were some nice properties about the (really only slightly more complicated) scheme of giving each version of a package its own directory along the lines of <pkg>-<pkgversion>/<abi>-<os>-<osversion>. One was that, at the cost of incremental disk usage, rollbacks/downgrades became more tractable. Another was that it replicated nicely: some users' machines could be NFS clients, others' could copy down the packages they used, either for reduced latency or for disconnected operation.

The environments I encountered back then also stored inter-package dependencies, install/uninstall scripts, and other metadata at well-known paths relative to a package's installation directory, which more or less eliminated the need for a "package manager" per se. What files does a package provide? Use find(1). What package provides /usr/local/bin/foo? Resolve the symlink and go up 3 directory levels.
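The "resolve the symlink and go up 3 directory levels" trick can be sketched in a few lines of shell (the scratch tree, the 'frobnicate-1.2' package, and the sparc-solaris-2.6 directory are all invented to match the <pkg>-<pkgversion>/<abi>-<os>-<osversion> naming described above):

```shell
#!/bin/sh
# Sketch only: answer "what package provides this binary?" by resolving
# the installed symlink and walking up three directory levels.
set -e
root=/tmp/pkgtree-demo
rm -rf "$root"
mkdir -p "$root/packages/frobnicate-1.2/sparc-solaris-2.6/bin" \
         "$root/usr/local/bin"
: > "$root/packages/frobnicate-1.2/sparc-solaris-2.6/bin/foo"
ln -s "$root/packages/frobnicate-1.2/sparc-solaris-2.6/bin/foo" \
      "$root/usr/local/bin/foo"

# Resolve the symlink, then go up 3 directory levels (bin, then the
# abi-os-osversion directory) to reach the package's directory:
real=$(readlink "$root/usr/local/bin/foo")
pkg=$(basename "$(dirname "$(dirname "$(dirname "$real")")")")
echo "$pkg"   # prints "frobnicate-1.2"
```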

(It wasn't all smiles and sunshine: actually building most software was torture and commercial Unices were mostly bug-riddled trash. But ISTR provisioning and consuming software this way was nice. ISTM a bit of a shame that the Linux and BSD packaging ecosystems didn't incorporate any of those ideas from the start.)


Last modified: Thu Apr 10 22:29:50 2025