Large Unix programs were historically not all that portable between Unixes

September 1, 2021

I recently read Ruben Schade's I’m not sure that UNIX won (via) and had a number of reactions to it. One of them is about the portability of programs among Unixes, which is one of the issues that Schade sees as a problem today. Unfortunately, I have bad news for people who are yearning for the (good) old days. The reality is that significant Unix programs have never been really portable between Unix variants, and if anything, program portability by default between Unixes is at an all-time high today.

Back in the day (the late 1980s and early 1990s specifically), one of the things that Larry Wall was justly famous for was his large, intricate, and comprehensive configure scripts that made rn and Perl build on pretty much any Unix and Unix-like system that you could name. Wall's approach to configure scripts was generalized and broadened by GNU Autoconf, GNU Autotools, and so on. These tools did not automatically make your complex programs portable between different Unixes, but they gave you the tools you could use to sort out how to achieve that, and to automatically detect various things you needed to do to adapt to the local Unix (and if you used some of them, you automatically got pointed to the right include directories and the right libraries to link with).
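To give a flavour of what this looked like (a minimal sketch, not lifted from any real program's build; all the names in it are made up for illustration), an Autoconf configure.ac would probe the local Unix for headers and functions and record the answers in a config.h that your C code could consult:

    dnl A minimal, illustrative configure.ac sketch (assumed names, not a real program):
    AC_INIT([example], [1.0])
    AC_PROG_CC
    AC_CONFIG_HEADERS([config.h])
    dnl Does this Unix have these include files?
    AC_CHECK_HEADERS([unistd.h sys/select.h])
    dnl Does its C library have these functions?
    AC_CHECK_FUNCS([strerror memmove])
    AC_OUTPUT

Your C code then used the resulting HAVE_UNISTD_H, HAVE_STRERROR, and so on defines to pick between alternatives (or to supply its own fallback implementation).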

People did not create and use all of these tools because they wanted a complex build process or to write lots of extra (and often obscure) code. They used these systems because they had to, because there were all sorts of variations between the Unix systems of the time. Some of these variations were in where programs lived and what capabilities they had (the POSIX compatible Bourne shell wasn't always /bin/sh, for example). Others were in what functions were available, what include files you used to get access to them, and what libraries you had to link in.

(Hands up everyone who ever had to add some variation of '-lsocket -lnsl -lresolv' to their compile commands on some Unix to use hostname resolution and make IP connections.)
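Concretely, and as a sketch from memory rather than anyone's real Makefile, linking a small hypothetical network client might have gone like this, with the exact set of extra libraries varying by Unix and release:

    # Solaris and other SVR4-derived Unixes (assumed; details varied by release):
    cc -o client client.c -lsocket -lnsl -lresolv
    # Linux and the BSDs, where the same calls live in libc:
    cc -o client client.c

Autoconf grew idioms such as AC_SEARCH_LIBS([gethostbyname], [nsl]) and AC_SEARCH_LIBS([connect], [socket]) precisely so that configure could work this out for you.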

You might hope that POSIX would have made all of this obsolete in old Unixes. Not so. First, not all Unixes were fully POSIX compatible in the first place; some only added partial POSIX compatibility over time (I'm not sure any of them became fully POSIX compatible particularly quickly). Second, even when Unixes such as Solaris had a POSIX compatibility layer, they didn't necessarily make it the default; you could have to go out of your way to get POSIX compatible utilities, functions, include files, and libraries. And finally, not everything that substantial Unix programs wanted to use was even covered by POSIX (or free of issues when implemented in practice).
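As one concrete illustration (the Solaris specifics here are from memory, so treat the exact paths as an assumption rather than gospel): on Solaris, /bin/sh stayed the old Bourne shell and /usr/bin held the traditional SVR4 utilities, with the POSIX/XPG4 versions tucked away in /usr/xpg4/bin, so getting a mostly POSIX environment meant something like:

    # Assumed Solaris setup; precise paths and behaviour varied by release:
    PATH=/usr/xpg4/bin:/usr/bin:$PATH
    export PATH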

All of this incompatibility was encouraged by the commercial Unix vendors because it was in their cold-blooded self-interest to get people to make their current programs hard to build and run outside of Solaris, IRIX, HP-UX, OSF/1, or whatever. The more of a pain it was for customers to move to another vendor's Unix, the less chance another vendor could lure them away by offering a cheaper deal. In a related development, Unix vendors spent a long time invoking the specter of "backwards compatibility" as a reason for never changing their systems to make them more POSIX compatible by default, to modernize their command line tools, and so on.

The situation with modern open source Unixes is much better. They are mostly POSIX compatible by default, and they have converged on a relatively standard set of include files, standard library functions, and so on. There are variations between Unixes (including between different libc implementations on Linux) and between current and older releases, but for the most part the differences are much smaller today, to the degree that a lot of the work that GNU Autoconf does by default feels quaint and time-wasting.

(Where there are major differences they tend to be in areas related to system management and system level concerns, instead of user level C programs like rn.)

PS: Unix programs tended to be much more portable between the same Unix on different architectures, but relatively few old Unix vendors ever had such environments, especially for long. And let us not talk about the move from 32-bit to 64-bit environments, or the issue that was known at the time as "all the world's a Vax" (experienced as people began to move to Suns, which among other differences had a different endianness).


Comments on this page:

By Andrew at 2021-09-01 22:32:59:

Congratulations. You aren't running Eunice.

By Andrew at 2021-09-01 22:38:13:

This was also a contributor to the popularity of Perl itself as an implementation language; Perl had that big Configure script, and became pretty well ubiquitous on Unix systems, so if you wrote your code in Perl you could get the benefit of its portability, and avoid most of the need to write your own system-specific hacks.

By cks at 2021-09-03 11:51:08:

For those that have never seen the message in the first comment, it's an (in)famous line of output from Larry Wall's configure scripts (most often seen for Perl, since fewer people build (t)rn). Eunice was a Unix-like environment for VMS, and not well regarded (so Perl working on it too was strong dedication to making it run in lots of places). You can see the message in context in eg the image for this tweet.

By Pete at 2021-12-19 11:48:20:

The author's points are naive and are lacking a real-world, commercial perspective. Of course backward compatibility is huge; it would have been trivial to fix a standards issue in a utility that broke compatibility, but nobody in engineering would want to face the VP of Sales after a key customer's mission-critical app broke when the system was updated. Thus, utilities like awk(1) had long-lived peculiarities and corner-cases that could not be addressed because it had considerable fielded commercial usage.

Yes, "-lnsl -lsocket" was a mistake and it was corrected later, but do you really think there was a customer lock-in theme when this change was introduced? The real answer is that the engineering group at the time had a different idea of where TCP/IP transport APIs were going and were making them more pluggable. You can't fault an organization for thinking about the future unless you are expecting innovation to stop.

Finally, your overall premise is broken when you look at the success of Linux-kernel distributions. All of the commercial Unix systems made it relatively easy to move apps to Linux and fueled the Linux commercial takeover starting in the early 2000s. If you are sincerely looking to criticize true vendor lock-in at the time, maybe focus on VAX/VMS, Tandem Guardian, IBM MVS, HP MPE/iX, Windows (gasp), or Unisys OS/1100.
