Why Go changes its minimum version requirements for OSes (and hardware)
Yesterday I wrote about the unexpected (to me) risk of Go ending support for older OS versions. This raises the obvious question of why Go does this and the related question of what is affected by the increased version requirements. I'm not going to try to give exhaustive answers here; instead, I have some examples.
There are several reasons why Go changes its requirements, depending on what is moving forward. To start with CPU architecture requirements, Go 1.8 said about its ARM support:
Go 1.8 will be the last release to support Linux on ARMv5E and ARMv6 processors: Go 1.9 will likely require the ARMv6K (as found in the Raspberry Pi 1) or later. [...]
Requiring more recent processors lets Go use instructions and other CPU features that first became available on those processors, which simplifies and improves code generation and so on. In a casual scan of the Go commit log, I can't spot a specific change that requires ARMv6K or later, but this issue comes close and is illustrative of the sort of things that come up.
Next we have the case of OpenBSD in Go 1.9's draft release notes:
Go 1.9 now enables PT_TLS generation for cgo binaries and thus requires OpenBSD 6.0 or newer. Go 1.9 no longer supports OpenBSD 5.9.
This is for thread local storage, per the OpenBSD ELF manpage, and PT_TLS is supported on other platforms. The code review, Github issue, and especially the commit show how removing the special case for OpenBSD not supporting PT_TLS simplified various bits of code.
Finally, moving a minimum OS version requirement forward lets the standard library use better system calls and so on that are only supported on more recent versions of the OS (or drop workarounds for things that are no longer needed). See, for example, this issue about (future) code cleanups when Go drops support for FreeBSD 9.x and below, or this commit fixing Go 1.9 on FreeBSD 9.3, where the problem is that FreeBSD 9.3 doesn't have a nicer, newer system call for pipe creation.
(While I'm accumulating links, there are Go issues for defining the OS version deprecation policy and defining it for old architectures. The agreed-on policy appears to be that people get one release's advance warning.)
For CPU architecture changes and changes like the OpenBSD case, the impact is generally going to be pervasive, or at least unpredictable (Go may not generate PT_TLS ELF segments all of the time, but I have no idea when it will or won't). You should probably assume that all code generated by an updated compiler is affected and will probably not run in unsupported environments. For changes in the standard library, you might get away with an unsupported environment if you don't do anything that touches the changed area. However, this is not as simple as not using os.Pipe() (for example), because other things in the standard library (and other packages that you use) may use the changed bit. For os.Pipe() you'd need to avoid a number of things in the standard library, and possibly other things elsewhere.
An unexpected risk of using Go is it ending support for older OS versions
A few years ago I wrote The question of language longevity for new languages, where I used a concern about Go's likely longevity as the starting point to talk about this issue in general. The time since then has firmed up Go's position in general and I still quite like working in it, but recently a surprising and unexpected issue has cropped up here that is giving me some cause for thought. Namely, the degree to which Go will or won't continue to support older versions of OSes.
(By 'Go' here I mean the main Go implementation. Alternate implementations such as gccgo have their own, different set of supported OS versions and environments.)
Like most compilers, Go has a set of minimum version requirements for different OSes. It's actually fairly hard to find out what all of these are; the requirements for major binary release platforms can be found here, but requirements for other platforms may only show up in, for example, the Go 1.8 release notes. Probably unsurprisingly, Go moves these minimum requirements forward every so often, usually by dropping support for OS versions that aren't officially supported any more. A couple of topical examples, from the draft Go 1.9 release notes, are that Go 1.9 will require OpenBSD 6.0 and will be the last Go release that supports FreeBSD versions before 10.3 (theoretically Go 1.8 supports versions as far back as FreeBSD 8).
I'm sure that for many people, Go's deprecation of older and now unsupported OS versions is not a problem because they only ever have to deal with machines running OS versions that are still supported, even for OSes (such as OpenBSD) that have relatively short support periods. Perhaps unfortunately, I don't operate in such an environment; not for OpenBSD, and not for other OSes either. The reality is that around here there are any number of systems that don't change much (if at all) and just quietly drift out of support for one reason or another, systems that I want or may want to use Go programs on. This makes the degree to which Go will continue to support old systems somewhat of a concern for me.
On the other hand, you can certainly argue that this concern is overblown. Building Go from source and keeping multiple versions around is easy enough, and old binaries of my programs built with old Go versions are going to keep working on these old, unchanging systems. The real problems would come in if I wanted to do ongoing cross-platform development of some program and have it use features that are only in recent versions of Go or the standard library. Life gets somewhat more exciting if I use third party packages, because those packages (or the current versions of them) may depend on modern standard library things even if my own code doesn't.
(If my program doesn't use the very latest shiny things from the standard library, I can build it with the latest Go on Linux but an older Go on OpenBSD or FreeBSD or whatever.)