2020-05-13
Getting my head around what things aren't comparable in Go
It started with Dave Cheney's Ensmallening Go binaries by prohibiting comparisons (and earlier tweets I saw about this), which talks about a new trick for making Go binaries smaller by getting the Go compiler to not emit some per-type internally generated support functions that are used to compare compound types like structs. This is done by deliberately making your struct type incomparable, by including an incomparable field. All of this made me realize that I didn't actually know what things are incomparable in Go.
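The trick itself is tiny. Here's a minimal sketch of my own (with a made-up Point type, not Dave Cheney's actual code) showing how one incomparable field poisons comparability for the whole struct:

    package main

    type Point struct {
        _ [0]func() // functions aren't comparable; a zero-length array of them takes no space
        X, Y int
    }

    func main() {
        p := Point{X: 1, Y: 2}
        _ = p
        // q := Point{X: 1, Y: 2}
        // _ = p == q // compile error: Point is no longer comparable
    }

Since the compiler can statically reject any use of == on Point, it never has to generate an equality function for it.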
In the language specification, this is discussed in the section on comparison operators. The specification first runs down a large list of things that are comparable, and how, and then also tells us what was left out:
Slice, map, and function values are not comparable. However, as a special case, a slice, map, or function value may be compared to the predeclared identifier nil. [...]
(This is genuinely helpful. Certain sorts of minimalistic specifications would have left this out, leaving us to cross-reference the total set of types against the list of comparable types to work out what's incomparable.)
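To make the special case concrete, here is a little sketch of mine; the nil comparisons all compile, while the commented-out self-comparisons are rejected:

    package main

    import "fmt"

    func main() {
        var s []int
        var m map[string]int
        var f func()
        fmt.Println(s == nil, m == nil, f == nil) // prints: true true true
        // _ = s == s // compile error: slice can only be compared to nil
        // _ = m == m // compile error: map can only be compared to nil
        // _ = f == f // compile error: func can only be compared to nil
    }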
It also has an important earlier note about struct values:
- Struct values are comparable if all their fields are comparable. Two struct values are equal if their corresponding non-blank fields are equal.
Note that this implicitly differentiates between how comparability is determined and how equality is checked. In structs, a blank field may affect whether the struct is comparable at all, but if it is comparable, the field is skipped when actually doing the equality check. This makes sense since one use of blank fields in structs is to create padding and help with alignment, as shown in Struct types.
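A small sketch of my own (with an invented padded type) shows both halves of that: the blank padding field has to be comparable for == to exist at all, but per the spec it doesn't participate in the equality check itself:

    package main

    import "fmt"

    type padded struct {
        a byte
        _ [7]byte // blank padding; [7]byte is comparable, so padded stays comparable
        b int64
    }

    func main() {
        x := padded{a: 1, b: 2}
        y := padded{a: 1, b: 2}
        fmt.Println(x == y) // true; only the non-blank fields a and b are compared
    }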
The next important thing (which is not quite spelled out explicitly in the specification) is that comparability is an abstract idea that's based purely on field types, not on what fields actually exist in memory. Consider the following struct:
    type t struct {
        _ [0][]byte
        a int64
    }
A blank zero-sized array at the start of a struct occupies no memory and in a sense doesn't exist in the actual concrete struct in memory (placed elsewhere in the struct, it may affect alignment and total size in current Go, although I haven't looked for what the specification says about that). You could imagine a world where such nonexistent fields didn't affect comparability, and all that mattered was whether the actual fields present in memory were comparable. However, Go doesn't behave this way. Although the blank, zero-sized array of slices doesn't exist in any concrete terms, its presence as a non-comparable field in the struct is enough for Go to declare the entire struct incomparable.
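You can check both halves of this yourself; this is my own quick sketch, and the exact compile error wording varies by Go version:

    package main

    import (
        "fmt"
        "unsafe"
    )

    type t struct {
        _ [0][]byte
        a int64
    }

    func main() {
        fmt.Println(unsafe.Sizeof(t{})) // 8 on 64-bit platforms; the blank array contributes nothing
        // var x, y t
        // _ = x == y // compile error: t cannot be compared
    }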
As a side note, since you can't take the address of functions, there's no way to manufacture a comparable value when starting from a function. If you have a function field in a struct and you want to see which one of a number of possible implementations a particular instance of the struct is using, you're out of luck. All you can do is compare your function fields against nil to see whether they've been set to some implementation or if you should use some sort of default behavior.
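The usual pattern looks something like this (a sketch with invented names, not anyone's real API):

    package main

    import "fmt"

    type greeter struct {
        format func(string) string // optional implementation, nil if unset
    }

    func (g greeter) greet(name string) string {
        if g.format != nil {
            return g.format(name) // some implementation was set
        }
        return "hello, " + name // default behavior
    }

    func main() {
        fmt.Println(greeter{}.greet("world")) // hello, world
        shout := greeter{format: func(s string) string { return s + "!" }}
        fmt.Println(shout.greet("world")) // world!
    }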
(Since you can compare pointers and you can take the address of slice and map variables, you can manufacture comparable values for them. But it's generally not very useful outside of very special cases.)
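Here's what that looks like in a sketch of mine; note that it compares variable identity, not contents, which is why it's rarely what you want:

    package main

    import "fmt"

    func main() {
        var a, b []int
        // _ = a == b // compile error: slices can only be compared to nil
        fmt.Println(&a == &b) // false: two different variables
        p := &a
        fmt.Println(p == &a) // true: the same variable's address
    }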
The modern HTTPS world has no place for old web servers
When I ran into Firefox's interstitial warning for old TLS versions, it wasn't where I expected, and where it happened gave me some tangled feelings. I had expected to first run into this on some ancient appliance or IPMI web interface (both of which are famous for this sort of thing). Instead, it was on the website of an active person who had been mentioned in a recent comment here on Wandering Thoughts. On the one hand, this is a situation where they could have kept their web server up to date. On the other hand, it demonstrates (and brings home) that the modern HTTPS web actively requires you to keep your web server up to date in a way that the HTTP web didn't. In the era of HTTP, you could have set up a web server in 2000 and it could still be running today, working perfectly well (even if it didn't support the very latest shiny thing). This doesn't work for HTTPS, not today and not in the future.
In practice there are a lot of things that have to be maintained on an HTTPS server. First, you have to renew TLS certificates, or automate that renewal (and in practice you've probably had to change how you get TLS certificates several times). Even with automated renewals, Let's Encrypt has changed their protocol once already, deprecating old clients and thus old configurations, and will probably do so again someday. And now you have to keep reasonably up to date with web server software, TLS libraries, and TLS configurations on an ongoing basis, because I doubt that the deprecation of everything before TLS 1.2 will be the last such deprecation.
I can't help but feel that something is lost with this. The HTTPS web probably won't be a place where you can preserve old web servers the way the HTTP web is, for example. Today, if you have operating hardware, you could run an HTTP web server on an old SGI Irix workstation or even a DEC Ultrix machine, and every browser would probably be happy to speak HTTP 1.0 or the like to it, even though the server software probably hasn't been updated since the 1990s. That's not going to be possible on the HTTPS web, no matter how meticulously you maintain old environments.
Another, more relevant side of this is that it's not going to be possible for people with web servers to just let them sit. The more the HTTPS world changes and requires you to change along with it, the more your HTTPS web server requires ongoing work. If you ignore it and skip that work, what happens to your website is first the interstitial warning that I experienced, and eventually browsers will stop accepting it at all. I expect that this is going to drive more people into the arms of large operations (like GitHub Pages or Cloudflare) that will look after all of that for them, and a little bit more of the indie 'anyone can do this' spirit of the old web will fade away.
(At the same time this is necessary to keep HTTPS secure, and HTTPS itself is necessary for the usual reasons. But let's not pretend that nothing is being lost in this shift.)