2013-05-13
My language irritations with Go (so far) and why I'm wrong about them
The great thing about an evolving language is that if you're slow enough about writing up your irritations with it, some of them can wind up fixed (or part fixed). So this list is somewhat shorter than it was when I originally wrote my first Go program, and none of the irritations are major. Also, I will reluctantly concede that Go has good engineering reasons for all of them.
My largest single irritation is that break acts on switch and select; I expected it to act only on enclosing loops, so that you could write something like:
for {
    select {
    case <-mchan:
        // message silently swallowed
    case <-schan:
        break
    }
}
Instead you have to invent a boolean loop condition. I understand why Go does this; it enables you to exit early out of a switch or select case instead of having to wrap everything in ever increasing levels of nesting. This is likely especially important because Go uses explicit error checking (which would otherwise force those nested if blocks).
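As a concrete sketch of that workaround, using the channel names from the example above (the surrounding function and the channel types are just assumptions for illustration): since break only exits the select, you set a flag and test it as the loop condition.

func drainUntilShutdown(mchan <-chan string, schan <-chan struct{}) {
    // 'running' is the invented boolean loop condition; a bare break
    // here would only exit the select, not the for.
    running := true
    for running {
        select {
        case <-mchan:
            // message silently swallowed
        case <-schan:
            running = false
        }
    }
}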
The issue that got partially fixed is Go's return requirements. When I wrote the original version of my program the natural form of one function was a big switch with a number of specific cases and then a default: to catch the rest; however, the original rules required a surplus return at the end of the function, which irritated me by forcing me to move the default case out of the switch to the end of the function, obscuring the logic. The Go 1.1 changes make my particular case okay, but I believe there remain cases where you need an unreachable ending return (or panic) to make the compiler happy.
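For illustration, here is a made-up function (not one from my program) of the sort that still needs an ending return or panic under the Go 1.1 rules: a switch without a default case doesn't count as a 'terminating statement', so the compiler insists on something after it even if you know every possible input is covered.

// We "know" n is always 0, 1, or 2, but the compiler does not, and a
// switch with no default doesn't terminate the function for it.
func name(n int) string {
    switch n {
    case 0:
        return "zero"
    case 1:
        return "one"
    case 2:
        return "two"
    }
    panic("unreachable") // surplus, but required to compile
}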
You can make an argument that the original and current state of affairs are good software engineering. If the compiler did true reachability analysis it'd increase the number of cases where an innocent looking change to some part of the code would suddenly make the return coverage not be complete and thus produce potentially odd messages about missing returns. The current brute force rules protect against this and lead Go programmers to write in a certain sort of consistent style.
My final issue is my perennial one of being unable to cleanly cancel IO being done by goroutines, breaking them out of blocking operations so that they can see a death signal from outside. You can argue that this is a runtime issue rather than a language one, but the problem with that view is that everything that calls an IO operation then needs to be aware of this particular error case (and catch it, and propagate it up the call stack in whatever way is appropriate). A good start to making it genuinely a runtime issue would be for the runtime to define a specific error for 'IO attempted on a closed connection' and for absolutely everything to use it.
(As it stands, the net package doesn't even define a publicly visible error instance for this case, although it does define one internally. It's my personal view that this beautifully illustrates why this is a general language problem; while you can 'solve' it in code, it requires absolutely everyone to get it right and, well, they clearly don't.)
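To make the shape of the problem concrete, here is a hedged sketch of the usual workaround (the package and function names are invented, and the string being matched is an internal detail of the net package rather than anything it promises): the only way to 'cancel' a goroutine blocked in a Read is to close its connection out from under it, and the reader then has to guess from the error whether that is what happened.

package cancelsketch

import (
    "net"
    "strings"
)

// reader consumes data from conn until Read fails. With no exported
// error value to compare against, telling deliberate cancellation apart
// from a genuine IO failure comes down to inspecting the error text.
func reader(conn net.Conn, done chan<- struct{}) {
    defer close(done)
    buf := make([]byte, 4096)
    for {
        n, err := conn.Read(buf)
        if n > 0 {
            // ... process buf[:n] here ...
        }
        if err != nil {
            if strings.Contains(err.Error(), "use of closed network connection") {
                return // cancelled from outside via conn.Close()
            }
            return // a genuine IO error; real code would report it
        }
    }
}

// cancel is the outside world's only lever: close the connection to
// force the blocked Read to return, then wait for the reader to notice.
func cancel(conn net.Conn, done <-chan struct{}) {
    conn.Close()
    <-done
}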
Again this is a software engineering tradeoff. Both the semantics and the runtime implementation of goroutines are undoubtedly vastly simplified because you don't have to worry about being able to signal or cancel a goroutine from outside itself. Outside of the program exiting, all of the interactions that a goroutine has with the outside world are initiated by itself, on its own terms. This makes it much easier to reason about the effects of a goroutine, especially if it's careful not to use global state.
The Unix philosophy is not an end in itself
Today I feel like opening a can of worms that I've alluded to before.
Here is something very important about the Unix philosophy (regardless of what exactly that is): the Unix philosophy was not conceived as an empty philosophy that was an end in itself. Instead it is above all a theory about how to make computers easy, powerful, and useful. This philosophy (or at least the things built by people following it at Bell Labs and elsewhere) has been extraordinarily successful, and I'm not just talking about Unix; concepts first pioneered in Unix and C now form core pieces of pretty much every computer system in the world.
But it's possible to take this too far. To put it one way, it's my strong view that the core goal of Unix is to be useful, not to be philosophically pure. The underlying purpose comes first and fitting how to be useful into 'the Unix way of doing things' comes second. If Unix has to be non-Unixy for a while (or even permanently) in order to be useful, then, well, I pick usefulness. Excessive minimalism and 'Unixness' for the sake of minimalism and Unixness is a kind of masochism.
(Of course the devil is in the details, as it always is. It's certainly possible to ruin Unix without getting anything worth it in exchange.)
What this biases me towards is an environment where one solves the problem first and then tries to make it fit into the traditional 'Unix way' second. Which is why part of me thinks that GNU sort's -h option (numeric sorting of 'human-readable' sizes, such as the output of du -h) is perfectly fine because it solves a real problem (and solves it now).
(The counterargument is that Unix cannot be all things to all people. As with all systems, at some point you have to draw a line and say 'this doesn't fit, you need to go elsewhere'. I don't know how to balance this. I do know that a certain amount of griping about 'the one true Unix way' and how (some) modern Unixes are ruining it reminds me an awful lot of the griping of Lisp adherents at the rise of Unix, and for that matter the griping of Unix people (myself sometimes included) at the rise of Windows and Macs.)