Wandering Thoughts

2019-08-15

Getting LSP-based editing working for Go in GNU Emacs

When I set up autocompletion for Go in GNU Emacs years ago, I used gocode and eventually company-mode, because that was your best option in 2015 and 2016. Things have happened since then; for a summary, I'll refer you to "Go, pls stop breaking my editor" from GopherCon 2019. The short version is that the future is the Language Server Protocol, using lsp-mode in Emacs with the Go LSP server. However, how to set up that future in GNU Emacs today is not entirely obvious, and it's taken me some time and puzzlement to get my Emacs environment working in this modern (or future) world. Here is what I've done so far and what I understand about Emacs' lsp-mode.

First, there are several parts to LSP support in GNU Emacs. Lsp-mode is apparently the core, and then there's lsp-ui, which provides a bunch of the visible interface that you'll normally use. If you use company-mode, company-lsp hooks into the LSP stuff to make company talk to it; for Go, this replaces company-go. I get all of these through MELPA. Flycheck seems to automatically integrate with lsp-ui without any extra package.

To initially start all of this, what I did in my .emacs was just:

;; Start lsp-mode in all Go buffers.
(require 'lsp-mode)
(add-hook 'go-mode-hook #'lsp)
;; Have company-mode get its completions from LSP.
(require 'company-lsp)
(push 'company-lsp company-backends)

(If you're coming from an existing gocode-based setup that uses company-mode, you'll want to stop using company-go. However, you can keep all of your existing company-mode keybindings and as far as I can tell, they just carry on working.)

This gives you a very busy interface by default (which is apparently deliberate, and I can't entirely blame them for the decision). This is where the split between lsp-mode and lsp-ui becomes important, because most of the busy things are done by lsp-ui and so must be configured or turned off through its settings. The lsp-ui git repo README has nice animated GIFs that illustrate what the various sub-components are (although its examples aren't using Go). This helps when you're trying to figure out what to change.

Before I discuss them, here are my current LSP customizations (all from custom-set-variables):

'(lsp-enable-snippet nil)
'(lsp-ui-doc-delay 1)
'(lsp-ui-doc-max-height 8)
'(lsp-ui-sideline-delay 2)
'(lsp-ui-sideline-show-code-actions nil)
'(lsp-ui-sideline-show-hover nil)

I also set a custom colour, although it's now surplus:

'(lsp-ui-sideline-code-action ((t (:foreground "dim gray"))))

I disable snippets because otherwise lsp-mode tells me that it requires yasnippet, and I don't want to install a package that I don't have a clear use for. Apparently some languages have LSP servers that provide snippets, but Go's currently doesn't.

I turn off code actions completely for two reasons. First, Go currently only provides one code action, 'Organize imports', which basically runs goimports, and I can do that myself without a button. Second, unfortunately right now code actions appear even in non-graphical GNU Emacs sessions (such as when you're ssh'd in to a machine), and as far as I know you can't even select them in this case, so all they do is take up space. If I used GNU Emacs for more and was better at configuring it, I would arrange to turn off code actions only in Go mode, and perhaps only when not running within a window system.

(I currently only use lsp-mode with Go, partly because Go is pretty much the only language I write with GNU Emacs right now. At some point I should investigate getting it working with Python too.)

In my 80-column wide default GNU Emacs windows, sideline hover clutters up multiple lines with its information and obscures the actual code. If I had ultra-wide Emacs windows, the right-aligned sideline hover information might be usefully separated from the code on the left, but as it is, it isn't. Much of the same information shows up in the lsp-ui-doc information, which appears up at the top right.

I set delays for both the sideline and the documentation because otherwise I find that things are far too frantic; if I'm moving the cursor around, there will be a constant flurry of things appearing and disappearing and changing. The current larger delays mean that things only start appearing once I've settled down.

The default maximum height for the LSP documentation area is frankly absurd, at least at the vertical size I tend to run my GNU Emacs windows at; with the defaults, it's entirely possible to have the documentation area for a structure or interface almost entirely take over your screen. You'll probably want to tune this for how big your Emacs windows tend to be. Unfortunately, as far as I know there's no way to dismiss the documentation area until you move to a new spot; I think the best you can do is write a function to toggle documentation on or off.

(One such Emacs Lisp function is available here. I may have to adopt it. An alternative would be to toggle the maximum height of the area back and forth, leaving it normally small but then allowing it to jump large if I wanted to pause to read all of something.)
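
As an example of the toggling approach, here is a minimal sketch of my own (assuming lsp-ui's lsp-ui-doc-mode minor mode and its lsp-ui-doc-hide command; the function name is made up):

;; Hypothetical toggle for the lsp-ui documentation area.
(defun my-lsp-ui-doc-toggle ()
  "Toggle lsp-ui's documentation display on or off."
  (interactive)
  (if lsp-ui-doc-mode
      (progn (lsp-ui-doc-hide)
             (lsp-ui-doc-mode -1))
    (lsp-ui-doc-mode 1)))

You would then bind this to some convenient key in go-mode.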

PS: If you make a mistake in setting the LSP project root for something, these settings are all saved in ~/.emacs.d/.lsp-session-v1, or you can use lsp-workspace-folders-remove in GNU Emacs. One such potential mistake is accepting the default project root for something inside $GOPATH/src, which will be its specific subdirectory of go/src instead of the whole thing. I'm not sure what the difference is, but at least with a 'project root' of ~/go/src you don't get asked this question all the time in new areas.

PPS: My current lsp-mode setup is very likely to not be ideal and I'm pretty sure that I'm running into the limitations of my current Emacs knowledge. People who actually understand what they're doing can probably come up with a better setup, and I hope someone writes one up sometime.

(For instance, lsp-ui has a 'peek' feature to look at definitions or references of things, but I don't know how to easily navigate back from peeking at something. I assume that there is a trick to it that people who actively know Emacs already know.)

GoEmacsWithLspMode written at 00:38:12

2019-08-13

Changes to Go and the appearance of finality

In Thinking about the Go Proposal Process, Russ Cox says the following about changes to Go in "Prototypes & Experiments":

For most non-trivial changes it is helpful to understand them by trying them out before making a decision. [...] We arrange to land language changes on day 1 of a development cycle to maximize that window. But for large changes we probably need a way to make prototypes available separately, to give even more time, a bit like the vgo prototype for Go modules.

I have a reaction to this and some opinions.

When the Go team lands language changes in the development version of Go, the default state of affairs at least appears to be that the change will be in the next release of Go, because after all it's in the development version. The Go team does not usually 'try things out' in the development tree; what lands in public is high quality and is almost always going to go into the next release unless some real problem is discovered and it has to be reverted. If the language changes are currently contentious and people disagree over them, landing them creates the impression that these changes are now a mostly done deal. This is especially the case if they land early, where other changes may come to depend on them (complicating a reversion), including changes other people make to tools and code to prepare for the next release of Go.

If the Go team does not mean this, I think that they should take steps to separate the regular development version, with its steady march toward release, from language changes that are being published so that people can try them out. One way to do this would be to land these changes in a separate tree, one based on the current release. As an additional bonus, this would allow people to try out the language changes without having to also take all of the other changes and uncertainties inherent in using a development version that's in flux, which might get more people to try them out.

(There have been times when other changes in the development version broke things like common editor tooling, making it hard to actually use the development version in anything like a normal environment.)

Another way to do this would be to require people to use a flag (perhaps a build option) to enable the language changes in the development version until at least part way through the process. This could create a stronger separation between changes that are clearly on course to be released and changes that are explicitly considered more experimental, while at the same time partially avoiding problems from merging significant changes late in the development cycle. There is some precedent for this approach already, with things like $GO111MODULE.

(Landing language changes in a separate tree doesn't preclude them from also landing in the development tree, so you could do both at once. If the Go team does, I think that language changes in the development tree should still default to off for part of the development cycle.)

This isn't something that's necessary on a technical level, or even a procedural one, since in the past the Go team has reversed itself and reverted features late in the Go development cycle because they weren't ready. But for contentious changes, I think it is very much called for, because appearances matter and the current appearance of things tilts fairly strongly in one direction.

PS: As an indication of how the Go team and language changes are viewed today, the dropping of try() was a genuine surprise to me and I think a decent number of other people, despite the significant amount of disagreement that had been expressed about it (cf, which I wrote almost as the proposal was being dropped).

GoAppearanceOfChanges written at 23:32:37

2019-07-30

I think it's time to explicitly set Go's $GO111MODULE environment variable

Go is in the process of transitioning from the old traditional $GOPATH approach to organizing Go code to Go modules. I have my opinions about the fairly rapid speed of this change, but it's clear that the Go team wants to push this forward, and as we know, what the team wants they usually get (try excepted). Given both this ongoing change and my habit of sometimes using the development version of Go, I think it's time for me to explicitly set the $GO111MODULE and $GOPATH environment variables that control this behavior. You might want to consider doing this too.

The Go team is clearly going to be changing the default behavior of modules over time. This creates two problems, even for people who don't use the development version of Go. The first one is simply that the behavior you get will change as you update from Go version to Go version. Explicitly setting the relevant environment variables puts the timing of this change under your control; you can do it at Go version update or not, at your option.

(Apart from the bit where GO111MODULE=auto is changing its behavior between Go 1.12 and Go 1.13. Probably the change is for the better, but it makes me annoyed anyway; some people are going to experience a change in behavior that will break building some Go code that they use.)

The second problem is that various Go programs that deal with Go code are likely to change their default behavior as and if they're rebuilt with newer versions of Go, because they inherit the current Go's defaults through core packages. This cuts both ways; you can experience problems if you rebuild a program like gopls with the latest Go and then try to use it in an older environment, or if you don't rebuild a perfectly fine program like goimports just because you started using a new version of Go. Up until very recently it hasn't been important to carefully match the version of Go that utility programs were compiled with to the version of Go that you were actually using, so I suspect that a lot of people have older binaries that they still use.

Explicitly setting $GO111MODULE to a suitable value for your environment doesn't fix all of the problems here, but it does help reduce the variation in what happens; at least you're explicitly telling tools what you want. Setting $GOPATH too isn't currently necessary, but I think it's probably a good insurance against the future.
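
As a concrete illustration, this is nothing more than a couple of lines in your shell profile (the values here are only an example; pick whatever matches your environment):

export GO111MODULE=auto
export GOPATH=$HOME/go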

PS: I am perpetually making mistakes about how many 1's the environment variable has. I even almost published this entry with the title talking about 'GO11MODULE'. I know why it has three 1's, but I wish it was called something else.

Sidebar: Changes in module behavior is why I stopped using Go tip

I could write a bunch of things, but instead I will just point to my tweets on this: here, here, and especially here. The situation has improved since then; now building Go 1.12 starting from the latest development version of Go (ie, almost Go 1.13) merely reports a bunch of:

go_asm.h:493:33: 'p' exponent requires hexadecimal mantissa

As far as I know, the Go people never promised backward compatibility for the Go toolchain (that you could build older versions of Go with newer toolchains), so they are within their rights here. I just don't have to like it, and I took it as a sign to stop using the development version of Go.

Will I update promptly to Go 1.13 when it gets released? I don't know, but quite possibly not, for the first time in Go releases. I'm tempted to let other people find all of the things that don't build any more with the Go 1.13 default behavior for Go modules.

GoTimeToSetGO111MODULE written at 22:34:42

2019-07-28

A note on using the Go Prometheus client package to expose labeled metrics

For reasons beyond the scope of this blog entry, I have been recently playing around adding some Prometheus metrics to a Go program through the official Prometheus Go client packages. The particular metrics I wanted to add were going to have labels:

scripts_requests_total { script="ntpdate" } 58
scripts_requests_total { script="sntp" }    112

How to do this is not entirely clear in the Prometheus client package documentation. You start by creating, say, a CounterVec, then some magic happens involving methods with names like With and CurryWith, and you finally have metrics that you can set, increment, and so on. After some head scratching I have finally figured out a mental model of this, and now I'm going to write it down before I forget it.

When you create a metric with labels by using, say, NewCounterVec(), what you really create is a template for the actual metrics you will wind up with. Your template specifies what the names of the labels are, but obviously it doesn't (and can't) specify what values the labels have for any particular metric. In order to actually get a specific metric, you must fill in values for all of the labels in some way, which creates a specific metric from your template. With a specific metric in hand, you can now manipulate it to, for example, count things. If you're working with metrics that have labels, you always have to perform this specialization step, even if you're only ever going to generate a single metric (for example, a 'scripts_build_info' metric where the point is the label values).

The Go package offers two ways of doing this specialization. First, you can create a Labels map that maps all of the label names to specific values, and then use .With() or .GetMetricWith(). Second, you can simply use .WithLabelValues() or .GetMetricWithLabelValues() to list off all of the label values in the same order as you specified the labels themselves when you created the *Vec metric template. Which one you use depends on which is more convenient.
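
As a concrete sketch of both the template and the two specialization methods (using the metric from my example above; the Help text and the numbers are invented for illustration):

package main

import (
    "github.com/prometheus/client_golang/prometheus"
)

func main() {
    // The template: the label names are fixed here, but not their values.
    requests := prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "scripts_requests_total",
            Help: "How many requests each script has made.",
        },
        []string{"script"},
    )
    prometheus.MustRegister(requests)

    // Specializing the template gives us actual metrics to manipulate.
    requests.WithLabelValues("ntpdate").Inc()
    requests.With(prometheus.Labels{"script": "sntp"}).Add(112)
}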

These metrics templates can also be partially specified, filling in some but not all label values. This is done with the .CurryWith() and .MustCurryWith() methods, which return a 'narrower' *Vec metric template. I can imagine several situations where this might be useful, but since I haven't used this in code yet I'm not going to write out my speculations here.

(I suspect that Prometheus client packages for other languages follow a similar model, but I haven't looked at them yet.)

Sidebar: An unfortunate limitation of promhttp

The Prometheus Go client packages include promhttp, which can generate metrics related to the HTTP requests that your program handles (and also to any HTTP requests it makes, with a separate set of functions). Unfortunately the instrumentation it provides doesn't have any way to customize the labels of the metrics on a per-request basis, and re-implementing what it's doing requires duplicating a bunch of un-exported HTTP middleware functionality that it contains.

For a non-hypothetical example, promhttp goes through a large amount of work to be able to capture the HTTP reply's status code and other response information. When I started looking through that code, I decided that the HTTP status wasn't quite important enough to put in my own metrics.

(The problem here is that a concrete instance of the http.ResponseWriter may support additional interfaces, like http.Flusher or http.CloseNotifier, and this support may be important to either your HTTP server code or things that your code calls. It's easy to implement a plain http.ResponseWriter in middleware, but then downstream code loses access to these additional interfaces.)
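
To illustrate the issue, here is a sketch of a naive status-capturing wrapper (the type name is made up; this is not promhttp's actual code):

import "net/http"

// statusWriter records the status code of the HTTP reply.
type statusWriter struct {
    http.ResponseWriter
    status int
}

func (w *statusWriter) WriteHeader(code int) {
    w.status = code
    w.ResponseWriter.WriteHeader(code)
}

Since a *statusWriter has only the methods of plain http.ResponseWriter, a downstream type assertion such as 'w.(http.Flusher)' now fails even when the underlying ResponseWriter supports flushing.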

GoPrometheusMetricLabels written at 22:08:38

2019-07-16

Go's proposed try() will be used and that will change how code is written

One of the recent commotions in Go is over the Go team's proposed try() built-in error check function, which is currently planned to be part of Go 1.14 (cf). To simplify, 'a, [...] := try(f(...))' can be used to replace what you would today have to write as:

a, [...], err := f(...)
if err != nil {
   return [...], err
}

Using try() means you can drop that standard if block, which makes your function clearer; much more of the code that remains is relevant and important.

Try() is attractive and will definitely be used in Go code, probably widely, and especially by people who are new to Go and writing more casual code. However, this widespread use of try() is going to change how Go code is written.

One of my firm beliefs is that most programmers are strongly driven to do what their languages make easy, and I don't think try() is any exception (I had similar thoughts about the original error handling proposal). What try() does is make returning an unchanged error the easiest thing to do. You can wrap the error from f() with more context if you work harder, but the easiest path is to not wrap it at all. This is a significant change from the current state of Go, where wrapping an error is a very easy thing that needs almost no change to the boilerplate code:

a, [...], err := f(...)
if err != nil {
   return [...], fmt.Errorf("....", ..., err)
}

In a try() world, adding that error wrapping means adding those three lines of near boilerplate back in. As a result, I think that once try() is introduced, Go code will see a significantly increased use of errors being returned unchanged and unwrapped from deep in the call stack. Sure, it's not perfect, but programmers are very good at convincing themselves that it's good enough. I'm sure that I'll do it myself.

This change isn't necessarily bad by itself, but it does directly push against the Go team's efforts to put more context into error values, an effort that actually landed changes in the forthcoming Go 1.13 (see also the draft design). It's possible to combine try() and better errors in clever ways, as shown by How to use 'try', but it's not the obvious, easy path, and I don't think it's going to be a common way to use try().
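
For illustration, the clever approach looks roughly like this (a sketch only; since try() was never released this compiles with no actual version of Go, and doSomething() and f() are invented names):

func doSomething() (err error) {
    // The deferred handler wraps whatever error try() returns for us.
    defer func() {
        if err != nil {
            err = fmt.Errorf("doSomething: %w", err)
        }
    }()
    a := try(f())
    use(a)
    return nil
}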

I am neither for nor against try() at the moment, because I think that being for or against it in isolation is asking the wrong question. The important question is how Go wants errors to work, and right now the Go team does not seem to have made up its mind. If the Go team decides that errors should frequently be wrapped on their way up the call stack, I believe that try() in its current form is a bad idea.

(If the Go team thinks that they can have both try() in its current form and people routinely wrapping errors, I think that they are fooling themselves. try() will be used in the easiest way to use it, because that's what people do.)

PS: While Go culture is currently relatively in favour of wrapping errors with additional information, I don't think that this culture will survive the temptation of try(). You can't persuade people to regularly do things the hard way for very long.

Update: The Go team dropped the try() proposal due to community objections, rendering the issue moot.

GoTryWillBeUsedSimply written at 23:36:22

2019-07-03

Converting a variable to a single-element slice in Go via unsafe

I was recently reading Chris Wellons' Go Slices are Fat Pointers. At the end of the article, Wellons says:

Slices aren’t as universal as pointers, at least at the moment. You can take the address of any variable using &, but you can’t take a slice of any variable, even if it would be logically sound.

[...] However, if you really wanted to do this, the unsafe package can accomplish it. I believe the resulting slice would be perfectly safe to use:

// Convert to one-element array, then slice
fooslice = (*[1]int)(unsafe.Pointer(&foo))[:]

I had to read this carefully before I understood what it was doing, but once I went through the documentation for unsafe.Pointer(), I came to believe that this is fully safe. So let's start with what it's doing. The important thing is this portion of the expression:

(*[1]int)(unsafe.Pointer(&foo))

This is essentially reinterpreting foo from an integer to a one-element array of integers, by taking a pointer to it and then converting that to a pointer to a one-element array. I believe that this use of unsafe.Pointer() is probably valid, because it seems like it falls under the first valid use in the documentation:

(1) Conversion of a *T1 to Pointer to *T2.

Provided that T2 is no larger than T1 and that the two share an equivalent memory layout, this conversion allows reinterpreting data of one type as data of another type. [...]

In Go today, an integer and a one-element array of integers are the same size, making the first clause true and pretty much requiring that the second one is true as well. I don't think that Go requires this in the language specification, but in practice it's very likely to be the case in any implementation that wants to adhere to Go's ethos of efficiency and minimalism. Once we have a valid pointer to a (valid) one-element array of int, it's perfectly legal to create a slice from it, which is what the '[:]' does. So if this use of unsafe is valid, the resulting slice is fully safe and valid.
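
To make this concrete, here is a little self-contained program (my own, not Wellons'). The resulting slice genuinely aliases foo's storage, so writing through the slice changes foo:

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    foo := 10
    // Reinterpret foo as a *[1]int, then slice the one-element array.
    fooslice := (*[1]int)(unsafe.Pointer(&foo))[:]
    fooslice[0] = 20
    fmt.Println(foo, len(fooslice), cap(fooslice)) // prints: 20 1 1
}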

Now we get to the interesting question of why Go doesn't allow this without the use of unsafe.Pointer(). One possible answer is that this is not allowed simply because it would require extra work in the language specification and the compiler. This may well be the case (and it's certainly a very Go style reason), but another possible reason is that Go doesn't want to require that all implementations make a one-element array have exactly the same memory layout and implementation as a single variable. By confining this to the limited assurances of unsafe and not making it part of the guaranteed language specification, Go keeps people's options open.

(Of course this is only theoretical, because in practice a new implementation will likely want to reuse as much of the standard library as possible and the current standard library uses unsafe in various places. If you don't match what works with unsafe today in mainline Go, you're going to have to rewrite some of that code. Also, see how unsafe type conversions are still garbage collection safe for some more discussion of this area.)

GoVariableToArrayConversion written at 22:00:33

2019-07-01

The power of option types is in what they do to the rest of the language

I was recently reading Real problems with functional languages (via), where I ran across the standard explanation of why option types matter. As I read it (and not necessarily as it's really written in that article), they matter because they force you to deal explicitly with null values.

Reading this sparked a little light in my head, because this isn't really correct. Modern compilers do not need an explicit type to force you to do this checking; with data flow analysis inside functions, they are perfectly capable of forcing you to check whether or not a variable is null before you make any other use of it (and only allowing that use in a code path where you have proven the variable isn't null). So why don't they?

The simple answer is that a compiler that required this would be insufferable to use, because every single function where you had a potentially nullable entity would need its own set of checks (unless the compiler had extremely reliable inter-function analysis), all the way up and down the call stack. It wouldn't matter that every path to this function having this entity was already checking for nulls; you'd have to do it again, over and over. In practice, what people using the compiler (and the language) would insist on is some sort of notation for 'this cannot ever be null', which would allow you to not check in any code that only dealt with such variables.

(The compiler would then require that you could only return, assign to, or otherwise set those entities to non-null values, by requiring checks beforehand. But they would be localized checks.)

This leads me to my new view of the real power of option types: they let you check for null only once, instead of everywhere in your program. Having option types makes it ergonomic to create types that are defined so that they cannot be null and then have the compiler enforce this, while still giving you an escape hatch for situations where you have to return something other than a legitimate value of the type.

Of course, this is not the only thing option types can do. Even in a language where you can't restrict your types this way, option types provide you a way to make people explicitly consider errors, null returns, or whatever by wrapping up the value they really want inside something that forces them to confront the issue and blows up if they don't. But it doesn't do it as pervasively as 'cannot be null' types, and to the extent that you try to make it pervasive, it becomes more and more annoying, because it is exactly those 'prove it's not null over and over again' checks being enforced by user-created types.

OptionTypesIndirectPower written at 22:25:45

2019-06-15

Some notes on Intel's CPUID and how to get it for your CPUs

In things like Intel's MDS security advisory, Intel likes to identify CPU families with what they call a 'CPUID', which is a hex number. For example, the CPUID of the Sandy Bridge Xeon E5 'Server Embedded' product family is listed by Intel as 206D7, the CPUID of the Westmere Xeon E7 family is 206F2, and the CPUID of the Ivy Bridge Xeon E7 v2 family is 306E7. Given that one of these families has a microcode update theoretically available, one of them is supposed to get it sometime, and one will not get a microcode update, it has become very useful to be able to find out the CPUID of your Intel processors (especially given Intel's confusing Xeon names).

On x86 CPUs, this information comes from the CPU via the CPUID instruction, which provides all sorts of information (including the brand name of the processor itself, which the processor directly provides in ASCII). Specifically, it is the 'processor version information' that you get from using CPUID to query the Processor Info and Feature Bits. Many things will tell you this information, for example Linux's /proc/cpuinfo and lscpu, but they decode what it represents to give you the CPU family, model, and stepping (using a complicated algorithm that is covered in that Wikipedia entry on CPUID). Intel's 'CPUID' is this raw value directly in hex, and I don't know if you can reliably reverse a given family/model/stepping triplet into the exact CPUID (I haven't tried to do it).
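
For reference, that decoding is not actually very long. Here is my own sketch of it in Go, following the documented algorithm (the function name is mine):

func decodeSignature(eax uint32) (family, model, stepping uint32) {
    stepping = eax & 0xf
    baseModel := (eax >> 4) & 0xf
    baseFamily := (eax >> 8) & 0xf
    extModel := (eax >> 16) & 0xf
    extFamily := (eax >> 20) & 0xff

    family = baseFamily
    if baseFamily == 0xf {
        family += extFamily
    }
    model = baseModel
    if baseFamily == 0x6 || baseFamily == 0xf {
        model += extModel << 4
    }
    return
}

For the 0x000906ea of my home machine (seen below), this gives family 6, model 158 (0x9e), and stepping 10.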

(Intel's MDS PDF also lists a two-hex-digit 'Platform ID'. I don't know where this comes from or how you find out what yours is. I thought I found some hints, but they don't appear to give the right answer on my test machine.)

There are a variety of ways to get the Intel CPUID in raw hex. The most brute force method and perhaps the simplest is to write a program that uses the CPUID instruction to get this. Keen people can use C with inline assembly, but I used Go with a third party package for this that I found through the obvious godoc.org search:

package main

import (
  "fmt"

  "sigs.k8s.io/node-feature-discovery/pkg/cpuid"
)

func main() {
  // CPUID leaf 0x01 returns the 'processor version information' in EAX.
  r := cpuid.Cpuid(0x01, 0x00)
  fmt.Printf("cpuid: %x\n", r.EAX)
}

This has the great benefit of Go for busy sysadmins; it compiles to a static binary that will run on any machine regardless of what packages you have installed, and you can pretty much cross-compile it for other Unixes if you need to (at least 64-bit x86 Unixes; people with 32-bit x86 Unixes are out of luck here without some code changes, but this package may help).

(Intel also has a CPUID package for Go, but it wants to decode this information instead of just give it to you literally so you can print the hex that Intel uses in its documentation. I wish Intel's left hand would talk to its right hand here.)

On Linux machines, you may have the cpuid program available as a package, and I believe it's also in FreeBSD ports in the sysutils section (and FreeBSD has another 'cpuid' program that I know nothing about). Cpuid normally decodes this information, as everything does, but you can get it to dump the raw information and then read out the one field of one line you care about, which is the 'eax' field in the line that starts with '0x00000001':

; cpuid -1 -r
CPU:
   0x00000000 0x00: eax=0x00000016 ebx=0x756e6547 ecx=0x6c65746e edx=0x49656e69
   0x00000001 0x00: eax=0x000906ea ebx=0x04100800 ecx=0x7ffafbff edx=0xbfebfbff
[...]

(This is my home machine, and the eax of 0x000906ea matches the CPUID of 906EA that Intel's MDS PDF says that an i7-8700K should have.)

Perhaps you see why I think a Go program is simpler and easier.

IntelCPUIDNotes written at 23:27:09

2019-06-09

Go recognizes and specially compiles some but not all infinite loops

A while back, for my own reasons, I wrote an 'eatcpu' program to simply eat some number of CPUs worth of CPU time (by default, all of the CPUs on the machine). I wrote it in Go, because I wanted a straightforward program and life is too short to deal with threading in C. The code that uses CPU is simply:

func spinner() {
        var i int
        for {
                i++
        }
}

At the time I described this as a simple integer CPU soaker, since the code endlessly increments an int. Recently, to make my life more convenient, I decided to put it up on Github, and as part of that I decided that I wanted to actually know what it was doing; specifically I wanted to know if it actually was all running in CPU registers or if Go was actually loading and storing from memory all of the time. I did this in the straightforward way of running 'go tool compile -S' (after some research) and then reading the assembly. It took me some time to understand what I was reading and believe in it, because here is the entire assembly that spinner() compiles down to:

0x0000 00000 (eatcpu.go:27)     JMP     0
0x0000 eb fe

(The second line is the actual bytes of object code.)

Go 1.12.5 had recognized that I had an infinite loop with no outside effects and had compiled it down to nothing more than that. Instead of endless integer addition, I had an endless JMP, which was probably using almost none of the CPU's circuitry (certainly it doesn't need to use the integer ALU).

The Go compiler is clever enough to recognize that a variation of this is still an infinite loop:

func spinner2() int {
        var i int
        for {
                i++
        }
        return i
}

This too compiles down to 'JMP 0', since it can never exit the for loop to return anything.

However, the Go compiler does not recognize impossible situations as being infinite loops. For example, we can write the following:

func spinner3() uint {
        var i uint
        for i >= 0 {
                i++
        }
        return i
}

Since i is an unsigned integer, the for condition is always true and the loop will never exit. However, Go 1.12.5 compiles it to actual arithmetic and looping code, instead of just a 'JMP 0'. The core of the assembly code is:

0x0000  XORL    AX, AX
0x0002  JMP     7
0x0004  INCQ    AX
0x0007  TESTQ   AX, AX
0x000a  JCC     4
0x000c  MOVQ    AX, "".~r0+8(SP)
0x0011  RET

(The odd structure is because of how plain for loops are compiled. The exit check is relocated to the bottom of the loop, and then on initial loop entry, at 0x0002, we skip over the loop body to start by evaluating the exit check.)

If I'm understanding the likely generated x86 assembly correctly, this will trivially never exit; TESTQ likely compiles to some version of TEST, which unconditionally clears CF (the carry flag), and JCC jumps if the carry flag is clear.

(The Go assembler's JCC is apparently x86's JAE, per here, and per this x86 JUMP quick reference, JAE jumps if CF is clear. Since I had to find all of that and follow things through, I'm writing it down.)

On the whole, I think both situations are reasonable. Compiling infinite for loops to straight JMPs is perfectly reasonable, since they do get used in real Go code, and so is eliminating operations that have no side effects; put them together and spinner() turns into 'JMP 0'. On the other hand, the unsigned int comparison in spinner3() should never happen in real, non-buggy code, so it's probably fine for the optimizer to not recognize that it's always true and thus that this creates an infinite loop with no outside effects.

(There is little point to spending effort on optimizing buggy code.)

PS: I don't know if there's already a Go code checker that looks for unsigned-related errors like the comparison in spinner3(), but if there isn't, there is probably room for one.

GoInfiniteLoopOptimization written at 21:56:40

2019-06-05

Go channels work best for unidirectional communication, not things with replies

Once, several years ago, I wrote some Go code that needed to manipulate a shared data structure. At this time I had written and read less Go code than I have now, and so I started out by trying to use channels and goroutines for this. There would be one goroutine that directly manipulated the data structure; everyone else would ask it to do things over channels. Very rapidly this failed and I wound up using mutexes.

(The pattern I tried is what I have since seen called a monitor goroutine (via).)

Since then, I have come to feel that this is one regrettable weakness of Go channels. However nice, useful, and convenient they are for certain sorts of communication patterns, Go channels do not give you very good ways of implementing a 'RPC' communication pattern, where you make a request of another goroutine and expect to get an answer back, since there is no direct way to reply to a channel message. In order to be able to reply to the sender, your monitor goroutine must receive a unique reply channel as part of the incoming request, and then things can start getting much more complicated and tangled from there (with various interesting failure modes if anyone ever makes a programming mistake; for example, you really want to insist that all reply channels are buffered).
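
To show what I mean by bureaucracy, here is a minimal sketch of the pattern for a single operation (all of the names are made up for illustration):

// A request to the monitor goroutine. The reply channel must be
// buffered so the monitor never blocks sending an answer.
type getRequest struct {
        key   string
        reply chan string
}

// The monitor goroutine owns the map; no one else touches it.
func monitor(data map[string]string, requests <-chan getRequest) {
        for req := range requests {
                req.reply <- data[req.key]
        }
}

// Every caller must construct a request and its own reply channel.
func get(requests chan<- getRequest, key string) string {
        req := getRequest{key: key, reply: make(chan string, 1)}
        requests <- req
        return <-req.reply
}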

My current view is that Go channels work best for unidirectional communication, where either you don't need an answer to the message you've sent or it doesn't matter which goroutine in particular receives and processes the 'reply' (really the next step), so you can use a single shared channel that everyone pulls messages from. Implementing some sort of bidirectional communication between specific goroutines with channels is generally going to be painful and require a bunch of bureaucracy that will complicate your code (unless all of the goroutines are long-lived and have communication patterns that can be set up once and then left alone). This makes the "monitor goroutine" pattern a bad idea simply for code clarity reasons, never mind anything else like performance or memory churn.

(This is especially the case if you have a bunch of different requests to send to the one goroutine, each of which can get a different reply, because then you need a bunch of different channel types unless you're going to smash everything together in various less and less type-safe ways. The more methods you would implement on your shared data structure, the more painful doing everything through a monitor goroutine will be.)

I'm not sure there's anything that Go could do to change this, and it's not clear to me that Go should. Go is generally fairly honest about the costs of operations, and using channels for synchronization is more expensive than a mutex and probably always will be. If you have a case where a mutex is good enough, and a shared data structure is a great case, you really should stick with simple and clearly correct code; that it performs well is a bonus. Channels aren't the answer to everything and shouldn't try to be.

(Years ago I wrote Goroutines versus other concurrency handling options in Go about much the same issues, but my thinking about what goroutines were good and bad at was much less developed then.)

(This entry was sparked by reading Golang: Concurrency: Monitors and Mutexes, A (light) Survey, because it made me start thinking about why the "monitor goroutine" pattern is such an awkward one in Go.)

GoChannelsAndReplies written at 01:02:59
