Wandering Thoughts

2019-11-07

Realizing that Go constants are always materialized into values

I recently read Global Constant Maps and Slices in Go (via), which starts by noting that Go doesn't let you create const maps or slices and then works around that by having an access function that returns a constant slice (or map):

const rateLimit = 10
func getSupportedNetworks() []string {
    return []string{"facebook", "twitter", "instagram"}
}

When I read the article, my instinctive reaction was that that's not actually a constant because the caller can always change it (although if you call getSupportedNetworks() again you get a new clean original slice). Then I thought more about real Go constants like rateLimit and realized that they have this behavior too, because any time you use a Go constant, it's materialized into a mutable value.

Obviously if you assign the rateLimit constant to a variable, you can then change the variable later; the same is true of assigning it to a struct field. If you call a function and pass rateLimit as one argument, the function receives it as an argument value and can change it. If a function returns rateLimit, the caller gets back a value and can again change it. This is no different with the slice that getSupportedNetworks() returns.

The difference between using rateLimit and using the return value from getSupportedNetworks is that the latter can be mutated through a second reference without the explicit use of Go pointers:

func main() {
   a := rateLimit
   b := &a
   *b += 10

   c := getSupportedNetworks()
   d := c
   d[1] = "mastodon"

   fmt.Println(a, c)
   // prints: 20 [facebook mastodon instagram]
}

But this is not a difference between true Go constants and our emulated constant slice, it's a difference in the handling of the types involved. Maps and slices are special this way, but other Go values are not.
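To make the difference concrete, here's a minimal sketch (mine, not from the article) contrasting slice assignment with array assignment; assigning a slice copies only the slice header, so both slices share one backing array, while assigning an array copies all of its elements:

package main

import "fmt"

func main() {
    s := []string{"a", "b"}
    t := s
    t[0] = "x" // s and t share a backing array

    a := [2]string{"a", "b"}
    b := a
    b[0] = "x" // b is an independent copy; a is untouched

    fmt.Println(s, a)
    // prints: [x b] [a b]
}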

(Slices are also mutable at a distance in other ways.)

PS: Go constants can't have their address taken with '&', but they aren't the only sorts of unaddressable values in Go. In theory we could make getSupportedNetworks() return an unaddressable value by making its return value be '[3]string', as we've seen before; in practice you almost certainly don't want to do that for various reasons.
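For illustration, here's a sketch of that unaddressable variant (the function name with the 'A' suffix is my own invention):

package main

import "fmt"

func getSupportedNetworksA() [3]string {
    return [3]string{"facebook", "twitter", "instagram"}
}

func main() {
    // getSupportedNetworksA()[1] = "mastodon"
    // ...is a compile error; the returned array is unaddressable.
    nets := getSupportedNetworksA()
    nets[1] = "mastodon" // but a copy of it is freely mutable
    fmt.Println(nets)
}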

(This seems like an obvious observation now that I've thought about it, but I hadn't really thought about it before reading the article and having my reflexive first reaction.)

GoConstantsAsValues written at 00:13:57

2019-10-27

My common patterns in shell script verbosity (for sysadmin programs)

As a system administrator, I wind up writing a fair number of scripts that exist to automate or encapsulate some underlying command or set of commands. This is the pattern of shell scripts as wrapper scripts or driver scripts, where you could issue the real commands by hand but it's too annoying (or too open to mistakes). In this sort of script, I've wound up generally wanting one of three different modes for verbosity; let's call them 'quiet', 'dryrun', and 'verbose'. In 'verbose' mode the script runs the underlying commands and reports exactly what it's running, in 'quiet' mode the script runs the underlying commands but doesn't report them, and in 'dryrun' mode the script reports the commands but doesn't run them.

Unfortunately this three-way setup is surprisingly hard to implement in a non-annoying way in Bourne shell scripts. Now that I've overcome some superstition I wind up writing something that looks like this:

run() {
  # report the command unless $verb is 0 ('verbose' and 'dryrun' modes)
  [ "$verb" != 0 ] && echo "+ $*"
  # actually run the command only if $DOIT is 'y' ('verbose' and 'quiet')
  [ "$DOIT" = y ] && "$@"
}

[...]

run prog1 some arguments
run prog2 other arguments
[...]

This works for simple commands, but it doesn't work for pipelines and especially for redirections (except sometimes redirection of standard input). It's also somewhat misleading about the actual arguments if any of them contain spaces; if I think that's likely, I need something more complicated than a plain 'echo'.

For those sorts of more complicated commands, I usually wind up writing some variant of this code as an inline snippet of shell, often writing the verbose report of what will be run a little differently from what actually gets run in order to be clear about what's going on. The problem with this is not just the duplication; it's also the possibility of errors creeping into any particular version of the snippet. But, unfortunately, I have yet to come up with a better general solution.

One hacky workaround for all of this is to make the shell script generate and print out the commands instead of actually trying to run them. This delegates all of the choices to the person running the script; if they just want to see the commands that would be run, they run the script, while if they actually want to run the commands they feed the script's output into either 'sh -e' or 'sh -ex' depending on whether they want verbosity. However, this only really works well if there's no conditional logic that needs to be checked while the commands are running. The moment the generated commands need to include 'if' checks and so on, things will get complicated and harder to follow.

ShellScriptVerbosity written at 00:21:54

2019-10-26

An incorrect superstition about running commands in the Bourne shell

Every so often, I wind up writing some bit of shell script that wants to execute an entire command line that it has been passed (program and all). For years I have written this as follows:

# re-execute the command line
cmd="$1"; shift
"$cmd" "$@"

Some version of this has crept into innumerable shell scripts, partly for reasons beyond the scope of this entry. I've always considered it just a little Bourne irritation that "$@" didn't let you directly run commands and you had to peel off the command first.

Except that that's wrong, as I found out when I was just about to write an entry complaining about it. Both Bash and several other implementations of the Bourne shell are perfectly happy to let you write this as the obvious:

# re-execute the command line
"$@"

I've apparently spent years programming via superstition. Either I got this wrong many years ago and it stuck, or at some point I had to use some version of the Bourne shell where this didn't work. Well, at least I know better now and I can do it the easy way in future scripts.

(One case for this sort of re-execution is if you've got a generic script for running other commands with locking. The script is given a lock name and the command line to run under the lock, and it will wind up wanting to do exactly this operation.)

PS: Since this seems to be so widely supported across various versions of the Bourne shell, I assume that this is either POSIX compatible or even POSIX mandated behavior. I haven't bothered to carefully read the relevant specification to be sure of that, though.

BourneCommandSuperstition written at 00:06:18

2019-10-20

A small irritation with Go's crypto/tls package

I generally like Go's TLS support in the crypto/tls package, to the small extent that I've used it; things pretty much work in straightforward ways as I'd expect and it's easy to use. But I do have one issue with it, and that is that it doesn't supply a .String() method for any of the TLS related constants and types it provides.

The obvious problem is that this leads to duplicated work for everyone who wants to report or log the ciphers, TLS version information, and so on used in connections (and you should capture this information). Unless you want to log the raw hex bytes and throw everyone to the wolves to decode it later, you're going to be converting things to strings yourself, and everyone does it (well, lots of people at least).

So, lots of people create the obvious thing that looks something like this:

var tlsVersions = map[uint16]string{
   tls.VersionSSL30: "SSLv3",
   tls.VersionTLS10: "TLS1.0",
   tls.VersionTLS11: "TLS1.1",
   tls.VersionTLS12: "TLS1.2",
   tls.VersionTLS13: "TLS1.3",
}

Congratulations, you now have three problems. The first problem is that this code doesn't build on older versions of Go, because their older version of crypto/tls doesn't have the tls.VersionTLS13 constant. This issue has been encountered by real code out there in the world. The second problem is that this is going to generate deprecation warnings about the use of tls.VersionSSL30 on modern versions of Go, and may someday fail to build entirely in a future version.

(It's not clear if the Go authors will consider the removal of the VersionSSL30 constant to be an API breakage that is prevented by the Go 1.0 compatibility rules. Similar questions apply for SSL and TLS ciphers that Go drops support for.)

The third problem is that this code will fail to handle new TLS versions that are added in future versions of Go, which it may encounter when it's compiled under those Go versions and makes connections using those new TLS versions; similar things can happen if you try to keep a list of TLS ciphers and names for them. In entirely realistic code (as in, respected programs have done it), this can lead to the program panicking.

In Go programs today, your best option is to ignore those temptingly convenient crypto/tls constants and define them yourself, either as raw numbers (in a tlsVersions map or your equivalent) or as painfully recreated constants of your own that are under your control. Perhaps you can use 'go generate' to automate this from some sort of list.
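As a minimal sketch of the raw numbers approach (the helper function and its name are my own invention): because it never mentions the crypto/tls constants, this builds on any Go release, and the fallback means unknown future versions get reported instead of causing a panic:

package main

import "fmt"

var tlsVersions = map[uint16]string{
    0x0300: "SSLv3",
    0x0301: "TLS1.0",
    0x0302: "TLS1.1",
    0x0303: "TLS1.2",
    0x0304: "TLS1.3",
}

func tlsVersionString(vers uint16) string {
    if s, ok := tlsVersions[vers]; ok {
        return s
    }
    // report unknown future versions instead of panicking over them
    return fmt.Sprintf("unknown-0x%04x", vers)
}

func main() {
    fmt.Println(tlsVersionString(0x0304), tlsVersionString(0x9999))
    // prints: TLS1.3 unknown-0x9999
}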

All of this could be avoided if crypto/tls would convert these to strings for you. Unfortunately it's probably too late to make this really convenient by adding .String() methods, because the relevant types are fixed in stone by the Go 1.0 compatibility promise. However, the package could still provide functions that return string names for the TLS versions and TLS ciphers it supports.

PS: In case you think I'm picking on one specific program here, my own code has this issue too with TLS versions (I was already using raw hex values for my mapping from TLS ciphers to names for them). I'll probably be converting it to use raw hex values for TLS versions, which of course makes it more opaque.

GoTLSNoStringIssue written at 22:36:29

2019-10-13

If you don't test it in your automated testers, it's broken

A while back I tweeted:

The Go developers keep breaking Go's self-tests when you have gccgo installed. Since they don't install it on almost any of their automated builders, I don't think they really care and so I'm done dealing with it myself. It's time to uninstall gcc-go.

I could file yet another Go bug report about this breakage due to gccgo that could as well have been filed by an automated machine, but I've lost all my interest in being a human machine for the Go developers. Building with gccgo present can and should be checked automatically.

(I don't actually use gccgo; I had it installed basically only out of curiosity.)

As a preface and disclaimer, I'm not picking on Go specifically here, because I think this is a general failure; it's just that I know the specific cases with Go and they make for a good illustration. The various incidents with gccgo are one of them, and then recently there was an issue with tests that only run when there's a terminal, a situation the automated builders never exercise.

There's a saying in programming that if you don't test it, it doesn't work. Let me amend that saying; if you don't test it in your automated builders or CI, it doesn't work. This is true even if you have tests for it, and in fact especially if you have tests for it. If your CI is not running those tests, you are relying on the fallible humans making changes to remember to do testing in an environment where those tests will trigger.

(The problem with having tests for it that aren't being exercised by CI is that such tests are likely to give people a false sense of confidence. There are tests for it, the CI passed, and therefore the tests pass, right? Well, no, not if some tests are skipped by CI. And don't count on people to keep straight what is and isn't tested by CI.)

As these two examples from Go show, making sure that your CI triggers all tests is not necessarily trivial, and it's also not necessarily straightforward to guess whether or not CI will run a test. For that matter, I'm fairly sure it's not easy for your CI maintainers to keep up with what the CI environment needs in order to fully run tests, and sometimes doing this may require uncommon and peculiar things. It was perfectly reasonable to add tests to Go that verify its behavior when talking to a terminal, and it was perfectly reasonable for Go's automated builders and CI system to run builds without a terminal (or a fake of one), but the combination created a situation where something obvious broke and wasn't detected by the builders.

TestsNotInCIProblem written at 23:36:12

Magic is fine if it's all magic: why I've switched to use-package in GNU Emacs

A while back I tweeted:

I revised my .emacs to use-package for all of the extra packages that I'm using (not yet for stock Emacs things, like c-mode settings). It was mostly smooth and the result is cleaner and easier to follow than it used to be, although there is more magic.

Since I deal with my .emacs only occasionally, I think clean and easy to follow is more useful than 'low magic'. In practice everything in my .emacs is magic when I come back to it six months later; I'm going to have to remember how it all works no matter what I use.

Use-package (also) is the hip modern way to configure Emacs packages such as lsp-mode (which I've switched to). For example, if you search around on the Internet for sample .emacs, discussions of configuring things, and so on, you'll most likely find people doing this with use-package. This is one advantage of switching to using use-package; going with what everyone seems to be using makes it easier to find answers to questions I may have.

(For this purpose it doesn't matter whether or not use-package is in wide use among the general population of GNU Emacs users. What matters is what people write about on the Internet where I can find it.)

Another advantage of use-package is that it makes my .emacs easier to follow. Use-package forces grouping everything to do with a package together in one spot and it makes a certain amount of configuration and setup easier and shorter. I'm not sure it reduces the amount of things I need to learn to set up hooks and add keymappings and so on, but it feels like it makes them more accessible, straightforward, and easier to read later.

The theoretical downside of use-package is that it is magic (and that dealing with it requires occasional magic things that I don't understand even within the context of use-package). Normally I shy away from magic, but after thinking about it I decided I felt differently here. The best way to summarize this is that if you only deal with something occasionally, it's all magic no matter what.

Sure, I'm immersed in my .emacs now and sort of understand it (for some value of 'now'). But I don't deal with it very often, so it may be six months or a year or more before I touch it again. In a year, I will almost certainly have forgotten everything to do with all of the various things that go in your .emacs, and that's the same regardless of whether or not I use use-package. That use-package is more magic and requires learning other things won't really matter then, because it will all be magic to me unless and until I spend the time to re-learn things. And if something breaks and I have to fix it, again it will all be magic either way and I'll at least start out with Internet searches for the error message.

(Also, in the sense that there are lots of use-package examples to crib from, using use-package is the lazy way. I don't have to really learn how things work, I can just copy things. Of course this is how you get superstitions.)

EmacsUsePackageWhy written at 01:07:08

2019-09-18

Converting a Go pointer to an integer doesn't quite do what it looks like

Over on r/golang, an interesting question was asked:

[Is it] possible to parse a struct or interface to get its pointer address as an integer? [...]

The practical answer today is yes, as noted in the answers to the question. You can convert any Go pointer to uintptr by going through unsafe.Pointer(), and then convert the uintptr into some more conventional integer type if you want. If you're going to convert to another integer type, you should probably use uint64 for safety, since that should hold any uintptr value on any current Go platform.
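Here's a minimal sketch of the conversion chain (the variable names are mine):

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    v := 42
    p := &v

    // pointer -> unsafe.Pointer -> uintptr -> uint64
    addr := uint64(uintptr(unsafe.Pointer(p)))
    fmt.Printf("v is at %#x (at the moment of conversion)\n", addr)
}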

However, the theoretical answer is no, in that this conversion doesn't quite get you what you might think it does. What this conversion really gives you is the address that the addressable value had at the moment the conversion to uintptr was done. Go very carefully does not guarantee that this past address is the same as the current address, although it always will be today.

(I'm assuming here that there are other references to the addressable value that keep it from being garbage collected.)

Go's current garbage collector is a non-compacting garbage collector, where once things are allocated somewhere in memory, they never move for as long as they're alive. Since a non-compacting garbage collector has stable memory addresses for things, converting an address to an integer gives you something that is always the integer value of the current address of that thing. However, there are also compacting garbage collectors, which move live things around during garbage collection for various reasons. In these garbage collectors, the memory address of things is not stable.

Go is deliberately specified so that you could implement it with a compacting GC, and at one point this was the long term plan. When it moved things as part of garbage collection, such a Go would update actual pointers to them to point to the new addresses. However, it would not magically update integer values derived from those pointers, whether they're uintptrs or some other integer type. In a compacting GC world, getting the uintptr of the address of something twice at different times could give you two different values. Each value was accurate at the moment you got it, but it's not guaranteed to be accurate one instant past that; a GC pass could happen at any time and thus the thing could be moved at any time.

Leaving the door open for a compacting GC is one of the reasons that the rules surrounding the use of unsafe.Pointer() and uintptr are so carefully and narrowly specified, as we've seen before. In fact the documentation points this out explicitly:

A uintptr is an integer, not a reference. Converting a Pointer to a uintptr creates an integer value with no pointer semantics. Even if a uintptr holds the address of some object, the garbage collector will not update that uintptr's value if the object moves, nor will that uintptr keep the object from being reclaimed.

(The emphasis is mine.)

The Go garbage collector never moves things today, which leads to the practical answer for today of 'yes, you can do this'. But the theoretical answer is that the address of things could be constantly changing, and maybe someday in the future they sometimes will.

Update: As pointed out in the r/golang comments on my entry, I'm wrong here. In Go today, stacks are movable as stack usage grows and shrinks, and you can take the address of a value that is on the stack and that subsequently gets moved with the stack.

GoPointerToInteger written at 00:32:58

2019-09-14

Some notes on the structure of Go binaries (primarily for ELF)

I'll start with the background. I keep around a bunch of third party programs written in Go, and one of the things that I do periodically is rebuild them, possibly because I've updated some of them to their latest versions. When doing this, it's useful to have a way to report the package that a Go binary was built from, ideally a fast way. I have traditionally used binstale for this, but it's not fast. Recently I tried out gobin, which is fast and looked like it had great promise, except that I discovered it didn't report about all of my binaries. My attempts to fix that resulted in various adventures but only partial success.

All of the following is mostly for ELF format binaries, which is the binary format used on most Unixes (except macOS). Much of the general information applies to the other binary formats that Go supports, but the specifics will be different. For a general introduction to ELF, you can see eg here. Also, all of the following assumes that you haven't stripped the Go binaries, for example by building with the '-w' or '-s' linker flags.

All Go programs have a .note.go.buildid ELF section that has the build ID (also). If you read the ELF sections of a binary and it doesn't have that, you can give up; either this isn't a Go binary or something deeply weird is going on.

Programs built as Go modules contain an embedded chunk of information about the modules used in building them, including the main program; this can be printed with 'go version -m <program>'. There is no official interface to extract this information from other binaries (inside a program you can use runtime/debug.ReadBuildInfo()), but it's currently stored in the binary's data section as a chunk of plain text. See version.go for how Go itself finds and extracts this information, which is probably going to be reasonably stable (so that newer versions of Go can still run 'go version -m <program>' against programs built with older versions of Go). If you can extract this information from a binary, it's authoritative, and it should always be present even if the binary has been stripped.
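From inside a program itself, a minimal sketch of reading this information with runtime/debug.ReadBuildInfo() looks like this:

package main

import (
    "fmt"
    "runtime/debug"
)

func main() {
    bi, ok := debug.ReadBuildInfo()
    if !ok {
        fmt.Println("no module information; not built in module-aware mode")
        return
    }
    fmt.Println("main module:", bi.Main.Path, bi.Main.Version)
    for _, dep := range bi.Deps {
        fmt.Println("dep:", dep.Path, dep.Version)
    }
}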

If you don't have module information (or don't want to copy version.go's code in order to extract it), the only approach I know of for determining the package a binary was built from is to find the full file path of the source code where main() is, and then reverse engineer a package name (and possibly a module version) from that. The general approach is (there's a code sketch after the list):

  1. extract Go debug data from the binary and use debug/gosym to create a LineTable and a Table.
  2. look up the main.main function in the table to get its starting address, and then use Table.PCToLine() to get the file name for that starting address.
  3. convert the file name into a package name.
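Here is a hedged sketch of those three steps for an unstripped, non-cgo ELF binary; error handling is minimal, and step 3 (turning the file name into a package name) is only sketched after the next paragraph:

package main

import (
    "debug/elf"
    "debug/gosym"
    "fmt"
    "log"
    "os"
)

func sectionData(f *elf.File, name string) []byte {
    s := f.Section(name)
    if s == nil {
        log.Fatalf("no %s section (stripped or cgo binary?)", name)
    }
    data, err := s.Data()
    if err != nil {
        log.Fatal(err)
    }
    return data
}

func main() {
    f, err := elf.Open(os.Args[1])
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // step 1: extract the Go debug data and build the tables
    pcln := gosym.NewLineTable(sectionData(f, ".gopclntab"), f.Section(".text").Addr)
    tab, err := gosym.NewTable(sectionData(f, ".gosymtab"), pcln)
    if err != nil {
        log.Fatal(err)
    }

    // step 2: find main.main and map its start address to a file name
    fn := tab.LookupFunc("main.main")
    if fn == nil {
        log.Fatal("no main.main in the symbol table")
    }
    file, _, _ := tab.PCToLine(fn.Entry)

    // step 3 is the file name to package name conversion discussed below
    fmt.Println("main.main is in:", file)
}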

Binaries built from $GOPATH will have file names of the form $GOPATH/src/example.org/fred/cmd/barney/main.go. If you take the directory name of this and take off the $GOPATH/src part, you have the package name this was built from. This includes module-aware builds done in $GOPATH. Binaries built directly from modules with 'go get example.org/fred/cmd/barney@latest' will have a file path of the form $GOPATH/pkg/mod/example.org/fred@v.../cmd/barney/main.go. To convert this to a module name, you have to take off '$GOPATH/pkg/mod/' and move the version to the end if it's not already there. For binaries built outside some $GOPATH, with either module-aware builds or plain builds, you are unfortunately on your own; there is no general way to turn their file names into package names.

(There are a number of hacks if the source is present on your local system; for example, you can try to find out what module or VCS repository it's part of if there's a go.mod or VCS control directory somewhere in its directory tree.)
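As a sketch of the two file path conversions described above (these helper functions are my own, and this version simply drops the module version instead of moving it to the end):

package main

import (
    "fmt"
    "path"
    "strings"
)

// $GOPATH/src/example.org/fred/cmd/barney/main.go
//   -> example.org/fred/cmd/barney
func pkgFromGopathSrc(gopath, file string) string {
    return strings.TrimPrefix(path.Dir(file), gopath+"/src/")
}

// $GOPATH/pkg/mod/example.org/fred@v1.0.0/cmd/barney/main.go
//   -> example.org/fred/cmd/barney
func pkgFromModCache(gopath, file string) string {
    dir := strings.TrimPrefix(path.Dir(file), gopath+"/pkg/mod/")
    i := strings.Index(dir, "@")
    if i < 0 {
        return dir
    }
    rest := dir[i:]
    j := strings.IndexByte(rest, '/')
    if j < 0 {
        return dir[:i]
    }
    return dir[:i] + rest[j:]
}

func main() {
    fmt.Println(pkgFromGopathSrc("/u/cks/go", "/u/cks/go/src/example.org/fred/cmd/barney/main.go"))
    fmt.Println(pkgFromModCache("/u/cks/go", "/u/cks/go/pkg/mod/example.org/fred@v1.0.0/cmd/barney/main.go"))
}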

However, to do this you must first extract the Go debug data from your ELF binary. For ordinary unstripped Go binaries, this debugging information is in the .gopclntab and .gosymtab ELF sections of the binary, and can be read out with debug/elf/File.Section() and Section.Data(). Unfortunately, Go binaries that use cgo do not have these Go ELF sections. As mentioned in Building a better Go linker:

For “cgo” binaries, which may make arbitrary use of C libraries, the Go linker links all of the Go code into a single native object file and then invokes the system linker to produce the final binary.

This linkage obliterates .gopclntab and .gosymtab as separate ELF sections. I believe that their data is still there in the final binary, but I don't know how to extract them. The Go debugger Delve doesn't even try; instead, it uses the general DWARF .debug_line section (or its compressed version), which seems to be more complicated to deal with. Delve has its DWARF code as sub-packages, so perhaps you could reuse them to read and process the DWARF debug line information to do the same thing (as far as I know the file name information is present there too).

Since I have and use several third party cgo-based programs, this is where I gave up. My hacked branch of the which package can deal with most things short of "cgo" binaries, but unfortunately that's not enough to make it useful for me.

(Since I spent some time working through all of this, I want to write it down before I forget it.)

PS: I suspect that this situation will never improve for non-module builds, since the Go developers want everyone to move away from them. For Go module builds, there may someday be a relatively official and supported API for extracting module information from existing binaries, either in the official Go packages or in one of the golang.org/x/ additional packages.

GoBinaryStructureNotes written at 18:35:21

2019-09-11

Making your own changes to things that use Go modules

Suppose, not hypothetically, that you have found a useful Go program but when you test it you discover that it has a bug that's a problem for you, and that after you dig into the bug you discover that the problem is actually in a separate package that the program uses. You would like to try to diagnose and fix the bug, at least for your own uses, which requires hacking around in that second package.

In a non-module environment, how you do this is relatively straightforward, although not necessarily elegant. Since building programs just uses what's found in $GOPATH/src, you can cd directly into your local clone of the second package and start hacking away. If you need to make a pull request, you can create a branch, fork the repo on Github or whatever, add your new fork as an additional remote, and then push your branch to it. If you didn't want to contaminate your main $GOPATH with your changes to the upstream (since they'd be visible to everything you built that used that package), you could work in a separate directory hierarchy and set your $GOPATH when you were working on it.

If the program has been migrated to Go modules, things are not quite as straightforward. You probably don't have a clone of the second package in your $GOPATH, and even if you do, any changes to it will be ignored when you rebuild the program (if you do it in a module-aware way). Instead, you make local changes by using the 'replace' directive of the program's go.mod, and in some ways it's better than the non-module approach.

First you need local clones of both packages. These clones can be a direct clone of the upstream or they can be clones of Github (or Gitlab or etc) forks that you've made. Then, in the program's module, you want to change go.mod to point the second package to your local copy of its repo:

replace github.com/rjeczalik/which => /u/cks/src/scratch/which

You can edit this in directly (as I did when I was working on this) or you can use 'go mod edit'.

If the second package has not been migrated to Go modules, you need to create a go.mod in your local clone (the Go documentation will tell you this if you read all of it). Contrary to what I initially thought, this new go.mod does not need to have the module name of the package you're replacing, but it will probably be most convenient if it does claim to be, eg, github.com/rjeczalik/which, because this means that any commands or tests it has that import the module will use your hacks, instead of quietly building against the unchanged official version (again, assuming that you build them in a module-aware way).
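If you do need to create one, a minimal go.mod for the local clone can be as short as this (the 'go' directive version here is just an example):

module github.com/rjeczalik/which

go 1.13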

(You don't need a replace line in the second package's go.mod; Go's module handling is smart enough to get this right.)

As an important note, as of Go 1.13 you must run 'go get' from inside this source tree to build and install its commands, even if the tree is under $GOPATH. If it's under $GOPATH and you instead do 'go get <blah>/cmd/gobin' from outside it, Go does a non-module 'go get' even though the directory tree has a go.mod file, and this will use the official version of the second package, not your replacement. This is documented but perhaps surprising.
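Concretely, the difference looks like this (using the program that appears later in this entry):

# module-aware: run 'go get' from inside the source tree
cd $GOPATH/src/github.com/rjeczalik/bin/cmd/gobin
go get

# non-module 'go get', despite the go.mod on disk
go get github.com/rjeczalik/bin/cmd/gobin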

When you're replacing with a local directory this way, you don't need to commit your changes in the VCS before building the program; in fact, I don't think you even need the directory tree to be a VCS repository. For better or worse, building the program will use the current state of your directory tree (well, both trees), whatever that is.

If you want to see what your module-based binaries were actually built with in order to verify that they're actually using your modified local version, the best tool for this is 'go version -m'. This will show you something like:

go/bin/gobin go1.13
  path github.com/rjeczalik/bin/cmd/gobin
  mod  github.com/rjeczalik/bin    (devel)
  dep  github.com/rjeczalik/which  v0.0.0-2014[...]
  =>    /u/cks/go/src/github.com/siebenmann/which

I believe that the '(devel)' appears if the binary was built directly from inside a source tree, and the '=>' is showing a 'replace' in action. If you build one of the second package's commands (from inside its source tree), 'go version -m' doesn't report the replacement, just that it's a '(devel)' of the module.

(Note that this output doesn't tell us anything about the version of the second package that was actually used to build the binary, except that it was the current state of the filesystem as of the build. The 'v0.0.0-2014[...]' version stamp is for the original version, not our replacement, and comes from the first package's go.mod.)

PS: If 'go version -m' merely reports the 'go1.13' bit, you managed to build the program in a non module-aware way.

Sidebar: Replacing with another repo instead of a directory tree

The syntax for this uses your alternate repository, and I believe it must have some form of version identifier. This version identifier can be a branch, or at least it can start out as a branch in your go.mod, so it looks like this:

replace github.com/rjeczalik/which => github.com/siebenmann/which reliable-find

After you run 'go build' or the like, the go command will quietly rewrite this to refer to the specific current commit on that branch. If you push up a new version of your changes, you need to re-edit your go.mod to say 'reliable-find' or 'master' or the like again.

Your upstream repository doesn't have to have a go.mod file, unlike the case with a local directory tree. If it does have a go.mod, I think that the claimed package name can be relatively liberal (for instance, I think it can be the module that you're replacing). However, some experimentation with sticking in random upstreams suggests that you want the final component of the module name to match (eg, '<something>/which' in my case).

GoHackingWithModules written at 20:49:02

2019-09-09

A safety note about using (or having) go.mod inside $GOPATH in Go 1.13

One of the things in the Go 1.13 release notes is a little note about improved support for go.mod. This is worth quoting in more or less full:

The GO111MODULE environment variable continues to default to auto, but the auto setting now activates the module-aware mode of the go command whenever the current working directory contains, or is below a directory containing, a go.mod file — even if the current directory is within GOPATH/src.

The important safety note is that this potentially creates a confusing situation, and also it may be easy for other people to misunderstand what this actually says in the same way that I did.

Suppose that there is a Go program that is part of a module, example.org/fred/cmd/bar (with the module being example.org/fred). If you do 'go get example.org/fred/cmd/bar', you're fetching and building things in non-module mode, and you will wind up with a $GOPATH/src/example.org/fred VCS clone, which will have a go.mod file at its root, ie $GOPATH/src/example.org/fred/go.mod. Despite the fact that there is a go.mod file right there on disk, re-running 'go get example.org/fred/cmd/bar' while you're in (say) your home directory will not do a module-aware build. This is because, as the note says, module-aware builds only happen if your current directory or its parents contain a go.mod file, not just if there happens to be a go.mod file in the package (and module) tree being built. So the only way to do a proper module-aware build is to actually be in the command's subdirectory:

cd $GOPATH/src/example.org/fred/cmd/bar
go get

(You can get very odd results if you cd to $GOPATH/src/example.org and then attempt to 'go get example.org/fred/cmd/bar'. The result is sort of module-aware but weird.)

This makes it rather more awkward to build or rebuild Go programs through scripts, especially if they involve various programs that introspect your existing Go binaries. It's also easy to slip up and de-modularize a Go binary; one absent-minded 'go get example.org/...' will do it.

In a way, Go modules don't exist on disk unless you're in their directory tree. If that tree is inside $GOPATH and you're not in it, you have a plain Go package, not a module.

(If the directory tree is outside $GOPATH, well, you're not doing much with it without cd'ing into it, at which point you have a module.)

The easiest way to see whether a binary was built module-aware or not is 'goversion -m PROGRAM'. If the program was built module-aware, you will get a list of all of the modules involved. If it wasn't, you'll just get a report of what Go version it was built with. Also, it turns out that you can build a program with modules without it having a go.mod:

GO111MODULE=on go get rsc.io/goversion@latest

The repository has tags but no go.mod. This also works on repositories with no tags at all. If the program uses outside packages, they too can be non-modular, and 'goversion -m PROGRAM' will (still) produce a report of what tags, dates, and hashes they were at.

Update: in Go 1.13, 'go version -m PROGRAM' also reports the module build information, with module hashes included as well.

This does mean that in theory you could switch over to building all third party Go programs you use this way. If the program hasn't converted to modules you get more or less the same results as today, and if the program has converted, you get their hopefully stable go.mod settings. You'd lose having a local copy of everything in your $GOPATH, though, which opens up some issues.

Go113AndGoModInGOPATH written at 23:55:53
