Is bootstrapping Go from source faster using Go 1.9 or Go 1.8?
A while back I wrote about building the Go compiler from source and then where the time goes when bootstrapping Go with various Go versions. In both of these, I came to the conclusion that bootstrapping with Go 1.8.x was the fastest option. Now that Go 1.9 is out, an interesting question is whether using Go 1.9 as your bootstrap Go compiler makes things appreciably faster. One particular reason to wonder about this is that the Go 1.9 announcement specifically mentions that Go 1.9 compiles functions in a package in parallel. Building Go from source builds the standard library, which involves many packages with a lot of functions.
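(For anyone who wants to try this themselves, the comparison boils down to timing the standard build script with different bootstrap compilers. This is a sketch; the /opt paths are assumptions, so point GOROOT_BOOTSTRAP at wherever your Go 1.8 and Go 1.9 installs actually live.)

```shell
# In a checkout of the Go source tree:
cd go/src

# Time a from-source build bootstrapped with Go 1.8, then Go 1.9.
# GOROOT_BOOTSTRAP tells make.bash which existing Go toolchain to
# build the new compiler with.
time env GOROOT_BOOTSTRAP=/opt/go1.8 ./make.bash
time env GOROOT_BOOTSTRAP=/opt/go1.9 ./make.bash
```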
I will cut to the chase: Go 1.9 is slightly faster for this, but not enough that you should rush out and change your build setup. It also appears that the speed increases are higher on heavily multi-CPU machines, such as multi-socket servers, and lower on basic machines like old four-core desktops. In the best case the improvements appear to be on the order of a couple of seconds out of a process that takes 30 seconds or so (even on that old basic machine).
(In the process of trying to test this, I discovered that the Go build process appears to only partially respect $GOMAXPROCS; I believe that some things more or less directly look up how many CPU cores your system has in order to decide how many commands to run in parallel.)
This is a little bit disappointing, but we can't have everything and Go already builds very fast. It's also possible that the standard library is a bad codebase to show off this function-level parallelism. I could speculate about possible reasons why, but it would be just that, speculation, so for once I'm going to skip it.
(You get a small entry for today because I'm a bit tired right now. Plus, even negative results are results and thus are at least potentially worth reporting.)