Wandering Thoughts archives

2014-09-22

Go is mostly easy to cross-compile (with notes)

One of the things I like about Go is that it's generally very easy to cross-compile from one OS to another; for instance, I routinely build (64-bit) Solaris binaries from my 64-bit Linux machines instead of having to maintain a Solaris or OmniOS Go compilation environment (and with it all of the associated things I'd need to get my source code there, like a version of git and so on). However, when I dug into the full story in order to write this entry, I discovered that there are some gaps and important details.

So let's start with basic cross compilation, which is the easy and usual bit. This is well covered by eg Dave Cheney's introduction. The really fast version looks like this (I'm going to assume a 64-bit Linux host):

cd /some/where
hg clone https://code.google.com/p/go
cd go/src
./all.bash
export GO386=sse2
GOOS=linux GOARCH=386 ./make.bash --no-clean
GOOS=solaris GOARCH=amd64 ./make.bash --no-clean
GOOS=freebsd GOARCH=386 ./make.bash --no-clean

(See Installing Go from source for a full discussion of these environment variables.)

With this done, we can build some program for multiple architectures (and deploy the result to them with just eg scp):

cd $HOME/src/call
go build -o call
GOARCH=386 go build -o call32
GOOS=solaris GOARCH=amd64 go build -o call.solaris

(Add additional architectures to taste.)
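
As a concrete illustration, here's a minimal sketch of the sort of program you could cross-compile this way. It's a hypothetical stand-in (my call program is a real program of mine, not this); all it does is report what platform the binary was built for, so you can sanity-check a cross-compiled result:

package main

// Report the OS and architecture this binary was compiled for,
// which is handy for sanity-checking cross-compiled output
// before deploying it anywhere.
import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Printf("built for %s/%s\n", runtime.GOOS, runtime.GOARCH)
}

(If you scp the call.solaris build of this to a Solaris machine and run it, it should print 'built for solaris/amd64'.)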

This generally works. I've done it for quite some time with good success; I don't think I've ever had such a cross-compiled binary not work right, including binaries that do network things. But, as they say, there is a fly in the ointment: these cross-compiled binaries are not quite equivalent to true natively compiled Go binaries.

Go cross-compilation has one potentially important limit: on some platforms, Linux included, true native Go binaries that use some packages are dynamically linked against the C runtime shared library and some associated shared libraries through Cgo (see also). On Linux I believe that this is necessary to use the true native implementation of anything that uses NSS; this includes hostname lookups, username and UID lookups, and group lookups. I further believe that this is because the native versions of these lookups use dynamically loaded C shared libraries that are loaded by the internals of GNU libc.
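
To make this concrete, here's a minimal sketch of the kind of operation that's affected (the hostname is just an example I picked). In a true native build this lookup can go through your system's NSS configuration; a cross-compiled binary has to fall back to Go's own resolver code:

package main

// A hostname lookup, the classic NSS-affected operation. A native
// cgo-enabled binary can use the system's NSS-based resolver; a
// cross-compiled binary falls back to Go's own lookup code.
import (
	"fmt"
	"net"
	"os"
)

func main() {
	addrs, err := net.LookupHost("www.google.com")
	if err != nil {
		fmt.Fprintf(os.Stderr, "lookup failed: %v\n", err)
		os.Exit(1)
	}
	for _, a := range addrs {
		fmt.Println(a)
	}
}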

Unfortunately, Cgo does not cross-compile (even if you happen to have a working C cross compiler environment on your host, as far as I know). So if you cross-compile Go programs to such targets, the binaries run, but they have to emulate the native approach and the emulation is not guaranteed to give you identical results. Sometimes it won't work at all; for example, os/user is unimplemented if you cross-compile to Linux (and all username or UID lookups will fail).
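
You can see this for yourself with a small sketch along these lines (exactly what the error says will depend on your Go version, and falling back to $USER is just one coping strategy I made up for the example):

package main

// In a cross-compiled (non-Cgo) binary, os/user lookups fail with
// an error instead of returning real answers, so code has to cope.
// Here we fall back to the $USER environment variable as a rough
// substitute.
import (
	"fmt"
	"os"
	"os/user"
)

func main() {
	u, err := user.Current()
	if err != nil {
		fmt.Printf("user.Current failed (%v); falling back to $USER=%q\n",
			err, os.Getenv("USER"))
		return
	}
	fmt.Printf("running as %s (uid %s)\n", u.Username, u.Uid)
}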

(One discussion of this is in Alan Shreve's article, which was a very useful source for writing this entry.)

Initially I thought this was no big deal for me, but it turns out that it potentially is, because compiling for 32-bit Linux on 64-bit Linux is still cross-compiling (as is going the other way, from a 32-bit host to a 64-bit target). If you build your Go environment on, say, a 64-bit Ubuntu machine and cross-compile binaries for your 32-bit Ubuntu machines, you're affected by this. The sign of this happening is that ldd will report that you have a static executable instead of a dynamic one. For example, on 64-bit Linux:

; ldd call32 call64
call32:
        not a dynamic executable
call64:
        linux-vdso.so.1 =>  (0x00007ffff2957000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f3be5111000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f3be4d53000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f3be537a000)

If you have both 64-bit and 32-bit Linux and you want to build true native binaries on both, at least as far as the standard packages go, you have to follow the approach from Alan Shreve's article. For me, this goes like the following (assuming that you want your 64-bit Linux machine to be the native version, which you may not):

  1. erase everything from $GOROOT/bin and $GOROOT/pkg
  2. run 'cd src; ./make.bash' on the 32-bit machine
  3. rename $GOROOT/pkg/linux_386 to some other name to preserve it
  4. build everything on your 64-bit machine, including the 32-bit cross-compile environment
  5. delete the newly created $GOROOT/pkg/linux_386 directory hierarchy and restore the native-built version you saved in step 3

If you're building from source using the exact same version from the Mercurial repository, it appears that you can extend this to copying the pkg/$GOOS_$GOARCH directory between systems. I've tested copying both 32-bit Linux and 64-bit Solaris and it worked for me (and the resulting binaries ran correctly in quick testing). This means that you need to build Go itself on the various systems, but you can get away with doing all of your compilation and cross-compilation on whichever single system is most convenient for you.

(I suspect but don't know that if you have any Cgo-using packages you can copy $GOPATH/pkg/$GOOS_$GOARCH around from system to system to get functioning native versions of necessary packages. Try it and see.)

Even with this road bump, the pragmatic bottom line is that Go cross-compilation is easy, useful, and probably going to work for your Go programs. It's certainly easy enough that you should give it a try just to see if it works for you.

programming/GoCrossCompileNotes written at 23:32:46

Another side of my view of Python 3

I have been very down on Python 3 in the past. I remain sort of down on it, especially in the face of the substantially non-current versions on the platforms I use and want to use, but there's another side of this that I should admit to: I kind of want to be using Python 3.

What this comes down to at its heart is that for all the nasty things I say about it, Python 3 is where the new and cool stuff is happening in Python. Python 3 is where all of the action is and I like that in general. Python 2 is dead, even if it's going to linger on for a good long while, and I can see the writing on the wall here.

(One part of that death is that increasingly, interesting new modules are going to be Python 3 only, or are going to be Python 3 first with Python 2 support coming only later and half-heartedly.)

And Python 3 is genuinely interesting. It has a bunch of new idioms to get used to, various challenges to overcome, all sorts of things to learn, and so on. All of these are things that generally excite me as a programmer and make it interesting to code stuff (learning is fun, provided I have a motivation).

Life would be a lot easier if I didn't feel this way. If I felt that Python 3 had totally failed as a language iteration, if I thought it had taken a terrible wrong turn that made it a bad idea, it would be easy to walk away from it entirely and ignore it. But it hasn't. While I dislike some of its choices and some of them are going to cause me pain, I do expect that the Python 3 changes are generally good ones (and so I want to explore them). Instead of walking away, I sort of yearn to program in Python 3.

So why haven't I? Certainly one reason is that I just haven't been writing new Python code lately (and beyond that I have real concerns about subjecting my co-workers to Python 3 for production code). But there's a multi-faceted reason beyond that, one that's going to take another entry to own up to.

(One aspect of the no new code issue is that another language has been competing for my affections and doing pretty well so far. That too is a complex issue.)

python/Python3Yearning written at 00:20:04

