Why writing sysadmin tools in Go is getting attractive

January 26, 2014

I don't find Go as nice a language as Python, but it is not terrible; in the past I've called it 'Python with performance'. What makes it especially attractive for sysadmin tools, at least for me, is how it simplifies deployment. Python deployment has three problems: the actual Python versions on various different systems, the availability of additional modules on those systems (and their versions), and actually packaging up and shipping around a modular Python program (loading zipfiles is not quite a full solution). As a compiled language without any real use of shared libraries, Go makes all of these go away.

Getting away from modularity concerns is frankly a big issue. Python makes modular programs just awkward enough that it pushes me away from them for small things, even if the structure of the problem would make modularity sensible. Because it's a compiled language, Go obviates all of these issues; regardless of how many pieces I split the source up into, Go will compile it down to a single self-contained binary that is all I have to ship around and run. Closely related to modularity concerns is the use of third-party modules and packages. Again, Go needs these only at compile time and they can live in my own build area; I don't have to worry about what's installed on target systems or available to be installed through their local package manager. If it makes my program better to use some Go package, I can.
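As a minimal sketch of what this works out to in practice (the tool below is entirely invented for illustration), a small Go utility can be split into as many files and packages as its structure calls for and 'go build' still produces exactly one thing to ship:

    // hoststamp.go: a hypothetical little sysadmin tool. However many
    // source files it is split into, and whatever extra packages it
    // imports at compile time, 'go build' turns it into a single
    // self-contained binary; that one file is all that has to be
    // copied to the target machines.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        host, err := os.Hostname()
        if err != nil {
            fmt.Fprintln(os.Stderr, "hoststamp:", err)
            os.Exit(1)
        }
        fmt.Printf("%s: checked at %s\n", host, time.Now().Format(time.RFC1123))
    }

The resulting binary runs as-is on any machine with the same OS and architecture, with nothing to install first.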

I also don't have to worry about Python versions any more, or in fact about the development environment in general, because under most circumstances Go will cross-compile from my preferred environment. Deployment targets can be as bare-bones as they come and it doesn't matter to me, because I can sit in a full environment with a current Go version, git, a fully set up Emacs, and so on and so forth. I do need to do some things on the deploy targets when testing, debugging, or tuning performance, but nowhere near as much as I otherwise would.
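A sketch of the sort of thing this enables (the helper, the tool name, and the targets here are all made up for illustration; with the Go toolchains of this era the standard library also has to have been built for each target first): a tiny program that cross-compiles a tool for several deployment targets by running 'go build' with GOOS and GOARCH set.

    // crossbuild.go: a hypothetical helper that builds one tool for
    // several deployment targets from a single development machine.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Invented deployment targets; adjust to whatever you deploy to.
        targets := []struct{ goos, goarch string }{
            {"linux", "amd64"},
            {"linux", "386"},
        }
        for _, t := range targets {
            out := fmt.Sprintf("mytool-%s-%s", t.goos, t.goarch)
            cmd := exec.Command("go", "build", "-o", out, ".")
            // GOOS and GOARCH in the environment select the target.
            cmd.Env = append(os.Environ(), "GOOS="+t.goos, "GOARCH="+t.goarch)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "build for %s/%s failed: %v\n", t.goos, t.goarch, err)
                os.Exit(1)
            }
            fmt.Println("built", out)
        }
    }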

As I sort of discussed in another context, all of these issues more or less go away if you're working on heavyweight things. Heavyweight things already have complex deployments in general and they often want to drag along their own custom environments even (or especially) in interpreted languages like Python; you may well run a big important Python app in its own virtualenv with its own build of Python and whatever collection of third party modules you want. But this is not the world that sysadmin tools generally operate in. Sysadmin tools are almost always small things that don't justify a lot of heavyweight work (and don't have complex deployment needs).


Comments on this page:

By dozzie at 2014-01-26 19:24:48:

It's a really nice thing that Go's compilation output is a standalone binary, but Go could use an (easier) dynamic compilation process. Small admin tools that are 500kB are a bit too big, especially when they contain the same printf() and read()/write() functions over and over again.

Personally I prefer distributing my tools as RPMs, but 1) I already have my own RPM repositories deployed and 2) I only have Red Hats (4 through 6), so my environment is somewhat different from yours.

By cks at 2014-01-26 21:43:44:

Since I have a relatively heterogeneous environment I very much value that Go binaries are self-contained even if they're very big. It both enables cross-compilation and frees me from having to worry about whether Go shared libraries are installed on a specific system and, if so, what version they are.

With that said, you can apparently get what you want by using gccgo instead of the golang toolchain. I just did a test build now and got a 60k binary that depended on /usr/lib64/libgo.so.4.

(And the punchline is that that's not installed on our 64-bit Ubuntu machines, and I'm not sure if there's any package for them that has the 32-bit version. This is why I like static binaries: I don't have to worry about that.)

By Twirrim at 2014-01-27 11:07:08:

"Small admin tools that are 500kB are a bit too big"

Really?

I'm a huge fan of small and simple, and I've ragged on Go a bit for its over-sized binaries, but ultimately systems are powerful enough, and filesystems fast enough, that 500kB+ binaries shouldn't present any kind of significant barrier (especially when you consider the speed advantages of the final output).

It's also worth emphasising that whilst you're seeing 500kB, when you kick off a python program your system is loading a 3.1MB python executable, not to mention all the libraries that have to be found and added to the runtime environment (python has some funky namespace stuff that can add quite a bit to the IO there).

By dozzie at 2014-01-27 15:57:49:

@cks:

With that said, you can apparently get what you want by using gccgo instead of the golang toolchain.

Yes, that's what I referred to when talking about easier compilation. Unfortunately...

I just did a test build now and got a 60k binary that depended on /usr/lib64/libgo.so.4.

...there's this drawback. I understand depending on libc, zlib, OpenSSL, BerkeleyDB or some other libraries. Depending on a somewhat artificial (from where I stand) ELF library is not something I would like. Also, it doesn't seem to be a first-class citizen the way totally static compilation is.

@Twirrim:

"Small admin tools that are 500kB are a bit too big"

Really?

Yes. If there are several dozen of them (and I do have that many), their size has already started to annoy me. 3MB of Python is not a problem, especially when said Python interpreter is already there (because of Yum).

They annoy me even more so when I need to copy them from Espoo to Bangalore twenty times in a row (the connection between Finland and India is not the fastest one could imagine).

By cks at 2014-01-27 16:10:01:

Depending on libgo.so.4 is inevitable. Go is a separate language with its own runtime and standard library; either you statically include the runtime in the binary or you link to a shared library version of the runtime. In neither case can you get away from the runtime.

(Someone might be able to put in a huge amount of work to create a set of Go packages that were thin shims over libc routines, but the result would look nothing like standard Go code using standard Go packages. And you're still left with the core runtime, which appears to be not insignificant; a Go binary that literally does nothing still weighs in at 576 KB on my 64-bit Fedora 20 machine with the latest Go tip.)
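For reference, a Go program that 'literally does nothing' is essentially just an empty main:

    // About the smallest Go program there is. Even this statically
    // links in the core runtime (garbage collector, goroutine
    // scheduler, panic reporting, and so on), which is where those
    // several hundred KB come from.
    package main

    func main() {}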

By tacticus:

@dozzie:

Yes. If there are several dozen of them (and I do have that many), their size has already started to annoy me. 3MB of Python is not a problem, especially when said Python interpreter is already there (because of Yum).

As long as your python utilities work with the version of python available (old in 6 and ancient in 5) and don't require any additional pip libraries (seriously, argparse, why the hell isn't this included (or packaged)?).

They annoy me even more so when I need to copy them from Espoo to Bangalore twenty times in a row (the connection between Finland and India is not the fastest one could imagine).

Add a local cache (package them and push the repo to each site or just stick a proxy in)

From 89.70.157.130 at 2014-01-28 15:56:20:

@cks: The Go compiler could strip unnecessary parts of its runtime, in a similar way to what GCC does when it statically links a program against some libraries. But all this is totally academic, as I don't expect anybody to pay much attention to dynamic linking in Go. As I said, dynamic linking seems to be more a side effect of something than a thing really used by mainstream users.

@tacticus: Yeah. Good blind advice from someone without any knowledge of the target environment. Really, good bet. I don't feel like even starting to explain why it's not viable. Short hint: corporate inertia.

By cks at 2014-01-28 18:59:26:

@89.70.157.130: I believe that the Go toolchain is stripping unnecessary parts of the runtime, as another Go program I have handy is substantially larger despite not having particularly much code (eg compiling both the minimal program and my program with gccgo gives an executable size delta of 32 KB). I suspect that the core runtime simply pulls in a lot of things. I wouldn't be surprised if eg garbage collection pulls in goroutine support, printing support for reporting panics, and so on.
