2013-11-14
One reason I like Go: it seems natural to avoid object churn
If you want to write well-performing code in any language that features dynamic allocation and garbage collection, one of the important things is to avoid object churn. One of the things I like about Go is that I can easily write code where I'm pretty confident I'm not doing any unnecessary allocations. Even better, so far this code has generally been the natural way to do things.
To show you what I mean, here's typical code to copy from an input source to the network:
// 0-length buf is valid and does nothing
func writeall(dst io.Writer, buf []byte) (err error) {
    var nr int
    for len(buf) > 0 && err == nil {
        nr, err = dst.Write(buf)
        buf = buf[nr:]
    }
    return
}

func fromto(src io.Reader, dst io.Writer) {
    var rerr, werr error
    var n int
    buf := make([]byte, BUFSIZE)
    for rerr == nil && werr == nil {
        n, rerr = src.Read(buf[0:])
        // we assume n is meaningful
        // even if rerr is asserted.
        werr = writeall(dst, buf[:n])
    }
    // real code would then handle rerr
    // and werr and do something sensible.
}
If I'm correct I think there's exactly one dynamic memory allocation
in this entire chunk of code under normal circumstances, namely the
explicit allocation of buf
in fromto()
. In a language like Python,
very similar code would be creating and destroying temporary objects all
over the place and you would need to be very careful and write somewhat
unnatural code to minimize it.
(If and when there are IO errors, it's likely that the resulting
error
values involve dynamic allocation behind the scenes. But
this is an exceptional case. Most of the time rerr
and werr
will be nil
.)
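If you want to check this sort of claim empirically, one convenient way is testing.AllocsPerRun from the standard library. Here's a minimal self-contained sketch (not part of the code above) that repeats writeall() and counts allocations per call when writing to ioutil.Discard:

package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "testing"
)

// writeall repeated from above so this sketch compiles on its own.
func writeall(dst io.Writer, buf []byte) (err error) {
    var nr int
    for len(buf) > 0 && err == nil {
        nr, err = dst.Write(buf)
        buf = buf[nr:]
    }
    return
}

func main() {
    buf := make([]byte, 64*1024)
    allocs := testing.AllocsPerRun(100, func() {
        writeall(ioutil.Discard, buf)
    })
    // If the steady state really is allocation-free, this prints 0.
    fmt.Println("allocations per writeall call:", allocs)
}

ioutil.Discard is handy here because its Write() does nothing and shouldn't allocate anything itself, so whatever AllocsPerRun reports is down to writeall().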
In this example a lot of the credit goes to Go's powerful concept of
slices. Slices explicitly make references to an underlying blob of
storage instead of making copies of it. Using slices we can (for
example) iterate our way through successive versions of the write buffer
in writeall()
in a completely natural way without actually making any
copies; all we do is shuffle around a single static reference.
(Where this comes in really handy is the interface to .Write()
, which
can take a simple slice instead of all of buffer, offset, and length
because slices basically encapsulate exactly that.)
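As a toy illustration of this (my own example, not code from above), reslicing just builds a new little slice header that points into the same underlying array; no data is copied:

package main

import "fmt"

func main() {
    buf := make([]byte, 8)
    rest := buf[3:] // reslice: a new header, the same backing array

    rest[0] = 42 // write through the subslice ...
    // ... and the change is visible through the original slice,
    // because both refer to the same underlying storage.
    fmt.Println(buf[3]) // 42

    // The shared elements literally live at the same addresses.
    fmt.Println(&buf[3] == &rest[0]) // true
}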
Part of this is also the different semantics of names between Python and Go. Go uses what I've called storage semantics, and storage semantics intrinsically does less allocation because you almost always have an assigned place to put fixed-size objects (especially if you give them static types, as Go does). In theory I think that a sufficiently clever compiler can handle all of the basic function arguments and return values here without any dynamic allocation, although I don't know if the normal Go compiler is quite that smart.
Sidebar: where I may be wrong
Storage semantics don't entirely save you in a garbage collected environment where you can take references to things and local variables don't have an explicit lifetime limitation. A clever compiler can do escape analysis to determine that local variables can be stack allocated instead of needing explicit heap allocations, which all of the variables here qualify for. I assume Go's compiler is reasonably clever here because it's a fairly important optimization if Go wants to be efficient.
A total failure of escape analysis here would result in the heap
allocation and then destruction of all of the local variables in
writeall()
every time it was called. Variables in fromto()
would also be heap allocated but not reallocated during the for
loop.
(Inspecting the assembly from compiling this code suggests that both the standard Go compiler and gccgo are doing full escape analysis here and thus are not heap allocating anything extra.)
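(A less laborious way of checking this than reading the generated assembly is to ask the compiler for its escape analysis decisions. With the standard Go toolchain, building the package with

go build -gcflags='-m'

will print which variables and parameters the compiler thinks have to escape to the heap and which ones don't.)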