Go's arbitrary-precision constants and cross compilation

November 16, 2016

Go famously makes plain numeric constants in source code untyped and arbitrary-precision (per the language specification). This extends to constant expressions, which are evaluated at theoretically arbitrary precision (and in practice at precision much larger than anything directly supported by the actual machines Go is likely to be running on). All of this is really convenient in various ways, especially since arbitrary-precision arithmetic is usually what we pretend we're getting with computers in general.
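To make this concrete, here is a small runnable sketch of what untyped arbitrary-precision constants allow (the example and the name Huge are mine, nothing from the spec):

```go
package main

import "fmt"

// Huge is far too big for any machine integer type, but that's fine for
// an untyped constant; it only has to fit a real type when it's used.
const Huge = 1 << 100

func main() {
	// The intermediate value 1<<100 never fits in 64 bits, but the
	// final result of the constant expression does, so this is legal.
	fmt.Println(Huge >> 90) // prints 1024
}
```

(Assigning Huge itself to an int variable would be a compile-time error, because at that point it has to fit a concrete type.)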

It recently struck me that these arbitrary-precision constants and constant expressions have an advantage when it comes to easily supporting cross compilation. One of the classical problems in cross compilation is computing constants and constant expressions when you're cross-compiling for a target with a different precision than your host. The natural (or naive) way to code a compiler is to compute constants and constant expressions using host arithmetic and thus host precision, which can naturally give different results than the same compiler code running on the target with target arithmetic. This is, well, undesirable. But if you always want to compute constants at target precision, you need code that can emulate the target's arithmetic even when it differs from yours, you have to make sure it works correctly even in weird situations, and then you may need yet another set of code for the next target.

(And this set of code is probably only run when you're cross-compiling, which most people never do, and thus it can easily pick up subtle or even not so subtle bugs over time. The test matrix you need gets pretty big pretty fast.)

Go's arbitrary-precision numeric constants take this entire issue out of play. You simply can't evaluate Go constants using native arithmetic, whether or not you're cross-compiling, so the compiler has needed special high-precision code for constants from the start. This code can be made carefully portable so that it always gets the same results on all supported Go platforms, and then the entire problem goes away. The only time you care about precision is when you turn the untyped constants into actual variables or concrete typed constants during code generation, and at that point you're naturally working with the target's precision and numeric representations, since you have to encode values correctly with the right endianness, floating point format, and so on.
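You can see this "precision only matters at the point of use" behavior directly in ordinary Go code (again, my own illustrative example):

```go
package main

import "fmt"

// twoThirds is evaluated once, at very high precision, as an untyped
// constant. No float32 or float64 rounding happens here.
const twoThirds = 2.0 / 3.0

func main() {
	// Rounding to a concrete representation happens only now, at the
	// conversion to each variable's type, and each type gets the best
	// value it can represent.
	var f32 float32 = twoThirds
	var f64 float64 = twoThirds
	fmt.Println(f32)
	fmt.Println(f64) // prints 0.6666666666666666
	// The two are genuinely different values:
	fmt.Println(float64(f32) == f64) // prints false
}
```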

(The current Go compiler implements constants and constant expressions using the standard math/big package. After all, if you already have an arbitrary precision number package, you might as well use it when you need arbitrary precision numbers.)


Comments on this page:

By Anon at 2016-11-17 15:47:48:

I remember reading that your compiler and architecture can cause Quake 2's software rendering to change ever so subtly: http://blog.jwhitham.org/2015/04/gcc-bug-323-journey-to-heart-of.html
