Wandering Thoughts


Avoiding the 'dangling else' language problem with mandatory block markers

There is a famous parsing ambiguity in many programming languages known as the dangling else, where it's ambiguous which 'if' statement an 'else' is associated with. In C, for example, you can write:

if (a)
   if (b)
      res = 10;
else
   res = 20;

Which if the else is associated with is ambiguous at a glance; is it the first or the second? The indentation suggests the first, but C binds an else to the nearest unmatched if, so it actually goes with the second. (C provides an answer, as any language that allows this must.)

It's recently struck me that one way to avoid this in language design is to require some form of explicit block markers for the statements inside an if (and an else). In C terms, to disallow bare statements after if and else and always require '{' and '}'.

Go takes this approach with explicit block markers. In Go, an if requires a delimited block, not a bare statement, and else is similarly unambiguous. A lot of things in Go require blocks, in fact. Python also does this through its implicit blocks that are created through indentation; which if an else is associated with is both visually and syntactically unambiguous.

(Python allows writing simple 'if' clauses on one line, but not ones that involve nested ifs, and then you have to put the else on the next line.)

Another Unix language that avoids this ambiguity is the Bourne shell. Although we don't necessarily think about it all that often, the 'then' and 'fi' in Bourne shell if expressions are block markers, and they're mandatory:

if [ -x /usr/bin/something ]; then

Although you can write this all on a single line, you can't leave out the markers. This is actually a somewhat tricky case, because 'else' is a peer with these two markers; however, the result isn't ambiguous:

if [ -x /something ]; then
   if [ -x /otherthing ]; then
      ...
   else
      ...

Since we haven't seen a 'fi', the 'else' has to be associated with the second 'if' instead of the first.

DanglingElseAndBlocks written at 02:35:23


Sorting out Go's 'for ... := range ...' and when it copies things

I recently read Some tricks and tips for using for range in GoLang, where it said, somewhat in passing:

[...] As explained earlier, when the loop begins, it will copy the original array to a new one and loop through the elements, hence when appending elements to the original array, the copied array actually doesn't change.

My eyebrows went up because I'd forgotten this little bit of Go, and I promptly scuttled off to the official specification to read and understand the details. So here are some notes, because the issues behind this turn out to be more interesting than I expected.

Let's start with the basic form, which is 'for ... := range a { ... }'. The expression to the right of the range is called the range expression. The specification says (emphasis mine):

The range expression x is evaluated once before beginning the loop, with one exception: if at most one iteration variable is present and len(x) is constant, the range expression is not evaluated.

Obviously if the range expression is a function call, the function call must be made (once) and then the return value used in the range expression. However, in Go even evaluating an expression that's a single variable produces a copy of the value of that variable (in the abstract; in the concrete the compiler may optimize this out). So when you write 'for a, b := range c', Go (nominally) evaluates c and uses the resulting copy of c's current value.

(Among other consequences, this means that assigning a different value to c itself inside the loop doesn't change what the loop does; c's value is frozen at the start, when it's evaluated.)

As the additional bit of the specification explains, this doesn't happen if you use at most one iteration value and you're ranging over one of the small number of things where len(x) is a constant (the rules for this are somewhat legalistic). If you use two iteration variables, you always evaluate the range expression and make a copy, which is another reason for Go to prefer the single variable version (to go with nudging you to not copy actual values unless necessary).

However, things get tricky if you use pointers. Here:

a := [5]int{1, 2, 3, 4, 5}
for _, v := range a {
    a[3] = 10
    fmt.Println("Pass 1:", v)
}

// reset our mutation
a[3] = 4

// loop via pointer:
b := &a
for _, v := range b {
    b[3] = 10
    fmt.Println("Pass 2:", v)
}
In the second loop, what gets copied when the range expression is evaluated is the pointer, not the array it points to (note that b is not a slice, it's a pointer to an array). Go's implicit dereferencing of pointers means that the code for the two loops looks exactly the same, although they behave differently (the first prints the original array values before the mutation in the loop, the second mutates 'a[3]' before printing it).

On the one hand, this may be confusing. On the other hand, this provides a way to effectively sidestep all sorts of range expression copying (if you don't want it); all you have to do is pointerize your range expression, and almost nothing will care about the difference. Fortunately often you don't care about the copying to begin with, because making copies of strings, slices, and maps doesn't require copying the underlying data. The only thing that you can range over that's expensive to copy is an actual array, and directly using actual arrays in Go is relatively rare (especially when using real arrays can cause interesting errors).

If you do a 'copying' range over anything other than a real array (which is copied) or a string (which is immutable), you can still mutate the values of what you're ranging over inside the loop, in a way that future iterations will (or at least may) see. Probably you don't want to do this.

(This is the consequence of ranging over slices and maps not making a copy of the underlying data. Because your range copies the slice itself, shrinking or enlarging the original slice won't change the number of iterations. You can potentially change the number of iterations of a map inside of the loop, though.)

Probably I don't need to care about this range copying, at least from an efficiency perspective (I had better remember its other consequences). My Go code (and Go in general) only very rarely uses fixed size arrays, which are the only expensive thing to copy. Copying slices and maps is pretty close to free, and those are usually what I range over (apart from channels, which I consider a special case).

GoRangeCopying written at 00:55:56


More or less what versions of Go support what OpenBSD releases (as of March 2020)

In light of recent entries, such as the situation with Go on OpenBSD, I've decided to write down what I can find out about what versions of Go theoretically work on what releases of OpenBSD. This is not actually tested for various reasons including the lack of suitable OpenBSD machines (for example, ones that are not in production use as important firewalls).

The Go people have a wiki page on OpenBSD that lists both the Go versions in OpenBSD's ports tree and the built from source versions that are supported on various OpenBSD releases. Additional information comes from the Go release notes for major versions; the release notes are potentially more accurate than the wiki, especially for old OpenBSD releases.

Going through both sources, I believe it breaks down this way for 32 and 64-bit x86:

  • OpenBSD 5.0 through 5.4 is supported by Go 1.1 and Go 1.2 only (the wiki seems to say it's supported by Go 1.0, but the Go 1.1 release notes specifically mention that OpenBSD support is new in it).

  • OpenBSD 5.5 requires at least Go 1.3 and last supports Go 1.6 or Go 1.7, depending on who you believe. The wiki says Go 1.7, but the Go 1.7 release notes say that it requires OpenBSD 5.6 or better because it uses the then new getentropy(2) system call.

  • OpenBSD 5.6 through 6.0 requires at least Go 1.4.1 due to at least issue #9102. You might be able to use Go 1.3 if you don't use net.Interface, but I'm not sure. OpenBSD 5.8 is last supported in Go 1.7, OpenBSD 5.9 is last supported in Go 1.8, and OpenBSD 6.0 is last supported in Go 1.10.

    OpenBSD 5.6 is the oldest version currently listed as having a version of Go in its ports tree.

  • OpenBSD 5.9 is the oldest release supported by Go 1.8, from the Go 1.8 release notes; this is apparently due to using new syscalls that OpenBSD was switching to.

  • OpenBSD 6.0 is the oldest release supported by Go 1.9, at least for cgo binaries, per the release notes. It's possible that non-cgo binaries from Go 1.9 would run on OpenBSD 5.9, but they won't run on anything before that due to those syscall changes in Go 1.8.

  • OpenBSD 6.1 requires at least Go 1.8 and is last supported by Go 1.10.

    Since Go 1.4 is the last version of Go that can be built without an existing Go compiler and it doesn't run on 6.1, building Go on OpenBSD 6.1 (and later) requires either getting an initial version of Go from the OpenBSD ports tree (if you can still find an operable ports archive for your old OpenBSD release) or bootstrapping from cross-compiled source.

  • OpenBSD 6.2 and 6.3 are listed as requiring at least Go 1.9 in the wiki, but I can't find anything in the release notes about why. The wiki claims that 6.2 and onward are still supported in Go 1.14 (the current Go version as I write this).

    OpenBSD 6.2 is also the oldest release supported by Go 1.11, per the release notes.

  • OpenBSD 6.4 (and later) requires at least Go 1.11, due to a new OpenBSD kernel thing covered in CL 121657.

OpenBSD 6.5 and 6.6 have no additional Go version restrictions; they're supported by 1.11 onward through Go 1.14 (currently). These are the two releases that are currently supported by OpenBSD (as mentioned in the depths of the FAQ). In a few months, OpenBSD will likely release OpenBSD 6.7, at which point 6.7 and 6.6 will become the two supported releases and Go will only officially support them.

Generally it looks like if you can stick with OpenBSD 6.2 or better, you're theoretically fine for Go support and can use the latest Go version (and thus build programs that require it, are module based, and so on). OpenBSD 6.2 dates from October of 2017, so many people will be covered by this. A future version of Go may stop working on older OpenBSD versions (the next release of Go will likely only officially support OpenBSD 6.7 and 6.6), but so far there seems to be nothing that would force this.

We're an unusual case in our range of OpenBSD releases. Currently we have machines running OpenBSD 5.1, 5.3, 5.4, 5.8, 6.2, 6.4, and 6.6. If we tried to use Go on all of them, we would have to restrict ourselves to Go features from Go 1.2 and also maintain multiple Go versions across our range of machines (and multiple compiled binaries for Go programs).

GoWhatOpenBSDs-2020-03 written at 23:52:30


The situation with Go on OpenBSD

Over in the fediverse, Pete Zaitcev had a reaction to my entry on OpenBSD versus Prometheus for us:

Ouch. @cks himself isn't making that claim, but it seems to me that anything written in Go is unusable on OpenBSD.

I don't think the situation is usually that bad. Our situation with Prometheus is basically a worst case scenario for Go on OpenBSD, and most people will have much better results, especially if you stick to supported OpenBSD versions.

If you stick to supported OpenBSD versions, upgrading your machines as older OpenBSD releases fall out of support (as the OpenBSD people want you to do), you should not have any problems with your own Go programs. The latest Go release will support the currently supported OpenBSD versions (as long as OpenBSD remains a supported platform for Go), and the Go 1.0 compatibility guarantee means that you can always rebuild your current Go programs with newer versions of Go. You might have problems with compiled binaries that you don't want to rebuild, but my understanding is that this is the case for OpenBSD in general; it doesn't guarantee a stable ABI even for C programs (cf). If you use OpenBSD, you have to be prepared to rebuild your code after OpenBSD upgrades regardless of what language it's written in.

(You may have to change the code too, since OpenBSD doesn't guarantee API compatibility across versions either. But the API is generally more stable than the ABI.)

If you have older, out of support OpenBSD versions and Go programs that you keep developing, you're fine as long as you make your Go code work with the oldest Go release that's required for your oldest OpenBSD. This mostly means avoiding new additions to the Go standard library, although sometimes performance improvements make new patterns in Go code more feasible and you'll have to not rely on them. You'll probably have to freeze and maintain your own binary copies of old Go releases for appropriate old OpenBSD versions. Go's currently ongoing switch from its old way of just fetching packages to modules may cause you some heartburn, but you can probably deal with this by vendoring everything.

(If you have Go programs that you don't keep developing, life is even easier because you can freeze your pre-built binaries for older OpenBSD versions as well. All you need to do is periodically build the latest Go for the latest OpenBSD and then build your programs with it.)

If you have older OpenBSD releases as well as current ones, and a Go program where you want to use the latest Go features (or where you want to keep up with dependencies and those dependencies do), you can still do this provided that you don't need to run new versions of your program on old OpenBSDs. If they're fine with the version they have, even if it's not as efficient or as feature-full as the latest one, then you can just freeze their pre-built binaries and only run the latest version of your program on the latest OpenBSDs with the latest Go.

And this gets us down to Prometheus, where we have the worst case scenario; you need to run the current version of your program on all OpenBSD versions and the current version uses features from very recent versions of Go, which only run on a few OpenBSD versions. This doesn't work, as discussed.

Sidebar: Cgo and OpenBSD

In general, avoiding cgo is likely to extend the range of OpenBSD versions that a Go program can run on, even a Go program that's built with the very latest version of Go. Although older versions of OpenBSD aren't officially supported by Go 1.14, it can cross-build a pure Go program that seems to work as far back as OpenBSD 5.8 (on 64-bit x86), although it fails on OpenBSD 5.4 with a 'bad system call' core dump. Using cgo gives you two problems; you can't cross-build, and you're likely to be much more restricted in what range of OpenBSD versions the resulting binaries will run on.

(Being able to cross-build from Linux makes it much easier to set up older Go versions to build binaries for old OpenBSD versions, since old Go versions will generally build fine even on new Linuxes.)

GoOpenBSDSituation written at 22:34:00


One reason for Go to prefer providing indexes in for ... range loops

I was recently reading Nick Cameron's Early Impressions of Go from a Rust Programmer (via). One of the things Cameron did not like about Go was, well, let me quote directly from the article:

  • for ... range returns a pair of index/value. Getting just the index is easy (just ignore the value) but getting just the value requires being explicit. This seems back-to-front to me since I need the value and not the index in most cases.

Implicitly, this is about ranging over slices and arrays, rather than strings (for which indexing is a bit different) or maps.

I can't know the reasons behind this decision of Go's, but I think that one good reason for this approach is to encourage people towards using the value in place by accessing it through an index into the array or slice. The potential problem with using the value directly from the range statement is that when you use the value from a for ... range over a slice or array, you're actually using a copy of it. When you write:

for i, v := range aslice {
    fmt.Printf("%p %p\n", &v, &aslice[i])
}
The two pointer values are not the same (and &v is the same pointer value on every iteration). The for loop variable v is a copy of aslice[i], not the same thing as it. Sometimes this copy will be basically free, for instance if you're ranging over a slice of something small. But if you're ranging over a slice of decent sized structures or something else that's big, the copy is not free at all and you may well want to use the values in place in aslice.

(In some situations making a copy is not even correct, but you're not likely to encounter those with slices or arrays. For instance, you're probably not going to have an array of sync.Mutexes, or structures containing them. More likely you'd have an array of pointers to them, and copying pointers to mutexes (and structures containing them) is perfectly valid.)

By returning the index first (and making it easy to get the index without the value), Go quietly nudges you toward using the index to directly access the value, rather than forcing it to make a copy of the value for you. Go only has to materialize a copy of the value if you go out of your way to ask for it, and it hopes that you won't. At the very least, this nudge may get you to think about it.

(I also feel that it fits in with Go's general philosophy and approach. The natural starting point of a for ... range over an array or a slice is an index, so that's what you get to start with; making a copy of the value is an extra step that the compiler has to go out of its way to do, so it's not natural to have it the first thing produced.)

GoForRangeNudging written at 23:46:00


Some git aliases that I use

As a system administrator, I primarily use git not to develop my own local changes but to keep track of what's going on in projects that we (or I) use or care about, and to pull their changes into local repositories. This has caused me to put together a set of git aliases that are probably somewhat different than what programmers wind up with. For much the same reason that I periodically inventory my Firefox addons, I'm writing down my common aliases here today.

All of these are presented in the form that they would be in the '[alias]' section of .gitconfig.

  • source = remote get-url origin

I don't always remember the URL of the upstream of my local tracking repository for something, and often I wind up wanting to go there to do things like look at issues, releases, or whatever.

  • plog = log @{1}..

    My most common git operation is to pull changes from upstream and look at what they are. This alias uses the reflog to theoretically show me the log for what was just pulled, which should be from the last reflog position to now.

    (I'm not confident that this always does the right thing, so often I just cut and paste the commit IDs that are printed by 'git pull'. It's a little bit more work but I trust my understanding more.)

  • slog = log --pretty=slog

    Normally if I'm reading a repo's log at all, I read the full log. But there are some repos where this isn't really useful and some situations where I just want a quick overview, so I only look at the short log. This goes along with the following in .gitconfig to actually define the 'slog' format:

        slog = format:* %s

  • pslog = log --pretty=slog @{1}..

    This is a combination of 'git plog' and 'git slog'; it shows me the log for the pull (theoretically) in short log form.

  • ffpull = pull --ff-only
    ffmerge = merge --ff-only

    These are two aliases I use if I don't entirely trust the upstream repo to not have rebased itself on me. The ffpull alias pulls with only a fast-forward operation allowed (the equivalent of setting 'git config pull.ff only', which I don't always remember to do), while ffmerge is what I use in worktrees.

(Probably I should set up a git alias or something that configures a newly cloned repo with all of the settings that I want. So far I think that's only 'pull.ff only', but there will probably be more in the future.)

I have other git aliases but in practice I mostly don't remember them (for instance, I have an 'idiff' alias for 'git diff --cached').
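Put together, the aliases above (including idiff, plus the slog pretty format, which git reads from a [pretty] section) would look something like this in .gitconfig:

```ini
[alias]
	source = remote get-url origin
	plog = log @{1}..
	slog = log --pretty=slog
	pslog = log --pretty=slog @{1}..
	ffpull = pull --ff-only
	ffmerge = merge --ff-only
	idiff = diff --cached

[pretty]
	slog = format:* %s
```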

GitAliasesIUse written at 00:52:07


I frequently use dependencies because they enable my programs to exist

I was recently reading Drew DeVault's Dependencies and maintainers, which I encourage you to read too (it's short). To me, the climax of DeVault's post is the following:

[...] The idea of depending on a library I’ve never heard of, several degrees removed via transitive dependencies, maintained by someone I’ve never met and have no intention of speaking to, is absolutely nuts to me. [...]

I can't speak for anyone but myself, but as it happens I have at least two Go programs that use significant external dependencies; my program for lightweight remote control of Unix Firefox on X and my SNTP query program. I've never looked deeply into the state of the dependencies for either, and certainly I haven't checked for transitive dependencies.

Why not is pretty straightforward. Both of these programs exist because relying on their dependencies made them simple and fast to write. Had they been larger projects, neither of them would even have been written; it would not be worth the time and the bother (especially for my SNTP query program). If you're writing programs with limited time, you pretty much have to take dependencies on trust. Not only do you not have the time to conduct a significant investigation or become really involved in the community, you also don't have any real alternative because you can't afford to recreate the dependency yourself. Either the dependency works out or you don't have a program, so you might as well assume it will work out and try.

(This would be a bad idea if you were going to invest significant amounts of work into writing your program, but this is for small, readily written programs where you're not out much time if you code something and it fails.)

PS: Sometimes I use dependencies because they make my life easier and get things done faster, even if I could theoretically live without them. This is the case for using a dependency for option handling in my Go programs; it's not strictly necessary, but it works and it's more pleasant than Go's standard flag package.

DependenciesEnablePrograms written at 01:27:30


Why I've come to like that Go's type inference is limited

Although Go is a statically typed language, it has some degree of type inference to make your life easier and less bureaucratic. However, this type inference is limited to within a single function, so Go won't do things like infer the return type of your function for you even though it could. When I first started writing Go code (at the time, primarily coming from Python), I found this limitation irritating. The Go compiler could perfectly well see the types involved (and would complain if I got them wrong), so it felt annoying that it made me declare them again. Over time, I've come to appreciate this limitation and find it a good thing.

The obvious problem you avoid by limiting type inference to only within a single function is what I will call 'spooky type errors at a distance'. If return types can be inferred, you can have an entire chain of functions with inferred return types; you call A who calls B who calls C who calls D, and you use the result in some way that requires it to be of a type (or compatible with an interface). Now suppose D changes its return type. This change in return type will propagate up through the chain of inferred types until it hits your function and generates a type error where the new inferred type isn't compatible with what you're doing with it any more. D's type change has propagated to cause errors not in C but in you, far away from the change itself.

(One advantage of the type error happening in C is that C is the code that directly deals with D, and so it's the code and the people who are most familiar with what's going on, what they could change to deal with it, and so on. You and your function may have no real understanding of D or even have never heard of it before.)

Avoiding spooky type errors at a distance also means that you avoid arguments and decisions about where they should be fixed. With specified return types, if D's return type changes either C must fix it or change its own API by visibly changing its return type. If return types are inferred, you could maybe fix this anywhere in the call stack, from you on down. Each different fix would probably have different implications, some of them hard to track. With fixed return types, you avoid all that; it's always clear who has to change next and what the likely consequences are.

As a consequence of all of this, the effects of changing a return type are much more visible and obvious. With type inference for return types, you can tell yourself that no one will notice right up until the point that actually someone does. I've done this in my own Python code, when I forgot that some usage far away from the equivalent of function D depended on some property that I was now silently changing.

Since Go is designed in the service of large scale software engineering, I think this is the right trade-off for Go to make. Spooky action at a distance is exactly what you don't want in something designed for large scale software engineering, because that far off thing is probably written and maintained by an entirely different bunch of people from you. Even spooky action at a distance within your own package makes the effects and impact of changes less clear. Being straightforward is an advantage.

(When imagining a hypothetical Go with return type inference, let's assume that you don't allow type inference across package boundaries, because going that far opens a very large can of worms. This would mean that exported names had different type rules than unexported ones.)

GoLimitedTypeInferenceLike written at 01:11:02


How Go's net.DialContext() stops things when the context is cancelled

These days, a number of core Go standard packages support functions that take a context.Context argument and abort their operation if the context is cancelled. This is an interesting trick in Go, because normally you can't gracefully interrupt a goroutine doing network IO (which leads to problems in practice). When I started looking into the relevant standard library code I expected to find that things like net.Dialer.DialContext() had special hooks into the runtime's network poller (netpoller) to do this. This turns out to not be the case; instead dialing uses an interesting and elegant approach that's open to everyone doing network IO.

In order to abort an outstanding dial operation if the context is cancelled, the net package simply sets an expired (write) deadline. In order to do this asynchronously, it starts a background goroutine to listen for the context being cancelled (and then there's some complexity involved to clean everything up properly and handle potential races; races caused a number of issues, eg issue 16523). Setting read and write deadlines is already explicitly documented as affecting currently pending reads (and writes), not just future ones, so dialing is reusing a general mechanism that already needs to exist.

(This reuse is a little bit tricky for dialing, which is taking advantage of a customary and useful property where the underlying OS only reports a network socket as writeable once it's connected. This means that you generally check for a connection having completed by seeing if it's now writeable, and in turn this means you can sensibly limit or abort this check by setting a write deadline.)

Now that I've discovered this use of deadlines in DialContext, it's clear that I can do the same thing to abort outstanding network reads or writes in my own code. As a bonus, this will probably return a fairly distinctive error, or I can wrap this in something that implements 'read with context' or 'write with context', probably with some of the race precautions seen in the net package's code.

PS: I was going to say that this is also how net.ListenConfig.Listen handles its context being cancelled, but then I went to look at the code and now I have no idea how that actually works.

PPS: If the context you pass to DialContext() already has a deadline, DialContext() immediately sets a write deadline on the underlying network connection, in addition to its handling of cancellation. There's also some complexity in the code to stop as soon as possible if the context is cancelled immediately, before it starts up the whole extra goroutine infrastructure to wait.

GoDialCancellationHow written at 23:45:58


Things I've stopped using in GNU Emacs for working on Go

In light of my switch to basing my GNU Emacs Go environment on lsp-mode, I decided to revisit a bunch of .emacs stuff that I was previously using and take out things that seemed outdated or that I wasn't using any more. In general, my current assumption is that Go's big switch to using modules will probably break any tool for dealing with Go code that hasn't been updated, so all of them are suspect until proven otherwise. For my own reasons, I want to record everything I remove.

My list, based on an old copy of my .emacs that I saved, is:

  • go-guru, which was provided through go-mode; one of the things that I sort of used from it was a minor mode to highlight identifiers. To the extent that I care about such highlighting, it's now provided by lsp-mode.

  • gorename and the go-rename bindings for it in go-mode. In practice I never used it to automatically rename anything in my code, so I don't miss it now. Anyway, lsp-mode and gopls do support renaming things, although I have to remember that this is done through the lsp-rename command and there's no key or menu binding for it currently.

  • godoctor, which was another path to renaming and other operations. I tried this out early on but found some issues with it, then mostly never used it (just like gorename).

  • go-eldoc, which provided quick documentation summaries that lsp-mode will now also do (provided that you tune lsp-mode to your tastes).

  • I previously had M-. bound to godef-jump (which comes from go-mode), but replaced it with an equivalent lsp-mode binding to lsp-ui-peek-find-definitions.

  • I stopped using company-go to provide autocompletion data for Go for company-mode in favour of company-lsp, which uses lsp-mode as a general source of completion data.

All of these dropped Emacs things mean that I've implicitly stopped using gocode, which was previously the backend for a number of these things.

In general I've built up quite a collection of Go programming tools from various sources, such as gotags, many of which I installed to poke at and then never got around to using actively. At some point I should go through everything and weed out the tools that haven't been updated to deal with modules or that I simply don't care about.

(The other option is that I should remove all of the Go programs and tools I've built up in ~/go/bin and start over from scratch, adding only things that I turn out to actively use and want. Probably I'm going to hold off on doing this until Go goes to entirely modular builds and I have to clean out my ~/go/src tree anyway.)

I should probably investigate various gopls settings that I can set either through lsp-go or as experimental settings as covered in the gopls emacs documentation. Since I get the latest Emacs packages from Melpa and compile the bleeding edge gopls myself, this is more or less an ongoing thing (with occasional irritations).

GoEmacsDroppedTools written at 18:33:59
