The (peculiar) freedom of having a slow language

December 6, 2015

Back in my entry on why speeding up (C)Python matters, I said in an aside that there was a peculiar freedom in having a slow language. Today I'm going to explain what I meant in that cryptic aside.

The peculiar freedom of having a slow language is the mirror image of the peculiar constraint of having a fast language, which is that in a fast language there is usually a (social) pressure to write fast code. Maybe not the very fastest code that you could write (that's premature optimization), but at least code that is not glaringly under-performant. When the language provides a fast way to do what your code needs to do, you're supposed to use it. Usually this means using a 'narrow' feature, one that is not particularly more powerful than you need.

In a slow language like (C)Python, you are free of this constraint. You don't have to feel guilty about using an 'expensive' feature or operation to deal with a small problem instead of carefully writing some narrow, efficient code. The classic example of this is various sorts of simple parsing. In many languages, using a regular expression for most parsing is vastly indulgent because it's comparatively slow, even if it leads to simple and short code; there is great social pressure to write hand-rolled character inspection code and the like. In CPython you can use regexps here without any guilt; not only are they comparatively fast, they're probably faster than hand-written code that does it the hard way.
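
(As a hypothetical illustration of the sort of simple parsing I mean, not anything from a real program: pulling a name and value out of 'name = value' lines with one regexp instead of hand-rolled character scanning.)

    import re

    # One regexp does all of the scanning, splitting, and whitespace
    # handling that hand-rolled code would do character by character.
    _line_re = re.compile(r"^\s*(\w+)\s*=\s*(\S+)\s*$")

    def parse_line(line):
        m = _line_re.match(line)
        return m.groups() if m else None

    print(parse_line("timeout = 30"))    # ('timeout', '30')
    print(parse_line("not a setting"))   # None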

The result of this is that in CPython I solve a lot of problems with simple brute force using builtins, regular expressions, and other broad powerful features, while in languages like Go I wind up writing more complicated, more verbose code that is more narrow and more efficient because it only does what's strictly necessary.

(I really became aware of this after recently writing some Go code to turn newlines into CR NL sequences as I was writing output to the network. In Python this is a one-liner; in Go, the 'right' Go-y way involves a carefully efficient hand-rolled loop, even though you could theoretically do it in exactly the same way that Python does.)
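
(The Python one-liner is presumably something along these lines; this is a sketch, not the actual code from that program.)

    text = "first line\nsecond line\n"
    wire_data = text.replace("\n", "\r\n")   # every \n goes out as CR NL

The str.replace call is a single heavyweight string primitive, which is exactly the kind of broad feature the entry is talking about.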


Comments on this page:

By Michael J. Cohen (mic) at 2015-12-07 13:03:04:

CRNL? Do you mean CRLF? :P

By cks at 2015-12-07 23:19:58:

Whether it should be CRNL or CRLF is an interesting cultural issue. Unix and Unix-derived languages, Python included, are pretty strongly on the 'NL' side; it's what Unix has consistently called the character over its history and Python reflects this in the C-derived \n and so on.

(Attempts in C and elsewhere to make \n be an abstract character instead of concrete ASCII 0x0a have not been entirely successful, partly because so much software has been written in an environment where the embodiment of the abstract idea of \n is in fact 0x0a. Applications to ASCII/Unicode character set issues are left as an exercise.)

I will go a step further and claim that in such a slow language, the wasteful way of doing things is often the faster way.

At least that is the case if the language runtime is written in a lower-level language (in practice, C) and lacks a JIT compiler. Under those conditions, opcode dispatch is the deciding factor. Algorithmically parsimonious code typically requires running lots of simple ops, which incurs a huge amount of dispatch overhead and thus kills any chance to compete. Wasteful code typically relies on heavyweight primitives that do big chunks of work outside the opcode dispatcher's purview, so it doesn't need to invoke nearly as many ops, resulting in (much) more bang per clock-cycle buck. Since expressive and flexible code is typically wasteful in this way, you get to have your cake and eat it (as long as you can justify using the slow language…).
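
(A rough, hypothetical micro-benchmark to illustrate the dispatch-overhead point, not something from the comment itself: counting newlines with a careful character loop versus a deliberately wasteful split-and-count.)

    import timeit

    data = ("x" * 75 + "\n") * 20000   # a reasonably large chunk of text

    def parsimonious(s):
        # One careful pass with simple operations, but every step goes
        # through the CPython opcode dispatcher.
        n = 0
        for ch in s:
            if ch == "\n":
                n += 1
        return n

    def wasteful(s):
        # Builds an entire throwaway list just to count newlines, but all
        # the real work happens in C, outside the dispatcher.
        return len(s.split("\n")) - 1

    assert parsimonious(data) == wasteful(data)
    print(timeit.timeit(lambda: parsimonious(data), number=10))
    print(timeit.timeit(lambda: wasteful(data), number=10))

On a typical CPython the 'wasteful' version wins by a wide margin, precisely because its work is done by a couple of heavyweight primitives instead of thousands of interpreted ops.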

This dynamic gets disrupted if you do have a JIT and the language is self-hosting to a significant extent. In that case the dispatch overhead (or lack thereof) for user code is the same as for the runtime and the reward for relying on heavyweight primitives disappears.

