Link: A deep dive into the Go memory allocator
Allocator Wrestling is a summary of Eben Freeman's talk from GopherCon 2018 on the Go memory allocator (via, and see also) and its garbage collection system. The slides are here (via) and have more details and elaborations on various things than the liveblogged summary, although you probably want to read both (good talks are rarely entirely captured by their slides).
I love seeing under the hood of a complex system this way, and it's probably helped me move towards understanding some things about how much memory Go programs use (or appear to use).
Link: Where Vim Came From
Where Vim Came From (via) is an interesting and thorough overview of the history of vim, vi, ed, and other predecessors (with copious footnotes). It's nice to see all of the pieces laid out this way, and I learned of some historical links that I hadn't already known.
(I do wonder what vi would have been like if ed had kept QED's multiple buffer support.)
Link: A Child’s Garden of Inter-Service Authentication Schemes
A Child’s Garden of Inter-Service Authentication Schemes is an opinionated overview of service-to-service authentication schemes from Latacora (via, which has comments worth reading for once, including from various Latacora people). As with pretty much everything Latacora writes on their blog, it's not just informative, it's entertaining too. I find it well worth reading.
Link: About the memory management in the Bourne shell
About the memory management in the Bourne shell (via) is a collection of discussions about the original Bourne shell's creative, interesting, and infamous approach to memory management in the original Unix memory allocation scheme. If you like this kind of thing, it's worth reading through and decoding.
(It also links to a recording of Stephen Bourne's BSDCan 2015 talk "Early days of Unix and design of sh", which I haven't watched yet but keep seeing links to. Someday.)
Link: Parsing: a timeline
Jeffrey Kegler's Parsing: a timeline (via) is what it says in the title; it's an (opinionated) timeline of various developments in computer language parsing. There are a number of fascinating parts to it and many bits of history that I hadn't known and I'm glad to have read about. Among other things, this timeline discusses all of the things that aren't actually really solved problems in parsing, which is informative all by itself.
(I've been exposed to various aspects of parsing and it's a long-standing interest of mine, but I don't think I've ever seen the history of the field laid out like this. I had no idea that so many things were relatively late developments, or of all of the twists and turns involved in the path to LALR parsers.)
Link: Closing the Loop: The Importance of External Engagement in Computer Science Research
Professor John Regehr's Closing the Loop: The Importance of External Engagement in Computer Science Research is an excellent article on the general spots where academic computer science can become disconnected from the real world and the engineering problems that are found there. Since I work in academia (and have read Greg Wilson for some time), this is an issue relatively near to my heart and I quite liked how he presents things in the article. It's a new framing of the issues, one that puts them clearly.
He's also written a followup post, Paths to External Engagement in Computer Science Research. This one is probably mostly of interest to people inside the sausage factory who want to interact with the outside, as opposed to people on the outside wondering why on earth academic computer science isn't more useful to them.
Link: Some fascinating details of cellular data transmission
Ilya Grigorik’s “High Performance Browser Networking” has a fascinating section on the Radio Resource Controller (RRC):
Both 3G and 4G networks have a unique feature that is not present in tethered and even WiFi networks. The Radio Resource Controller (RRC) mediates all connection management between the device in use and the radio base station. Understanding why it exists, and how it affects the performance of every device on a mobile network, is critical to building high-performance mobile applications. The RRC has direct impact on latency, throughput, and battery life of the device in use.
As someone who just has a smartphone but likes to peek under the covers, I found it compelling reading, even if I'm not directly building anything that is affected by this. If nothing else it gives me a greater appreciation of what my smartphone is doing and what sort of things in applications (and my own usage) may be using up extra battery.
(Via Can You Afford It?: Real-world Web Performance Budgets, itself via lobste.rs.)
Link: The Python decorators they won't tell you about
H. Chase Stevens' The decorators they won't tell you about (via Hacker News, repeatedly; it's been posted several times) is another view of Python decorators. I'll give you a quote that shows the flavour:
Decorators are often described as "functions which take functions and return functions", a description which is notable in that, technically speaking, not a single word of it is true.
(H. Chase Stevens is right about this, by the way.)
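To see why the quote is right, consider a minimal sketch (my own illustration, not taken from the article): the `@` syntax just applies an expression to the definition below it, so a "decorator" can be any callable, such as a class, and it can return any object, not just a function. The `Memoize` class here is a hypothetical example name.

```python
# "@thing" followed by "def f(): ..." is roughly "f = thing(f)".
# The "decorator" can be any callable and may return any object.

class Memoize:
    """A class used as a decorator; an instance replaces the function."""
    def __init__(self, func):
        self.func = func
        self.cache = {}

    def __call__(self, *args):
        if args not in self.cache:
            self.cache[args] = self.func(*args)
        return self.cache[args]

@Memoize
def square(x):
    return x * x

# square is now a Memoize instance, not a function, yet calls still work.
print(type(square).__name__)  # Memoize
print(square(4))              # 16

# A decorator doesn't even have to return something callable.
@str
def greeting():
    pass

# greeting is now a string like "<function greeting at 0x...>".
print(isinstance(greeting, str))  # True
```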
If you're interested in understanding more about what decorators are and some clever (or crazy) things they can be used for beyond the obvious, this is well worth reading. I certainly enjoyed it, even if some of the tricks it shows are things that I'd probably never use in real code.
(Breaking out of conventional views is always useful, in my opinion.)
Link: Citation Needed [on array indexing in programming languages]
Mike Hoye's Citation Needed is ostensibly about the origins of zero-based array indexing in programming languages. But that's not really what it's about once Mike Hoye gets going; it's really about our field's attitude towards history, the consequences of that attitude, and the forces that drive it, including inaccessible papers. Even if you're indifferent to where zero-based array indexing comes from, that portion of the article is well worth reading and thinking about.
(I'm not going to quote any of it. Read the whole thing, as they say; it's not that long.)
PS: This is from 2013, so you might have read it already. If you aren't sure and don't remember it, read it again.
Link: Linux Load Averages: Solving the Mystery
Load averages are an industry-critical metric – my company spends millions auto-scaling cloud instances based on them and other metrics – but on Linux there's some mystery around them. Linux load averages track not just runnable tasks, but also tasks in the uninterruptible sleep state. Why? I've never seen an explanation. In this post I'll solve this mystery, and summarize load averages as a reference for everyone trying to interpret them.
In the process of doing this, Brendan Gregg goes back to TENEX (including its source code) for the more or less original load average. Then he chases down the kernel patch from October 1993 that changed Linux's load averages from being based purely on the size of the run queue to also including processes in disk wait. It goes on from there, including some great examples of how to break down a load average to see what's contributing what (using modern Linux tracing tools, which Gregg is an expert on). The whole thing is really impressive and worth reading.
(Gregg's discussion is focused on Linux alone. For a cross-Unix view, I've written entries on when the load average was added to Unix and the many load averages of different Unix strains. In the latter entry I confidently asserted that Linux's load average included 'disk wait' processes from the start, which Gregg's research has revealed to be wrong.)
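As a small aside on what these numbers actually look like, Linux exposes the load averages Gregg discusses through /proc/loadavg. This is a minimal sketch of parsing that format; the sample line is made up, not live data. Note that the averages themselves are what include uninterruptible (disk wait) tasks, while the fourth field counts only currently runnable scheduling entities.

```python
# Parse a line in the format of Linux's /proc/loadavg: the 1-, 5-, and
# 15-minute load averages, runnable/total scheduling entities, and the
# PID of the most recently created process.
sample = "0.42 0.30 0.24 1/457 12345"  # made-up example, not live data

fields = sample.split()
one_min, five_min, fifteen_min = (float(x) for x in fields[:3])
runnable, total = (int(x) for x in fields[3].split("/"))
last_pid = int(fields[4])

print(one_min, five_min, fifteen_min)  # 0.42 0.3 0.24
print(f"{runnable} runnable of {total} entities, last pid {last_pid}")
```

On a real Linux system the same parsing applies to `open("/proc/loadavg").read()`.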