Link: Some fascinating details of cellular data transmission
Part of Ilya Grigorik’s “High Performance Browser Networking” is a fascinating section on the Radio Resource Controller (RRC):
Both 3G and 4G networks have a unique feature that is not present in tethered and even WiFi networks. The Radio Resource Controller (RRC) mediates all connection management between the device in use and the radio base station. Understanding why it exists, and how it affects the performance of every device on a mobile network, is critical to building high-performance mobile applications. The RRC has direct impact on latency, throughput, and battery life of the device in use.
As someone who just has a smartphone but likes to peek under the covers, I found it compelling reading, even if I'm not directly building anything that is affected by this. If nothing else it gives me a greater appreciation of what my smartphone is doing and what sort of things in applications (and my own usage) may be using up extra battery.
(Via Can You Afford It?: Real-world Web Performance Budgets, itself via lobste.rs.)
Link: The Python decorators they won't tell you about
H. Chase Stevens's The decorators they won't tell you about (via Hacker News, where it's been posted repeatedly) is another view of Python decorators. I'll give you a quote that shows the flavour:
Decorators are often described as "functions which take functions and return functions", a description which is notable in that, technically speaking, not a single word of it is true.
(H. Chase Stevens is right about this, by the way.)
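To make this concrete, here's a small sketch of my own (not taken from the article) where the decorator is a class rather than a function, and what it hands back is an instance rather than a function:

    class CountCalls:
        # A decorator implemented as a class: applying '@CountCalls' calls
        # CountCalls(greet), which returns an instance, not a function.
        def __init__(self, func):
            self.func = func
            self.calls = 0

        def __call__(self, *args, **kwargs):
            self.calls += 1
            return self.func(*args, **kwargs)

    @CountCalls
    def greet(name):
        return "hello, " + name

    print(greet("world"))   # "hello, world"
    print(greet.calls)      # 1; greet is now a CountCalls instance

The same machinery also lets you decorate classes, not just functions, which is part of why the standard description falls apart.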
If you're interested in understanding more about what decorators are and some clever (or crazy) things they can be used for beyond the obvious, this is well worth reading. I certainly enjoyed it, even if some of the tricks it shows are things that I'd probably never use in real code.
(Breaking out of conventional views is always useful, in my opinion.)
Link: Citation Needed [on array indexing in programming languages]
Mike Hoye's Citation Needed is ostensibly about the origins of zero-based array indexing in programming languages. But that's not really what it's about once Mike Hoye gets going; it's really about our field's attitude towards history, the consequences of that attitude, and the forces that drive it, including inaccessible papers. Even if you're indifferent to where zero-based array indexing comes from, that portion of the article is well worth reading and thinking about.
(I'm not going to quote any of it. Read the whole thing, as they say; it's not that long.)
PS: This is from 2013, so you might have read it already. If you aren't sure and don't remember it, read it again.
Link: Linux Load Averages: Solving the Mystery
Load averages are an industry-critical metric – my company spends millions auto-scaling cloud instances based on them and other metrics – but on Linux there's some mystery around them. Linux load averages track not just runnable tasks, but also tasks in the uninterruptible sleep state. Why? I've never seen an explanation. In this post I'll solve this mystery, and summarize load averages as a reference for everyone trying to interpret them.
In the process of doing this, Brendan Gregg goes back to TENEX (including its source code) for the more or less original load average. Then he chases down the kernel patch from October 1993 that changed Linux's load averages from purely based on the size of the run queue to including processes in disk wait. It goes on from there, including some great examples of how to break down a load average to see what's contributing what (using modern Linux tracing tools, which Gregg is an expert on). The whole thing is really impressive and worth reading.
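As a rough illustration of the underlying mechanism, here is a toy Python version of the exponentially damped moving average that load averages are built on. The real kernel code uses fixed-point arithmetic; the constants here are the textbook ones, not taken from Gregg's article or the kernel source:

    import math

    SAMPLE_INTERVAL = 5.0   # seconds between samples, as with Linux's LOAD_FREQ

    def update(load, active, window):
        # Exponentially damped moving average over a 'window'-second horizon.
        # On Linux, 'active' counts runnable tasks plus tasks in
        # uninterruptible sleep, which is the twist Gregg explains.
        e = math.exp(-SAMPLE_INTERVAL / window)
        return load * e + active * (1.0 - e)

    load1 = 0.0
    for _ in range(120):    # ten minutes of a steady two active tasks
        load1 = update(load1, 2, 60.0)
    print(round(load1, 2))  # converges towards 2.0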
(Gregg's discussion is focused on Linux alone. For a cross-Unix view, I've written entries on when the load average was added to Unix and the many load averages of different Unix strains. In the latter entry I confidently asserted that Linux's load average included 'disk wait' processes from the start, which Gregg's research has revealed to be wrong.)
Link: How does "the" X11 clipboard work?
X11: How does “the” clipboard work? (via) is a technical walk through the modern X11 selection system, one that winds up discussing things at the level of the X protocol and Xlib, with helpful code examples. I learned some quite useful things in the process, for example how to use xclip to find out what formats a selection is available in.
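For instance, here is a little Python sketch of my own (assuming you have the xclip program installed) that asks what formats the clipboard selection currently offers:

    import subprocess

    # Ask xclip to print the TARGETS of the CLIPBOARD selection, ie the
    # list of formats the current selection owner is willing to convert to.
    out = subprocess.run(
        ["xclip", "-selection", "clipboard", "-o", "-t", "TARGETS"],
        capture_output=True, text=True, check=True,
    )
    for target in out.stdout.splitlines():
        print(target)

Typical answers include things like UTF8_STRING, text/html, or image/png, depending on what program owns the selection.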
(Technical details about X selections are relevant to me because I use a program that deals with them and which I'd like to see do so more conveniently.)
Link: NASA DASHlink - Real System Failures
The Observed Failures slide deck from NASA DASHlink (via many places) is an interesting and even alarming collection of observed failures in hardware and software, mostly avionics related. I find it both entertaining and a useful reminder that all of this digital stuff is really analog underneath, which leads to interesting failure modes. Lest you think that all of these were hardware faults that we software people can be smug about: not really. There's more in the deck than I've touched on; read the whole thing, as they say.
Link: ZFS Storage Overhead
ZFS Storage Overhead (via) is not quite about what you might think. It's not about, say, the overhead added by ZFS's RAIDZ storage (where there are surprises); instead it's about some interesting low level issues of where space disappears to even in very simple pools. The bit about metaslabs was especially interesting to me. It goes well with Matthew Ahrens' classic ZFS RAIDZ stripe width, or: How I Learned to Stop Worrying and Love RAIDZ, which is endlessly linked and cited for very good reasons.
Link: Survey of [floating point] Rounding Implementations in Go
Rounding in Go is hard to do correctly. That is, given a float64, truncate the fractional part (anything right of the decimal point), and add one to the truncated value if the fractional part was >= 0.5. [...]
But it's a lot more than that, because the article then proceeds to demonstrate just how complicated floating point rounding really is (in any language), how non-obvious this complexity is, and how easy it is to get it wrong. It ends with a walk-through of how math.Round is implemented in Go 1.10, an implementation that works not through floating point operations but through direct knowledge of how floating point values are represented at the bit level.
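The classic illustration of how easy this is to get wrong is the naive 'add 0.5 and truncate' approach. Here is a quick Python demonstration of my own (not from the article, although the number involved is the standard counterexample):

    import math

    # 0.49999999999999994 is the largest float64 below 0.5. Adding 0.5 to
    # it rounds up to exactly 1.0 in binary floating point, so the naive
    # method rounds a number less than one half up to 1.
    x = 0.49999999999999994
    print(math.floor(x + 0.5))  # 1, which is wrong
    print(round(x))             # 0, which is correct

Go's math.Round sidesteps this by inspecting and manipulating the bits of the float64 directly, instead of doing floating point arithmetic that can itself round.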
Floating point is one of those things in programming that looks like it's reasonably simple but instead is a deep pit of potential complexity once you move beyond doing very simple things. I enjoy articles like this because they are both a good reminder of this and a peek behind the curtain.
Links: Git remote branches and Git's missing terminology (and more)
Mark Jason Dominus's Git remote branches and Git's missing terminology (via) is about what it sounds like. This is an issue of some interest to me, since I flailed around in the same general terminology swamp in a couple of recent entries. Dominus explains all of this nicely, with diagrams, and it helped reinforce things in my mind (and reassure me that I more or less understood what was going on).
He followed this up with Git's rejected push error, which covers a 'git push' issue with the same thoroughness as his first article. I more or less knew this stuff, but again I found it useful to read through his explanation to make sure I actually knew as much as I thought I did.
Link: The evolution of Unix's overall architecture
Diomidis Spinellis has created a set of great resources for looking at Unix's history. He started with the Unix history repository and then used it to create block diagrams of Unix's structure in V1 and in modern FreeBSD. His article Unix Architecture Evolution Diagrams (via) explains the interesting story of how he put these diagrams together. He also has a site covering which manpages appear in which Unix versions.