Link: HiDPI on dual 4K monitors with Linux
Vincent Bernat's article HiDPI on dual 4K monitors with Linux (via) is about what you'd expect it to be about and is, as they say, relevant to my interests. Especially relevant to me is the section on HiDPI support on Linux with X11, which runs down a collection of issues and includes a very useful chart of what is supported in which applications and toolkits; the chart added some information that I hadn't known.
Note that Bernat's experience with xterm and rxvt doesn't match mine, perhaps because we're setting the X-level DPI information in somewhat different ways. My experience, as covered here, is that plain X applications using XFT fonts scale them appropriately once you get the DPI set everywhere (ie, if you tell xterm to use Monospace-12, you will get an actual 12 point size on your HiDPI monitor, not 12 points at 96 DPI and thus tiny fonts). If you use bitmap fonts, though, you're in trouble and unfortunately xterm still uses those by default for some things, like its popup menus.
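For concreteness, the usual X-level knob for this is the Xft.dpi resource; a minimal sketch of the relevant X resources might look like the following (the DPI value of 192 is an illustrative 2x-scaling assumption, not necessarily what Bernat or I actually use):

```
! ~/.Xresources -- illustrative values for a 2x HiDPI display
Xft.dpi: 192
! Tell xterm to use an XFT font instead of bitmap fonts:
XTerm*faceName: Monospace
XTerm*faceSize: 12
```

You would then load these with 'xrdb -merge ~/.Xresources'. With Xft.dpi set, a nominal 12 point faceSize comes out at a real 12 points on the HiDPI display.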
(It's the nature of these articles to become out of date over time as HiDPI support improves and changes, but it's still a useful snapshot and some of these applications will probably never change.)
Link: Vectorized Emulation [of CPUs and virtual machines]
Vectorized Emulation: Hardware accelerated taint tracking at 2 trillion instructions per second (via) is about, well, let me quote from the introduction rather than try to further summarize it:
In this blog I’m going to introduce you to a concept I’ve been working on for almost 2 years now. Vectorized emulation. The goal is to take standard applications and JIT them to their AVX-512 equivalent such that we can fuzz 16 VMs at a time per thread. The net result of this work allows for high performance fuzzing (approx 40 billion to 120 billion instructions per second [the 2 trillion clickbait number is theoretical maximum]) depending on the target, while gathering differential coverage on code, register, and memory state.
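To make the core lockstep idea concrete, here is a toy sketch of my own (emphatically not the author's JIT): an AVX-512 register holds 16 32-bit lanes, so one vector operation can step 16 VM instances at once, with an opmask knocking out lanes whose VMs have diverged or crashed. Here the "vector" is just a Python list and the "instruction" a masked per-lane add.

```python
# Toy model of lockstep vectorized emulation: 16 VM "lanes" share one
# instruction stream, like 16 32-bit lanes in a 512-bit AVX-512 register.
# This is an illustration of the concept, not the author's actual JIT.

LANES = 16
MASK32 = 0xFFFFFFFF  # lanes are 32-bit, so arithmetic wraps mod 2**32

def vadd_imm(regs, imm, kmask):
    """Add an immediate to every active lane (one conceptual vector op).

    kmask mimics an AVX-512 opmask register: bit i set means lane i is
    live; dead lanes (VMs that diverged or faulted) are left untouched.
    """
    return [(r + imm) & MASK32 if (kmask >> i) & 1 else r
            for i, r in enumerate(regs)]

# 16 VMs start from 16 different fuzz inputs in "register" r0.
r0 = list(range(LANES))

# Executing "add r0, 5" once advances all 16 VMs in lockstep.
r0 = vadd_imm(r0, 5, kmask=(1 << LANES) - 1)

# Knock out lane 3 (say its VM faulted) and keep stepping the rest.
kmask = ((1 << LANES) - 1) & ~(1 << 3)
r0 = vadd_imm(r0, 1, kmask)

print(r0[0], r0[3], r0[15])  # prints "6 8 21"; lane 3 missed the second add
```

The real system does this at the JIT-compiled machine code level, of course, where a single AVX-512 instruction genuinely performs all 16 lane operations at once instead of a Python loop.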
Naturally you need to do all sorts of interesting tricks to make this work. The entry is an overview, and the author plans to write more entries later on various aspects of it; I'm certainly looking forward to them even if I won't necessarily follow all of the details.
I found this interesting both by itself and for giving me some more insight into modern SIMD instructions and what goes into using them. SIMD and GPU computing feel like something that I should understand some day.
(I find SIMD kind of mind bending and I've never really dug into how modern x86 machines do this sort of stuff and what you use it for.)
Link: "The History of a Security Hole" (in various *BSD kernels)
To yank my words from Twitter, Michal Necasek's The History of a Security Hole is a fascinating exploration of both the arcana of the x86 and what C can innocently do to you. Watching the code train barrel down the tracks towards its doom was decidedly compelling. There are also some useful lessons for long term software development that can be extracted here, since many of the mistakes made were entirely natural ones.
I often find this sort of stuff fascinating, so I really liked reading this entry and found I couldn't look away once things got going and mistakes piled up on top of misunderstandings. By the way, don't read this as a slam on the *BSDs; this sort of cascading misunderstanding can happen in any software, and undoubtedly has happened in non-BSD kernels as well in spots. It's simply easy to miss things in large, complex software (see eg).
Link: A deep dive into the Go memory allocator
Allocator Wrestling is a summary of Eben Freeman's talk from GopherCon 2018 on the Go memory allocator (via, and see also) and its garbage collection system. The slides are here (via) and have more details and elaborations on various things than the liveblogged summary, although you probably want to read both (good talks are rarely entirely captured by their slides).
I love seeing under the hood of a complex system this way, and it's probably helped me move towards understanding some things about how much memory Go programs use (or appear to use).
Link: Where Vim Came From
Where Vim Came From (via) is an interesting and thorough overview of the history of vim, vi, ed, and other predecessors (with copious footnotes). It's nice to see all of the pieces laid out this way, and I learned of some historical links that I hadn't already known.
(I do wonder what vi would have been like if ed had kept QED's multiple buffer support.)
Link: A Child’s Garden of Inter-Service Authentication Schemes
A Child’s Garden of Inter-Service Authentication Schemes is an opinionated overview of service to service authentication schemes from Latacora (via, which has comments worth reading for once, including from various Latacora people). As with pretty much everything Latacora writes on their blog, it's not just informative, it's entertaining too. I find it well worth reading.
Link: About the memory management in the Bourne shell
About the memory management in the Bourne shell (via) is a collection of discussions about the original Bourne shell's creative, interesting, and infamous approach to memory management under the original Unix memory allocation scheme (famously, rather than checking whether it had memory available, it trapped the resulting fault and extended its address space on the fly). If you like this kind of thing, it's worth reading through and decoding things.
(It also links to a recording of Stephen Bourne's BSDCan 2015 talk "Early days of Unix and design of sh", which I haven't watched yet but keep seeing links to. Someday.)
Link: Parsing: a timeline
Jeffrey Kegler's Parsing: a timeline (via) is what its title says: an (opinionated) timeline of various developments in computer language parsing. There are a number of fascinating parts to it and many bits of history that I hadn't known and I'm glad to have read about. Among other things, this timeline discusses all of the things that aren't actually really solved problems in parsing, which is informative all by itself.
(I've been exposed to various aspects of parsing and it's a long standing interest of mine, but I don't think I've ever seen the history of the field laid out like this. I had no idea that so many things were relatively late developments, or of all of the twists and turns involved in the path to LALR parsers.)
Link: Closing the Loop: The Importance of External Engagement in Computer Science Research
Professor John Regehr's Closing the Loop: The Importance of External Engagement in Computer Science Research is an excellent article on the general spots where academic computer science can become disconnected from the real world and the engineering problems that are found there. Since I work in academia (and have read Greg Wilson for some time), this is an issue relatively near to my heart and I quite liked how he presents things in the article. It's a new framing of the issues, one that puts things in a clear way.
He's also written a followup post, Paths to External Engagement in Computer Science Research. This one is probably mostly of interest to people inside the sausage factory who want to interact with the outside, as opposed to people on the outside wondering why on earth academic computer science isn't more useful to them.
Link: Some fascinating details of cellular data transmission
Part of Ilya Grigorik’s “High Performance Browser Networking” is a fascinating section on the Radio Resource Controller (RRC):
Both 3G and 4G networks have a unique feature that is not present in tethered and even WiFi networks. The Radio Resource Controller (RRC) mediates all connection management between the device in use and the radio base station. Understanding why it exists, and how it affects the performance of every device on a mobile network, is critical to building high-performance mobile applications. The RRC has direct impact on latency, throughput, and battery life of the device in use.
As someone who just has a smartphone but likes to peek under the covers, I found it compelling reading, even if I'm not directly building anything that is affected by this. If nothing else it gives me a greater appreciation of what my smartphone is doing and what sort of things in applications (and my own usage) may be using up extra battery.
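To make the battery angle concrete, here's a toy model of my own (with made-up illustrative timings; real values vary by carrier and network generation, and Grigorik's chapter has specifics): a small request from an idle radio must first be promoted from an idle RRC state to a connected one, paying control-plane latency, and afterwards the radio lingers in a high-power state for a timeout "tail" before dropping back, which is where the battery goes.

```python
# Toy RRC state model. All numbers are illustrative assumptions, not
# measurements; the point is the shape of the cost, not the values.

IDLE, CONNECTED = "IDLE", "CONNECTED"

PROMOTION_DELAY_MS = 100.0  # assumed control-plane latency to leave IDLE
TAIL_SECONDS = 10.0         # radio lingers in high power after last packet
HIGH_POWER_MW = 800.0       # assumed radio draw while CONNECTED
IDLE_POWER_MW = 5.0         # assumed radio draw while IDLE

def cost_of_request(state, transfer_ms=50.0):
    """Return (latency_ms, energy_mj, new_state) for one small request."""
    promotion = PROMOTION_DELAY_MS if state == IDLE else 0.0
    latency_ms = promotion + transfer_ms
    # Energy cost: the transfer itself, plus the post-transfer tail where
    # the radio stays in the high-power state doing nothing useful.
    active_seconds = latency_ms / 1000.0 + TAIL_SECONDS
    energy_mj = HIGH_POWER_MW * active_seconds
    return latency_ms, energy_mj, CONNECTED

cold_latency, cold_energy, state = cost_of_request(IDLE)
warm_latency, warm_energy, state = cost_of_request(state)

print(f"cold: {cold_latency:.0f} ms, ~{cold_energy / 1000:.1f} J")
print(f"warm: {warm_latency:.0f} ms, ~{warm_energy / 1000:.1f} J")
```

Even with these made-up numbers, the tail dominates the energy cost of each request, which is one reason applications that batch their network activity (instead of trickling out periodic small requests) are kinder to your battery.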
(Via Can You Afford It?: Real-world Web Performance Budgets, itself via lobste.rs.)