My Mastodon remark about tiling window managers
Over on Mastodon, I said:
Two reasons that I'm unlikely to like tiling window managers are that I like empty space (and lack of clutter) on my desktop and I don't like too-large windows. Filling all the space with windows of some size is thus very much not what I want, and I definitely have preferred sizes and shapes for my common windows.
On the one hand, I've already written an entry on my views on tiling window managers. On the other hand, I don't think I've ever managed to state them so succinctly, and I find myself not wanting to just leave that lost in the depths of Mastodon.
(Looking back at that entry caused me to re-read its comments and realize that they may be where I found out about Cinnamon's keybindings for tiling windows, which would answer a parenthetical mystery.)
Links: A Practitioner's Guide to System Dashboard Design (with a bonus)
A Practitioner's Guide to System Dashboard Design is a four-article series by Cory Watson of One Mo' Gin. The parts are:
(Via somewhere that I've now forgotten and can't find again. Perhaps it was Twitter or Mastodon.)
Link: What has your microcode done for you lately?
What has your microcode done for you lately? (via) starts out being about the low-level performance of scattered writes on x86 machines but develops into a story where, well, I'll just quote from the summary:
Where the microcode comes in, and what might make this more interesting than usual, is that performance on a purely CPU-bound benchmark can vary dramatically depending on microcode version. In particular, we will show that the most recent Intel microcode version can significantly slow down a store heavy workload when some stores hit in the L1 data cache, and some miss.
I found the whole thing fascinating and I feel it deserves a wider audience. It's a bit challenging to follow if you don't already know some of the details of low-level CPU and memory access operations (it casually throws around terms like RFO, ie 'read for ownership'), but working to understand it was interesting and taught me things, and I quite enjoyed the coverage of the issues involved in scattered write performance.
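If 'scattered writes' is an unfamiliar term, here is a minimal Python sketch of the access pattern the article benchmarks (my illustration, not the article's code; its real benchmarks are low-level C++ and measure cache behavior that Python's interpreter overhead will mostly drown out):

```python
# Same number of stores, done in sequential order versus scattered
# (randomly shuffled) order across the same buffer.  At the machine level
# the scattered version misses in the L1 data cache far more often.
import array
import random
import timeit

N = 1 << 16
buf = array.array('I', [0] * N)

seq = list(range(N))
scattered = seq[:]
random.seed(1)
random.shuffle(scattered)

def store_all(order):
    # One store per index; only the *order* of the stores differs.
    for i in order:
        buf[i] = i

t_seq = timeit.timeit(lambda: store_all(seq), number=10)
t_rand = timeit.timeit(lambda: store_all(scattered), number=10)
```

Both versions write exactly the same final values to the buffer; any timing difference comes purely from the order in which the stores hit memory.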
(Of course one has to speculate that the slowdown on recent microcode is either a deliberate consequence of the changes made for all of the speculative execution issues or a side effect of those changes.)
Link: Vim anti-patterns
Tom Ryder's Vim anti-patterns is nominally about anti-patterns but it's really about teaching various better, more advanced ways to do things in vim. For each anti-pattern, Ryder shows a positive pattern (or several of them), and they're worth thinking about and maybe adopting. If nothing else, I learned new bits of vim.
(This is from 2012 and I believe I read it then, but it's worth re-reading every so often. Indeed I sort of think of it as a little reference for Vim things I want to remember.)
PS: I don't necessarily agree with all of Ryder's views here, but then I'm a lightweight vim user. In particular, I've not adopted either avoiding the arrow keys or using ctrl-[ for ESC, and I'm still perfectly willing to move around in insert mode to some degree, because it fits how I edit things.
Link: The IOCCC 2018 "Best of show" program
Mills' program is both a full machine emulation of the original PDP-7 that Ken Thompson used to write the first version of UNIX and a full machine emulation of the PDP-11/40 used by subsequent UNIXes. The Makefile can build versions that can run each of the following:
- UNIX v0 for the PDP-7 (circa 1969)
- Research UNIX Version 6 (circa 1975)
- BSD 2.9 (circa 1983)
This is one of those IOCCC entries where you absolutely do want to read the author's full description of their entry, because it is fascinating all by itself. For instance, until I read this I didn't know that Unix v0 had been reconstructed from original printouts of the assembly, and it's even on GitHub. That is just one small part of a fascinating journey.
(The IOCCC is the International Obfuscated C Code Contest, and here are the entire 2018 results. Mills' work may be even more impressive once you know that IOCCC entries must be 4096 bytes or less of C code.)
Link: Everything you should know about certificates and PKI but are too afraid to ask
Mike Malone's Everything you should know about certificates and PKI but are too afraid to ask (via, also, also) starts off slow (and with one simplification that irritated me) but very soon gets rolling into things like X.509 and PKCS, and then gets into a thorough and solid discussion of PKI (Public key infrastructure) and the considerations of running your own internal one for (mutual) TLS authentication. I was very pleased to see this recommendation:
In any case, if you run your own internal PKI you should maintain a separate trust store for internal stuff. That is, instead of adding your root certificate(s) to the existing system trust store, configure internal TLS requests to use only your roots. [...]
Separating public 'web PKI' from your own internal PKI is an important measure to keep compromises in your internal PKI from leaking into your use of web PKI (both through browsers and through programs). It also keeps compromises in web PKI from hurting your internal PKI, which I believe is Malone's main focus.
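As a concrete sketch of Malone's recommendation (my example, not his; the function name and certificate path are made up), in Python you would give internal TLS clients a context that loads only your internal root, rather than adding that root to the system trust store:

```python
import ssl

def internal_tls_context(internal_root_pem):
    # A client-side TLS context that trusts ONLY our internal root CA,
    # instead of inheriting the system's web PKI trust store.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = True  # explicit, though it's the default here
    ctx.load_verify_locations(cafile=internal_root_pem)
    return ctx

# Hypothetical usage: requests to internal services pass this context
# (eg to urllib.request or http.client), while browsers and everything
# else keep using the regular system trust store untouched.
# ctx = internal_tls_context("/etc/ssl/internal/root-ca.pem")
```

The point of the design is the separation itself: a compromise of either trust store can't silently extend into the other.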
The article isn't perfect, but it's a great introduction and overview with solid practical recommendations that goes into significant depth on some important issues.
(I'm fairly certain that I learned some new things from it, even though I'm pretty well exposed to all of this stuff already.)
Link: HTTPS in the real world
Robert Heaton's article HTTPS in the real world (via) is about the difference between HTTPS in theory, in the cryptographic world of Alice and Bob, and HTTPS in practice, in the messy real world where CAs cannot be fully trusted and people lose their keys and so on. To pick one little bit to quote:
[...] But the real world has still managed to piece together a very serviceable public-key cryptography system by patching over the holes and omissions and naivety of the introductory world with a tartan of secondary systems known collectively as “Public Key Infrastructure” (PKI).
The whole article is a clear, short, amusing, and interesting summary of the whole practical mess of HTTPS and TLS. Even though I'm pretty up on all of the issues it talks about, I still found it well worth reading.
Link: HiDPI on dual 4K monitors with Linux
Vincent Bernat's article HiDPI on dual 4K monitors with Linux (via) is about what you'd expect it to be about and is, as they say, relevant to my interests. Especially relevant to me is the section on HiDPI support on Linux with X11, which runs down a collection of issues and contains a very useful chart of what is supported in which applications and toolkits; it taught me some things I hadn't known.
Note that Bernat's experience with xterm and rxvt doesn't match mine, perhaps because we're setting the X-level DPI information in somewhat different ways. My experience, as covered here, is that plain X applications using XFT fonts scale them appropriately once you get the DPI set everywhere (ie, if you tell xterm to use Monospace-12, you will get an actual 12 point size on your HiDPI monitor, not 12 points at 96 DPI and thus tiny fonts). If you use bitmap fonts, though, you're in trouble, and unfortunately xterm still uses them by default for some things, like its popup menus.
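For concreteness, this is the sort of thing involved if you set the DPI through X resources (one route among several; the exact values here are illustrative, for a roughly 192 DPI monitor):

```
! Tell Xft-using applications the monitor's real DPI.
Xft.dpi: 192
! Have xterm use a scalable XFT font at a real 12 points,
! instead of its default bitmap fonts.
XTerm*faceName: Monospace
XTerm*faceSize: 12
```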
(It's the nature of these articles to become out of date over time as HiDPI support improves and changes, but it's still a useful snapshot and some of these applications will probably never change.)
Link: Vectorized Emulation [of CPUs and virtual machines]
Vectorized Emulation: Hardware accelerated taint tracking at 2 trillion instructions per second (via) is about, well, let me quote from the introduction rather than try to further summarize it:
In this blog I’m going to introduce you to a concept I’ve been working on for almost 2 years now. Vectorized emulation. The goal is to take standard applications and JIT them to their AVX-512 equivalent such that we can fuzz 16 VMs at a time per thread. The net result of this work allows for high performance fuzzing (approx 40 billion to 120 billion instructions per second [the 2 trillion clickbait number is theoretical maximum]) depending on the target, while gathering differential coverage on code, register, and memory state.
Naturally you need to do all sorts of interesting tricks to make this work. The entry is an overview, and the author is going to write more entries later on the details of various aspects of it, which I'm certainly looking forward to even if I'm not necessarily going to fully follow the details.
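As a very loose illustration of the core idea (mine, not the post's): each emulated register becomes a vector of 16 per-VM values, every operation is applied across all lanes at once, and a lane mask handles VMs that have diverged. A pure-Python stand-in for what AVX-512 does in hardware:

```python
LANES = 16           # 16 x 32-bit VMs per 512-bit vector register
MASK32 = 0xFFFFFFFF  # emulated 32-bit registers wrap around

def vadd(dst, a, b, active):
    # Lane-wise add: each of the 16 lanes is one VM's copy of the
    # register.  Inactive (diverged) lanes keep their old dst value,
    # like an AVX-512 masked operation.
    return [(x + y) & MASK32 if on else d
            for d, x, y, on in zip(dst, a, b, active)]

# Two VMs have diverged (lanes 0 and 1 inactive); the other 14 execute.
active = [False, False] + [True] * (LANES - 2)
r0 = [7] * LANES
r1 = [1] * LANES
r2 = [2] * LANES
r0 = vadd(r0, r1, r2, active)
print(r0)  # lanes 0-1 keep 7, the remaining 14 lanes become 3
```

The real work in the post is JITting target code into masked vector operations like this, where one host instruction advances 16 VMs at once.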
I found this interesting both by itself and for giving me some more insight into modern SIMD instructions and what goes into using them. SIMD and GPU computing feel like something that I should understand some day.
(I find SIMD kind of mind bending and I've never really dug into how modern x86 machines do this sort of stuff and what you use it for.)
Link: "The History of a Security Hole" (in various *BSD kernels)
To yank my words from Twitter, Michal Necasek's The History of a Security Hole is a fascinating exploration of both the arcana of the x86 and what C can innocently do to you. Watching the code train barrel down the tracks towards its doom was decidedly compelling. There are also some useful lessons for long term software development that can be extracted here, since many of the mistakes made were entirely natural ones.
I often find this sort of stuff fascinating, so I really liked reading this entry and found I couldn't look away once things got going and mistakes piled up on top of misunderstandings. By the way, don't read this as a slam on the *BSDs; this sort of cascading misunderstanding can happen in any software, and has undoubtedly happened in spots in non-BSD kernels as well. It's simply easy to miss things in large, complex software (see eg).