Link: Eric Rescorla's "DNS Security, Part II: DNSSEC"
(Rescorla also has a Part I: Basic DNS for people who need that.)
A thesis: large, significant open source projects must keep moving or die
One of the things periodically heard about large, fundamental open source projects is a wish that they would slow down, stop changing things all the time, and work on stability (both low level with bugs and higher level with fixing rough edges and polishing things). However, I've come around to a cynical view that this may not be possible, partly sparked by the various discussions around open source maintenance in the wake of the recent log4j issue. Instead, I have a thesis: large open source software must keep moving forward with general development or die.
Large open source software is pretty much guaranteed to have bugs, and lots of them; all software has bugs (more or less), generally in proportion to how big it is. These bugs are found over time and need to be fixed, which means you need people working on the project who fix bugs. However, very few people are motivated by doing nothing but fixing an endless stream of bugs (cf open source and the problem of pure maintenance). Instead, developers want to do something, whatever that is for each person, and they fix bugs in the process.
This means that large open source projects need to be moving forward, developing and changing and expanding and so on, in order to attract and keep the developers who will do the necessary work of fixing bugs. If a large open source project attempts to stop changing and stabilize, it will lose the people who are fixing things and then increasingly stagnate with its crop of existing, never to be fixed bugs. The moment a big project declares itself more or less done for now except for future bug fixes is the moment it starts to lose the people it needs to make those bug fixes.
(A small open source project can hope to be essentially free of significant bugs, but even there many projects stagnate with known issues. A small project can also potentially get by with much less bug fixing resources than a large project, due to the generally much smaller number of bugs. This can make it feasible for a person or a small dedicated group to keep the lights on, fixing bugs at a fast enough rate to keep potential users of the software happy. Of course these people may not wind up very happy, much like the log4j maintainers (also).)
In an ideal world, 'moving forward' would reliably translate to the project improving (in the eyes of people using it). As we know from both open source and commercial software, this is not at all guaranteed. Plenty of changes are at best neutral, and all too often are net negatives in most people's view. But that doesn't matter, because we can't get what we really want, which is either only good changes or just bug fixes. Our real choices are a probably buggy stagnation or whatever developers feel motivated by.
Security systems and requiring attacks instead of accidents to evade them
Very recently, in the course of a conversation on Twitter that was more or less about our internal network access authentication needs, it struck me that sometimes part of the purpose of a security system is to make it so that an actual attack is required to get past the security, instead of just an accident. I'm considering 'attack' in a broad sense, meaning that someone who wants to sidestep your security has to actively do something unusual.
There are two useful things that come from this simple dividing line. On the technical side, your security system is preventing accidents. Here, for example, we don't want the "accident" of a new person plugging their laptop into our network (or getting on to our wifi) and immediately getting Internet access. In practice our network access system may not be throwing a big roadblock in their way, but it is throwing some sort of roadblock, one that they can't just go right over without noticing.
(Our wifi network has a network password, but you can imagine situations where the network password might get posted on a sign on the wall and lead visitors to think it was an open-access network. And a visitor might well have heard the instruction 'plug your laptop into any red network cable', which is a common one that people are told.)
On the social side, it makes a social and policy difference that a person has taken active steps to evade your security. Such a person can't claim to have made an innocent mistake, like plugging their laptop into a handy network cable and then accepting the result. They've taken active steps to bypass security. Because this is the case, you can also react to any unauthorized activities that you notice with the pretty sure knowledge that this isn't an innocent mistake. The person involved has little to no cover and you have more certainty about what's going on.
To use a metaphor: even if a fence is low, people have to actively step over it instead of merely walking on as if it wasn't there.
(I've probably had something like this realization in the past, but I don't think I've written it down before.)