A pleasant surprise with a Thunderbolt 3 10G-T Ethernet adapter
Recently, I tweeted:
I probably shouldn't be surprised that a Thunderbolt 10G-T Ethernet adapter can do real bidirectional 10G on my Fedora laptop (a Dell XPS 13), but I'm still pleased.
(I am still sort of living in the USB 2 'if it plugs in, it's guaranteed to be slow' era.)
There are two parts to my pleasant surprise here. The first part is simply that a Thunderbolt 3 device really did work fast, as advertised, because I'm quite used to nominally high-speed external connection standards that do not deliver their rated speeds in practice for whatever reason (sometimes including that the makers of external devices cannot be bothered to engineer them to run at full speed). Having a Thunderbolt 3 device actually work feels novel, especially when I know that Thunderbolt 3 basically extends some PCIe lanes out over a cable.
(I know intellectually that PCIe can be extended off the motherboard and outside the machine, but it still feels like magic to actually see it in action.)
The second part of the surprise is that my garden variety vintage 2017 Dell XPS 13 laptop could actually drive 10G-T Ethernet at essentially full speed, and in both directions at once. I'm sure that some of this is in the Thunderbolt 3 10G-T adapter, but still; I'm not used to thinking of garden variety laptops as being that capable. It's certainly more than I was hoping for and means that the adapter is more useful than we expected for our purposes.
This experience has also sparked some thoughts about Thunderbolt 3 on desktops, because plugging this in to my laptop was a lot more pleasant an experience than opening up a desktop case to put a card in, which is what I'm going to need to do on my work desktop if I need to test a 10G thing with it someday. Unfortunately it's not clear to me if there even are general purpose PC Thunderbolt 3 PCIe cards today (ones that will go in any PCIe x4 slot on any motherboard), and if there are, it looks like they're moderately expensive. Perhaps in four or five years, my next desktop will have a Thunderbolt 3 port or two on the motherboard.
(We don't have enough 10G cards and they aren't cheap enough that I can leave one permanently in my desktop.)
PS: My home machine can apparently use some specific add-on Thunderbolt 3 cards, such as this Asus one, but my work desktop is an AMD Ryzen based machine and it seems to be out of luck right now. Even the add-on cards are not inexpensive.
Open protocols can evolve fast if they're willing to break other people
A while back I read an entry from Pete Zaitcev, where he said, among other things:
I guess what really drives me mad about this is how Eugen [the author of Mastodon] uses his mindshare advantage to drive protocol extensions. All of Fediverse implementations generally communicate freely with one another, but as Pleroma and Mastodon develop, they gradually leave Gnusocial behind in features. In particular, Eugen found a loophole in the protocol, which allows attaching pictures without using up the space in the message for the URL. When Gnusocial displays a message with attachment, it only displays the text, not the picture. [...]
When I read this, my immediate reaction was that this sounded familiar. And indeed it is, just in another guise.
Over the years, there have been any number of relatively open protocols for federated things that were used by more or less commercial organizations, such as XMPP and Signal's protocol. Over and over again, the companies running major nodes have wound up deciding to de-federate (Signal, for example). When this has happened, one of the stated reasons for it has been that being federated has held back development (as covered in eg LWN's The perils of federated protocols, about Signal's decision to drop federation). At the time, I thought of this as being possible because what was involved was a company moving to a closed product, sometimes the company doing much of the work (as in Signal's case).
What Mastodon (and Pleroma) illustrate here is that this sort of thing can be done even in open protocols where some degree of federation is still being maintained. All it needs is for the people involved to be willing to break protocol compatibility with other implementations that aren't willing to follow along and keep up (either because of lack of time or disagreements about the direction that the protocol is being dragged in). Of course this is easier when the people making the changes are the dominant implementations, but anyone can do it if they're willing to live with the consequences, primarily a slow tacit de-federation where messages may still go back and forth but increasingly they're not useful for one or both sides.
Is this a good thing or not? I have no idea. On the one hand, Mastodon is moving the protocol in directions that are clearly useful to people; as Pete Zaitcev notes:
[...] But these days pictures are so prevalent, that it's pretty much impossible to live without receiving them. [...]
On the other hand things are clearly moving away from a universal federation of equals and an environment where the Fediverse and its protocols evolve through a broad process of consensus among many or all of the implementations. And there's the speed of evolution too; faster evolution privileges people who can spend more and more time on their implementation and people who can frequently update the version they're running (which may well require migration work and so on). A rapidly evolving Fediverse is one that requires ongoing attention from everyone involved, as opposed to a Fediverse where you can install an instance and then not worry about it for a while.
(This split is not unique to network protocols and federation. Consider the evolution of programming languages, for example; C++ moves at a much slower pace than things like Go and Swift because C++ cannot just be pushed along by one major party in the way those two can be by Google and Apple.)
A touchpad is not a mouse, or at least not a good one
One of the things about having a pretty nice work laptop with a screen that's large enough to have more than one real window at once is that I actually use it, and I use it with multiple windows, and that means that I need to use the mouse. I like computer mice in general so I don't object to this, but like most modern laptops my Dell XPS 13 doesn't have a mouse, it has a trackpad (or touchpad, take your pick). You can use a modern touchpad as a mouse, but over my time in using the XPS 13 I've come to understand (rather viscerally) that a touchpad is not a mouse and trying to act as if it was is not a good idea. There are some things that a touchpad makes easy and natural that aren't very natural on a mouse, and a fair number of things that are natural on a mouse but don't work very well on a touchpad (at least for me; they might for people who are more experienced with touchpads).
(There is also a continuum of 'not a mouse'-ness. Physical mouse buttons made my old Thinkpad's touchpad more mouse-like than the multi-touch virtual mouse buttons do on the XPS 13.)
Things like straightforward mouse pointer tracking and left button clicks are not all that different between the two and so I can mostly treat things the same (although I think that moving more or less purely vertically or horizontally is harder on a touchpad). What is increasingly different on a touchpad is things like right or middle clicks, and moving the mouse pointer while a nominal button is 'down'. And of course there's no such thing as chorded mouse clicks on a touchpad, while at the same time a mouse has no real equivalent of multi-finger movement and swiping. The different physical locations of a laptop touchpad and a physical mouse also make a difference in what is comfortable and what isn't.
(On a touchpad, I think the more natural equivalent of moving the mouse with a button down is a single finger touchpad move with some keyboard key. Of course this changes things because now both hands are involved, but at the same time your hands aren't moving as far to reach the 'mouse'.)
For me, the things that are significantly different are moving the pointer while a mouse button is down and middle and right button clicks. For instance, with a physical mouse I'm very fond of mouse gestures in Firefox, but they're made with the right mouse button held down; as a result, I basically don't use them on my laptop touchpad. I'm also thankful that in Firefox, left clicking and middle clicking a link are equivalent if you use keyboard modifiers, because that lets me substitute an easy single finger tap for an uncertain multi-finger tap.
All of this has slowly led me to doing things differently when I'm using the laptop's touchpad, rather than trying to pretend that the touchpad is a mouse and stick to my traditional mouse habits and practices. This is a work in progress for various reasons, and on top of that I'm not sure that the X environment I have on my laptop is entirely well adapted to touchpad use.
(I know that some of my programs aren't. For one glaring example, the very convenient xterm copy and paste model is all about middle mouse clicks and being able to easily move the mouse pointer with the left button down. Selecting and copying text from one terminal window to another with the touchpad is both more awkward and more hit and miss. Probably this means I should set up some keyboard bindings for 'paste', so I can at least avoid wrangling with the multi-finger tap necessary to emulate the middle mouse.)
(On the one hand this feels pretty obvious now that I've written it down. On the other hand, it's not something that I've really thought about before now and I'm pretty sure that I'm still trying to do a certain amount of 'mouse' things with the touchpad and being frustrated by the so-so results. Hopefully UI designers have been considering this more than I have.)
My temptation of getting a personal laptop
Despite being a computer person and a sysadmin, I don't have a personal laptop; my personal computing is a desktop and, these days, an iPad. The big reason for this is that most of the time, I don't really have a use for a laptop (well, a personal use, especially since a laptop will never be my primary machine). However, when I do have such a use and take my work laptop home, I always wind up getting reminded of how nice it is and how nice it is to have the laptop available, and in turn that tempts me with the thought of getting a personal laptop.
(This is on my mind because I had such a need this weekend, and as a result wound up writing yesterday's entry entirely on the laptop. Doing so was a far more pleasant experience than I've had with drafting entries on my iPad, mostly but not entirely for software reasons. My natural way of writing entries involves a bunch of windows and generally a certain amount of cut and paste, and the iPad does not make this easy or natural. I drafted most of this entry on my iPad under similar circumstances as yesterday's entry, and it was slower and less fluid.)
On some sort of purely objective basis, it doesn't make sense for me to give in to this temptation and get a personal laptop. As mentioned, I usually don't have a use for it and when I do have a planned use, I can usually take home one from work, and a decent ultrabook style laptop (which is what I want) is not cheap. On the other hand, one of my ways of evaluating this sort of decision is to ask myself what I would do if money wasn't an issue, and here my answer is absolutely clear; I would immediately go get such a laptop, by default some version of the Dell XPS 13. I definitely have some uses for it, and in general it would be reassuring to have a second fully capable machine at home (and a portable one, which I could set up and use wherever I wanted to). So on a subjective basis, yes, absolutely, I should at least consider this and do things like look up how long lightly used ultrabooks tend to last these days.
(Now that I have a 4K display, any laptop I get should be able to drive a 4K display at 60 Hz with suitable external hardware. Fortunately I believe this is common these days.)
It's also possible that having a readily available personal laptop would change my behavior, by opening up new options and so on. Right now this seems unlikely, but I've been blind to this sort of thing in the past so it's at least something for me to think about, alongside the countervailing thoughts of how little I would probably use a personal laptop in practice.
(Given my usual habits of not getting around to getting things regardless of how I feel about them, I am unlikely to actually move on getting a laptop any time soon even if I talk myself into it. But writing this entry may have made it slightly more likely, which is part of why I did; I want to at least think about the whole issue, and write down my current thoughts so I can look back at them later.)
PS: I might feel more temptation and interest in a personal laptop if I did things like travel to conferences, but I don't and I'm unlikely to any time in the future.
Perhaps you no longer want to force a server-preferred TLS cipher order on clients
To simplify a great deal, when you set up a TLS connection one of the things that happens in the TLS handshake is that the client sends the server a list of the cipher suites it supports in preference order, and then the server picks which one to use. One of the questions when configuring a TLS server is whether you will tell the server to respect the client's preference order or whether you will override it and use the server's preference order. Most TLS configuration resources, such as Mozilla's guidelines, will implicitly tell you to prefer the server's preference order instead of the client's.
In the original world where I learned 'always prefer the server's cipher order', the server was almost always more up to date and better curated than clients were. You might have all sorts of old web browsers and so on calling in, with all sorts of questionable cipher ordering choices, and you mostly didn't trust them to be doing a good job of modern TLS. Forcing everyone to use the order from your server fixed all of this, and it put the situation under your control; you could make sure that every client got the strongest cipher that it supported.
That doesn't describe today's world, which is different in at least two important ways. First, today many browsers update every six weeks or so, which is probably far more often than most people are re-checking their TLS best practices (certainly it's far more frequently than we are). As a result, it's easy for browsers to be the more up to date party on TLS best practices. Second, browsers are running on increasingly varied hardware where different ciphers may have quite different performance and power characteristics. An AES GCM cipher is probably the fastest on x86 hardware (it can make a dramatic difference), but may not be the best on, say, ARM based devices such as mobile phones and tablets (and it depends on what CPUs those have, too, since people use a wide variety of ARM cores, although by now all of them may be modern enough to have the ARMv8 cryptography instructions).
If you're going to consistently stay up to date on the latest TLS developments and always carefully curate your TLS cipher list and order, as Mozilla is, then I think it still potentially makes sense to prefer your server's cipher order. But the more I think about it, the more I'm not sure it makes sense for most people to try to do this. Given that I'm not a TLS expert and I'm not going to spend the time to constantly keep on top of this, it feels like perhaps once we let Mozilla restrict our configuration to ciphers that are all strong enough, we should let clients pick the one they think is best for them. The result is unlikely to do anything much to security and it may help clients perform better.
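This choice is a single flag in most OpenSSL-based servers. As a sketch (using Python's ssl module, with an illustrative cipher string rather than a vetted one), the difference between the two policies looks like this:

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# The traditional advice: force the server's cipher order on clients.
ctx.options |= ssl.OP_CIPHER_SERVER_PREFERENCE

# The alternative suggested here: restrict the allowed ciphers to ones
# you consider strong enough, but clear the flag so each client's own
# preference order wins among them.
ctx.options &= ~ssl.OP_CIPHER_SERVER_PREFERENCE
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")  # example list, not a recommendation
```

With the flag cleared, an x86 browser can still land on AES GCM while a phone without AES acceleration can pick ChaCha20, all within the set you allowed.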
(If you're CPU-constrained on the server, then you certainly want to pick the cheapest cipher for you and never mind what the clients would like. But again, this is probably not most people's situation.)
PS: As you might guess, the trigger for this thought was looking at a server TLS configuration that we probably haven't touched for four years, and perhaps more. In theory perhaps we should schedule periodic re-examinations and updates of our TLS configurations; in practice we're unlikely to actually do that, so I'm starting to think that the more hands-off they are, the better.
Why your fresh new memory pages are zero-filled
When you (or your programs) obtain memory directly from the operating system, you pretty much invariably get memory that is filled with zero bytes. The same thing is true if you ask for fresh empty disk space, on systems where you can do this (Unix, for example); by specification for Unix, if you extend a file without writing data, the 'empty space' is all 0 bytes. You might wonder why this is. The answer is pretty straightforward; the operating system has to put some specific value into the new memory and disk space, and people have historically picked all 0 bytes as that value.
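Both halves of this behavior are easy to observe from a program on a Unix system; the following small Python demonstration gets anonymous memory from the OS and extends a file without writing to it:

```python
import mmap, os, tempfile

# New anonymous memory obtained directly from the operating system
# arrives zero-filled:
mem = mmap.mmap(-1, 4096)   # -1: anonymous, not backed by a file
assert mem[:4096] == b"\x00" * 4096
mem.close()

# So does the 'empty space' when a Unix file is extended without
# writing any data into it:
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)      # extend the file, write nothing
assert os.pread(fd, 4096, 0) == b"\x00" * 4096
os.close(fd)
os.unlink(path)
```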
(I am not dedicated enough to try to research very old operating system history to see if I can find the first OSes to do this. For reasons we're about to cover, it probably started no later than the 1960s.)
There is a story in this, although it is a short one. Once upon a time, when you asked the operating system for some memory or some disk space, the operating system didn't fill it with any defined value; instead it gave it to you with whatever random values it had had before. Since you were about to write to the memory (or disk space), the operating system setting it to a specific value before you overwrote it with your data was just a waste of CPU. This worked fine for a while, and then people on multi-user systems noticed that you could allocate a bunch of RAM or disk space, not write to it, and search through it to see if the previous users had left anything interesting there. Not infrequently they had. Very soon after people started doing this, operating systems stopped giving you new memory or disk space without clearing its old contents. The simplest way to clear the old contents is to overwrite them with some constant value, and apparently the simplest constant value (or at least the one everyone settled on) is 0.
(Since then, hardware and software have developed all sorts of high speed ways of setting memory to 0, partly because it's become such a common operation as a result of this operating system behavior. Some operating systems even zero memory in the background when idle so they can immediately hand out memory instead of having to pause to clear it.)
This behavior of clearing (new) memory to 0 bytes has had some inobvious consequences in places you might not think of immediately, but that's another entry.
Note that this is only what happens when you get memory directly from the operating system, generally with some form of system call. Most language environments don't return memory to the operating system when your code frees it (either explicitly or, in garbage collected languages, implicitly); instead they keep holding on to the now-free memory and recycle it when your code asks for more. This reallocated memory normally has the previous contents that your own code wrote into it. Although this can be a security issue too, it's not something the operating system deals with; it's your problem (or at least a problem for the language runtime).
Why I'm usually unnerved when modern SSDs die on us
Tonight, one of the SSDs on our new Linux fileservers died. It's not the first SSD death we've seen and probably not the last one, but as almost always, I found it an unnerving experience because of a combination of how our SSDs tend to die, how much of a black box they are, and how they're solid-state devices.
Like most of the SSDs deaths that we've had, this one was very abrupt; the drive went from perfectly fine to completely unresponsive in at most 50 seconds or so, with no advance warning in SMART or anything else. One moment it was serving read and write IO perfectly happily (from all external evidence, and ZFS wasn't complaining about read checksums) and the next moment there was no Crucial MX300 at that SAS port any more. Or at least at very close to the next moment.
(The first Linux kernel message about failed IO operations came at 20:31:34 and the kernel seems to have declared the drive officially vanished at 20:32:15. But the actual drive may have been unresponsive from the start; the driver messages aren't clear to me.)
What unnerves me about these sorts of abrupt SSD failures is how inscrutable they are and how I can't construct a story in my head of what went wrong. With spinning HDs, drives might die abruptly but you could at least construct narratives about what could have happened to do that; perhaps the spindle motor drive seized or the drive had some other gross mechanical failure that brought everything to a crashing halt (perhaps literally). SSDs are both solid state and opaque, so I'm left with no story for what went wrong, especially when a drive is young and isn't supposed to have come anywhere near wearing out its flash cells (as this SSD was).
(When a HD died early, you could also imagine undetected manufacturing flaws that finally gave way. With SSDs, at least in theory that shouldn't happen, so early death feels especially alarming. Probably there are potential undetected manufacturing flaws in the flash cells and so on, though.)
When I have no story, my thoughts turn to unnerving possibilities, like that the drive was lying to us about how healthy it was in SMART data and that it was actually running through spare flash capacity and then just ran out, or that it had a firmware flaw that we triggered that bricked it in some way.
(We had one SSD fail in this way and then come back when it was pulled out and reinserted, apparently perfectly healthy, which doesn't inspire confidence. But that was a different type of SSD. And of course we've had flaky SMART errors with Crucial MX500s.)
Further, when I have no narrative for what causes SSD failures, it feels like every SSD is an unpredictable time bomb. Are they healthy or are they going to die tomorrow? It feels like I really have to hope in statistics, namely that not too many will fail not too fast before they can be replaced. And even that hope relies on an assumption that failures are uncorrelated, that what happened to this SSD isn't likely to happen to the ones on either side of it.
(This isn't just an issue in our fileservers; it's also something I worry about for the SSDs in my home machine. All my data is mirrored, but what are the chances of a dual SSD failure?)
In theory I know that SSDs are supposed to be much more reliable than spinning rust (and we have lots of SSDs that have been ticking along quietly for years). But after mysterious abrupt death failures like this, it doesn't feel like it. I really wish we generally got some degree of advance warning about SSDs failing, the way we not infrequently did with HDs (for instance, with one HD in my office machine, even though I ignored its warnings).
A spate of somewhat alarming flaky SMART errors on Crucial MX500 SSDs
We've been running Linux's smartd on all of our Linux machines for a long time now, and over that time it's been solidly reliable (with a few issues here and there, like not always handling disk removals and (re)insertions properly). SMART attributes themselves may or may not be indicative of anything much, but smartd does reliably alert on the ones that it monitors.
Except on our new Linux fileservers. For a significant amount of time now, smartd has periodically been sending us email about various drives now having one 'currently unreadable (pending) sectors' (which is SMART attribute 197). When we go look at the affected drive with smartctl, even within 60 seconds of the event being reported, the drive has always reported that it now has no unreadable pending sectors; the attribute is once again 0.
These fileservers use both SATA and SAS for connecting the drives, and we have an assorted mixture of 2TB SSDs; some Crucial MX300s, some Crucial MX500s, and some Micron 1100s. The errors happen for drives connected through both SATA and SAS, but what we hadn't noticed until now is that all of the errors are from Crucial MX500s. All of these have the same firmware version, M3CR010, which appears to be the only currently available one (although at one point Crucial apparently released version M3CR022, they appear to have quietly pulled it since then).
These reported errors are genuine in one sense, in that it's not smartd being flaky. We also track all the SMART attributes through our new Prometheus system, and it also periodically reports a temporary '1' value for various MX500s. However, as far as I can see the Prometheus-noted errors always go away right afterward, just as the smartd ones do. In addition, no other SMART attributes on an affected drive show any unexpected changes (we see increases in eg 'power on hours' and other things that always count up). We've also done mass reads, SMART self-tests, and other things on these drives, always without problems reported, and there are no actual reported read errors at the Linux kernel level.
(And these drives are in use in ZFS pools, and we haven't seen any ZFS checksum errors. I'm pretty confident that ZFS would catch any corrupted data the drives were returning, if they were.)
Although I haven't done extensive hand checking, the reported errors do appear to correlate with read and write IO happening on the drive. In spot checks using Prometheus disk metrics, none of the drives appeared to be inactive at the times that the errors were reported to us, and they may all have been seeing a combination of read and write IO at the time. Almost all of our MX500 SSDs are in the two in-production fileservers that have been reporting errors; we have one that's in a test machine that's now basically inactive, and while I believe it reported errors in the past (when we were testing things with it), it hasn't for a while.
(Update: It turns out that I was wrong; we've never had errors reported on the MX500 in our test machine.)
I see at least two overall possibilities and neither of them are entirely reassuring. One possibility is that the MX500s have a small firmware bug where occasionally, under the right circumstances, they report an incorrect 'currently unreadable (pending) sectors' value for some internal reason (I can imagine various theories). The second is that our MX500s are detecting genuine unreadable sectors, but then quietly curing them somehow. This is worrisome because it suggests that the drives are actually suffering real errors or already starting to wear out, despite a quite light IO load and being in operation for less than a year.
We don't have any solutions or answers, so we're just going to have to keep an eye on the situation. All in all it's a useful reminder that modern SSDs are extremely complicated things that are quite literally small computers (multi-core ones at that, these days), running complex software that's entirely opaque to us. All we can do is hope that they don't have too many bugs (either software or hardware or both).
(I have lots of respect for the people who write drive firmware. It's a high-stakes environment and one that's probably not widely appreciated. If it all works people are all 'well, of course', and if any significant part of it doesn't work, there will be hell to pay.)
(Open)SSH quiet connection disconnects in theory and in practice
Suppose, not entirely hypothetically, that you are making frequent health checks on your machines by connecting to their SSH ports to see if they respond. You could just connect, read the SSH banner, and then drop the connection, but that's abrupt and also likely to be considered a log-worthy violation of the SSH protocol (in fact it is considered such by OpenSSH; you get a log message about 'did not receive identification string'). You would like to do better, in the hopes of reducing your log volume. It turns out that the SSH protocol holds out the tantalizing prospect of doing this in a protocol-proper way, but it doesn't help in practice.
The first thing we need to do as a nominally proper SSH client is to send a protocol identification string; in the SSH transport layer protocol this is covered in 4.2 Protocol Version Exchange. This is a simple CR LF delimited string that must start with 'SSH-2.0-'. The simple version of this is, say:
SSH-2.0-Prometheus-Checks CR LF
After the client sends its identification string, the server will begin the key exchange protocol by sending a SSH_MSG_KEXINIT packet (section 7.1). If you use nc or the like (I have my own preferred tool) to feed a suitable client version string to a SSH server, you can get this packet dumped back at you; conveniently, almost all of it is in text.
(In theory your client should read this packet so that the TCP connection doesn't wind up getting closed with unread data.)
At this point, according to the protocol the client can immediately send a disconnect message, as the specification says it is one of the messages that may be sent at any time. A disconnect message is:
    byte      SSH_MSG_DISCONNECT
    uint32    reason code
    string    description in ISO-10646 UTF-8 encoding [RFC3629]
    string    language tag
How these field types are encoded is covered in RFC 4251 section 5, and also the whole disconnect packet then has to be wrapped up in the SSH transport protocol's Binary Packet Protocol. Since we're early in the SSH connection and have not negotiated message authentication (or encryption), we don't have to compute a MAC for our binary packet. If we're willing to not actually use random bytes in our 'random' padding, this entire message can be a pre-built blob of bytes that our checking tool just fires blindly at the SSH server.
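Building that pre-computed blob is straightforward with Python's struct module (which is in fact how I ended up doing it, as described in the sidebar below). This sketch skips the MAC, encryption, and compression, which is valid only at this pre-key-exchange stage, and uses zero bytes instead of random padding:

```python
import struct

SSH_MSG_DISCONNECT = 1
SSH_DISCONNECT_BY_APPLICATION = 11

def ssh_string(b: bytes) -> bytes:
    # RFC 4251 section 5: a 'string' is a uint32 length plus the bytes.
    return struct.pack(">I", len(b)) + b

def disconnect_packet(description: str) -> bytes:
    payload = struct.pack(">BI", SSH_MSG_DISCONNECT,
                          SSH_DISCONNECT_BY_APPLICATION)
    payload += ssh_string(description.encode("utf-8"))
    payload += ssh_string(b"")  # empty language tag
    # Binary Packet Protocol (RFC 4253 section 6): the whole packet
    # (length field + padding_length byte + payload + padding) must be
    # a multiple of 8 bytes, with at least 4 bytes of padding.
    pad_len = 8 - ((4 + 1 + len(payload)) % 8)
    if pad_len < 4:
        pad_len += 8
    padding = b"\x00" * pad_len  # not actually random; fine pre-MAC
    packet_len = 1 + len(payload) + pad_len
    return struct.pack(">IB", packet_len, pad_len) + payload + padding
```

A checking tool could fire `disconnect_packet("Health check")` blindly at the server after the version exchange, although as noted below this doesn't actually quiet the logging.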
In practice, this doesn't work because OpenSSH logs disconnect messages; in fact, it makes things worse because OpenSSH logs both that it received a disconnect message and a 'Disconnect from <IP>' additional message. We can reduce the logging level of the 'received disconnect' message by providing a reason code of SSH_DISCONNECT_BY_APPLICATION instead of something else, but that just turns it down to syslog's 'info' level from a warning. Interestingly, OpenSSH is willing to log our 'description' message, so we can at least send a reason of 'Health check' or something. I'm a little bit surprised that OpenSSH is willing to do this, given that it provides a way for Internet strangers to cause text of their choice to appear in your logs. Probably not very many people send SSH_MSG_DISCONNECT SSH messages as part of their probing.
On the one hand, this is perfectly reasonable on OpenSSH's part. On the other hand, I think it's probably not useful any more to log this sort of thing by default, especially for services that are not infrequently exposed to the Internet.
(I was going to confidently assert that there are a lot of SSH scanners out there, but then I started looking at our logs. There certainly used to be a lot, but our logs are now oddly silent, at least on a first pass.)
Sidebar: Constructing an actual disconnect message
I was going to write out a by-hand construction of an actual sample message, but in the end I had so much trouble getting things encoded that I wrote a Python program to do it for me (through the struct module). Generating and saving such messages is pointless anyway, since they don't reduce the log spam.
Still, building an actual valid SSH protocol message more or less by hand was an interesting exercise, even if having no MAC, no encryption, and no compression makes it the easiest case possible.
(I also left out the 'language tag' field by setting its string length to zero. OpenSSH didn't care, although other SSH servers might.)
The needs of Version Control Systems conflict with capturing all metadata
In a comment on my entry Metadata that you can't commit into a VCS is a mistake (for file based websites), Andrew Reilly put forward a position that I find myself in some sympathy with:
Doesn't it strike you that if your VCS isn't faithfully recording and tracking the metadata associated with the contents of your files, then it's broken?
Certainly I've wished for VCSes to capture more metadata than they do. But, unfortunately, I've come to believe that there are practical issues for VCS usage that conflict with capturing and restoring metadata, especially once you get into advanced cases such as file attributes. In short, what most users of a VCS want are actively in conflict with the VCS being a complete and faithful backup and restore system, especially in practice (ie, with limited programming resources to build and maintain the VCS).
The obvious issue is file modification times. Restoring file modification times on checkout can cause many build systems (starting with make) to not rebuild things if you check out an old version after working on a recent version. More advanced build systems that don't trust file modification timestamps won't be misled by this, but not everything uses them (and not everything should have to).
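The failure mode is easy to see if you sketch make's core timestamp comparison (this is a simplification of what make actually does, but the logic is the essence of it):

```python
import os

# A minimal sketch of make's rule: rebuild the target only if some
# prerequisite is newer than it. If a checkout restores an old source
# mtime, a stale object file built from a *newer* version still looks
# "up to date" and won't be rebuilt.
def needs_rebuild(target: str, prereqs: list) -> bool:
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(p) > target_mtime for p in prereqs)
```

A checkout that restores a source file's old mtime makes it older than the object file you built from the newer version, so make silently reuses the wrong object file.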
More generally, metadata has the problem that much of it isn't portable. Non-portable metadata raises multiple issues. First, you need system-specific code to capture and restore it. Then you need to decide how to represent it in your VCS (for instance, do you represent it as essentially opaque blobs, or do you try to translate it to some common format for its type of metadata). Finally, you have to decide what to do if you can't restore a particular piece of metadata on checkout (either because it's not supported on this system or because of various potential errors).
(Capturing certain sorts of metadata can also be surprisingly expensive and strongly influence certain sorts of things about your storage format. Consider the challenges of dealing with Unix hardlinks, for example.)
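To make the scope concrete, here is an illustrative sketch of what merely starting to "capture all the metadata" involves on Unix; a real tool would also need extended attributes, ACLs, and a policy decision for every field (for instance, numeric UID versus user name):

```python
import os
import stat

# seen maps (device, inode) pairs to the first path we saw them at,
# which is how you detect hardlinks: two names, one inode.
def capture_metadata(path: str, seen: dict) -> dict:
    st = os.lstat(path)
    meta = {
        "mode": stat.S_IMODE(st.st_mode),
        "uid": st.st_uid,        # number or name? the tool must decide
        "gid": st.st_gid,
        "mtime_ns": st.st_mtime_ns,
    }
    key = (st.st_dev, st.st_ino)
    if st.st_nlink > 1 and key in seen:
        # The storage format has to record a link back to the first
        # name, not a second copy of the file.
        meta["hardlink_to"] = seen[key]
    else:
        seen[key] = path
    return meta
```

Note that correct hardlink handling forces the capture to be stateful across the whole tree, which is exactly the sort of thing that shapes your storage format.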
You can come up with answers for all of these, but the fundamental problem is that the answers are not universal; different use cases will have different answers (and some of these answers may actually conflict with each other; for instance, whether on Unix systems you should store UIDs and GIDs as numbers or as names). VCSes are not designed or built to be comprehensive backup systems, partly because that's a very hard job (especially if you demand cross system portability of the result, which people do very much want for VCSes). Instead they're designed to capture what's important for version controlling things and as such they deliberately exclude things that they think aren't necessary, aren't important, or are problematic. This is a perfectly sensible decision for what they're aimed at, in line with how current VCSes don't do well at handling various sorts of encoded data (starting with JSON blobs and moving up to, say, word processor documents).
Would it be nice to have a perfect VCS, one that captured everything, could restore everything if you asked for it, and knew how to give you useful differences even between things like word processor documents? Sure. But I can't claim with a straight face that not being perfect makes a VCS broken. Current VCSes explicitly make the tradeoff that they are focused on plain text files in situations where only some sorts of metadata are important. If you need to go outside their bounds, you'll need additional tooling on top of them (or instead of them).
(Or, the short version, VCSes are not backup systems and have never claimed to be ones. If you need to capture everything about your filesystem hierarchy, you need a carefully selected, system specific backup program. Pragmatically, you'd better test it to make sure it really does back up and restore unusual metadata, such as file attributes.)
OpenSSH 7.9's new key revocation support is welcome but can't be a full fix
I was reading the OpenSSH 7.9 release notes, as one does, when I ran across a very interesting little new feature (or combination of features):
- sshd(8), ssh-keygen(1): allow key revocation lists (KRLs) to revoke keys specified by SHA256 hash.
- ssh-keygen(1): allow creation of key revocation lists directly from base64-encoded SHA256 fingerprints. This supports revoking keys using only the information contained in sshd(8) authentication log messages.
Any decent security system designed around Certificate Authorities needs a way of revoking CA-signed keys to make them no longer valid. In a disturbingly large number of these systems as people actually design and implement them, you need a fairly decent amount of information about a signed key in order to revoke it (for instance, its full public key). In theory, of course you'll have this information in your CA system's audit records because you'll capture all of it in your audit system when you sign a key. In practice there are many things that can go wrong even if you haven't been compromised.
Fortunately, OpenSSH was never one of these systems; as covered in ssh-keygen(1)'s 'Key Revocation Lists', you could specify keys in a variety of ways that didn't require a full copy of the key's certificate (by serial number or serial number range, by 'key id', or by its SHA1 hash). What's new in OpenSSH 7.9 is that they've reduced the amount of things you need to know in practice, as now you can revoke a key given only the information in your ordinary log messages. This includes but isn't limited to CA-signed SSH keys (as I noticed recently).
(This took both the OpenSSH 7.9 change and an earlier change to log the SHA256 of keys, which happened in OpenSSH 6.8.)
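Going from a log message to a revocation is now mostly mechanical. Here's a sketch of the extraction step; the fingerprint format matches what OpenSSH logs (an unpadded base64 SHA256), and the 'hash:' KRL specification syntax is from my reading of ssh-keygen(1), so check the details on your system before relying on them:

```python
import re

# OpenSSH logs fingerprints as 'SHA256:' plus unpadded base64.
FPR_RE = re.compile(r"SHA256:[A-Za-z0-9+/]+")

def krl_spec_from_log_line(line: str):
    # Returns a KRL specification line for ssh-keygen, or None if the
    # log line doesn't contain a fingerprint.
    m = FPR_RE.search(line)
    return None if m is None else "hash: %s" % m.group(0)

# The resulting spec lines would go in a file that you feed to
# something like 'ssh-keygen -k -u -f revoked.krl specfile' to update
# an existing KRL (again, verify against your ssh-keygen(1)).
```

This is the whole point of the 7.9 change: the log line alone is enough input.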
This OpenSSH 7.9 new feature is a very welcome change; it's now much easier to go from a log message about a bad login to blocking all future use of that key, including and especially if that key is a CA-signed key and so you don't (possibly) have a handy copy of the full public key in someone's authorized_keys file.
However, this isn't and can't be a full fix for the tradeoff of having a local CA. The tradeoff is still there; it's just somewhat easier to deal with either a compromised signed key or the disaster scenario of a compromised CA (or a potentially compromised one).
With a compromised key, you can immediately push it into your system for distributing revocation lists (and you should definitely build such a system if you're going to use a local CA); you don't have to go to your CA audit records first to fish out the full key and other information. With a potentially compromised CA, it buys you some time to roll over your CA certificate, distribute the new one, re-issue keys, and so on, without being in a panic situation where you can't do anything but revoke the CA certificate immediately and invalidate everyone's keys. Of course, you may want to do that anyway and deal with the fallout, but at least now you have more options.
(If you believe that your attacker was courteous enough to use unique serial numbers, you can also do the brute force approach of revoking every serial number range except the ones that you're using for known, currently valid keys. Whether or not you want to use consecutive serial numbers or random ones is a good question, though, and if you use random ones, this probably isn't too feasible.)
PS: I continue to believe that if you use a local CA, you should be doing some sort of (offline) auditing to look for use of signed keys or certificates that are not in your CA audit log. You don't even have to be worried that your CA has been compromised, because CA software (and hardware) can have bugs, and you want to detect them. Auditing used keys against issued keys is a useful precaution, and it shouldn't need to be expensive at most people's scale.
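The audit itself reduces to a set difference: anything used that the CA never issued deserves investigation. A minimal sketch, assuming you've already extracted fingerprint strings from your sshd logs and from your CA's issuance records (the extraction is the real work; this part is trivial):

```python
# Fingerprints here are plain strings such as 'SHA256:...'; in practice
# the 'used' set comes from sshd log messages and the 'issued' set from
# your CA's audit log.
def audit_fingerprints(used, issued):
    # Anything seen in use but never recorded as issued is suspicious:
    # a compromised CA, a CA bug, or a hole in your audit logging.
    return sorted(set(used) - set(issued))
```

At most people's scale both sets are small, so this can run nightly from cron without anyone noticing the cost.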