Wandering Thoughts

2022-09-25

What can a compromised TLS Certificate Transparency Log do?

One of the potential concerns in the Certificate Transparency ecosystem is that a CT log could be compromised. But what can an attacker who's in control of a CT log actually do? That's a question both of how CT logs work in general and of how people currently use them, both clients (i.e. browsers) and Certificate Authorities. So here's what I can see about that, based partly on the TLS client's view of CT logs. To start with, let's restate an obvious thing: a CT log cannot by itself create a valid TLS certificate. Any real attack requires not just a compromised CT log (or several), but a Certificate Authority that's either compromised or can be induced to mis-issue some certificates for you.

(The other thing to say is that no browser relies on a single CT log, and neither should you. Currently, both Chrome and Safari require validation from at least two CT logs, which means that an attacker needs to compromise two of them in order to have a chance of fooling a browser.)

Even without a compromised CA, an attacker in control of a CT log can make it misbehave globally in some way or ways. The log can stop giving Signed Certificate Timestamps (SCTs) to CAs when they ask for them as they issue new TLS certificates, either for all certificates or for some of them. It can fail to actually add some or all TLS certificates to the log even though it gave out SCTs for them. It can stop answering some or all queries about the log (what RFC 9162 calls Log Client Messages), from some or all parties. If detected, all of these would be taken as signs of a malfunctioning log and would eventually result in the log being dropped by CAs and browsers.

I believe that not answering some queries globally can be used to (temporarily) hide the specifics of some TLS certificates in the log. As far as I can see from RFC 9162, the only way for an outside party to see that specific TLS certificates are in the log is to ask for them (by index position) with a Retrieve Entries and STH from Log request. If a compromised log wants to hide the presence of some TLS certificates, it can refuse to answer entry queries for any ranges that include those certificates. The certificates are in the log's Merkle tree and it can provide valid proofs of their inclusion, but you can only discover them if you already know some details about them (for example, a TLS server gave them to you). To other people trying to audit the log, the refusals might look like ongoing load problems or some other more innocent issue.

(I suspect that there are enough heavyweight CT log auditors that this excuse wouldn't pass muster for very many days, but it might last long enough for a very special TLS certificate the attacker got to be of some use.)

The simplest non-global thing for a compromised CT log to do is to give the attacker SCTs for special certificates (which can then be put in those certificates) and then not add the certificates to its public, global log. Depending on how much use a browser makes of CT, it may well accept a properly signed TLS certificate with these SCTs in it, because all the browser checks is that the SCTs are properly formed and signed. If this were detected, a compromised CT log could try to blame the lack of inclusion on general operational issues. Right now, it appears that this SCT-only compromise would probably fool browsers while hiding those bad TLS certificates from all of the people who watch CT logs.

I believe that a compromised CT log can generate a special version of its Merkle tree that includes extra certificates any time it wants to. This special tree could then be used to create a Signed Tree Head (STH), proofs of inclusion for those certificates in the tree, and a proof that this special tree is an update of a previous legitimate, globally visible tree. However, careful examination of the STH information might reveal more and more oddities the further in time the tree's generation is from the timestamps it claims for those TLS certificates. This special tree's STH will also not be related to the next legitimate, globally visible STH; it will be a one-time, frozen fork of the log.
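The tree hashing that makes this a "frozen fork" is easy to see in miniature. Here's a small sketch of the RFC 9162 Merkle tree hash in Python; the leaf data and certificate names are invented for illustration:

```python
import hashlib

def mth(leaves):
    # RFC 9162 Merkle Tree Hash: 0x00 prefix for leaves, 0x01 for interior nodes
    n = len(leaves)
    if n == 0:
        return hashlib.sha256(b"").digest()
    if n == 1:
        return hashlib.sha256(b"\x00" + leaves[0]).digest()
    k = 1
    while k * 2 < n:       # k = largest power of two strictly less than n
        k *= 2
    return hashlib.sha256(b"\x01" + mth(leaves[:k]) + mth(leaves[k:])).digest()

public = [b"cert-a", b"cert-b", b"cert-c", b"cert-d"]
private = public + [b"attacker-cert"]   # the special tree with an extra entry

assert mth(public) != mth(private)

# Because the special tree extends the public one, the log can produce a valid
# consistency proof from the public STH to the special STH. But the special
# tree is a dead end: the log's next public tree also extends the old public
# tree, and its root can't be reached from the special tree's root.
assert mth(public + [b"cert-e"]) != mth(private)
```

The two asserts are the whole story: the forked root differs from every root the public log will ever publish, which is why the fork is one-time and frozen.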

With more work, a compromised CT log could fork its log into two versions, a private one with extra TLS certificates that were added at the time they were nominally generated and a public version without them. Since this is a fork, the public and private Signed Tree Heads aren't compatible with each other (there's no path from one to the other), so the CT log would have to make sure that any given client only saw either the public version or the private version. How difficult this is depends on how identifiable the client is to the CT log, how it queries the log, and whether or not the client talks to TLS servers that include STHs from the log in their TLS handshake or stapled OCSP response.

Finally, a compromised CT log could return special replies (possibly corrupt ones) to log queries from some clients in an attempt to exploit bugs in those clients. Right now I believe this wouldn't affect browsers, which work only with SCTs and don't directly get them from the CT logs. It might affect CAs, who directly request SCTs from CT logs and who may audit the logs to make sure that their certificates are present as they're supposed to be, although you would hope that a CA's code would be hardened and well confined. It could definitely affect anyone who audits CT logs; I suspect that current CT log auditing programs aren't particularly hardened, except through basic implementation language safety (one log monitoring program I know of is written in Go, for example).

(This entry was sparked by Emily M. Stark's Certificate Transparency is really not a replacement for key pinning which got me to start thinking about the cluster of issues around CT logs and CT log compromise.)

TLSCertTransBadLogOptions written at 22:49:05; Add Comment

2022-09-22

The TLS client's view of Certificate Transparency and CT Logs

TLS Certificate Transparency is a system where browser vendors require TLS Certificate Authorities to publish information about all of their TLS certificates in cryptographically validated logs, which are generally run by third parties (see also Wikipedia). This raises the question of how clients (generally browsers) interact with Certificate Transparency. As far as I can tell, it depends on how thorough a client wants to be about verifying that a TLS certificate really is in a given CT log.

The current version of Certificate Transparency is described in RFC 9162. Following RFC 9162, when a client gets a TLS certificate issued by a participating CA (which is all of them that want to work with Chrome and Safari), it will also receive (in one way or another) some number of Signed Certificate Timestamps (SCTs). Each SCT is a promise by some CT log to include the certificate (broadly speaking) in the log within a time interval specified by the log, and is signed with the CT log's private key. A garden variety client can verify the SCT signatures (for CT logs that it knows of and accepts) and stop there. Generating a valid SCT requires (some) control of that log's private key and its activities, and if the key or the log is compromised, there's potentially not much point in going further.
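The "garden variety" check can be sketched like this. Everything here is simplified and hypothetical (real SCT parsing and signature verification follow the exact RFC 9162 structures, which I'm not reproducing); the point is only the shape of the logic: known log, sane timestamp, valid signature, enough distinct logs, and then stop, with no inclusion check at all:

```python
from dataclasses import dataclass

@dataclass
class SCT:
    log_id: bytes     # SHA-256 of the log's public key, identifies the log
    timestamp: int    # milliseconds since the epoch
    signature: bytes  # signature over the RFC 9162 TBS structure

def verify_signature(log_pubkey, sct, cert_der):
    # Hypothetical stand-in: a real client does an ECDSA/RSA verification of
    # the SCT's signed structure with the log's public key.
    return sct.signature == b"valid"

def scts_accepted(scts, known_logs, cert_der, now_ms, required=2):
    """Count SCTs from distinct known logs with sane timestamps and valid
    signatures; require 'required' of them (Chrome and Safari want two)."""
    good_logs = set()
    for sct in scts:
        pubkey = known_logs.get(sct.log_id)
        if pubkey is None:
            continue            # unknown log: this SCT counts for nothing
        if sct.timestamp > now_ms:
            continue            # a timestamp from the future is bogus
        if verify_signature(pubkey, sct, cert_der):
            good_logs.add(sct.log_id)
    return len(good_logs) >= required

scts = [SCT(b"log-a", 1000, b"valid"), SCT(b"log-b", 1000, b"valid")]
known = {b"log-a": "pk-a", b"log-b": "pk-b"}
assert scts_accepted(scts, known, b"cert", now_ms=2000)
```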

(A client may also receive additional CT related information from the TLS server, up to all of the information it needs to validate things more thoroughly; see the TLS Client section in the RFC.)

A client that wants to be more thorough can then request a proof of inclusion in the CT log from the log operator, provided that enough time has gone by since the SCT's timestamp. I believe that it may need to bootstrap a Signed Tree Head (STH) from the CT log, unless it got one (and an inclusion proof) from the TLS server. That the TLS server can provide the STH and inclusion proof from the CT log is good for privacy but potentially bad for your confidence in the SCT, because it means that your client has no outside check on any of them. If an attacker had access to the CT log's private keys, they could potentially manufacture an STH and inclusion proof alongside their signed SCT and have their server give all of them to you.
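To make "proof of inclusion" concrete, here's a sketch of the RFC 9162 audit path mechanics in Python. The tree hash and path generation are what the log computes; verify_inclusion() is what a thorough client runs, needing only the leaf hash, the leaf's index, the tree size, the path, and the STH's root hash. The leaf data is invented:

```python
import hashlib

def h_leaf(data):
    return hashlib.sha256(b"\x00" + data).digest()

def h_node(left, right):
    return hashlib.sha256(b"\x01" + left + right).digest()

def mth(leaves):
    # RFC 9162 Merkle Tree Hash over raw leaf data (assumes >= 1 leaf)
    n = len(leaves)
    if n == 1:
        return h_leaf(leaves[0])
    k = 1
    while k * 2 < n:
        k *= 2
    return h_node(mth(leaves[:k]), mth(leaves[k:]))

def path(m, leaves):
    # Audit path for leaf index m: the sibling subtree hashes a verifier needs
    n = len(leaves)
    if n == 1:
        return []
    k = 1
    while k * 2 < n:
        k *= 2
    if m < k:
        return path(m, leaves[:k]) + [mth(leaves[k:])]
    return path(m - k, leaves[k:]) + [mth(leaves[:k])]

def verify_inclusion(leaf_hash, index, tree_size, proof, root):
    # Fold the audit path back into a root hash (RFC 9162 section 2.1.3.2)
    if index >= tree_size:
        return False
    fn, sn, r = index, tree_size - 1, leaf_hash
    for p in proof:
        if sn == 0:
            return False            # proof is longer than the tree allows
        if fn & 1 or fn == sn:
            r = h_node(p, r)
            if not fn & 1:          # consume levels where we're a right edge
                while True:
                    fn >>= 1
                    sn >>= 1
                    if fn & 1 or fn == 0:
                        break
        else:
            r = h_node(r, p)
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == root

log = [b"cert-%d" % i for i in range(5)]
sth_root = mth(log)
assert verify_inclusion(h_leaf(log[3]), 3, len(log), path(3, log), sth_root)
```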

(I don't know how common it is for TLS servers to provide the additional CT information to clients. In modern usage TLS certificates have embedded SCTs, so they take no extra configuration work to provide; the other information requires the server operator to set it up and do things, and possibly for the client to have special features.)

I believe this means that a thorough client must learn and save the STHs for CT logs, and then (periodically) get a Merkle consistency proof between two STHs, possibly alongside getting the latest STH for the CT log. Assuming that STHs are relatively coarse grained and aren't issued for every new TLS certificate sent to the CT log, it presumably leaks less information to the log operator to ask for a consistency proof between some STH and the current STH than it does to ask for a proof of inclusion of some specific certificate (if the server gave you that).
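The invariant a consistency check enforces can be sketched crudely. This toy verifier recomputes from the full leaf list rather than using RFC 9162's logarithmic consistency proofs, which is cheating, but it shows what's being checked: the old tree must be exactly a prefix of the new one. Leaf contents are invented:

```python
import hashlib

def mth(leaves):
    # RFC 9162 Merkle Tree Hash (leaves prefixed 0x00, interior nodes 0x01);
    # assumes at least one leaf
    if len(leaves) == 1:
        return hashlib.sha256(b"\x00" + leaves[0]).digest()
    k = 1
    while k * 2 < len(leaves):
        k *= 2
    return hashlib.sha256(b"\x01" + mth(leaves[:k]) + mth(leaves[k:])).digest()

def consistent(old_size, old_root, new_leaves):
    # The saved STH must describe exactly the first old_size leaves of the
    # current tree. A real client establishes the same fact with a compact
    # consistency proof instead of refetching every leaf.
    return old_size <= len(new_leaves) and mth(new_leaves[:old_size]) == old_root

log = [b"cert-%d" % i for i in range(4)]
saved_root = mth(log[:2])        # an STH the client remembered earlier
assert consistent(2, saved_root, log)

forked = [log[0], b"swapped-cert", log[2], log[3]]  # a log that rewrote history
assert not consistent(2, saved_root, forked)
```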

(Since this thoroughness requires state and state management, it's probably mostly restricted to browsers.)

Periodically verifying that two STHs are properly related to each other means that if someone has lied to you about a proof of inclusion (which requires a false STH), they have to keep lying from then onward in order to remain undetected. Otherwise, you will someday get a current STH for the real CT log (without the TLS certificate) and then there will be no path between your latest false STH and the real one.

Continually lying to you this way will be very difficult (if not impossible) if a bunch of TLS servers provide you with proofs of inclusion and their view of the log's STH during your TLS conversations. These TLS servers are seeing the true log and so getting true STHs from it and then providing these true STHs to you. Really, you don't need a bunch of TLS servers to be doing this, you just need some really popular ones to be doing it for common CT logs.

PS: RFC 9162 is surprisingly readable, especially its general discussion sections on server and client stuff. Interested people may want to at least skim them.

TLSCertTransLogsClientView written at 22:38:15; Add Comment

2022-09-21

Some notes on the readings you get from USB TEMPer2 temperature sensors

A while back, I tweeted something that has a story attached:

A person with a single machine room temperature sensor knows the room temperature (where the sensor is). A person with three temperature sensors lined up next to each other knows only uncertainty (and has a wish for a carefully calibrated and trustworthy thermometer).

If you set out to get some inexpensive USB temperature sensors to supplement a more well developed and expensive temperature sensor system, it's quite likely that you'll wind up with the PCsensor TEMPer2 or something in that line, and then you might be curious about how accurate their readings are. Having now collected readings from three of them over a while, my summary is that you shouldn't expect industrial or lab grade results from them, although the results are probably useful if handled cautiously. So here are some observations, which are almost certainly specific to our version of the TEMPer2 (which Linux reports as having USB vendor and product IDs of 1a86:e025).

(It's clear that 'TEMPer2' is a brand name that's been used on a variety of different pieces of electronics over the years, even if the external appearance and general functionality has stayed the same.)

Since this is long and observational, I'll put the summary up front. If you're going to use TEMPer2s, you need to test the behavior of each of your specific units, you probably want to trust the probe temperature more than the internal temperature, and you want to use a USB extender cable (partly to get the probe far enough away from your computer, since the TEMPer2 only comes with a relatively short wire for the probe).

The TEMPer2 has two temperature sensors, one inside the USB stick and another that uses a temperature probe on a wire. As far as the internal USB or 'inner' temperature goes, you definitely want to read Halestrom's article on indoor air sensor products and then get yourself a USB extender cable so that you aren't plugging the USB stick directly into your computer. Although both temperature sensors have anomalies, you're probably better off with the probe's temperature if you have to use one (although you want to test this). When I initially had all of our TEMPer2s plugged directly into computers (two servers and one desktop), their USB temperature was extremely stable and unchanging; using a USB extender cable gave them more realistic variation.

At this point, we have three TEMPer2 units in use, all with the probe fitted, the USB stick on an extender cable, and the probe sensor next to the USB stick (so they should read the same). One is in a machine room right next to our regular machine room sensor for that room, one is in a machine room relatively near our existing sensor, and the third is attached to an office desktop. Both machine room TEMPer2s show changes in both USB and probe temperature readings that track temperature changes from our regular sensor, and they appear to be similar in magnitude (although we haven't had any big temperature swings). None of the TEMPer2 readings agree with the regular sensor readings, not even the one where the three sensors are next to each other; however, they aren't that far off (it looks like typically around 1 C below our regular sensor reading). One machine room TEMPer2 has the probe reading higher than the USB one (by about 0.5 C I think), while the other one has them generally the same or very close to it. The desktop TEMPer2 has the probe reading below by about 1 C.

(When I've looked at my desktop thermometer, which is near the desktop TEMPer2 USB and probe, it's been quite close to the USB's measured temperature.)

All three TEMPer2 units sometimes show what I consider to be reading anomalies where the USB temperature, the probe temperature, or both will latch on to some value for a long period of time, with absolutely no variation in reading for hours on end. One machine room unit does this quite frequently for the USB temperature (and sometimes for the probe), the desktop unit does it quite frequently for the probe temperature (and sometimes for the USB), and the other machine room unit doesn't seem to do it very much or for very long.

(The probe temperature on the desktop unit commonly latches at 21.06 C and its USB temperature at 21.68 C, while the machine room TEMPer2 latches the USB temperature at 19.56 C and sometimes latches the probe temperature at 19.81 C. The other machine room TEMPer2 sometimes latches the probe temperature at 23.56 C. As you can see, there's a lot of variation here. These TEMPer2 units apparently report their temperatures over USB in hundredths of a degree C, so the two digits are authentically what they're reporting.)
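Our "latching" observation is easy to check mechanically once you have captured readings. This is a hypothetical analysis sketch (the 30-second interval and the two-hour threshold are our choices, not anything the hardware specifies), with invented sample data:

```python
def latched_runs(readings, interval_s=30, min_latch_s=2 * 3600):
    """Return (start_index, count) for every run of bit-identical consecutive
    readings that spans at least min_latch_s seconds."""
    runs = []
    start = 0
    for i in range(1, len(readings) + 1):
        # a run ends at the end of the data or when the value changes
        if i == len(readings) or readings[i] != readings[start]:
            if (i - start) * interval_s >= min_latch_s:
                runs.append((start, i - start))
            start = i
    return runs

# A sensor stuck at 21.06 C for 2.5 hours, then moving normally:
data = [21.06] * 300 + [21.12, 21.18, 21.12] + [21.06] * 10
assert latched_runs(data) == [(0, 300)]
```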

Unless something changes when the desktop TEMPer2 unit is moved into a machine room and connected to a different computer, it seems like it's a less trustworthy unit than the other two. It definitely behaves differently: its USB sensor varies its readings more often than the probe does, and may be the more trustworthy of the two (my office is probably not at the same temperature to the hundredth of a degree for hours on end). Given this unit's behavior and the varied behavior of all three of them, we definitely need to test any future units under controlled circumstances to see how they behave, even if that's just putting them next to another unit with more known behavior to compare against.

In the past, people have measured various PCsensor branded hardware (including 'TEMPer2' units) and found that its accuracy varied with the true temperature. If precise accuracy is important to you, you probably don't want to use a TEMPer2 in the first place but if you have to, I think you should calibrate it over the temperature range you care about. Given our results so far, the only use we'd make of TEMPer2 units is to get a vague idea of the temperature of some place and be able to tell if it's gone up a lot.

PS: The TEMPer2 units can report small temperature variations from reading to reading; for example, I've seen probe readings of 19.62 C, 19.68 C, and 19.75 C, and USB readings of 22.87 C and 22.93 C. So in general the 'latching' isn't as simple as the internal precision being way less than 0.01 C (although I can't imagine it's that precise, so presumably there's some rounding and noise happening).

(Although the raw data is in our metrics system, there's no convenient way to find out how many different sensor readings we've seen. It does appear that the smallest variation between any two readings is 0.06 C, and the largest one is 0.75 C (observed for the internal sensor; the largest variation for a probe is 0.56 C). This comes from readings that are generally taken 30 seconds apart.)

USBTemper2ReadingsNotes written at 23:40:17; Add Comment

2022-09-16

The problem of network tunnels and (asymmetric) routing

Let's suppose that you have an inside network I on which you have a bunch of things used in your environment, like central syslog servers, your client mail gateway machine, and so on, and you also have an external machine E that you would like to be able to use those services from. One obvious seeming way you could do this is by setting up some form of network tunnel between E and a touchdown machine T that has access to your inside network I (these days you might use WireGuard, for example). Your exterior machine E sets a network route to I that goes through the tunnel, the traffic pops out from T (behind your perimeter firewall), and everything is happy, right?

Unfortunately, often not so much these days, because of stateful firewalls. The problem is that the nice simple model I've described here has asymmetric routing. E's packets to the inside network I go via the tunnel and T, but under normal circumstances the return traffic from machines on I will go out their regular network gateway, not back via T. If there is a stateful firewall between your I machines and E, it may well become unhappy and start blocking this traffic, since it looks like a half-open connection (the firewall is only seeing half of the traffic).

One answer to this is to have the tunnel endpoint T be a NAT gateway, not just a simple tunnel endpoint that passes traffic in the clear; then E's traffic to machines on I emerges with T's IP address on it, so return packets go back through T. However this leaves you with a different asymmetric routing problem if machines on I ever reach out to E on their own (for example to SSH in to it or to collect metrics from it). Their packets will flow out normally, un-NAT'd, but E will try to send packets for them back through the tunnel and T's NAT'ing. You can solve this with simple "policy based" routing on E, so that reply packets go out the interface they came in on.
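As a concrete illustration of that policy-based routing fix on E, here's a hedged sketch using Linux's `ip rule` machinery. All of the addresses, interface names, and the table number are made-up placeholders; the idea is that traffic sourced from E's public address consults a table whose only route is the regular gateway, before the main table's tunnel route to I can apply:

```shell
# On E: the main table routes the inside network I via the tunnel as before,
# e.g. something like:
#   ip route add 10.10.0.0/16 dev wg0
# Replies to connections that arrived on E's public interface are sourced from
# E's public address (203.0.113.10 here), so send anything with that source
# address out the normal gateway (192.0.2.1) instead of down the tunnel.
ip route add default via 192.0.2.1 dev eth0 table 100
ip rule add from 203.0.113.10/32 lookup 100 priority 100
```

Traffic that E itself originates to I is sourced from its tunnel address, so the rule doesn't match and the main table's tunnel route still applies.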

(You can also solve this by having E only route through the tunnel for machines on I that never reach out to it, setting host routes instead of network routes, although this is potentially fragile.)

Another solution is to teach all machines on your internal network I that the external machine E is actually reached through a special route to the tunnel gateway T. If T is not on I itself and is reached through the same router as the default route, you might be able to do this by a change to your gateway router alone. The obvious drawback to this is that now the tunnel gateway T becomes an additional point of failure for reaching E (well, for machines on the internal network I).

(In a sufficiently complex environment, this can be automated through routing announcements; T and your collection of other tunnel gateways all announce what IP addresses are reachable through them, and various things listen and respond appropriately. When a tunnel is down, the routes are withdrawn and things go back to the defaults.)

Your life is generally easier if the external machine E is not directly reachable from the Internet and instead you have to go through a different IP address to reach it (such as a gateway); often this means you have no conflicting routes, since you can't reach E's (private) IP address except through the tunnel. Of course you may have a similar problem if you need to manage the gateway machine itself, since that machine definitely has a public IP address.

IPTunnelsAndRouting written at 22:45:27; Add Comment

2022-09-11

The amount of memory in basic 1U servers and our shifting views of it

One of the things happening here is that we're in the process of rolling our Ubuntu 18.04 servers over onto our current generations of server hardware as we rebuild them as Ubuntu 22.04 based machines. This (and other local events) has caused us to take a look at which older servers we want to keep and which ones we want to get rid of, or at least exile to the depths of the back shelves. Surprisingly, one factor is their CPU performance, but another is how much RAM they have, and this has set me thinking about the (slowly) shifting scales of how much memory basic 1U servers come with and how much we consider 'adequate'.

These days, going through Dell's configurator for R350s and R250s (the current generation of what we have), it seems hard to get something with less than 16 GB of RAM and impossible to get something with less than 8 GB. What we consider to be our current servers all came with 8 GB of RAM as our floor amount (I don't know if we could have gotten them with less, but I suspect not). However, our smallest older servers go as low as 2 GB of RAM, with some having 4 GB. According to DMI information, the smallest DIMMs we have in any Ubuntu server are 2 GB DIMMs, so based on that we couldn't have servers with less than 2 GB of RAM in general.

As a practical matter, I don't think we'd deploy any reused server with less than 4 GB of RAM, and we might make the effort to bring them up to 8 GB. We have very few machines with less than 8 GB now, and it's not just because of the hardware generation they're on. We've simply wound up in a situation where we default to thinking that 8 GB is the minimum amount of RAM a server should have (and we add more if it seems called for). Of course this isn't absolutely necessary; we probably have plenty of servers that don't really need 8 GB, and I've never had problems on my virtual machines with 4 GB.

I'm not energetic enough to trawl our records to see how much RAM various generations of servers were bought with, but there certainly was a day when 1 GB or 2 GB was what they came in the door with. Some very quick exploration suggests that we were getting basic servers with 1 GB of RAM as far back as fifteen years ago, and ten years ago we seem to have been on the cusp of only being able to get new servers with a minimum of 2 GB of RAM. We probably had a while when servers came in the door with 4 GB, but for the past few years 8 GB has been the minimum.

I suspect that this shift in server RAM sizes is driven by a similar effect to the shift in hard drive sizes (both SSDs and HDDs), where manufacturers mostly hold the price constant and keep increasing the DIMM size. I do sort of wish that memory DIMM sizes had risen at the same rate that SSD sizes did; instead, they seem to have stagnated for a while, and certainly didn't rise aggressively. (This is one reason that the current generation and the past generation of my desktops have had the same 32 GB of RAM, although my current generation is getting a bit old by now and prices might have shifted lately.)

(It certainly would be nice for 32 GB or 64 GB or even 128 GB to be a standard, inexpensive memory size. But it's not, at least for us, although it is now much more reasonable to have 32 GB machines and we have a number of them.)

ServerMemoryShiftingAmounts written at 23:21:32; Add Comment

2022-08-25

U.2, U.3, and other server NVMe drive connector types (in mid 2022)

The other day I casually looked around to see how readily available U.2 NVMe drives were compared to SATA SSDs. In the process I saw some mention of '2.5" U.3' NVMe drives, which was a connector type I'd never heard of, and did some digging.

The short summary of U.2 is that it's NVMe drives in more or less the 2.5" SSD form factor (although according to Wikipedia, U.2 can also deliver two SATA lanes), with a different edge connector. Our recent experience with some U.2 based servers says that this works; our U.2 NVMe drives in drive carriers look and handle basically the same as SATA SSDs in drive carriers in other servers. To tell them apart, you have to either look at the back of the drive where the connectors are or notice the big 'NVMe' sticker on the front of the drive carrier.

U.3 is sort of an evolution of the U.2 connector and form factor, but it's sufficiently unloved that Wikipedia barely mentions it. The goal of U.3 is to create a 'tri-mode' standard where the same server drive bay can support U.3 NVMe, 2.5" SAS, or 2.5" SATA drives (and likewise the same server backplane and controller; see here, here, and here). A U.3 NVMe drive is backward compatible with U.2 drive bays, but a U.2 NVMe drive can't be used in a U.3 drive bay, presumably for reasons.

For people like us, ordinary 1U servers with U.3 drive bays would be reasonably attractive. We'd mostly use them with SATA SSDs, but if we had a server that could benefit from NVMe it would be easy to switch over to it. If we had an NVMe drive failure and had no spare for some reason, we could swap in a SATA SSD to get the server back on the air. And we wouldn't need specific spares for NVMe servers the way we do with U.2, because any server could be an NVMe server if we needed it to be.

(The natural number of 2.5" drive bays for a 1U server seems to be four, and with NVMe drives that only needs PCIe x16, which is pretty widely available.)

However if you do Internet searches for U.3 you'll soon discover that there's a competing set of standards for NVMe disks on servers, the EDSFF series, and some people feel that U.2 and U.3 are doomed in the face of them. The EDSFF form factors are specific to NVMe SSDs; there's no concession for backward compatibility to the 2.5" form factor.

I have no idea how this is going to shake out. People are still announcing new U.3 NVMe drives today, but there's EDSFF activity too. My biased perspective is that right now we're more interested in U.3's flexibility to choose between SATA and NVMe for SSDs than what EDSFF might deliver. But if EDSFF causes NVMe prices to drop to the level of SATA SSDs, sure, we'd be happy to go NVMe. It's flash storage either way, so if everything else is equal we'd rather have the faster version.

ServerNVMeU2U3AndOthers2022 written at 22:40:31; Add Comment

2022-08-24

We now have some 1U servers with U.2 NVMe SSDs and they're okay

Back in early 2021 I wrote about my impressions of NVMe versus SATA (or SAS) SSDs for basic servers. At that point I didn't expect us to get NVMe based servers any time soon, especially for servers not focused on fast storage. Well, times change, and we now have a number of 1U servers with U.2 NVMe drives. These aren't really "basic" servers in our usual sense; instead they tend to be pretty powerful compute servers. But they're still 1U servers and in theory there's nothing to stop people from having lower end ones with NVMe SSDs. Our experiences with these servers have been positive, in that everything works as we expect and basically how things would be if these were SATA SSDs instead.

(Obviously the U.2 NVMe drives are a lot faster and have lower latency, but these servers mostly don't put any real stress on their storage.)

We didn't get these servers with NVMe disks instead of SATA (or SAS) disks because we had some attraction to NVMe; if anything, we prefer SATA SSDs to U.2 NVMe SSDs because it's much easier to get spares and replacements (SATA SSDs are commodity items; U.2 NVMe SSDs are more expensive and harder to find). Instead, we got these servers with U.2 NVMe drives because that's the configuration they really wanted to come in. All of these servers have four hot swap drive bays (taking their own proprietary drive carriers), although we normally only use two (for a mirrored pair of system disks), and we opted to get them with four U.2 NVMe drives each in order to build up a pool of spares.

Physically and in operation these are just like conventional SATA or SAS drive carriers (from this particular system vendor) and more or less just like conventional 2.5" SATA and SAS drives (they may be thicker, but I don't pay close attention to that). In fact they're so physically similar that I'm glad the vendor puts a big 'NVMe' label on the front, because otherwise we could easily get confused about which drive carrier is U.2 NVMe and which drive carrier is SATA SSD.

One particular area in which they are just like SATA drives in drive carriers is that we've hot-swapped inactive U.2 NVMe drives without problems. Linux certainly didn't explode. This gives me hope that we'll be able to deal gracefully with a system drive that fails and has to be replaced. Hopefully, a failed NVMe drive won't have adverse consequences for the PCIe fabric it's connected to.

(Our hot-swapping of inactive drives came about because we left all four drives inserted in some servers, although we were only using two, and then later wanted to pull the two inactive drives out.)

I don't know why this particular vendor decided to make these systems be basically native U.2, although they're not really storage servers (being 1U systems with only four drive bays). All of the systems that are this way are dual-socket AMD Zen3 Epyc based ones, so maybe it's partly because they have so many PCIe lanes available.

ServerWithU2NVMeIn2022 written at 22:51:48; Add Comment

2022-08-16

The names of disk drive SMART attributes are kind of made up (sadly)

A well known part of SMART is its system of attributes, which provide assorted information about the state of the disk drive. When we talk about SMART attributes we usually use names such as "Hardware ECC Recovered", as I did in my entry on how SMART attributes can go backward. In an ideal world, the names and meanings of SMART attributes would be standardized. In a less than ideal world, at least each disk drive would tell you the name of each attribute, similar to how x86 CPUs tell you their name. Sadly we don't live in either such world, so in practice those nice SMART attribute names are what you could call made up.

The only actual identification of SMART attributes provided by disk drives (or obtained from them) is an ID number. Deciding what that ID should be called is left up to programs reading SMART data (as is how to interpret the raw value). Because of this flexibility in the standard, disk drive makers have different views on both the proper, official names of their SMART attributes as well as how to interpret them. Some low-numbered SMART attributes have almost standard names and interpretations, but even that is somewhat variable; SMART ID 9 is commonly used for 'power on hours', but both the units and the name can vary from maker to maker.

Disk drive makers may or may not share information on SMART ID names and interpretations with people; usually they don't, except perhaps with some favoured disk drive diagnostic programs. Often, information about the meaning and names of SMART attributes must be reverse engineered from various sources, especially in the open source world. Open source programs such as smartmontools often come with an extensive database of per-model attribute names and meanings; in smartmontools' case, you probably want to update its database every so often.

As a corollary of this, names for SMART attributes aren't necessarily unique; the same name may be used for different SMART IDs across different drives. Across our collection of disk drives, "Total LBAs Written" may be any of SMART ID 233 (some but not all Intel SSDs), 241 (most brands and models of our SSDs and even some HDDs), or 246 (Crucial/Micron). Meanwhile, SMART IDs 241 and 233 have five different names across our fleet, according to smartmontools.

(SMART ID 233 is especially fun; the names are "media wearout indicator", "nand gb written tlc", "sandforce internal", "total lbas written", and "total nand writes gib". The proper interpretation of values of SMART ID 233 thus varies tremendously.)

Fortunately, NVMe is more sensible about its drive health information. The NVMe equivalent of (some) SMART attributes are standardized, with fixed meanings and no particularly obvious method for expansion.
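As a concrete contrast, the NVMe specification defines the fields of its SMART / Health Information log page the same way for every drive; for instance, "Data Units Written" is always a count of 1000 512-byte units. A minimal sketch of converting that standardized field to bytes:

```python
# The NVMe SMART / Health Information log reports Data Units Read/Written
# as counts of 1000 * 512 bytes, with the same meaning on every drive.
def nvme_data_units_to_bytes(data_units: int) -> int:
    return data_units * 1000 * 512

# e.g. a drive reporting 2,000,000 data units written (made-up figure):
print(nvme_data_units_to_bytes(2_000_000))  # 1024000000000 bytes, ~1.02 TB
```

No per-model database is needed to interpret this, which is exactly what's missing on the ATA SMART side.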

PS: Interested parties can peruse the smartmontools drivedb.h to find all sorts of other cases.

SMARTAttributeNamesMadeUp written at 22:13:49

2022-08-15

Disk drive SMART attributes can go backward and otherwise be volatile

Recently, we had a machine stall hard enough that I had to power cycle it in order to recover it. Since the stall seemed to be related to potential disk problems, I took a look at SMART data from before the problem seemed to have started and after the machine was back (this information is captured in our metrics system). To my surprise, I discovered that several SMART attributes had gone backward, such as the total number of blocks read and written (generally SMART IDs 241 and 242) and 'Hardware ECC Recovered' (here, SMART ID 195). I already knew that the SMART 'power on hours' value was unreliable, but I hadn't really thought that other attributes could be unreliable this way.

This has led me to look at SMART attribute values over time across our fleet, and there certainly do seem to be any number of attributes that see 'resets' of some sort despite being what I'd think was stable. Various total IO volume attributes and error attributes seem most affected, and it seems that the 'power on hours' attribute can be affected by power loss as well as other things.

Once I started thinking about the details of how drives need to handle SMART attributes, this stopped feeling so surprising. SMART attributes are changing all the time, but drives can't be constantly persisting the changed attributes to stable storage, whether that's some form of NVRAM or the HDD itself (for traditional HDDs with no write endurance issues). Naturally drives will tend to hold the current SMART attributes in RAM and only persist them periodically. On an abrupt power loss they may well not persist this data, or at least only save the SMART attributes after all other outstanding IO has been done (which is the order you want, since the SMART attributes are the least important thing to save). It also looks like some disks may sometimes not persist all SMART attributes even during normal system shutdowns.
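One practical consequence is that anything consuming SMART attributes from a metrics system should treat "monotonic" counters the way monitoring systems treat any counter: expect occasional resets. A minimal sketch of spotting them in a series of readings (the values here are made up):

```python
def detect_resets(samples):
    """Return the indexes where a supposedly monotonic SMART counter
    went backward, e.g. across an abrupt power cycle."""
    return [i for i in range(1, len(samples)) if samples[i] < samples[i - 1]]

# Hypothetical total-LBAs-written readings around a power cycle:
readings = [100, 150, 200, 180, 220]
print(detect_resets(readings))  # [3]
```

This is the same problem Prometheus-style systems solve for counters in general; the safe interpretation of a backward step is "discontinuity", not "negative IO".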

This probably doesn't matter very much in practice, especially since SMART attributes are so variable in general that it's hard to use them for much unless you have a very homogeneous set of disk drives. There's already no standard way to report the total amount of data read and written to drives, for example; across our modest set of different drive models we have drives that report in GiB, MiB, or LBAs (probably 512 bytes).
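If you do want fleet-wide totals despite this, you end up normalizing per model. A minimal sketch, assuming you've determined (or reverse engineered) the unit each model reports in:

```python
# Different drive models report total data written in different units
# (GiB, MiB, or 512-byte LBAs). Normalize to bytes, keyed by whatever
# per-model unit you've established; the unit labels here are our own.
UNIT_BYTES = {"GiB": 2**30, "MiB": 2**20, "LBA": 512}

def total_written_bytes(raw_value: int, unit: str) -> int:
    return raw_value * UNIT_BYTES[unit]

print(total_written_bytes(4, "GiB"))     # 4294967296
print(total_written_bytes(8192, "LBA"))  # 4194304
```

The catch, of course, is that the per-model unit table is exactly the kind of reverse-engineered knowledge that smartmontools' drive database exists to accumulate.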

(Someday I may write an entry on fun inconsistencies in SMART attribute names and probably meaning that we see across our disks.)

PS: I don't know how NVMe drives behave here; NVMe drives don't have conventional SMART attributes, and we're not otherwise collecting the data from our few NVMe drives that might someday tell us for sure. For now I'd assume that the equivalent information from NVMe drives is equally volatile and may also go backward under various circumstances.

SMARTAttributesVolatile written at 21:51:50

2022-07-10

My distrust of multi-factor authentication's account recovery story

A bunch of third party websites really want you to use multi-factor authentication these days. Some of them aren't giving some people a choice about it; for example, PyPI recently mandated MFA for sufficiently popular projects. I have decidedly mixed feelings about this in general, and I've realized that one reason for them is that I don't trust some of the potential failure modes of multi-factor authentication. Specifically, the ones related to 'account recovery', also known as what happens when things go wrong with your MFA-related devices.

In some situations there's no real account recovery problem with MFA. For example, if the MFA hardware token from my employer was lost or destroyed, I'd report it and various processes would happen and a new one would show up and get registered to me. If the MFA I used with my bank was lost, I'd go to my bank branch to talk to them, and eventually things would get reset. But both of these situations have some things in common: I can actually talk to real people in both, and both have out of band means of identifying me (and communicating with me).

Famously, neither of these is the case with many large third party websites, which often have functionally no customer support and generally no out of band ways of identifying you (at least not ones they trust). If you (I) suffer total loss of all of your means of doing MFA, you are probably completely out of luck. One consequence of this is that you really need to have multiple forms of MFA set up before you make MFA mandatory on your account (better sites will insist on this). People advise things like multiple hardware tokens, with some of them carefully stored offsite in trusted locations. This significantly (or vastly) raises the complexity of using MFA with these sites.

More broadly, this is a balance of risks issue. I care quite a bit about the availability of my accounts, and I feel that it's much more likely that I will suffer from MFA issues than it is that I will be targeted and successfully phished for my regular account credentials (or that someone can use 'account recovery' to take over the account). If loss of MFA is fatal, my overall risks go way up if I use MFA, although the risk of account compromise goes way down.

(As a side note, this is likely not PyPI's situation. PyPI is apparently giving people security keys, and is clearly in touch with these people through additional channels. If PyPI considers you and your package critical, it's very likely that you can recover from an MFA loss. PyPI here is much more like my employer than it is like, say, Google. But most random websites that ask me to enable MFA are much more like Google than PyPI.)

(This isn't my only issue with 'you must have MFA' requirements, but it's a starting point.)

MFAAccountRecoveryDistrust written at 23:31:11
