Wandering Thoughts

2020-07-04

How you get multiple TLS certificate chains from a server certificate

I've known and read for some time that a single server certificate can have more than one chain to a root certificate that you trust, but I never really thought about the details of how this worked. Then the AddTrust thing happened, I started writing about how Prometheus's TLS checks would have reacted to it, and Guus left a comment on that entry that got me thinking about what else Prometheus could sensibly look at here. So now I want to walk through the mechanics of multiple TLS chains to get this straight in my head.

Your server certificate and the other TLS certificates in a chain are each signed by an issuer; in a verified chain, this chain of issuers eventually reaches a Certificate Authority root certificate that people have some inherent trust in. However, a signed certificate doesn't specifically name and identify the issuer's certificate by, say, its serial number or hash; instead, issuers are identified by their X.509 Subject Name and at least implicitly by their keypair (sometimes explicitly). By extension, your signed certificate also identifies the key type of the issuer's certificate; if your server certificate is signed with an RSA key, an intermediate certificate with an ECDSA keypair is clearly not the correct parent certificate.

(Your server certificate implicitly identifies the issuer by keypair because they signed your certificate with it; an intermediate certificate with a different keypair can never validate the signature on your certificate.)
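
To make this concrete, here is a minimal sketch in Go (using the standard crypto/x509 package) of the test a verifier can apply to decide whether a candidate certificate is a plausible issuer for a given certificate. The function name is mine, and both certificates are assumed to be already parsed:

    import (
        "bytes"
        "crypto/x509"
    )

    // couldBeIssuer reports whether candidate looks like an issuer for cert:
    // candidate's Subject must match cert's Issuer, and candidate's keypair
    // must actually validate cert's signature. Note that more than one
    // certificate can pass this test for the same cert.
    func couldBeIssuer(cert, candidate *x509.Certificate) bool {
        if !bytes.Equal(cert.RawIssuer, candidate.RawSubject) {
            return false
        }
        // CheckSignatureFrom verifies cert's signature with candidate's
        // public key (and checks some basic CA attributes of candidate).
        return cert.CheckSignatureFrom(candidate) == nil
    }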

However, several certificates can have the same keypair and X.509 Subject Name, provided that other attributes differ. One such attribute is the issuer that signed them (including whether this is a self-signed CA root certificate). So the first thing is that having more than one certificate for an issuer is generally required to get multiple chains. If you only have one certificate for each issuer, you can pretty much only build a single chain.

There are three places that these additional certificates for an issuer can come from: they can be sent by the server, they can be built into your certificate store in advance, or they can be cached because you saw them in some other context. The last is especially common with browsers, which often cache intermediate certificates that they see and may use them in preference to the intermediate certificate that a TLS server sends. Other software is generally more static about what it will use. My guess is that we're unlikely to see multiple certificates for a single CA root issuer, at least for modern CAs and the modern root certificate sets used by browsers and so on. This implies that the most likely place to get additional issuer certificates is from intermediate certificates sent by a server.

(In any case, it's fairly difficult to know what root certificate sets clients are using when they talk to your server. If your server sends the CA root certificate you think should be used as part of the certificate chain, a monitoring client (such as Prometheus's checks) can at most detect when it's got an additional certificate for that CA root issuer in its own local trust store.)

One cause of additional issuer certificates is what's called cross-signing a CA's intermediate certificate, as is currently the case with Let's Encrypt's certificates. In cross-signing, a CA generates two versions of its intermediate certificate, using the same X.509 Subject Name and keypair; one is signed by its own CA root certificate and one is signed by another CA's root certificate. A CA can also cross-sign its own new root certificate (well, its keypair and Subject Name) directly, as is the case with the DST Root CA X3 certificate that Let's Encrypt is currently cross-signed with; one certificate for 'DST Root CA X3' is self-signed and likely in your root certificate set, but two others existed that were cross-signed by an older DST CA root certificate.

(As covered in the certificate chain illustrations in Fixing the Breakage from the AddTrust External CA Root Expiration, this was also the case with the expiring AddTrust root CA certificate. The 'USERTrust RSA Certification Authority' issuer was also cross-signed to 'AddTrust External CA Root', a CA root certificate that expired along with that cross-signed intermediate certificate. And this USERTrust root issuer is still cross-signed to another valid root certificate, 'AAA Certificate Services'.)

This gives us some cases for additional issuer certificates:

  • your server's provided chain includes multiple intermediate certificates for the same issuer, for example both Let's Encrypt intermediate certificates. A client can build one certificate chain through each.

  • your server provides an additional cross-signed CA certificate, such as the USERTrust certificate signed by AddTrust. A client can build one certificate chain that stops at the issuer certificate that's in its root CA set, or it can build another chain that's longer, using your extra cross-signed intermediate certificate.

  • the user's browser knows about additional intermediate certificates and will build additional chains using them, even though your server doesn't provide them in its set of certificates. This definitely happens, but browsers are also good about handling multiple chains.
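
As an illustration of clients building every chain they can, Go's standard library directly exposes this multiplicity: a certificate's Verify method returns one slice of certificates per chain it could construct. A minimal sketch, where the DNS name is a made-up example:

    import "crypto/x509"

    // buildChains verifies serverCert using the intermediates the server
    // sent, returning every certificate chain that can be built. Roots is
    // left nil, so the system root certificate set is used.
    func buildChains(serverCert *x509.Certificate, sent []*x509.Certificate) ([][]*x509.Certificate, error) {
        pool := x509.NewCertPool()
        for _, c := range sent {
            pool.AddCert(c)
        }
        // Verify returns one []*x509.Certificate per chain it could build;
        // multiple issuer certificates can thus yield multiple chains.
        return serverCert.Verify(x509.VerifyOptions{
            DNSName:       "www.example.org", // hypothetical server name
            Intermediates: pool,
        })
    }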

In a good world, every intermediate certificate will have an expiration time no later than that of the best certificate for the issuer that signed it. This was the case with the AddTrust expiration; the cross-signed USERTrust certificate expired at the same time as the AddTrust root certificate. In this case you can detect the problem by noticing that a server-provided intermediate certificate is expiring soon. If only a CA root certificate at the end of an older chain is expiring soon and the intermediate certificate signed by it has a later expiration date, you need to check the expiration time of the entire chain.

As a practical matter, monitoring the expiry time of all certificates provided by a TLS server seems very likely to be enough to detect multiple chain problems such as the AddTrust issue. Competent Certificate Authorities shouldn't issue server or intermediate certificates with expiry times later than their root (or intermediate) certificates, so we don't need to try to find and explicitly check those root certificates. This will also alert on expiring certificates that were provided but that can't be used to construct any chain, but you probably want to get rid of those anyway.
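
Here is a minimal Go sketch of this sort of check, in the spirit of what a monitoring probe does; the threshold is up to you, and InsecureSkipVerify is deliberate because we want to see every certificate the server sends, even ones that wouldn't verify:

    import (
        "crypto/tls"
        "log"
        "time"
    )

    // warnExpiringCerts connects to addr (eg "www.example.org:443") and
    // complains about any certificate in the server's provided set that
    // expires within the threshold. This connection is for inspection
    // only, not for real traffic, hence InsecureSkipVerify.
    func warnExpiringCerts(addr string, threshold time.Duration) error {
        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            return err
        }
        defer conn.Close()
        soon := time.Now().Add(threshold)
        for _, cert := range conn.ConnectionState().PeerCertificates {
            if cert.NotAfter.Before(soon) {
                log.Printf("%s: certificate %q expires at %s", addr, cert.Subject.CommonName, cert.NotAfter)
            }
        }
        return nil
    }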

Sidebar: Let's Encrypt certificate chains in practice

Because browsers do their own thing, a browser may construct multiple certificate chains for Let's Encrypt certificates today even if your server only provides the LE intermediate certificate that is signed by DST Root CA X3 (the current Let's Encrypt default for the intermediate certificate). For example, if you visit Let's Encrypt's test site for their own CA root, your browser will probably cache the LE intermediate certificate that chains to the LE CA root certificate, and then visiting other sites using Let's Encrypt may cause your browser to ignore their intermediate certificate and chain through the 'better' one it already has cached. This is what currently happens for me on Firefox.

TLSHowMultipleChains written at 16:03:00

What a TLS self signed certificate is at a mechanical level

People routinely talk about self-signed TLS certificates. You use them in situations where you just need TLS but don't want to set up an internal Certificate Authority and can't get an official TLS certificate, and many CA root certificates are self-signed. But until recently I hadn't thought about what a self-signed certificate is, mechanically. So here is my best answer.

To simplify a lot, a TLS certificate is a bundle of attributes wrapped around a public key. All TLS certificates are signed by someone; we call this the issuer. The issuer for a certificate is identified by their X.509 Subject Name, and also at least implicitly by the keypair used to sign the certificate (since only an issuer TLS certificate with the right public key can validate the signature).

So this gives us the answer for what a self-signed TLS certificate is. It's a certificate that lists its own Subject Name as the issuer and is signed with its own keypair (using some appropriate signature algorithm, such as SHA256-RSA for RSA keys). It still has all of the usual TLS certificate attributes, especially the 'not before' and 'not after' dates, and in many cases they'll be processed normally.

Self-signed certificates are not automatically CA certificates for a little private CA. Among other things, a self-signed certificate can explicitly set an 'I am not a CA' marker in itself (the basicConstraints CA flag). Whether software respects this if someone explicitly tells it to trust the self-signed certificate as a CA root certificate is another matter, but at least you tried.
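
Putting the last two paragraphs together, here is a hedged Go sketch of creating a self-signed certificate with the standard library; the Subject name is a made-up example. The trick is that the template is passed as its own parent:

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "time"
    )

    // makeSelfSigned creates a DER-encoded self-signed certificate.
    func makeSelfSigned() ([]byte, error) {
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1), // should really be unique
            Subject:      pkix.Name{CommonName: "internal.example.org"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            // The explicit 'I am not a CA' marker (basicConstraints).
            BasicConstraintsValid: true,
            IsCA:                  false,
        }
        // Passing tmpl as both the certificate and its parent is exactly
        // what makes this self-signed: the issuer becomes its own Subject
        // Name and the signature is made with its own keypair.
        return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
    }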

Self-signed certificates do have a serial number (which should be unique), and a unique cryptographic hash. Browsers that have been told to trust a self-signed certificate are probably using either these or a direct comparison of the entire certificate to determine if you're giving them the same self-signed certificate, instead of following the process used for identifying issuers (of checking the issuer Subject Name and so on). This likely means that if you re-issue a self-signed certificate using the same keypair and Subject Name, browsers may not automatically accept it in place of your first one.

(As far as other software goes, who knows. There are dragons all over those hills, and I suspect that there is at least some code that accepts a matching Subject Name and keypair as good enough.)
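
A sketch of the strictest version of that browser-style identity check, assuming both certificates are already parsed:

    import (
        "bytes"
        "crypto/x509"
    )

    // sameCert reports whether a and b are byte for byte the same
    // certificate; this is the strictest form of 'is this the self-signed
    // certificate I was told to trust'. Merely matching the Subject Name
    // and keypair would be a much weaker test.
    func sameCert(a, b *x509.Certificate) bool {
        return bytes.Equal(a.Raw, b.Raw)
    }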

TLSWhatIsSelfSignedCert written at 00:25:10

2020-06-12

The safety of GMail's POP server TLS certificate verification (or lack of it)

A while back I wrote an entry on how GMail hadn't been doing full TLS server certificate verification when fetching mail from remote POP servers. GMail may have verified that the POP server's TLS certificate was properly signed by a CA, but it didn't check the server name, which is the second part of server verification. This is not safe in general (even if you verify the IP address), but Google (and GMail) aren't everyone and they sit in a very special position in several ways.
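
To make the two halves of server verification concrete, here is a hedged Go sketch of 'verify the certificate chain but not the server name'. This is purely illustrative (GMail is certainly not implemented this way); it uses a standard Go pattern where normal verification is disabled and replaced with a chain-only check:

    import (
        "crypto/tls"
        "crypto/x509"
        "errors"
    )

    // chainOnlyConfig verifies the server's certificate chain against the
    // system roots but never compares the host name, roughly the sort of
    // partial verification described above.
    func chainOnlyConfig() *tls.Config {
        return &tls.Config{
            // Turn off the normal verification (chain plus host name)...
            InsecureSkipVerify: true,
            // ...and substitute a check of the chain alone.
            VerifyPeerCertificate: func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
                if len(rawCerts) == 0 {
                    return errors.New("no server certificate")
                }
                pool := x509.NewCertPool()
                var leaf *x509.Certificate
                for i, raw := range rawCerts {
                    c, err := x509.ParseCertificate(raw)
                    if err != nil {
                        return err
                    }
                    if i == 0 {
                        leaf = c
                    } else {
                        pool.AddCert(c)
                    }
                }
                // No DNSName in the options, so no host name is checked.
                _, err := leaf.Verify(x509.VerifyOptions{Intermediates: pool})
                return err
            },
        }
    }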

I don't know if GMail's lack of verification was truly safe, and it certainly skips part of the purpose of verifying the TLS server hostname, but Google skipping this check can be safer than it would be for almost anyone else. The basic reason why is that Google is in a position to be very confident that it's not talking to an impostor, if it wants to go to the effort. First, Google can check what it sees for DNS lookups, network routing, and TLS certificates from multiple vantage points around the Internet. This means that any tampering and MITM attacks must be global, not local, which generally means they must sit very close to the final network connection to the target.

(Of course, doing this sort of global check can run into issues with services that give you localized DNS results, with anycast routing, and so on. Nothing is perfect here.)

Second, Google can keep a history of all of this. If everything is consistent over time (and your previous connections worked and gave sensible results), you can be relatively confident that you're still connecting to the same thing. If you accepted the thing before, you can keep accepting it now. We weren't presenting the same TLS server key every time (as far as I know, Certbot generates a new keypair every time it renews your TLS certificate, which is about every 60 days), but we were presenting a valid TLS certificate for the same set of TLS names (that were valid DNS names for our IMAP and POP server).

None of this could make GMail's lack of full checking completely safe. But it at least could make it a lot safer than an isolated program or service trying to do the same thing. Google's in a position to have a lot of information that lets it 'authenticate' (in some sense) your server, which is part of the reason for verifying the server name in the first place.

(At the same time, I expect that GMail's behavior was ultimately for pragmatic reasons. It seems likely that they found that too many people had POP servers with TLS certificates that didn't include the right name. I can't throw stones about this, since we accidentally did this, as covered in my first entry.)

GMailPopTLSVerificationII written at 23:14:03

2020-06-10

A dual display setup creates a natural split between things

Sometimes you notice things only when you don't have them. At work I have a dual display setup on my work desktop (arranged horizontally), but I only have a single display at home (mostly for space reasons). One of the differences I've noticed in how I use my screen space is that dual displays provide a natural division and split between things, because of the literal physical split between the two displays.

(I've been noticing this lately because I'm working from home, so for once I'm spending a lot of time doing the same sort of things at home that I normally do at work.)

This split tends to manifest in two ways, which I can call active and passive. The active type split is how I often wind up dynamically using windows as I work on something. On a dual display system, it's natural to open up a full 'screen' (really display) view of a Grafana dashboard on one display while using the second display to look into the specifics of what the dashboard is showing me, through terminal windows and other things. Similarly it feels natural to park documentation on one screen while actively working on the other, or use one display to monitor logs while I'm making a change on the other one. The passive type split is how I organize iconified or idle windows; rather than sprawl across both displays, they tend to wind up entirely on one or the other.

In theory I could split my display at home in the same way (it'd take some window manager support to make it convenient, but I use a very flexible window manager). In practice such a split would feel artificial. I'd be drawing an arbitrary line down my screen somewhere, with no particularly good reason for it except that I wanted it. The split in a dual display setup is anything but arbitrary, because there's a clear discontinuity and visual gap (one created by the bezels of the two displays). You can't have something straddle the gap and look normal.

I suspect that I'd still feel this way even if I had a single display at home that was the size of my dual displays at work. I would probably start splitting up the layout so that some things consistently went to the left, some to the right, and some in the center, and I definitely would have a 'maximize to one half (or one third) horizontally' option in my window manager (because a true full screen window would be far too big). But I suspect that things would wind up passively sprawled out all over, instead of grouped into areas. It would just be too tempting to expand things into some of that empty space with no obvious division between it and the occupied space.

DualDisplaysNaturalSplit written at 23:44:51

2020-06-09

The practical people problem with instance diversity in the Fediverse

Recently I was reading Kev Quirk's Centralisation and Mastodon (via), which notes how central Mastodon.social and a few other big instances are to the overall Fediverse, making it hardly a decentralized network in practice. The article concludes with a call to action:

If you’re thinking about joining Mastodon, don’t just join the first instance you come across. Take a look at the sign up section of the Mastodon homepage. There is a list [of] alternative instances that you can join, all arranged by topic.

I think that more genuine decentralization in the Fediverse isn't a bad thing, but I also think that there are practical considerations pushing against it. To put it one way, if you're joining the Fediverse your choice of instance is a risky decision that you're mostly not interested in and are generally not well equipped to make.

Your choice of instance is risky in that if you pick badly, you'll wind up having to go through various sorts of annoyance and pain. Picking what is clearly a big and popular instance has intuitive appeal as a way of reducing those risks; a popular instance is probably not a bad choice. And actively choosing an instance is usually not what you're interested in doing in the first place. Most people are interested in joining the Fediverse as a whole, and one of the points of it being a decentralized network is that it isn't supposed to matter where you join. So you might as well take a low risk choice.

Finally, if you're trying to actively pick a good instance, most people have the twin problems that they don't know what they care about (or should care about) in instances, and even if they do have things they care about, they don't know enough to evaluate instances on them. Oh, you can read an instance's policies and poke around a bit, but that may not give you clear and honest answers, and on top of that a lot of things in the Fediverse are only clear to people who are immersed in the Fediverse already. To put it one way, there are a lot of problems with instances (and problem instances) that aren't obvious and clear to outsiders.

All of this should be unsurprising, because it's all a version of the problem of forcing users to make choices in security. People mostly don't care, and even if they do care they mostly don't know enough to make good choices. This is especially the case if they're new to the Fediverse.

FediverseDiversityChallenge written at 23:45:28

2020-05-28

The surprising persistence of RSA keys in SSH

Generally speaking, SSH the protocol and OpenSSH the leading implementation of it have (or have had) four different types of SSH keys: RSA, DSA (which has now been deprecated), ECDSA, and Ed25519, listed here in the order of when they were added to SSH (with Ed25519 being the most recent). Since RSA is the oldest, you might reasonably expect it to be the least used of the three that are still actively supported. Instead it remains extremely common to see RSA keys in use, both old ones and even newly generated ones. There are a number of reasons for this, but a good part of it boils down to the fact that RSA is the universal default key type that every SSH implementation supports, and there are more SSH implementations out there than you might expect.

The small reason is that people often don't change what works, especially in how they authenticate to things (just look at all of the old passwords out there). Existing RSA keypairs work, so many people feel little need to change them. However, this only applies in environments where people are using their old keys (which can include keys used on personal machines and for personal things, like your personal Github account).

Next, there is still a certain amount of SSH helper software out there that doesn't support the full range of SSH key types. Some of it doesn't support Ed25519 keys (which are generally everyone's preferred choice of key types), and some of it only supports RSA keys, not even ECDSA keys. If this sounds impossible and absurd, well, it took until 2017 for Gnome Keyring to support ECDSA keys (cf bug #641082).

Beyond this, not all of the world is OpenSSH (and things that talk to it); there are a variety of additional SSH libraries and full implementations, both in (and for) C and in other languages. These implementations all support RSA, because RSA is effectively the universal key type for SSH (everyone supports it on both the client and server side), but their support for other key types is often spotty. If you implement only one key type, it's going to be RSA, and you may not implement more than that. Then once you're in an ecology where some things (either clients or servers) only deal with RSA keys, you start defaulting to RSA keys for safety.

This is how I've wound up with recent RSA keys myself. My iOS to Unix file copy environment could only generate RSA keys on my iOS devices (for whatever reasons), and my Yubikey 4 doesn't support Ed25519 keys (and if I have to pick between RSA and ECDSA, I prefer RSA).

The good news is that support for Ed25519 keys is increasingly common. It will probably never be completely universal, but I'm hoping that an increasing number of programs will feel that they can default to generating Ed25519 keys and then offer people an 'if this doesn't work on your ancient SSH device, pick RSA' option.
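
As a sketch of what that 'Ed25519 first, RSA as the fallback' key generation could look like in Go with the golang.org/x/crypto/ssh package; the wantRSA flag and function name are hypothetical, and a real version would also return the private key:

    import (
        "crypto/ed25519"
        "crypto/rand"
        "crypto/rsa"

        "golang.org/x/crypto/ssh"
    )

    // newKey generates an Ed25519 key by default, with RSA as the escape
    // hatch for ancient peers; wantRSA stands in for an 'this doesn't
    // work on my device, pick RSA' choice. The returned public key can be
    // turned into an authorized_keys line with ssh.MarshalAuthorizedKey.
    func newKey(wantRSA bool) (ssh.PublicKey, error) {
        if wantRSA {
            priv, err := rsa.GenerateKey(rand.Reader, 3072)
            if err != nil {
                return nil, err
            }
            return ssh.NewPublicKey(&priv.PublicKey)
        }
        pub, _, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            return nil, err
        }
        return ssh.NewPublicKey(pub)
    }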

PS: Because I looked this up, ECDSA keys were added in OpenSSH 5.7, which had its upstream release in January of 2011 (via). That's less than a decade old so it's probably not surprising that they didn't supplant RSA keys, especially since Ed25519 key support came along only a few years later.

SSHRSAKeysPersistence written at 23:55:25

2020-05-27

What I think OpenSSH 8.2+'s work toward deprecating 'ssh-rsa' means

Today I discovered what was to me a confusing and alarming note in the OpenSSH 8.3 release notes (via), which has actually been there since OpenSSH 8.2. Here is the text (or the start of it):

Future deprecation notice

It is now possible to perform chosen-prefix attacks against the SHA-1 algorithm for less than USD$50K. For this reason, we will be disabling the "ssh-rsa" public key signature algorithm by default in a near-future release.

[...]

For a sysadmin this is somewhat hair-raising on initial reading. We have a lot of users with ssh-rsa keys in their authorized_keys files, and it would be very disruptive if they someday suddenly had to update those files, either to have their current keys accepted or to change over to new ones. However, there seemed to be a lot of confusion among people discussing this about what it affected (with some people saying that it only affected host keys and that personal keypairs should be fine, for example). So I did my best to dig into this, and the summary is that I don't think this requires most people to change host key or personal key configurations. I have two reasons for coming to believe this.

On a practical level, the announcement specifically says that one of your alternatives is to continue to use 'the same key type' (ie RSA keys) but with the RFC 8332 RSA SHA-2 signature algorithms. Then if we look in the very latest OpenSSH sshd manpage, its section on the authorized_keys file format doesn't have a special key type for 'RSA with RFC 8332 RSA SHA-2 signature algorithms'; the only RSA key type is our old friend 'ssh-rsa'. Nor does ssh-keygen have an option for key generation other than 'generic RSA'. Since there's no particular way to specify or generate RSA keys in a key format other than what we already have, existing ssh-rsa authorized_keys lines pretty much have to keep working.

On a general level, there are two things involved in verifying public keys: the pair of keys themselves, and then a signature algorithm that's used to produce a proof that you have access to the private key. While each type of keypair needs at least one signature algorithm in order to be useful, it's possible to have multiple algorithms for a single type of key. What OpenSSH is moving toward deprecating is (as they say) the signature algorithm for RSA keys that uses SHA-1; they continue to support two further SHA-2 based signature algorithms for RSA keys. Where the confusion comes in is that OpenSSH uses the label 'ssh-rsa' both as the name of the RSA keytype in authorized_keys files and similar places, and as the name of the original signature algorithm that uses RSA keys. In the beginning this was fine (there was only a single signature algorithm for RSA keys), but now it's confusing if you don't read carefully and notice the difference.
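
To lay the names out explicitly (a sketch, with the algorithm names as they appear in the protocol):

    // In the SSH protocol there is one RSA key type but three signature
    // algorithms for RSA keys. The oldest signature algorithm shares its
    // name with the key type itself, which is the source of the confusion.
    const rsaKeyType = "ssh-rsa"

    var rsaSignatureAlgorithms = []string{
        "ssh-rsa",      // the original SHA-1 based algorithm; this is what's going away
        "rsa-sha2-256", // the RFC 8332 SHA-2 algorithms; these remain supported
        "rsa-sha2-512",
    }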

For OpenSSH host keys specifically, a server with RSA host keys is fine if one of two things is true: it supports the additional signature algorithms for RSA keys, or it has additional host key types. A server is only in trouble (in the future) if it has only an RSA host key and only supports the SHA-1 based 'ssh-rsa' signature algorithm. For personal RSA keypairs, the basic logic is the same; you're fine if the server (and your client) support the additional signature algorithms for RSA keys, or if you have non-RSA keys of a keytype that both ends support (sadly, not everyone supports Ed25519 keys).

OpenSSHAndSHA1Deprecation written at 19:35:32

2020-05-24

Security questions and warnings are effectively confirmation requests

Every so often, well intentioned people throw up security questions and warning messages and so on in an attempt to help people, as in the recent case of the new warning on many extensions on addons.mozilla.org. These don't work in practice, as I've written about before (for example, that asking users questions never increases security). However there is an important reason for this beyond things like users not knowing enough to make the right choice, which I want to mention explicitly and clearly for once.

To put it simply:

Security questions and warnings are a form of requesting confirmation, and people almost always say yes to that in general.

When Firefox throws up a 'this addon requests these permissions, do you agree' dialog when you install an addon, what it's really asking in practice is 'do you want to install this addon?' Of course most people are going to say yes. Installing the addon is what they set out to do, so yes, of course they want to do it, can you please stop asking all the time.

The one time requesting confirmation can work is when the person actually did something different from what they intended to. They wanted to delete file A, but now you're warning them that they're also deleting files B, C, and D. If they're deleting file A and you only ask them 'are you sure you want to delete file A', they're going to be annoyed with your interruption (which is why systems have mostly moved away from this sort of interface).

(Also, if you ask people these questions all the time, question fatigue sets in and people develop the reflex of saying yes without reading the questions.)

But most security questions and warnings are not telling you that you've done something different than you wanted to do. Instead they're of the 'do you really want to delete file A, are you sure' form, and so people automatically say yes, just as they automatically say yes to all of the other confirmation popups and so on that they deal with. Do you want to install this addon that asks for these permissions? Yes, that's why I clicked on the '+ Add to Firefox' button.

PS: The application of this to rewording various browser TLS warnings is left as an exercise to the reader, although such rewording would probably be somewhat controversial because it might wind up having to say things that aren't always true, like 'you have connected to something other than website <X> because the TLS certificate says this is <Y> and <Z>'.

SecurityQuestionsAsConfirmation written at 00:05:35

2020-05-17

Syndication feeds (RSS) and social media can be complementary

Every so often I read an earnest plea to increase the use of 'RSS', by which the authors mean syndication feeds of all formats (RSS, Atom, and even JSON Feed). Sometimes, as in this appeal (via), it's accompanied by a plea to move away from getting things to read through social media (like Twitter) and aggregators (like lobste.rs). I'm a long term user and fan of syndication feeds, but while I'm all in favour of more use of them, I feel that abandoning social media and aggregators is swinging the pendulum a bit too far. In practice, I find that social media and aggregators are a complement to my feed reading.

(From now on I'm just going to talk about 'social media' and lump aggregators in with them, so I don't have to type as much.)

The first thing I get through social media is discovering new feeds that I want to subscribe to. There's no real good substitute for this, especially for things that are outside my usual areas of general reading (where I might discover new blogs through cross links from existing ones I read or Internet searches). For instance, this excellent blog looking at the history of battle in popular culture was a serendipitous discovery through a link shared on social media.

The second and more important thing I get through social media is surfacing the occasional piece of interesting-to-me content from places that I don't and wouldn't read regularly. If I'm only interested in one out of ten or fifty or a hundred articles in a feed, I'm never going to add it to my feed reader; it simply has too much 'noise' (from my perspective) to even skim regularly. Instead, I get to rely on some combination of people I follow on normal social media and the views of people expressed through aggregator sites to surface interesting reading. I read quite a lot of articles this way, many more than I would if I stuck only to what I had delivered through feeds I was willing to follow.

(Aggregator sites don't have to involve multiple people; see Ted Unangst's Inks.)

So, for me, subscribing to syndication feeds is for things that have a high enough hit rate that I want to read their content regularly, while social media is a way to find some of the hits in a sea of things that I would not read regularly. These roles are complementary. I don't want to rely on social media to tell me about things I'm always going to want to read, and I don't want to pick through a large flood of feed entries to find occasional interesting bits. I suspect that I'm not alone in this pattern.

A corollary of this is that social media is likely good for people with syndication feeds even in a (hypothetical) world with lots of syndication feed usage. Your articles appearing on Twitter and on lobste.rs both draws in new regular readers and shares especially interesting content with people who would at best only read you occasionally.

SyndicationFeedsAndSocialMedia written at 21:42:56

2020-04-24

Accepting TLS certificate hostnames based on IP address checks is not safe

The people running GMail are neither stupid nor ignorant, so presumably they had good reason for GMail not verifying TLS server hostnames until recently. When I was thinking about this yesterday, it occurred to me that one approach to safely accept TLS certificates even with mis-matched hostnames might be to look up the IP address of the host the certificate is for and accept it if the IP address is the same as what you're connecting to. Unfortunately, I then realized that this is not safe, at least in general; in fact accepting TLS certificate hostnames based on matching IP addresses is very dangerous and wrong.
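
For concreteness, here is a Go sketch of the tempting but flawed check; the function name is mine, and you should not use this, for the reasons that follow:

    import (
        "crypto/x509"
        "net"
    )

    // unsafeAcceptByIP implements the flawed idea: accept a certificate
    // whose names don't match if one of those names resolves to the IP
    // address we're actually connected to. Don't use this; a captive
    // portal that answers every DNS lookup with its own address will
    // pass this check for any host.
    func unsafeAcceptByIP(cert *x509.Certificate, connectedIP string) bool {
        for _, name := range cert.DNSNames {
            addrs, err := net.LookupHost(name)
            if err != nil {
                continue
            }
            for _, a := range addrs {
                if a == connectedIP {
                    return true // looks plausible, but proves nothing
                }
            }
        }
        return false
    }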

The easiest way to see why this is very bad is to consider a captive portal with a DNS server that maps all hostname lookups to the same IP address, on which it has a web server listening. If this web server has a valid TLS certificate for its official public name, using 'is this the same IP address as the certificate hostname' will accept the server's TLS certificate for every HTTPS connection, for all hosts. After all, they all map to the same IP address. Facebook? Twitter? GMail? You name it, the captive portal's TLS certificate is valid for all of them under that test.

But suppose that you could be very confident that you have the genuine IP address for both hostnames (perhaps due to DNSSEC) and they're the same. This is no longer the general situation and the TLS certificate is now strong evidence that you're talking to the server (well, the IP address) that handles the host you want. However, my view is that you still shouldn't accept the TLS certificate, because verifying that you have the right server is only half of why you verify the hostname; the other reason is to verify that the server thinks it's supposed to handle this hostname. Since the TLS certificate lacks the right hostname, you don't have good assurance of that and shouldn't normally proceed.

Since people do sometimes get this wrong (as we did), it would be nice to have an additional way to verify a TLS certificate. But if there is to be one, the IP address of the hostnames involved is not it.

(I'm writing this down partly so that perhaps I can remember the logic here the next time I have this clever idea.)

TLSVerifyByIPNotSafe written at 00:35:28

