I want my signed email to work a lot like SSH does

September 15, 2014

PGP and similar technologies have been in the news lately, and as a result of this I added the Enigmail extension to my testing Thunderbird instance. Dealing with PGP through Enigmail reminded me of why I'm not fond of PGP. I'm aware that people have all sorts of good reasons and that PGP itself has decent reasons for working the way it does, but for me the real sticking point is not the interface but fundamentally how PGP wants me to work. Today I want to talk just about signed email, or rather about how I want to deal with signed email.

To put it simply, I want people's keys for signed email to mostly work like SSH host keys. For most people the core of using SSH is not about specifically extending trust to specific, carefully validated host keys but instead about noticing if things change. In practical use you accept a host's SSH key the first time you're offered it and then SSH will scream loudly and violently if it ever changes. This is weaker than full verification but is far easier to use, and it complicates the job of an active attacker (especially one that wants to get away with it undetected). Similarly, in casual use of signed email I'm not going to bother carefully verifying keys; I'm instead going to trust that the key I fetched the first time for the Ubuntu or Red Hat or whatever security team is in fact their key. If I suddenly start getting alerts about a key mismatch, then I'm going to worry and start digging. A similar thing applies to personal correspondents; for the most part I'm going to passively acquire their keys from keyservers or other methods and, well, that's it.
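
To make this concrete, here's a rough sketch in Python of the kind of check I have in mind; the pin file location and the idea of identifying a sender's key by a fingerprint string are just illustrative assumptions, not how any particular MUA actually works.

    # Sketch of "trust the first key you see, complain loudly if it changes".
    # The pin store location and fingerprint format are made up for illustration.
    import json
    import os

    PIN_STORE = os.path.expanduser("~/.mail-key-pins.json")

    def load_pins():
        if os.path.exists(PIN_STORE):
            with open(PIN_STORE) as f:
                return json.load(f)
        return {}

    def save_pins(pins):
        with open(PIN_STORE, "w") as f:
            json.dump(pins, f, indent=2)

    def check_sender_key(sender, fingerprint):
        """Return 'new', 'ok', or 'MISMATCH' for this sender's key."""
        pins = load_pins()
        known = pins.get(sender)
        if known is None:
            pins[sender] = fingerprint    # trust on first sight
            save_pins(pins)
            return "new"
        if known == fingerprint:
            return "ok"
        return "MISMATCH"                 # time to scream, SSH style

The only interesting case is the mismatch; everything else should be silent, just as SSH is silent for hosts you already know.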

(I'd also like this to extend to things like DKIM signatures on email, because frankly it would be really great if my email client noticed that an email is not DKIM-signed when all previous email from that address had been.)
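
The DKIM half of this is even simpler to sketch. Assuming the mail client just remembers whether past mail from an address carried a DKIM-Signature header, something like the following (deliberately naive, and only checking for the header's presence rather than verifying the signature) would be enough to raise the flag I want:

    # Naive sketch: flag the first unsigned message from a sender whose previous
    # mail was DKIM-signed. Only the header's presence is checked, not its validity.
    import email
    from email.utils import parseaddr

    seen_signed = {}   # sender address -> True once we've seen signed mail

    def note_message(raw_bytes):
        msg = email.message_from_bytes(raw_bytes)
        sender = parseaddr(msg.get("From", ""))[1]
        has_dkim = msg.get("DKIM-Signature") is not None
        if seen_signed.get(sender) and not has_dkim:
            print("warning: earlier mail from %s was DKIM-signed, this isn't" % sender)
        if has_dkim:
            seen_signed[sender] = True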

On the other hand, I don't know how much sense it makes to even think about general MUA interfaces for casual, opportunistic signed email. There is a part of me that thinks signed email is a sexy and easy application (which is why people keep doing it) that actually doesn't have much point most of the time. Humans are terrible at checking authentication, which is why we mostly delegate that job to computers, yet casual signed email in MUAs is almost entirely human-checked. Quick, are you going to notice that the email announcement of a new update from your vendor's security team is not signed? Are you going to even care if the update system itself insists on signed updates downloaded from secure mirrors?

(My answers are probably not and no, respectively.)

For all that it's nice to think about the problem (and to grumble about the annoyances of PGP), a part of me thinks that opportunistic signed email is not so much the wrong problem as an uninteresting problem that protects almost nothing that will ever be attacked.

(This also ties into the problem of false positives in security. The reality is that for casual message signatures, almost all missing or failed signatures are likely to have entirely innocent explanations. Or at least I think that this is the likely explanation today; perhaps mail gets attacked more often than I think on today's Internet.)


Comments on this page:

By Ewen McNeill at 2014-09-15 02:14:34:

It seems to me we need a good name for the "assume the first one you see is good, warn on changes" mode of operation; it's useful, in practice, for a surprisingly large number of problems: for instance it'd be much more useful if TLS certificates were validated that way. (I actually started out assuming there was a name already for this behaviour, which I think of as "baby duck" behaviour -- ie, the first thing it sees "must be my mother". But I can't find any useful references to that name, or any other, so maybe that's just me.)

I do regularly hand-check a few things which I consider high value (eg, bank account TLS certificates before each login), but mostly the "default to first seen is valid" seems good enough for typical use. Essentially I only want to check in detail when I'm placing non-trivial trust in something; and I'd like to know some automated checking was happening in situations where I'm placing a bit of trust in something -- for which "same as last seen" seems reasonable.

FWIW, you're not alone in thinking PGP's model isn't ideal. PGP's model was a useful experiment, and is still useful for some things (eg, Debian use the web of trust fairly well), but mostly seems overkill for email. I find in practice I don't use PGP at all except for certain "sending sensitive information via email" scenarios -- and even then I'd usually try to find another way, as I've seen too many UI failures result in something being sent in the clear when it was supposed to be encrypted (eg, in one client someone used, if you re-edited after encrypting, you had to manually re-encrypt or it'd silently send in the clear...).

Ewen

By Mark Harrison at 2014-09-15 16:31:21:

Ewen - there is a term for what you describe: TOFU - Trust on First Use (or TUFU - Trust Upon First Use). Supposedly it was coined here: https://www.youtube.com/watch?v=DIPrkVys72I.

iOS 8 seems to do this with S/MIME: the first time a cert comes in it appears in red, you click on "Trust", and then you can "Install" it so that it's saved for future use.

Of course if it's signed by a known CA, then it's probably already trusted by default.

If the iOS device is connected to Exchange then a key lookup will occur in the GAL.
