The problem of forcing users to make choices (in security)
Sometimes, faced with an apparently intractable problem in security, people say 'I know, we'll leave it up to the users to decide what to do'. What they really mean is 'we don't know what the right thing to do is, so we're going to make the users figure it out'. As the saying goes, now they have two problems.
Let's look at some big reasons that this is a terrible idea, both in general and specifically for security systems.
First, the users don't actually care about the issue, or indeed about anything besides getting their actual task done, so they are just going to make whatever choice makes things work and gets the question out of their way. Since this choice is at best random (and at worst whatever is easiest), it is unlikely to be the right choice, whatever that is.
(It doesn't help that security is a pain, and that people usually resent being forced to make choices and answer questions that they have no idea about. I suspect that this gets worse if you try to scare them about how these choices are important for security, because then you make them nervous when they don't know the right answer or even how to choose.)
Second, even if a user does care, they are unlikely to be able to make the right choice, because only experts are competent to make complex choices. Unless the user is already an expert, they don't have the knowledge to figure out the answer; even with the best intentions in the world, the odds of them picking the right choice are probably not much better than chance. The less knowledge a user has, the worse their choices will be, however good their intentions. Even a relatively knowledgeable user can be stymied by a complex situation, and a user who almost entirely lacks the background can get lost even in what the designers think of as 'simple' situations.
(Designers, being very close to the system, often underestimate all aspects of this: how complex the question is, how much knowledge users will have, and how much they will care.)
This is bad enough for ordinary software. For security systems, it can be catastrophic: if you are asking questions where the answers affect the system's security, pushing these choices off onto the users basically guarantees that your security system is pointless, since there's not likely to be much security left.
It is my personal belief that the questions critical for security are exactly the ones most likely to be pushed off to the users, because they are the ones for which picking a default answer (and letting users change it if they need to) does not work. For example, the choice between two equally strong cipher algorithms is mostly arbitrary, so one can pick a default; the choice of who to trust is not arbitrary and often has no sensible default (at least not in the world of mathematical security, which generally demands strong correctness if you are going to decide things for people).
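One way to see the difference between the two kinds of questions is in how a library API might expose them. Here is a minimal hypothetical sketch in Python (the class, field names, and cipher string are all invented for illustration): the cipher choice gets a safe default the library picks for you, while the trust choice has no default and must be stated explicitly by the caller.

```python
from dataclasses import dataclass, field

@dataclass
class ConnectionConfig:
    # The two candidate ciphers are assumed equally strong, so the
    # choice between them is arbitrary and the library can just pick one.
    cipher: str = "aes-256-gcm"
    # Who to trust has no sensible default: it depends entirely on the
    # caller's situation, so we default to trusting no one.
    trusted_cas: list[str] = field(default_factory=list)

    def validate(self) -> None:
        # Refusing to guess about trust is the only safe behavior here.
        if not self.trusted_cas:
            raise ValueError("trusted_cas must be set explicitly; "
                             "there is no safe default for who to trust")

cfg = ConnectionConfig(trusted_cas=["MyInternalRootCA"])
cfg.validate()  # passes: the trust decision was stated explicitly
```

The design choice being illustrated is that a question with an arbitrary answer becomes a default the user never sees, while a question with no defensible default becomes a hard requirement rather than a prompt the user can click through.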
(Disclaimer: these are not novel ideas; I just want to write them down somewhere that I can find for later use. See eg Informed choices and real security, which I just found by Googling for likely discussion terms. Note that the first reason implies that a lot of people are simply not interested in becoming informed about the questions you're asking, even if you make it possible.)