Understanding the modern view of security

May 31, 2016

David Magda asked a good and interesting question in a comment on my entry on the browser security dilemma:

I'm not sure why they can't have an about:config item called something like "DoNotBlameFirefox" (akin to Sendmail's idea).

There is a direct answer to this question (and I sort of wrote it in my comment), but the larger answer is that there has been a broad change in the consensus view of (computer) security. Browsers are a microcosm of this shift and also make a great illustration of it.

In the beginning, the view of security was that your job was to create a system that could be operated securely (often but not always it was secure by default) and give it to people. Where the system ran into problems or operating issues, it would tell people and give them options for what to do next. In the beginning, the diagnostics when something went wrong were terrible (which is a serious problem), but after a while people worked on making them better, clearer, and more understandable by normal people. If people chose to override the security precautions or operate the systems in insecure ways, well, that was their decision and their problem; you trusted people to know what they were doing and your hands were clean if they didn't. Let us call this model the 'Security 1' model.

(PGP is another poster child for the Security 1 model. It's certainly possible to use PGP securely, but it's also famously easy to screw it up in dozens of ways such that you're either insecure or you leak way more information than you intend to.)

The Security 1 model is completely consistent and logical and sound, and it can create solid security. However, like the 'Safety-I' model of safety, it has a serious problem: it not infrequently doesn't actually yield security in real world operation when it is challenged with real security failures. Even when provided with systems that are secure by default, people will often opt to operate them in insecure ways for reasons that make perfect sense to the people on the spot but which are catastrophic for security. Browser TLS security warnings have been ground zero for illustrating this; browser developers have experimentally determined that there is basically no level of strong warnings that will dissuade enough people from going forward to connect to what they think is, e.g., Facebook. There are all sorts of reasons for this, including the vast prevalence of false positives in security alerts and the barrage of warning messages that we've trained people to click through because, in the end, they're just in the way.

The security failures of the resulting total system of 'human plus computer system' are in one sense not the fault of the designers of the computer system, any more than it is your fault if you provide people with a saw and careful instructions to use it only on wood and they occasionally saw their own limbs off despite your instructions, warnings, stubbornly attached limb guards, and so on. At the same time, the security failures are an entirely predictable failure of the total system. This has resulted in a major shift in thinking about security, which I will call 'Security 2'.

In Security 2 thinking, it is not good enough to have a secure system if people will wind up operating it insecurely. What matters and the goal that designers must focus on is making the total system operate securely, even in adverse conditions; another way to put this is that the security goal has become protecting people in the real world. As a result, a Security 2 focused designer shouldn't allow security overrides to exist if they know those overrides will wind up being (mis)used in a way that defeats the overall security of the system. It doesn't matter if the misuse is user error on the part of the people using the security system; the result is still an insecure total system and people getting owned and compromised, and the designer has failed.
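To make the contrast concrete, here is a minimal sketch using Python's standard ssl module (my illustration, not anything from the entry): the Security 1 style library ships an override that turns off certificate verification, and that override is exactly the kind of thing that gets misused; the Security 2 style design simply never offers it.

    import ssl
    import urllib.request

    # Security 1 style: the library exposes an override, and that override
    # routinely ends up copy-pasted as a "fix" for certificate errors,
    # silently disabling TLS verification for good.  (These are real
    # ssl-module calls; the scenario is illustrative.)
    insecure = ssl.create_default_context()
    insecure.check_hostname = False
    insecure.verify_mode = ssl.CERT_NONE      # the override that defeats the system

    # Security 2 style: no override is offered; verification always happens
    # and a failure is simply an error the caller cannot click through.
    def fetch(url: str) -> bytes:
        ctx = ssl.create_default_context()    # secure defaults, nothing to loosen
        with urllib.request.urlopen(url, context=ctx) as resp:
            return resp.read()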

Security 2 systems are designed not necessarily so much to be easy to use as to be hard or impossible to screw up in such a way that you get owned (although often this means making them easy to use too). For example, automatic, always-on end to end encryption of messages in an instant messaging system is a Security 2 feature; optional end to end encryption that must be selected or turned on by hand is a Security 1 feature.
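A hypothetical sketch of that difference as an API (the client classes and method names here are mine, purely for illustration):

    # Hypothetical messaging clients; the class and method names are mine,
    # purely to illustrate the design difference.

    class _Transport:
        def _encrypt(self, recipient: str, text: str) -> bytes:
            return text.encode()   # stand-in for a real end-to-end encryption step

        def _deliver(self, recipient: str, payload: bytes) -> None:
            pass                   # stand-in for actually sending the message


    class Security1Client(_Transport):
        # Encryption exists, but every caller has to remember to ask for it,
        # so the easy (and common) path is the insecure one.
        def send(self, recipient: str, text: str, encrypt: bool = False) -> None:
            payload = self._encrypt(recipient, text) if encrypt else text.encode()
            self._deliver(recipient, payload)


    class Security2Client(_Transport):
        # There is no plaintext path and no flag to turn encryption off;
        # the only way to send a message is the secure way.
        def send(self, recipient: str, text: str) -> None:
            self._deliver(recipient, self._encrypt(recipient, text))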

Part of the browser shift to a Security 2 mindset has been to increasingly disallow any and all ways to override core security precautions, including being willing to listen to websites over users when it comes to TLS failures. This is pretty much what I'd expect from a modern Security 2 design, given what we know about actual user behavior.
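One concrete mechanism behind 'listening to websites over users' is HTTP Strict Transport Security (RFC 6797): a site declares itself HTTPS-only, and browsers that have seen the declaration hard-fail later certificate errors for that site with no click-through offered. Here is a minimal sketch of a site opting in (my example; the header and its meaning are real, the toy WSGI server is just scaffolding, and in practice the header is only honored when it arrives over a valid HTTPS connection):

    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        start_response("200 OK", [
            ("Content-Type", "text/plain"),
            # Once a browser has accepted this header (over valid HTTPS),
            # it refuses plain HTTP to the site and gives users no override
            # for later certificate errors on it.
            ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
        ])
        return [b"hello\n"]

    if __name__ == "__main__":
        make_server("", 8080, app).serve_forever()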

(The Security 2 mindset raises serious issues when it intersects with user control over their own devices and software, because it more or less inherently involves removing some of that control. For example, I cannot tell modern versions of Firefox to do my bidding over some TLS failures without rebuilding them from source with increasing amounts of hackery applied.)


Comments on this page:

it more or less inherently involves removing some of that control

This is even starker when it comes to extensions. Chrome has steadily ratcheted up the difficulty of using Greasemonkey-style scripts. First they were installable by clicking a link on the web, then they had to be downloaded and drag-and-dropped into the extensions manager, then they had to be packaged as extensions, then packed extensions were made to require signing by the Chrome Web Store, then a warning about unpacked extensions was added that you have to click away every time the browser launches.

Of course the problem is no matter what override is provided, malware authors will find a way to toggle it in order to inject their wares into users’ browsers. This is an even worse situation than with crypto failures: users have to be protected not just from themselves, but from superior adversaries.

But personally I know how to avoid the problem, I know how to audit my extensions, I know how to kill off extensions I didn’t install. I have never had a true malware extension injected on me and I have removed extensions installed into Firefox by third-party software on Windows. I don’t need the protection.

And I do want control of how my browser works.

But it’s likely I will soon be forced to leave mainstream browsers behind in order to get that. Which has many undesirable implications.

There seems to be no way to protect normal users while also catering to the likes of me (when it comes to scripting) or you (when it comes to crypto). :-/

Security 2 is all fine and well, but I don't like my software dictating my policy to me (per your last paragraph).

Unless the Mozilla folks are willing to cut me a cheque to replace all the old embedded (e.g. HVAC) code so that I can get all the New and Shiny Things that internally generate self-signed SHA-2 certs et al, they should provide an override.

Or I'll just use the older software, which has other vulnerabilities, to accomplish the things I need to.

I'm the guy that babysits the load balancers and a lot of the TLS stuff at work, so I know all about ratcheting things down over the last few years, but we've tried to allow for maximum compatibility where possible on our sites (getting As and A+s from SSL Labs while still allowing IE8 on XP to work).
