Part of (computer) security is convincing people that it works

July 19, 2024

One of the ways that security is people, not math, is that because security is ultimately about people, part of the work of computer security is convincing people that your security measures actually work. I don't mean this in the narrow technical sense of specific features working as designed; I mean it in the broader sense that they achieve the general goals of security, which is really about safety. People want to know that their data and what they do on the computer are safe, in the full sense of the confidentiality, integrity, and availability triad.

Often, convincing people that your security works requires making it legible to them, in the "Seeing Like a State" sense. One way to describe this situation is that, partly due to the sorry history of computer security and of people not doing it effectively, many people and organizations now assume that computer security measures don't work or aren't effective until proven otherwise. If you can't convince them that your security measure works, making it legible to them in the process, they assume it doesn't. Historically, they were often right.

One complication is that the people you're trying to make your security measures convincing and legible to are almost always people who don't have specialist knowledge in computer security. Often they have little to no knowledge of the field at all (just as you don't have expert-level knowledge in their fields). This means that you generally can't convince them by explaining the technical details, because they don't have the knowledge and experience they'd need to evaluate those details. There's no straightforward solution to this, but handling it will often require some degree of building their trust in your skill and honesty, coupled with some degree of using things that independent parties (trusted by the people you're trying to convince) have already called secure. This is part of what it means to have legibility in your security measures; you're making something that other people can understand and assess, even if it's not what you'd make for yourself.

Some system administrators and other computer people can wind up feeling upset about this, because from their perspective the organization is preferring inferior outside solutions (that have the social approval of the crowd) to superior home-grown work. However, all of us who are inclined to see things from this angle really should turn around and look at it from the organization's perspective. For the organization, it's not a choice between inferior but generally approved security and home-grown 'real security'; it's a choice between known (although maybe flawed) security and an unknown state where they may be more secure, as secure, less secure, or completely exposed. It's perfectly sensible for the organization to choose a known state over a risky unknown one.

(It's taken me a long time to come around to this perspective over the course of my career, because of course in the beginning I was solidly in the 'this is obviously better security, because of ...' camp. Even today I'm in the camp of 'real security'; it's just that I've come to appreciate that part of my job is convincing the organization that what we're offering is not a 'risky unknown' state.)
