Security often involves not doing things
Here is something obvious that I feel like stating outright today:
Increasing security often involves not doing things, even when they're attractive things.
Security is in part about saying no. It is not just 'no, that introduces an obvious security exposure', which is in many ways the easy case; it is also various forms of 'no, that adds too much risk'. This means that security is partly about not doing things, and making security decisions is partly about reluctantly deciding not to do things that you genuinely want to do. If saying no never hurts, you are perhaps not as secure as you would like to think.
(If you didn't care about doing something and weren't going to do it in the first place, the security issues it creates wouldn't come up at all.)
Sometimes you can have your cake and eat it too; the very best security wins are of this nature (as I've mentioned before). But not always. You cannot always find a good way to securely do what you want, and at that point you face an unavoidable choice between security and what you'd like to do. Some of the time (although not always) the answer is that you don't get whatever it is. It's a neat feature or a neat system or whatever, but nope.
(Perhaps you can build a lesser version with enough security that you can do it. Or maybe the whole thing is a bad idea once you take the security implications into account, and it's just too hard or too complex or too much work to do it sufficiently securely, if that's even possible.)
(Perhaps saying 'often' here is overstating the case. I don't know; my environment is not entirely typical.)