The problem with security alerts, and indeed all alerts
We know that in practice, users have been conditioned to treat security alerts and other caution dialogs as obstacles; they will blindly click whatever option makes the dialog get out of the way of what they're doing, no matter what the dialog says. It has recently struck me that there is an obvious reason why, and it's even something I've written about before: it's the problem of false positives.
Unless you have really serious problems, your system is almost always doing what you want and you are almost never experiencing genuine security issues; in other words, true problems are very, very rare. Thus, any detectable false positive rate implies that almost all alerts are false positives. In an environment where even nine in ten alerts are false positives, it shouldn't be any surprise that users just click on things to make them go away. And nine in ten is being optimistic; I expect that the real figure is more like 9,999 in 10,000.
(I'm counting all 'caution! you might be doing something bad!' alerts in this figure, not just security alerts, partly because they all look and behave mostly the same to users.)
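The base-rate arithmetic behind this can be sketched in a few lines. The numbers below (one genuine problem per 100,000 events, a 1% false positive rate) are illustrative assumptions, not measurements:

```python
def false_positive_fraction(base_rate, fp_rate, tp_rate=1.0):
    """Fraction of fired alerts that are false positives.

    base_rate: fraction of events that are genuine problems
    fp_rate:   chance a benign event still triggers an alert
    tp_rate:   chance a genuine problem triggers an alert
    """
    true_alerts = base_rate * tp_rate
    false_alerts = (1 - base_rate) * fp_rate
    return false_alerts / (true_alerts + false_alerts)

# Assumed numbers: genuine problems are one in 100,000 events, and
# benign events trigger an alert 1% of the time. Even with these
# modest figures, well over 99.9% of all alerts are false positives.
print(false_positive_fraction(base_rate=1e-5, fp_rate=0.01))
```

Note that the dominant term is simply how rare true problems are; even a perfect detector of genuine issues can't help when the benign events vastly outnumber them.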
Unfortunately, fixing this is likely to be remarkably hard. Driving down the false positive rate runs into the problem that further reducing an already small number is usually highly non-trivial, especially if you need to make it several orders of magnitude lower. That leaves reducing the number of 'events' in general; since you can't exactly tell users to do less with their computers, this probably requires fundamental rethinks that make entire classes of problems simply impossible (so that they never have to be checked for and thus cannot generate false positives).
(Undoable operations are one way of making entire classes of alerts disappear. Unfortunately it is hard to have undoable security sensitive operations; once your password is sent somewhere, well, it's sent.)