2013-05-10
Illustrating the tradeoff of security versus usability
One of the sessions I went to today at the university's yearly technical conference was on two-factor authentication using USB crypto tokens (augmented by software on the client). In the talk, it came up that token-aware software can notice when the USB token is removed and do things like de-authenticate you or break a VPN connection. It struck me that this creates a perfect illustration of the tradeoff between security and usability, which I will frame through a question:
When the screen locker activates, should a token-aware application break its authenticated connection to whatever it's talking to and deauthenticate the user, forcing them to reauthenticate by re-entering their token PIN when they come back to the machine? This is clearly the most secure option; otherwise there's no proof that the person who unlocked the screen and is now using the computer is the person who owns the USB token and passed the two-factor authentication earlier.
Some people are enthusiastically saying 'yes' right now. Now, imagine that you're using this two-factor system to authenticate your SSH connections to your servers. Does your opinion change? In fact, does your opinion change about how the system should behave if the token is removed?
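(As a concrete aside, the client-side 'notice the token is gone and react' piece is easy to sketch. What follows is a minimal, hypothetical illustration in Python for Linux, using pyudev plus ssh-add and xscreensaver-command as stand-ins for 'drop credentials' and 'lock the screen'; it is not how any particular vendor's token-aware software actually works, and a real version would match the token's specific USB vendor and product IDs instead of reacting to every removal.)

    # Hypothetical sketch: when any USB device is unplugged, forget
    # agent-held SSH keys and lock the display. Assumes Linux with
    # pyudev, ssh-agent, and xscreensaver available.
    import subprocess
    import pyudev

    def on_token_removed():
        subprocess.call(["ssh-add", "-D"])                  # forget agent-held keys
        subprocess.call(["xscreensaver-command", "-lock"])  # lock the screen

    context = pyudev.Context()
    monitor = pyudev.Monitor.from_netlink(context)
    monitor.filter_by(subsystem="usb")
    monitor.start()

    for device in iter(monitor.poll, None):
        if device.action == "remove":
            on_token_removed()

All of the interesting policy questions live in on_token_removed(): exactly how much you tear down there (and whether the server side even notices) is the security versus usability tradeoff.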
The usability issue is pretty simple: tearing down VPNs and breaking SSH sessions and logging you out of applications is secure but disruptive. In some situations it would be actively dangerous, because you'd be interrupting something halfway through an operation (although in this sort of environment all sysadmins would rapidly start using screen or tmux everywhere in self defense). You probably don't want this disruption every time you step away from your machine to go to the office coffee pot, the washroom, or whatever. At the same time you don't want to leave your machine exposed with its screen unlocked.
(In fact the most secure thing to do would be to both lock your screen and take the USB crypto token with you. This is also likely to be maximally disruptive.)
It's worth noting that the more you use your USB token, the more disruptive this is. This is especially punishing to the power users who run authenticated applications all the time and who often or always have multiple ones active at once, possibly with complex state (such as sysadmins with SSH sessions). Unfortunately these may be exactly the people you want to be most secure.
It's tempting to say that the way to improve this situation is to improve the usability by suspending secured sessions instead of breaking them and deauthenticating the user; then users merely have to re-enter their PIN (hopefully only once) instead of re-opening all their secured applications and re-establishing their VPN and SSH connections and so on. In theory you can make this work. In practice, doing this securely requires that the server side of everything supports the equivalent of screen, letting you disconnect and later reconnect.
(If the suspension is done only by client software, bad guys can use various physical attacks to compromise an exposed machine, bypass the client suspension, and directly use the established VPN, SSH session, or whatever. You need the server software to force the client to re-authenticate.)
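(To make that concrete, here's a toy sketch in Python of what 'server-side support' means: the server parks the session state, like a detached screen session, and will not reattach it until the client passes a fresh challenge. Everything here is invented for illustration, and the HMAC over a nonce is just a stand-in for a signature produced by the PIN-unlocked token.)

    # Hypothetical sketch: parked sessions survive a suspend, but resuming
    # requires proving possession of the token via challenge-response.
    import hmac
    import hashlib
    import secrets

    class SessionBroker:
        def __init__(self, token_key):
            self._token_key = token_key   # stand-in for the token's enrolled key
            self._parked = {}             # session id -> opaque session state

        def suspend(self, session_id, state):
            # Screen locked or token pulled: keep the state, drop the trust.
            self._parked[session_id] = state

        def challenge(self):
            # A fresh nonce for each resume attempt.
            return secrets.token_bytes(32)

        def resume(self, session_id, nonce, response):
            # Reattach only if the response proves the client still has the token.
            state = self._parked.get(session_id)
            expected = hmac.new(self._token_key, nonce, hashlib.sha256).digest()
            if state is None or not hmac.compare_digest(expected, response):
                raise PermissionError("re-authentication failed; session stays parked")
            del self._parked[session_id]
            return state

    # Usage: suspend on screen lock, resume only after re-authentication.
    key = secrets.token_bytes(32)
    broker = SessionBroker(key)
    broker.suspend("ssh-42", {"cwd": "/var/log", "tty": "pts/3"})
    nonce = broker.challenge()
    response = hmac.new(key, nonce, hashlib.sha256).digest()   # normally done by the token
    print(broker.resume("ssh-42", nonce, response))

The point of doing this on the server is exactly the one above: client-only suspension can be bypassed with physical access to the machine, while a parked server-side session is useless without the token.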
PS: I suspect that you can predict the result of having the screen locker's activation break sessions and deauthenticate people. For that matter, you can likely predict the result of having this happen when the USB token is removed (and it involves a surprising number of unattended USB tokens, especially in areas that people feel are physically secure, like lockable single-person offices).
2013-05-05
The original vision of RISC was that it would be pervasive
In the middle of an excellent comment gently correcting my ignorance, a commentator on yesterday's entry wrote:
[RISCs requiring people to recompile programs] has some truth to it, but I would disagree that it was a big bet of RISC. Rather, it was a function of the market niche that classic RISC was confined to: Customers that paid tens or hundreds of thousands of dollars for a high-performance RISC machine would be willing to recompile their code to eke out the best possible performance.
I have to disagree with this because I don't think it matches up with the actual history of what I've been calling performance RISC. To put it one way, RISC was not originally intended to be just for high performance computing; in the beginning and for a fairly long time, RISC was intended to be pervasive. This is part of why in 1992 Andrew Tanenbaum could seriously say (and have people agree with him) that x86 would die out and RISC would become pervasive (cf). He did not mean 'pervasive in HPC'; he meant pervasive in general, across at least the broad range of machines used to run Unix.
The early vision of (performance) RISC was that RISC would supplant the current CISC architectures just as the current 16 and 32-bit CISC architectures had supplanted earlier 8-bit ones. The RISC pioneers may not have been thinking about 'PC' class machines (although Acorn gave it a serious try) but they were certainly thinking about and targeting garden variety Unix workstations and servers. And even in 1990, everyone knew and understood that most Unix servers were not HPC machines and they spent their time doing much more prosaic things. To really be successful and meaningful, RISC needed to be good for those machines at least as much as it needed to be good for the uncommon HPC server.
(Everyone also understood that these machines cost a lot less than full-out, everything-for-speed HPC servers. DEC, MIPS, Sun, and so on sold plenty of lower-end servers and workstations, so they were well aware of this. I would guess that by volume, far more RISC machines were lower-end machines than were high-end ones through at least 1995 or so.)
RISC certainly did wind up fenced into the high performance computing market niche in the relatively long run, but that was because it failed outside that niche (run over by the march of the cheap x86 machines everywhere else). The HPC niche was not the original intention and had it been, RISC would have been much less exciting and interesting for everyone.
(And in this general market it was empirically not the case that most people were running code that was compiled specifically for their CPU's generation of optimizations, scheduling, and so on.)
2013-05-04
What I see as RISC's big bets
At the time, performance-oriented RISC was presented as the obviously correct next step for systems to take. Even today I can casually say that x86 won against RISC mostly because Intel spent more money and have people nod along with it. But I'm not sure that this is really the case, because I think you can make an argument that the whole idea behind (performance) RISC rested on some big bets.
As I see them, the main big bets were:
- CPU speeds would continue to be the constraint on system performance. This one is obvious; the only part of the system that a fast RISC improved was the CPU itself and that only mattered if the CPU was the limiting factor.
- Compilers could statically extract a lot of parallelism and scheduling opportunities, because this is what lets classic RISC designs omit the complex circuitry for out-of-order dynamic instruction scheduling.
(Itanium is an extreme example of this assumption, if you consider it a RISC.)
- People would recompile their programs frequently for new generations of CPU chips; this follows from static scheduling, because newer CPUs have different scheduling opportunities from older ones. If you can't make this assumption, new CPUs either run old code unimpressively or need to do dynamic instruction scheduling for old code. Running old code unimpressively (if people care about old code) does not sell new CPUs.
The third bet proved false in practice for all sorts of pragmatic reasons. My impression is that the second bet also wound up being false, with dynamic out-of-order instruction scheduling able to extract significantly more parallelism than static compiler analysis could. My memory is that together these two factors pushed later-generation RISC CPU designs to include more and more complex instruction scheduling, diluting their advantages over (theoretically) more complex CISC designs.
(I'm honestly not sure how the first bet turned out for fast RISC over its era (up to the early 2000s, say). CPUs weren't anywhere near fast enough in those days but my impression is that as the CPU speed ramped up, memory bandwidth and latency issues increasingly became a constraint as well. This limited the payoff from pure CPU improvements.)
I don't see the RISC emphasis on 64-bit support as being a bet so much as an attempt to create a competitive advantage. (I may be underestimating how much work it took AMD to add 64-bit support to the x86 architecture.)
Update: I'm wrong about some of this. See the first comment for a discussion, especially about the out-of-order issue.