2006-03-31
The perfection trap: a lesson drawn from Worse is Better
I've mentioned Richard Gabriel's famous The Rise of "Worse is Better" before (back here), but only recently did one of its important lessons coalesce in my thoughts.
"Worse is Better" contrasts what Gabriel calls the MIT approach, whose cornerstone is doing 'the right thing', with the New Jersey approach and its 'worse is better' minimalism (this is a simplified summary). Gabriel argued that despite its flaws, the New Jersey approach had significantly better survival characteristics than the MIT approach, for reasons he described and that I'm not going to try to repeat here.
The MIT versus New Jersey divide can be portrayed as a choice between the right thing and a so-so thing that's maybe good enough. When you put it that way, a lot of people will naturally tilt towards the MIT approach; who doesn't want to do the right thing? But this framing is wrong.
The perfect is the enemy of the good. - Voltaire
In reality it's not actually a choice between right and worse; it's really a choice between nothing, worse, and right. And over and over, aiming for right has been an excellent way to wind up with nothing (for reasons that Richard Gabriel outlines nicely).
The easiest place to see this is computer security, where insistence on perfection (or some excellent approximation) is one of the holy tenets. As a result we have a few very secure systems and a lot of almost completely unsecured ones.
The difficulty of punishing people at universities
One of the quiet little secrets of university computing is just how difficult it is to actually punish people for doing bad stuff with computing resources. Really bad stuff, things that are criminal or have serious civil liabilities, can be punished. But mere violations of policies or bad network behavior (including spamming) can run into a series of problems.
Tenured professors might as well be the left hand of God, of course, especially if they bring in grant money. But even students (grad and undergrad both) are heavily protected, because many universities have strict policies on imposing 'academic sanctions'; these almost always cover not just direct loss of marks but also taking away anything that is necessary to pass the course. This makes removal of computer access an academic sanction in many cases, subject to all of those requirements and elaborate procedures.
Staff are theoretically the least protected, except that removing someone's computing access often makes them unable to do their job, which is not popular (to say the least) with their management chain. This can result in the only real options being either a slap on the wrist or a firing, and firings are often a hard sell (and often require their own large set of procedures, time, and repeated incidents).
(To be fair, the staff issue is probably the same for companies.)
This isn't to say that stern computing policies and AUPs aren't useful; if nothing else they can be used to scare people. But for some time I've wondered what we'd be able to do if, for example, someone started spamming for a religion and showed no inclination to stop.
(The more likely scenario is probably an undergrad who likes poking things with sticks; there is certainly no shortage of places to irritate and troll on the Internet.)