Why thin clients are doomed (part 2)
In an earlier entry I gave short-term and long-term reasons why I think thin clients are doomed. Now it's time for the high-level third reason (which occurred to me only after I'd written ThinClientDoom, although it may have been lurking in there too).
So far, people have always found productive uses for new computing capabilities and more computing power; time and time again, silly capabilities have turned out to have important uses, to the point where they've become ubiquitous. Betting on thin clients amounts ultimately to betting either that this has stopped or that the difference in capabilities and productivity is not big enough to be important.
Both bets seem rather dubious. History is strongly against the former, and the latter is at least a dangerous assumption, especially given our history of underestimating the usefulness of things (and how bad we seem to be at straightforward cost/benefit analysis when it comes to computers).
I think this also explains why thin clients are so seductive. At any given time, computers usually have some capabilities that we're not using very well; looking just at that moment in time, it's awfully tempting to say 'we'll never need that', and go for the machines without the capability.
(And the problem with going with thin clients, even for a nominally short term, is that while hardware turns over relatively rapidly, infrastructure design does not. You could probably build an infrastructure that lets you flip-flop between thin clients and dataless clients, but I don't think very many people do, especially if they start with just one or the other.)