One of the things that killed network computers (aka thin clients)

January 20, 2010

Here is a thesis about network computing's lack of success:

The compute power to deliver your applications has to live somewhere, whether that is in the machine in front of you or in a server that sits in a machine room somewhere. It turns out that the cost of delivering compute power in one box does not scale linearly; at various points it turns up sharply. For various reasons, there is also a minimum amount of computing power that gets delivered in boxes; it is generally impossible to obtain a box for less than some cost $X (a moderately variable $X over time), and at that price you get a certain amount of computing.

The result of these two trends is that it is easier and more predictable to supply the necessary application compute power as a computer on your desk than as a terminal ('network computer') on your desk plus 1/Nth of a big computer in the server room. The minimum compute unit that you can buy today is quite capable (we are rapidly approaching the point where the most costly component of a decent computer is the display, and you have to buy that either way), and buying N times that minimum compute power in the form of a compute server or three is uneconomical by comparison. This leaves you relying on over-subscribing your servers relative to the theoretical peak usage, except that sooner or later you will actually hit peak usage (or at least enough usage) and then things stop working. You wind up not delivering predictable compute power to people, power that they can always count on having.
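The trade-off above can be sketched with some back-of-the-envelope arithmetic. All of the prices and capacities here are made-up illustration numbers (not figures from the original argument), but they show the shape of the problem: sizing servers for true peak costs more than desktops, while over-subscribing is cheaper only until peak usage actually arrives.

```python
import math

# NOTE: all prices and capacities below are hypothetical illustration
# numbers, chosen only to show the shape of the cost comparison.

def desktops_cost(users, unit_price=500):
    # One minimum-cost box per user; cost scales linearly and
    # everyone always has their full, predictable compute power.
    return users * unit_price

def servers_cost(users, peak_fraction, per_server_capacity=20,
                 server_price=15000):
    # Servers must be sized for the peak number of concurrent users
    # you plan for; peak_fraction < 1.0 means over-subscription.
    peak_users = math.ceil(users * peak_fraction)
    servers = math.ceil(peak_users / per_server_capacity)
    return servers * server_price

users = 100
print(desktops_cost(users))                    # 50000: a PC for everyone
print(servers_cost(users, peak_fraction=1.0))  # 75000: sized for true peak
print(servers_cost(users, peak_fraction=0.3))  # 30000: cheaper, but fails
                                               # when real peak usage hits
```

With these (hypothetical) numbers, provisioning servers for the true peak costs more than just giving everyone a desktop, and the only way to make the server approach cheaper is to over-subscribe and accept that it breaks at peak.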

(This issue hits much harder in environments where there is predictable peak usage, such as undergraduate computing. We know that sooner or later all undergraduate stations will be in use by people desperately trying to finish their assignments at the last moment.)

I don't think that cloud computing is going to fundamentally change this, because cloud computing still does a significant amount of work on the clients and probably always will. (In fact I think that there are strong economic effects pushing cloud computing applications to put as much of the work on the client side as they can; the more work you can have the client browser do in various ways, the less server computing power you need.)

(This was somewhat sparked from reading this.)
