One of the things that killed network computers (aka thin clients)

January 20, 2010

Here is a thesis about network computing's lack of success:

The compute power to deliver your applications has to live somewhere, whether that is in the machine in front of you or on a server that sits in a machine room somewhere. It turns out that the cost of delivering compute power in one box does not scale linearly; at various points, it turns up sharply. For various reasons, there is also a minimum amount of computing power that gets delivered in boxes; it is generally impossible to obtain a box for a cost below $X (for a moderately variable $X over time), and at that price you can get a certain amount of computing.

The result of these two trends is that it is easier and more predictable to supply that necessary application compute power in the form of a computer on your desk than as a terminal (a 'network computer') on your desk plus 1/Nth of a big computer in the server room. The minimum compute unit that you can buy today is quite capable (we are rapidly approaching the point where the most costly component of a decent computer is the display, and you have to buy that either way), and buying N units of that minimum compute power in the form of a compute server or three is uneconomical by comparison. This leaves you relying on over-subscribing your servers relative to theoretical peak usage, except that sooner or later you will actually hit peak usage (or at least enough usage) and then things stop working. You wind up not delivering predictable compute power to people, power that they can always count on having.

(This issue hits much harder in environments where there is predictable peak usage, such as undergraduate computing. We know that sooner or later all undergraduate stations will be in use by people desperately trying to finish their assignments at the last moment.)
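To make the cost argument concrete, here is a minimal back-of-envelope sketch in Python. All of the prices, the number of users, and the superlinear server pricing curve are purely hypothetical assumptions for illustration, not real figures; the point is only the shape of the comparison.

  # Back-of-envelope comparison: N minimum-cost desktops versus a few big
  # servers sized to deliver the same per-user compute at peak.
  # Every number here is made up purely for illustration.

  N_USERS = 200            # people who each need a desktop's worth of compute
  DESKTOP_COST = 400       # hypothetical cost of a minimum-cost desktop ($)
  DESKTOP_UNITS = 1.0      # compute units such a desktop delivers

  def server_cost(units):
      """Hypothetical server pricing: cost per unit of compute rises as the
      box gets bigger, i.e. the cost curve turns up instead of being linear."""
      return 500 * units ** 1.3

  # Desktops: every user always has their full unit of compute.
  desktop_total = N_USERS * DESKTOP_COST

  # Servers sized for peak: enough aggregate compute for everyone at once,
  # spread over a handful of big boxes.
  servers = 4
  server_total = servers * server_cost(N_USERS * DESKTOP_UNITS / servers)

  print(f"{N_USERS} desktops:               ${desktop_total:,.0f}")
  print(f"{servers} servers sized for peak: ${server_total:,.0f}")

Under these made-up numbers, buying the peak capacity as a few big servers costs several times what the fleet of desktops does; whether that holds in practice depends entirely on the actual pricing curve, which is the crux of the argument.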

I don't think that cloud computing is going to fundamentally change this, because cloud computing still does a significant amount of work on the clients and probably always will. (In fact I think that there are strong economic effects pushing cloud computing applications to put as much of the work on the client side as they can; the more work you can have the client browser do in various ways, the less server computing power you need.)

(This was somewhat sparked by reading this.)


Comments on this page:

From 66.134.136.68 at 2010-01-20 04:51:49:

This mostly reminds me of how different the computing environment is where I work. If we tried to do the bulk of our work on our desktops, the consequences would be hilarious.

Smarasderagd

From 67.110.150.66 at 2010-01-20 13:04:18:

I'm not really sure about that. Through virtualization, and especially automated resource schedulers like VMware DRS, we're reaching a point where compute resources are becoming far more elastic than they've been in the past. That makes virtual desktop infrastructure seem pretty appealing now, because all of the difficulty in scaling it has pretty much disappeared. By comparison, running a ton of computers as totally underutilized desktops is going to seem like a tremendous waste of compute resources to many organizations.

Through better video compression technologies, we've got companies like OnLive offloading video game rendering to farms in the cloud and streaming full-resolution graphics across the Internet. On the desktop computing side, VMware is using similar technology in its VMware View product, its approach to Virtual Desktop Infrastructure, which streams entire desktops across the network. VDI has the benefit that you can very easily restrict resource utilization for a single host, manage it with the same management tools you're using now for your desktop clients, and segment users into appropriate VLANs (something you can't do with traditional thin-client technology), and users will still have the ability to install and run programs where necessary in a completely sandboxed environment.

The part I take the most issue with is your argument that people need dedicated, always-available compute resources. I think that if this were the case, virtualization would have failed a long time ago in favor of power-hungry servers. Instead, it's presenting a very strong economic case, because there's a lot of economic incentive in squeezing every drop of utility out of your resources. Until CPUs hit an idle power economy where their draw is literally 0 when the system goes unused, there's going to be a compelling case for virtualization. Likewise, if vendors can actually provide a significant cost savings, there's no reason for virtual desktop infrastructure not to be just as popular. It's true that displays are expensive, but you're sidestepping the fact that the material cost of a display is coming down the same way that a desktop computer's is. If you went to Microcenter five years ago and complained that a 20" widescreen LCD monitor was over $150, you'd be laughed out of the room.

I don't think that the impending VDI movement is going to completely change the game, and I don't think that desktop computers are going anywhere for a long time, but thin-client computing is definitely far from dead if you follow the industry. Absolutely nothing has "killed" thin clients.

--Jeff

By cks at 2010-01-20 13:35:50:

I should have been more clear; here, I was just talking about 'desktop' applications and the power required to deliver them; things like browsers, spreadsheets, email programs, and so on. There is also a lot of computing that cannot be done on desktops at all (even here) and intrinsically requires server computing infrastructure.

On virtualization: flexible virtualization cannot make the ultimate capacity issue disappear. Either you have enough server compute power to support all of your users at peak usage all at once, or you are overcommitting the infrastructure. If you have that peak capacity, I don't think it can be delivered more economically in servers than in desktops due to the scaling issues with the cost of computing.
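As a sketch of the overcommit arithmetic (again, every number here is a made-up assumption for illustration): if you provision server capacity on the assumption that only a fraction of your users are active at once, you are fine right up until they all show up.

  # Overcommit arithmetic with made-up numbers: capacity provisioned for a
  # fraction of users versus demand as more of them become active at once
  # (the undergraduate assignment-deadline scenario).

  USERS = 200
  UNITS_PER_USER = 1.0      # compute a user needs while active
  OVERCOMMIT = 4.0          # assume only 1 in 4 users is active at any time

  provisioned = USERS * UNITS_PER_USER / OVERCOMMIT

  for active_fraction in (0.25, 0.50, 1.00):
      demand = USERS * UNITS_PER_USER * active_fraction
      status = "fine" if demand <= provisioned else "overloaded"
      print(f"{active_fraction:4.0%} active: demand {demand:5.1f} "
            f"vs capacity {provisioned:5.1f} -> {status}")

The only ways out are to provision for everyone at once (which is the expensive peak capacity again) or to accept that things will stop working at the worst possible moment.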

However, I can see at least two ways that I can be wrong here:

  • moderate servers could now be many times more powerful than moderate desktops, yet cost only a small multiple more. If you build your (desktop) computing environment out of them, you can deliver N units of user desktop computing power for significantly less than N desktops (see the sketch after this list).
  • you have such a large organization that you can realize significant savings by power management; you power down virtualization servers when they aren't needed to support the load, and power them up when they are, all without users noticing.
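A minimal sketch of the break-even condition behind the first point, again with purely hypothetical numbers: a server that replaces K desktops' worth of compute only wins if it costs less than those K desktops (ignoring hypervisor overhead, redundancy, licensing, and so on).

  # Break-even for building a 'desktop' environment out of servers, with
  # made-up numbers: one server replacing K desktops' worth of compute only
  # wins if it costs less than those K desktops.

  DESKTOP_COST = 400      # hypothetical minimum-cost desktop ($)

  def servers_win(server_price, desktops_replaced):
      """True if one server is cheaper than the desktops it would replace."""
      return server_price < desktops_replaced * DESKTOP_COST

  print(servers_win(server_price=6000, desktops_replaced=20))  # True:  6000 < 8000
  print(servers_win(server_price=6000, desktops_replaced=10))  # False: 6000 > 4000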

I do think that by and large virtualization is not the right answer for systems that use significant amounts of CPU (or disk bandwidth). I maintain that virtualization is essentially predicated on overselling the underlying system capacity (or on operating in an environment so constrained that other costs of server ownership, like power and rack space, dwarf the extra cost of very powerful virtualization servers).

(You can escape overselling the underlying capacity, but only by selling very constrained systems where you deliver very little CPU capacity and memory and so on. 256 MB virtual systems with virtual 5400 RPM IDE drives, anyone?)
