Chris's Wiki :: blog/tech/NetworkComputingLocation Comments
https://utcc.utoronto.ca/~cks/space/blog/tech/NetworkComputingLocation?atomcomments
Recent comments in Chris's Wiki :: blog/tech/NetworkComputingLocation.

By Chris Siebenmann on /blog/tech/NetworkComputingLocation (2010-01-20T18:35:50Z):
<div class="wikitext"><p>I should have been more clear; here, I was just talking about 'desktop'
applications and the power required to deliver them; things like browsers,
spreadsheets, email programs, and so on. There is also a lot of computing
that cannot be done on desktops at all (even here) and intrinsically
requires server computing infrastructure.</p>
<p>On virtualization: flexible virtualization cannot make the ultimate
capacity issue disappear. Either you have enough server compute power
to support all of your users at peak usage all at once, or you are
overcommitting the infrastructure. If you have that peak capacity, I
don't think it can be delivered more economically in servers than
in desktops due to the scaling issues with the cost of computing.</p>
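<p>(To make that arithmetic concrete, here is a minimal Python sketch of the comparison. Every price and capacity in it is a made-up assumption, not a measurement from any real environment; it just shows the shape of the calculation.)</p>
<pre>
import math

# A toy cost comparison: N users served by dedicated desktops versus
# servers sized so everyone can hit peak demand at once (no overcommit).
# All prices and capacities are illustrative assumptions.

N_USERS = 500
DESKTOP_COST = 600        # assumed cost of one desktop ($)
DESKTOP_UNITS = 1.0       # compute "units" one desktop delivers

SERVER_COST = 6000        # assumed cost of one virtualization server ($)
SERVER_UNITS = 8.0        # compute units one server delivers

desktop_total = N_USERS * DESKTOP_COST

# Without overcommitting, the servers must cover every user's peak
# demand simultaneously: N_USERS * DESKTOP_UNITS units of capacity.
servers_needed = math.ceil(N_USERS * DESKTOP_UNITS / SERVER_UNITS)
server_total = servers_needed * SERVER_COST

print(f"desktops: {N_USERS} machines, ${desktop_total:,}")
print(f"servers:  {servers_needed} machines, ${server_total:,}")
# With these numbers a unit of server compute costs $750 versus $600 for
# a desktop, so buying full peak capacity as servers is more expensive;
# the result only flips if servers get much cheaper per unit of compute,
# or if you overcommit.
</pre>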
<p>However, I can see at least two ways that I can be wrong here:</p>
<ul><li>moderate servers could now be many times more powerful than moderate
desktops, yet cost only a small multiple more. If you build your
(desktop) computing environment out of them, you can deliver N
units of user desktop computing power for significantly less than N
desktops.</li>
<li>you have such a large organization that you can realize significant
savings by power management; you power down virtualization servers
when they aren't needed to support the load, and power them up when
they are, all without users noticing (there's a rough sketch of this
after the list).</li>
</ul>
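<p>(On the second point, here is a rough sketch of what load-tracking power management could save; the load profile, server count, and wattage are all assumptions for illustration.)</p>
<pre>
import math

# A rough model of powering virtualization servers up and down to track
# load over a day. The load profile, server count, and wattage are all
# assumed for illustration.

HOURLY_LOAD = [0.1] * 7 + [0.3] + [0.9] * 9 + [0.4] * 3 + [0.2] * 4  # fraction of peak, 24 hours
TOTAL_SERVERS = 60        # enough servers for everyone at peak
SERVER_WATTS = 400        # assumed draw of one powered-on server

always_on_kwh = TOTAL_SERVERS * SERVER_WATTS * 24 / 1000

tracked_kwh = 0.0
for load in HOURLY_LOAD:
    # Power on only as many servers as this hour's load needs (rounding
    # up, and never below one so off-hours logins still work).
    powered_on = max(1, math.ceil(load * TOTAL_SERVERS))
    tracked_kwh += powered_on * SERVER_WATTS / 1000

print(f"always on:    {always_on_kwh:.0f} kWh/day")
print(f"load-tracked: {tracked_kwh:.0f} kWh/day")
print(f"saved:        {always_on_kwh - tracked_kwh:.0f} kWh/day")
</pre>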
<p>I do think that by and large virtualization is not the right answer
for systems that use significant amounts of CPU (or disk bandwidth).
I maintain that virtualization is essentially predicated on overselling
the underlying system capacity (or on operating in an environment so
constrained that other costs of server ownership, like power and rack
space, dwarf the extra cost of very powerful virtualization servers).</p>
<p>(You can escape overselling the underlying capacity, but only by
selling very constrained systems where you deliver very little
CPU capacity and memory and so on. 256 MB virtual systems with virtual
5200 RPM IDE drives, anyone?)</p>
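<p>(A quick illustration of how small those non-overcommitted slices get, with an assumed host size and an assumed user density:)</p>
<pre>
# If you refuse to overcommit, each user's guaranteed slice is simply the
# host divided by the number of users packed onto it. Host size and user
# density here are assumptions, not anyone's real configuration.

HOST_RAM_GB = 64
HOST_CORES = 16
USERS_PER_HOST = 200      # the kind of density you'd need for VDI to pay off

ram_per_user_mb = HOST_RAM_GB * 1024 / USERS_PER_HOST
cores_per_user = HOST_CORES / USERS_PER_HOST

print(f"guaranteed RAM per user: {ram_per_user_mb:.0f} MB")
print(f"guaranteed CPU per user: {cores_per_user:.2f} cores")
# Roughly 328 MB and well under a tenth of a core each: you either sell
# slices this small, or you overcommit and hope the peaks don't line up.
</pre>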
</div>

From 67.110.150.66 on /blog/tech/NetworkComputingLocation (2010-01-20T18:04:18Z):
<div class="wikitext"><p>I'm not really sure about that. Through virtualization, and especially automated resource schedulers like VMware DRS, we're reaching a point where compute resources are becoming far more elastic than they've been in the past, which makes virtual desktop infrastructure seem pretty appealing now, because all of the difficulty in scaling it has pretty much disappeared. By comparison, running a ton of computers as totally underutilized desktops is going to seem like a tremendous waste of compute resources to many organizations.</p>
<p>Through better video compression technologies, we've got companies like OnLive offloading video game rendering to farms in the cloud and streaming full-resolution graphics across the Internet. On the desktop computing side, VMware is using similar technology in its VMware View product, its approach to Virtual Desktop Infrastructure, to stream entire desktops across the network. VDI has the benefit that you can very easily restrict resource utilization for a single host, manage it with the same management tools you're using now for your desktop clients, segment users into appropriate VLANs (something you can't do with traditional thin-client technology), and users will still have the ability to install and run programs where necessary in a completely sandboxed environment.</p>
<p>The part I take the most issue with is your argument that people need dedicated, always-available compute resources. I think that if that were the case, virtualization would have failed a long time ago in favor of power-hungry dedicated servers. Instead, it's presenting a very strong economic case, because there's a lot of economic incentive in squeezing every drop of utility out of your resources. Until CPUs reach the point where their idle draw is literally zero when the system goes unused, there's going to be a compelling case for virtualization. Likewise, if vendors can actually deliver a significant cost savings, there's no reason for virtual desktop infrastructure not to be just as popular. It's true that displays are expensive, but you're sidestepping the fact that the material cost of a display is coming down the same way that a desktop computer's is. If you had gone to Microcenter five years ago and complained that a 20" widescreen LCD monitor was over $150, you'd have been laughed out of the room.</p>
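<p>(To put rough numbers on the idle-power point: the figures below are all assumptions made up for illustration, not measurements.)</p>
<pre>
# What a fleet of mostly idle desktops costs in electricity per year,
# compared with a handful of consolidated VDI hosts. Every figure here
# is an assumption made up for the illustration.

N_DESKTOPS = 500
DESKTOP_IDLE_WATTS = 60    # assumed idle draw of one desktop
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.12       # assumed electricity price ($/kWh)

VDI_HOSTS = 10
VDI_HOST_WATTS = 400       # assumed average draw of one VDI host

desktop_cost = N_DESKTOPS * DESKTOP_IDLE_WATTS * HOURS_PER_YEAR / 1000 * PRICE_PER_KWH
vdi_cost = VDI_HOSTS * VDI_HOST_WATTS * HOURS_PER_YEAR / 1000 * PRICE_PER_KWH

print(f"idle desktops: ${desktop_cost:,.0f}/year")
print(f"VDI hosts:     ${vdi_cost:,.0f}/year")
# Until desktop idle draw is effectively zero, that gap is the kind of
# incentive I mean; whether it also covers the VDI servers and licensing
# is a separate question.
</pre>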
<p>I don't think that the impending VDI movement is going to completely change the game at all, and I don't think that desktop computers are going anywhere for a long time, but thin-client computing is definitely far from dead if you follow the industry. Absolutely nothing has "killed" them.</p>
<p>--Jeff</p>
</div>

From 66.134.136.68 on /blog/tech/NetworkComputingLocation (2010-01-20T09:51:49Z):
<div class="wikitext"><p>This mostly reminds me of how different the computing environment is where I work. If we tried to do the bulk of our work on our desktops, the consequences would be <em>hilarious</em>.</p>
<p>Smarasderagd</p>
</div>