The argument for not managing systems via packages
Although I haven't changed my mind in general, I think there are arguments for configuration management systems like Cfengine and Puppet over packaging systems. Apart from their actual existence in usable form, one of them is that CM systems are inherently more agile than packaging systems.
The drawback of packaging systems is that they fundamentally work on packages. This means that doing things via them requires a multi-step process: you must assemble your files, build one or more packages from them, and finally propagate and install the packages. Because CM systems work directly, they generally require less overhead to propagate a file (or to add a file to be propagated); you put the file in the appropriate place, modify your master configuration, and the change goes out immediately.
(Because it does not have to build static packages, a CM system can also pull various tricks to make your life simpler, such as supporting not just static files but various sorts of templated files that are expanded for each specific system.)
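As an illustration of the templating trick, here is a minimal sketch in Python (not the syntax of any particular CM system; the host facts and config contents are invented for the example):

```python
from string import Template

# Hypothetical per-host facts; a real CM system would discover or
# configure these for each machine.
facts = {"hostname": "apps1", "syslog_server": "loghost.example.com"}

# One master template with ${...} placeholders; it is expanded
# per system, so one file serves many differently-configured machines.
template = Template("# syslog config for ${hostname}\n"
                    "*.info @${syslog_server}\n")

rendered = template.substitute(facts)
print(rendered)
```

A packaging system would instead need a separate package (or post-install scripting) to get per-machine variation like this.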
A CM system also gives you a natural place to express the meta-information about what files get to which machines, letting you do so directly and without adding any extra mechanisms. A packaging system can express all of this (for example, you can have meta-packages for various sorts of machines and create dependencies and conflicts), but you have to do so indirectly, which involves increasing amounts of somewhat shaky magic.
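A minimal sketch of that meta-information, expressed directly (the machine classes, hosts, and file lists are all invented for the example):

```python
# Map machine classes to the files they should receive; a host's
# classes directly determine its file set, with no meta-packages
# or dependency machinery needed.
class_files = {
    "base":      ["/etc/ntp.conf", "/etc/ssh/sshd_config"],
    "webserver": ["/etc/nginx/nginx.conf"],
    "mailhub":   ["/etc/postfix/main.cf"],
}

host_classes = {
    "web1":  ["base", "webserver"],
    "mail1": ["base", "mailhub"],
}

def files_for(host):
    # Concatenate the file lists for each class the host belongs to.
    result = []
    for cls in host_classes[host]:
        result.extend(class_files[cls])
    return result

print(files_for("web1"))
```

The equivalent in a packaging system would be meta-packages whose dependencies mirror `class_files`, which is the indirection the entry is complaining about.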
One of the things that killed network computers (aka thin clients)
Here is a thesis about network computing's lack of success:
The compute power to deliver your applications has to live somewhere, whether that is in the machine in front of you or in a server that sits in a machine room somewhere. It turns out that the cost of delivering compute power in one box does not scale linearly; at various points it turns sharply upward. For various reasons, there is also a minimum amount of computing power that gets delivered in boxes; it is generally impossible to obtain a box for a cost below $X (for a moderately variable $X over time), and at that price you get a certain amount of computing power.
The result of these two trends is that it is easier and more predictable to supply the necessary application compute power in the form of a computer on your desk than as a terminal (a 'network computer') on your desk plus 1/Nth of a big computer in the server room. The minimum compute unit that you can buy today is quite capable (we are rapidly approaching the point where the most costly component of a decent computer is the display, which you have to buy either way), and buying N times that minimum compute power in the form of a compute server or three is uneconomical by comparison. This leaves you relying on over-subscribing your servers relative to theoretical peak usage, except that sooner or later you will actually hit peak usage (or at least enough of it) and then things stop working. You wind up not delivering predictable compute power to people, power that they can always count on having.
(This issue hits much harder in environments where there is predictable peak usage, such as undergraduate computing. We know that sooner or later all undergraduate stations will be in use by people desperately trying to finish their assignments at the last moment.)
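To make the economics concrete, here is a back-of-the-envelope comparison with entirely made-up prices and compute figures (the shape of the comparison is the point, not the specific numbers):

```python
import math

# Illustrative, invented numbers.
users = 100
desktop_cost = 500          # minimum usable box, the $X above
per_user_need = 0.8         # compute units a user needs at peak

terminal_cost = 200         # thin client / network computer
server_cost = 8000          # one big compute server
server_power = 10.0         # compute units one server delivers

# Desktops: every user always has their full compute power on hand.
desktops_total = users * desktop_cost

# Thin clients with servers sized for true peak (everyone active
# at once), i.e. no over-subscription.
servers_needed = math.ceil(users * per_user_need / server_power)
thin_total = users * terminal_cost + servers_needed * server_cost

print(desktops_total, thin_total)
```

With these assumed prices the desktops come to $50,000 and the peak-sized thin-client setup to $84,000; the thin-client side only wins if you over-subscribe the servers, which is exactly the unpredictability the entry describes.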
I don't think that cloud computing is going to fundamentally change this, because cloud computing still does a significant amount of work on the clients and probably always will. (In fact I think that there are strong economic effects pushing cloud computing applications to put as much of the work on the client side as they can; the more work you can have the client browser do in various ways, the less server computing power you need.)
(This was somewhat sparked by reading this.)