2009-06-07
It's important to get the real costs right
Here is an obvious yet important thing:
When you make trade-offs between the costs of development and the costs of operation, it is quite important to get the actual costs right (on both sides); otherwise you are balancing things based on bad data, which usually doesn't end well. One would like to think that this is easy, but in practice there has usually been a lot of mythology about these costs floating around (I suspect especially when the actual costs are changing rapidly).
The classical example is the 'cost' of garbage collection. For a long time people argued that automatic garbage collection was both significantly less efficient than manual storage management and unnecessary, because it was easy enough to manage storage by hand. Actual practice has shown that both claims are false: in large scale programs it's clear that manual storage management is too error prone, and I believe that modern GC systems actually have a lower overhead (in both code execution time and space) than manual storage management.
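(As a deliberately minimal illustration of the 'error prone' part, not taken from any real program: the classic manual-management failure is that two pieces of code disagree about who owns a piece of memory. In C, that looks something like this hypothetical sketch.)

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
      /* Hypothetical sketch: the 'cleanup' path and the 'reporting'
         path both believe they are responsible for this buffer. */
      char *name = malloc(32);
      if (name == NULL)
          return 1;
      strcpy(name, "frobnicator");

      free(name);            /* cleanup releases the buffer ...      */
      printf("%s\n", name);  /* ... and reporting then reads freed
                                memory, a use-after-free bug that you
                                simply cannot write in a garbage
                                collected language. */
      return 0;
  }

(In a large program the two halves of this mistake are usually far apart from each other, which is exactly why it's so hard to stamp out by hand.)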
Another, older example is the argument between assembly programming and high level languages (by which I mean things like C, not what is called 'high level' today). Although the debate had been won well before that point, I think that starting in the early 1990s the efficiency argument actually went in favour of the compiled languages, as compilers got increasingly sophisticated and started doing global optimizations that were just not feasible if you were coding assembly by hand. These days, modern Just-In-Time environments have pushed this even further, since a JIT can produce ultra-specialized code on the fly.
In a way, similar things are happening on the 'cost of operation' side now, where detailed charging systems for cloud computing and strict limits on what your programs are allowed to do are making people conscious of just how much their code is actually doing.
2009-06-03
The costs of development versus the costs of operation
At one level, the whole issue of program energy efficiency is nothing new; it is yet another round of the eternal tension between the costs of development and the costs of operation. The two have pretty much always pulled against each other: you can usually do more development work to lower the costs of running a program, but since development isn't free there is always a point where more development stops being economically justifiable, the point where you can no longer lower your cost of operation by more than you'd spend on the extra development.
(Where this point is depends in part on what scale you operate on. For example, a big datacenter cares about efficiency gains that someone with one machine wouldn't even notice.)
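(To put some purely hypothetical numbers on this: suppose an optimization takes a month of development time that costs you $10,000, and saves $50 a month per machine in power and hardware. On one machine it would take almost 17 years to pay for itself, which in practice means never; across a thousand machines it pays for itself in under a week.)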
Exactly where this point sits has teetered back and forth for at least as long as high level languages have existed. Roughly speaking, I think that it has usually tilted towards development (ie cheaper development but higher operational costs) in times of rapid change and rapid technology growth, which is what we've had for going on two decades now.
(Rapid change means that your more efficient code may not run for long enough to pay back the investment in development; consider the fate of, say, the world's most efficient HTTP/0.9 server for static content. Rapid growth means that your development work is in effect in a race with the reduced costs of operation that time will bring all on its own, which makes the relative return on investment lower.)
Things like program energy efficiency may be pushing this balance back towards favouring the cost of operations, where developers will do more work in order to make their programs cost less to run. If so, it's not a revolutionary change (or an inevitable return to the way that things should be); instead, it's a natural shift, of a sort that has happened before and will likely happen again.