Dynamic web pages can be viewed as a form of compression (sometimes)
If you're in a situation where static files are an alternative to dynamically generating web pages, one of the ways to view dynamic web pages is through the lens of compression, where actually generating any particular dynamic web page is 'decompressing' it from the raw data (and code) involved. This can provide a useful perspective for thinking about when dynamic web pages are attractive and when they're not.
(Not all uses of dynamic web pages are possible as static files; there may not be a finite number of possible static files to be created, for example.)
In some cases a single web page can be smaller as a dynamic web page's data and code than as a static file. For an extreme example, consider a plain text web page that is a million lines of 'yes' (which is to say, a million repeats of 'yes\n'). The code and the trivial data needed to dynamically generate this content on demand is clearly smaller than the static file version, and it might even use fewer CPU cycles and other resources to generate it than to read the file (once you count all of the CPU cycles and resources the kernel will use, and so on).
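As a minimal sketch of this (the function name and chunk size are mine, not from anything real), here is the whole 'compressed' form of that page: a few lines of code that can stream the million-line body on demand, versus roughly four megabytes of static file.

```python
# Sketch: dynamically generating a million lines of 'yes' on demand,
# instead of storing them as a roughly 4 MB static file. The code plus
# its trivial data is far smaller than the static version.
def yes_page(count=1_000_000):
    """Yield the page body in chunks of repeated b'yes\\n' lines."""
    chunk = b"yes\n" * 1024          # one reusable ~4 KiB chunk
    full, rest = divmod(count, 1024)
    for _ in range(full):
        yield chunk
    if rest:
        yield b"yes\n" * rest

# The generated output is exactly count * 4 bytes:
total = sum(len(c) for c in yes_page())
```

Generating in chunks like this also keeps the memory cost constant, which is part of why the dynamic version can plausibly beat reading a 4 MB file once you count everything.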
(In general, when considering compression and decompression efficiency you need to include the code as well as the 'compressed' data. You can do the same thing for static files by including all of the code involved to read them, but in many cases this code is free because it's already needed for other things or you couldn't remove it from your web serving environment even if you tried.)
More often, a single dynamic web page may not be a space savings, especially once you count the code involved as well, but there is an overall space saving across the entire site due to the reuse of templates and page elements, and perhaps the truly dynamic generation of some information. Any individual page, or the site as a whole, might be more CPU and RAM efficient if served as static files instead (even including the kernel overhead), but this matters less than the compression achieved by dynamic rendering.
(At this point we may also want to consider the resources required to generate all of the static file versions as compared to the resources required to serve only the pages that people ask for. But on the modern web everything gets requested sooner or later. Still, there may well be a saving in the resource of human time.)
One thing this implies is that the more incompressible a URL or a website area is, the less useful dynamic generation may be. If you're dynamically serving essentially incompressible blobs, like images, the only thing that you can really change is how the blobs are stored. On the other hand, images can be 'compressible' in some sense in that, for example, you can store only a high resolution version and then generate smaller sized ones on demand. This will cost you CPU and other resources during the generation but may save you a lot of static data space.
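A sketch of that resize-on-demand idea, assuming the Pillow imaging library is available (the function name, paths, and size are illustrative, not anything standard):

```python
# Sketch: store only the high-resolution original and 'decompress'
# smaller sizes on demand, trading CPU at request time for static
# storage space. Assumes the Pillow library.
from io import BytesIO
from PIL import Image

def resized_image(original_path, max_size):
    """Return JPEG bytes of the original scaled so neither dimension
    exceeds max_size, preserving the aspect ratio."""
    img = Image.open(original_path)
    img.thumbnail((max_size, max_size))   # scales down in place
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG")
    return buf.getvalue()
```

In a real deployment you would likely cache the generated versions, at which point you are back to spending some static space, just lazily.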
Actually doing this 'decompression' efficiently in general has some issues in the web context. For example, static files trivially support efficiently starting from any byte offset, for resumed requests and partial requests. This can be efficiently supported in some dynamic generation code with extra work (the 'yes' example could support this), but is much harder in others (leading to the irony of conditional GET for dynamic websites, where the only resource it saves is network bandwidth). This suggests another situation where static files may be better in practice than dynamic generation even if they take more space.
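To illustrate the extra work involved, here is how the 'yes' page could support starting from an arbitrary byte offset without generating everything before it (a sketch; the function and names are mine):

```python
# Sketch: serving an arbitrary byte range of the million-line 'yes'
# page, the way a static file trivially supports via seek().
LINE = b"yes\n"

def yes_range(start, length, total_lines=1_000_000):
    """Return bytes [start, start+length) of total_lines repeats
    of b'yes\\n', clamped to the end of the content."""
    total = total_lines * len(LINE)
    end = min(start + length, total)
    if start >= end:
        return b""
    # Jump straight to the line containing 'start' instead of
    # generating from the beginning.
    first_line, offset = divmod(start, len(LINE))
    lines_needed = (end - start + offset + len(LINE) - 1) // len(LINE)
    data = LINE * lines_needed
    return data[offset:offset + (end - start)]
```

This works because the content is trivially addressable; dynamic pages assembled from templates and queries generally have no cheap way to compute 'the bytes starting at offset N' short of rendering everything up to that point.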
(This collection of thoughts was sparked by writing yesterday's entry on serving static files being driven by efficiency.)