Serving static files versus dynamic web server APIs
The world has changed since the static/dynamic divide was created. There have now been a significant number of attempts to "platformitize" dynamic computation — "serverless" programming like AWS Lambda, WASM runtimes like those provided by CloudFlare and Fastly, full container deployment via Fly.io and cloud providers. None of these have an API nearly as stable as the filesystem API used to host static files, but it's only a matter of time before such APIs become well-established. [...]
I don't think this is going to happen, in large part because I don't think the filesystem is an API. The filesystem is an idea, and an idea that is an especially good fit for the web, because the web was in some sense designed to serve static files. In addition, the web doesn't even use most of 'the filesystem API', insofar as one exists. Web servers mostly don't let you perform the wide variety of filesystem and file manipulations that the API offers; instead they serve blobs of data in a hierarchical namespace. The blobs have some metadata, but some of that metadata is actually invented on the fly (for example, the content type).
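To illustrate that last point, here is a minimal sketch of how that metadata gets invented. A static web server has nothing but the file's bytes and its name; the Content-Type header is typically guessed from the file extension at serving time. This uses Python's standard mimetypes module as a stand-in for whatever table a real web server consults; the function name is illustrative.

```python
import mimetypes

# A static server has only the file's bytes and its name; the
# Content-Type metadata is invented on the fly, typically by
# guessing from the file extension.
def invent_content_type(path):
    ctype, _encoding = mimetypes.guess_type(path)
    # Fall back to 'opaque blob of bytes' when the extension is unknown.
    return ctype or "application/octet-stream"

print(invent_content_type("index.html"))    # text/html
print(invent_content_type("logo.png"))      # image/png
print(invent_content_type("mystery.blob"))  # application/octet-stream
```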
A dynamic web site (or portion of it) is intrinsically more complicated than this simple model. Some blob of code must be located, invoked with an assortment of information about the HTTP request, possibly be allowed to read some or all of the request body, likely be allowed to access an assortment of external resources in some way or ways, and then return an HTTP response code, headers, and a response body. Much of this should be streamed rather than happen in single block transfers. And this blob of code runs in some environment that itself needs to be defined.
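One way to see the size of this surface is to sketch the calling convention such an API must pin down. WSGI and CGI are real standardizations of roughly this shape; the names and structure below are illustrative, not any particular platform's API.

```python
# A sketch of the kind of calling convention a dynamic web API has to
# specify: the code is handed the request details and must produce a
# status, headers, and a (possibly streamed) body.
def handle(request):
    # The API must define exactly what 'request' carries: method, path,
    # headers, and some way to stream the request body.
    if request["method"] != "GET":
        return 405, {"Allow": "GET"}, iter([b"method not allowed\n"])

    # The response body is an iterable of byte chunks, so it can be
    # streamed rather than buffered whole.
    def body():
        yield b"hello, "
        yield b"dynamic "
        yield b"world\n"

    return 200, {"Content-Type": "text/plain"}, body()

status, headers, body = handle({"method": "GET", "path": "/", "headers": {}})
print(status, b"".join(body).decode(), end="")
```

Note how much had to be nailed down even for this toy: the shape of the request, the types of the status and headers, and the streaming protocol for the body. A static server needs none of this.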
The collective surface area of this dynamic 'API' is much larger than the little bit of the filesystem idea that static web serving uses, and it must be much more tightly specified in order to be useful. In turn this makes it hard to have the sort of freedom of implementation that 'filesystem APIs' enjoy in the web serving world. You can reduce the surface area of the API by making it lower level, but this only moves the complexity of dynamic web serving around; now more of it must be handled in the 'application' instead of in the 'dynamic web server'.
(For example, if your API is containers, you effectively offload the entire HTTP handling stack to the application.)
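As a concrete sketch of that offloading: when the deployment API is just 'run this container and route TCP to it', the application must carry the entire HTTP stack itself. Python's stdlib http.server stands in for that stack here; a real service would use a production-grade server, but the division of labor is the same.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# With a container-level API, parsing requests, writing status lines,
# and emitting headers is all the application's problem, not the
# platform's.
class App(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from inside the container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

def make_server(port=0):
    # The platform only promises to run the process and deliver TCP
    # connections; everything above TCP lives in this process.
    return HTTPServer(("127.0.0.1", port), App)
```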
It's easy to transform one representation of a hierarchical set of blobs into another; we do it all the time (between a directory tree and a ZIP archive, for example). This makes it straightforward to develop or author a static website in one representation and then publish it in another, which means that static web environments have a lot of freedom about what representation they accept from people (and also about which one they use internally). That everything has a specific name in a (hierarchical) namespace also naturally allows for selective publishing and updating through a variety of interfaces.
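The directory-tree-to-ZIP transformation mentioned above can be sketched in a few lines: because a static site is just named blobs in a hierarchy, converting between representations is a mechanical walk of one namespace that writes each blob into the other.

```python
import io
import pathlib
import zipfile

# Directory tree -> ZIP archive: one hierarchical-blob representation
# to another.  The file's path relative to the site root is its name
# in the new namespace.
def tree_to_zip(root):
    root = pathlib.Path(root)
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for path in sorted(root.rglob("*")):
            if path.is_file():
                zf.write(path, path.relative_to(root).as_posix())
    return buf.getvalue()
```

Going the other way (ZIP to directory tree, or either to an object store) is equally mechanical, which is exactly the freedom of representation being described.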
You can't really do this with a dynamic API, and thus with a dynamic web site that uses one. The large surface area and tight specification of any dynamic API make it both harder and more laborious to 'transform', which really means creating an interface layer in the middle. Unlike the 'filesystem API' web server approach, this is likely to create a single winning 'native' implementation of the API and then various increasingly awkward adaptations of it to different dynamic web serving environments. Since there is one winner and many losers, there's little incentive to standardize. There's also a conflict between API simplicity and selectively updating (and generally modularizing) different parts of your dynamic site.
(It's worth noting that effective standardization requires the winner to agree with it, and the winner may not since standardization does open the door for more effective competitors. One tactic a winner has to stop de facto standardization is simply to keep evolving and improving their API and the services around it.)