2024-01-23
CGI programs have an attractive one-step deployment model
When I wrote about how CGI programs aren't particularly slow these days, one reaction I saw was to suggest that you might as well use a FastCGI system to run your 'CGI' as a persistent daemon, saving the overhead of starting a CGI program on every request. One practical answer is that FastCGI doesn't offer the simple deployment model that CGIs generally do, which is part of their attraction.
With many models of CGI usage and configuration, installing a CGI, removing a CGI, or updating it is a single-step process; you copy a program into a directory, remove it again, or update it. The web server notices that the executable file exists (sometimes with a specific extension or whatever) and runs it in response to requests. This deployment model can certainly become more elaborate, with you directing a whole tree of URLs to a CGI, but it doesn't have to be; you can start very simple and scale up.
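As an illustration of how little a CGI needs, here is a minimal sketch of one in Python (the file name and any directory layout are assumptions for illustration). Deploying it really is one step: copy it into a CGI-enabled directory and make it executable; removing it is deleting it again.

```python
#!/usr/bin/env python3
# A minimal CGI program. Deployment is one step: copy this file into a
# CGI-enabled directory (e.g. cgi-bin/) and make it executable.
import os

def cgi_response():
    # A CGI's standard output is an HTTP header block, a blank line,
    # and then the response body.
    body = "Hello from a CGI program.\n"
    # The web server passes request information in environment variables.
    body += "Request method: %s\n" % os.environ.get("REQUEST_METHOD", "(not set)")
    return "Content-Type: text/plain\r\n\r\n" + body

if __name__ == "__main__":
    print(cgi_response(), end="")
```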
It's theoretically possible to make FastCGI deployment almost as simple as the CGI model, but I don't know if any FastCGI servers and web servers have good support for this. Instead, FastCGI and in general all 'application server' models almost always require at least a two step configuration, where you configure your application in the application server and then configure the URL for your application in your web server (so that it forwards requests to your application server). In some cases, each application needs a separate server (FastCGI or whatever other mechanism), which means that you have to arrange to start and perhaps monitor a new server every time you add an application.
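As a sketch of what those two steps can look like with Apache's mod_proxy_fcgi (the URL, socket path, and daemon name here are all made-up illustrations, not a recipe):

```
# Step 1: web server side. Forward a URL to the FastCGI daemon's socket.
ProxyPass "/myapp" "unix:/run/myapp.sock|fcgi://localhost/"

# Step 2: application server side. The daemon itself has to be
# configured, started, and supervised separately, for example as a
# systemd service (sketch):
#   [Service]
#   ExecStart=/usr/local/bin/myapp-fcgi --socket /run/myapp.sock
```

Neither step is hard on its own, but both have to happen (and stay in sync), usually in two different places.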
(I'm going to assume that the FastCGI server supports reliable and automatic hot reloading of your application when you deploy a change to it. If it doesn't then that gets more complicated too.)
If you have a relatively static application landscape, this multi-step deployment process is perfectly okay since you don't have to go through it very often. But it is more involved and it often requires some degree of centralization (for web server configuration updates, for example), while it's possible to have a completely distributed CGI deployment model where people can just drop suitably named programs into directories that they own (and then have their CGI run as themselves through, for example, Apache suexec). And, of course, the multi-step model is more things for people to learn.
(CGI is not the only thing in the web language landscape with this simple one-step deployment model. PHP has traditionally had it too, although my vague understanding is that people often use PHP application servers these days.)
PS: At least on Apache, CGI also has a simple debugging story; the web server will log any output your CGI sends to standard error in the error log, including any output generated by a total failure to run. This can be quite useful when inexperienced people are trying to develop and run their first CGI. Other web servers can sometimes be less helpful.
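For example, a Python CGI can write diagnostics to standard error, and under Apache those lines end up in the error log, alongside the traceback from any crash (a sketch; the log messages and names here are made up):

```python
#!/usr/bin/env python3
# Sketch of the CGI debugging story: anything written to standard error
# goes to the web server's error log, while standard output goes to the
# client as the HTTP response.
import sys

def handle_request(log=sys.stderr):
    print("myapp: starting request handling", file=log)  # -> error log
    try:
        result = "Hello, world."
        return "Content-Type: text/plain\r\n\r\n" + result + "\n"
    except Exception as exc:
        # An uncaught exception's traceback would also land in the
        # error log, which is what makes first CGIs debuggable at all.
        print("myapp: failed: %s" % exc, file=log)
        raise

if __name__ == "__main__":
    sys.stdout.write(handle_request())
```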
2024-01-08
One of the things limiting the evolution of WebPKI is web servers
It's recently struck me that one of the things limiting the evolution of what is called Web PKI, the general infrastructure of TLS on the web (cf), is that it has turned out that in practice, almost anything that requires (code) changes to web servers is a non-starter. This is handily illustrated by the fate of OCSP Stapling.
One way to make Web PKI better is to make certificate revocation work better, which is to say more or less at all. The Online Certificate Status Protocol (OCSP) would allow browsers to immediately check if a certificate was revoked, but there is a huge raft of problems with that. The only practical way to deploy it is with OCSP Stapling, where web servers would include a proof from the Certificate Authority that their TLS certificate hadn't been revoked as of some recent time. However, to deploy OCSP Stapling, web servers and the environment around them needed to be updated to obtain OCSP responses from the CA and then include these responses as additional elements in the TLS handshake.
Before I started writing this entry I was going to say that OCSP Stapling is notable by its absence, but this is not quite true. Using the test on this OpenSSL cookbook page suggests that a collection of major websites include stapled OCSP responses but also that at least as many major websites don't, including high profile destinations that you've certainly heard of. Such extremely partial adoption of OCSP Stapling makes it relatively useless in practice, because it means that no web client or Certificate Authority can feasibly require it (although a CA can issue certificates that require OCSP Stapling).
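You can run this sort of check yourself with OpenSSL's s_client (a sketch; 'example.com' is a placeholder host): ask the server to staple during a TLS handshake and look at what comes back.

```sh
# Ask the server for a stapled OCSP response during the TLS handshake.
# If the server staples, the output includes an "OCSP Response Data:"
# block; if it doesn't, s_client reports "OCSP response: no response sent".
openssl s_client -connect example.com:443 -servername example.com \
        -status < /dev/null
```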
There are perfectly good reasons for this inertia in web server behavior. New code takes time to be written, released, become common in deployed versions of web server software, fixed, improved, released again, and deployed again, and even then it often requires being activated through configuration changes. At any given time, most of the web servers in the world are running older code, sometimes much older code. Most people don't change their web server configuration (or their web server) unless they have to, and also they generally don't immediately adopt new things that may not work.
(By contrast, browsers are much easier to change; there are only a few sources of major browsers, and they can generally push out changes instead of having to wait for people to pull them in. It's relatively easy to get pretty high usage of some new thing in six months or a year, or even sooner if a few of the major browser makers decide to push it.)
The practical result of this is that any improvement to Web PKI that requires web server changes is relatively unlikely to happen, and definitely isn't going to happen any time soon. The more you can hide things behind TLS libraries, the better, because then hopefully only the TLS libraries have to change (if they maintain API compatibility). But even TLS libraries mostly get updated passively, when people update operating system versions and the like.
(People can be partially persuaded to make some web server changes because they're stylish or cool, such as HTTP/2 and HTTP/3 support. But even then the code needs to get out into the world, and lots of people won't make the changes immediately or even at all.)