What would a multi-user web server look like? (A thought experiment)
Every so often my thoughts turn to absurd ideas. Today's absurd idea is sparked by my silly systemd wish for moving processes between systemd units, which in turn was sparked by a local issue with Apache CGIs (and suexec). This got me thinking about what a modern 'multi-user' web server would look like, where by multi-user I mean a web server that's intended to serve content operated by many different people (such as many different people's CGIs). Today you can sort of do this for CGIs through Apache suexec, but as noted this has limits.
The obvious way to implement this would be to run a separate web server process for each person's web area and then reverse proxy to the appropriate process. Since there might be a lot of people and not all of their areas are visited very often, you would want these web server processes to be started on demand and then shut down automatically after a period of inactivity, rather than running all of the time (on Linux you could sort of put this together with systemd socket units). Each of these web server processes would run as the owning person's Unix UID, not as the web server's UID, and on Linux they would sit under appropriate systemd hierarchies with suitable resource limits set.
(Starting web server units through systemd would also mean that your main web server process didn't have to be privileged or have a privileged helper, as Apache does with suexec. You could have the front-end web server do the process starting and supervision itself, but then it would also need the privileges to change UIDs and the support for setting other per-user context information, some of which is system-dependent.)
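As a very rough sketch of how the systemd side could be wired up, you might use a socket-activated template unit pair, with one instance per person. Everything here (the unit names, socket path, server program, and resource limits) is a hypothetical illustration, not a real setup:

    # user-web@.socket (hypothetical): one Unix socket per person, which
    # the front-end web server reverse proxies to.
    [Socket]
    ListenStream=/run/user-web/%i.sock
    # Hand the whole listening socket to one service instance instead of
    # spawning a process per connection.
    Accept=no

    [Install]
    WantedBy=sockets.target

    # user-web@.service (hypothetical): the per-person web server itself.
    [Service]
    # Run as that person's own UID, in its own cgroup with its own limits.
    User=%i
    MemoryMax=256M
    CPUQuota=50%
    # Your default simple per-person web server; it has to accept the
    # socket from systemd and exit on its own after being idle.
    ExecStart=/usr/local/sbin/user-httpd --docroot=/home/%i/www

With this shape, 'user-web@alice.socket' would hold the socket for alice and start 'user-web@alice.service' the first time a request arrives for her area.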
Although I'm not entirely fond of it, the simplest way to communicate between the main web server and the per-person web server would be through HTTP. Since HTTP reverse proxies are widely supported, this would also allow people to choose what program they'd use as their 'web server', rather than your default. However, you'd want to provide a default simple web server to handle static files, CGIs, and maybe PHP (which would be even simpler than my idea of a modern simple web server).
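For concreteness, here's a minimal sketch in Go of what that default per-person web server could look like: it takes its listening socket from systemd socket activation, serves static files, runs CGIs out of a cgi-bin directory, and exits after a stretch of inactivity so systemd can start it again on demand. The '~/www' layout, the idle timeout, and the general shape are assumptions for illustration, not a finished design.

    // user-httpd: a minimal sketch of a per-person web server that expects
    // its listening socket to be handed to it by systemd socket activation.
    package main

    import (
        "log"
        "net"
        "net/http"
        "net/http/cgi"
        "os"
        "path/filepath"
        "strings"
        "sync/atomic"
        "time"
    )

    const idleTimeout = 5 * time.Minute // assumed idle period before exiting

    func main() {
        home, err := os.UserHomeDir()
        if err != nil {
            log.Fatal(err)
        }
        docroot := filepath.Join(home, "www") // assumed layout: ~/www plus ~/www/cgi-bin

        // Systemd passes the first activated socket as file descriptor 3.
        if os.Getenv("LISTEN_FDS") != "1" {
            log.Fatal("expected exactly one socket from systemd socket activation")
        }
        ln, err := net.FileListener(os.NewFile(3, "systemd-socket"))
        if err != nil {
            log.Fatal(err)
        }

        // Track the time of the last request; a watchdog goroutine exits the
        // process once it has been idle for longer than idleTimeout, so that
        // the systemd socket unit can start it again on the next request.
        var lastActivity atomic.Int64
        lastActivity.Store(time.Now().UnixNano())
        go func() {
            for {
                time.Sleep(time.Minute)
                if time.Since(time.Unix(0, lastActivity.Load())) > idleTimeout {
                    log.Print("idle for too long, exiting")
                    os.Exit(0)
                }
            }
        }()

        static := http.FileServer(http.Dir(docroot))
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            lastActivity.Store(time.Now().UnixNano())
            if strings.HasPrefix(r.URL.Path, "/cgi-bin/") {
                // Run the program as a CGI. We are already running as this
                // person's UID, so there's no need for a suexec-style helper.
                name := strings.TrimPrefix(r.URL.Path, "/cgi-bin/")
                h := &cgi.Handler{
                    Path: filepath.Join(docroot, "cgi-bin", filepath.Clean("/"+name)),
                    Root: "/cgi-bin/" + name,
                }
                h.ServeHTTP(w, r)
                return
            }
            static.ServeHTTP(w, r)
        })

        log.Fatal(http.Serve(ln, nil))
    }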
The main (or front-end) web server would still want to have a bunch of features like global rate limiting, since it's the only thing in a position to see aggregate requests across everyone's individual servers. If you wanted to make life more complicated but also potentially more convenient, you could choose different protocols to handle different people's areas. One person could be handled via an HTTP reverse proxy, but another person might be handled through FastCGI because they purely use PHP and that's most convenient for them (provided that their FastCGI server could handle being started on demand and then stopping later).
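As a hypothetical illustration of mixing protocols in the front end, an Apache configuration (using mod_proxy and mod_proxy_fcgi) might contain fragments along these lines; the socket paths, port, and URL layout are all made up, and the exact incantations vary between Apache versions:

    # ~alice gets an ordinary HTTP reverse proxy to her own per-person web
    # server, which listens on a systemd socket-activated Unix socket.
    ProxyPass        "/~alice/" "unix:/run/user-web/alice.sock|http://localhost/~alice/"
    ProxyPassReverse "/~alice/" "http://localhost/~alice/"

    # ~bob only uses PHP, so his area goes straight to his PHP-FPM pool
    # over FastCGI (PHP-FPM pools can be set to spawn workers on demand).
    ProxyPassMatch "^/~bob/(.*\.php)$" "fcgi://127.0.0.1:9001/home/bob/www/$1"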
While I started thinking of this in the context of personal home pages and personal CGIs, as we support on our main web server, you could also use this for having different people and groups manage different parts of your URL hierarchy, or even different virtual hosts (by making the URL hierarchy of the virtual host that was handed to someone be '(almost) everything').
With a certain amount of work you could probably build this today on Linux with systemd (Unix) socket activation, although I don't know what front-end or back-end web server you'd want to use. To me, it feels like there's a certain elegance to the 'everyone gets their own web server running under their own UID, go wild' aspect of this, rather than having to try to make one web server running as one UID do everything.