Some thoughts on why 'inetd activation' didn't catch on

October 12, 2024

Inetd is a traditional Unix 'super-server' that listens on multiple (IP) ports and runs programs in response to activity on them; it dates from the era of 4.3 BSD. In theory inetd can act as a service manager of sorts for daemons like the BSD r* commands, saving them from having to implement things like daemonization, and in fact it turns out that one version of this is how these daemons were run in 4.3 BSD. However, running daemons under inetd never really caught on (even in 4.3 BSD some important daemons ran outside of inetd), and these days it's basically dead. You could ask why, and I have some thoughts on that.

The initial version of inetd only officially supported running TCP services in a mode where each connection ran a new instance of the program (call this the CGI model). On the machines of the 1980s and 1990s, this wasn't a particularly attractive way to run anything but relatively small and simple programs (and ones that didn't have to do much work on startup). In theory you could run TCP services in a mode where they were passed the server socket and then accepted new connections themselves for a while; in practice, no one seems to have really written daemons that supported this. When a daemon's documentation talked about an 'inetd mode', it generally meant the 'run a copy of the program for each connection' mode.

(Possibly some of them supported both modes of inetd operation, but system administrators would pretty much assume that if a daemon's documentation said just 'inetd mode' that it meant the CGI model.)
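
For concreteness, this is roughly how the two modes look in an inetd.conf, using a hypothetical 'mydaemon' service (the exact fields vary a bit between inetd versions, the service name needs an /etc/services entry, and the '-i' flag is whatever the daemon uses for its inetd mode):

    # per-connection ('CGI model'): inetd accept()s each connection and runs
    # a fresh process with the connected socket as stdin/stdout/stderr
    mydaemon  stream  tcp  nowait  nobody  /usr/sbin/mydaemon  mydaemon -i

    # 'wait' mode: inetd hands the listening socket itself to one process
    # (as fd 0) and that process does its own accept()s until it exits
    mydaemon  stream  tcp  wait    nobody  /usr/sbin/mydaemon  mydaemon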

Another issue is that inetd is not a service manager. It will start things for you, but that's it; it won't shut things down for you (although you can get it to stop listening on a port), and it won't tell you what's running (you get to inspect the process list). On Unixes with a System V init system or something like it, running your daemons as standalone things gave you access to start, stop, restart, status, and other service management operations that might even work (depending on the quality of the init.d scripts involved). Since daemons had better usability when run as standalone services, system administrators and others had relatively little reason to push for inetd support, especially in the second mode.

In general, running any important daemon under inetd has many of the same downsides as systemd socket activation of services. As a practical matter, system administrators like to know that important daemons are up and running right away, and they don't have some hidden issue that will cause them to fail to start just when you want them. The normal CGI-like inetd mode also means that any changes to configuration files and the like take effect right away, which may not be what you want; system administrators tend to like controlling when daemons restart with new configurations.

All of this is likely tied to what we could call 'cultural factors'. I suspect that authors of daemons perceived running standalone as the more serious and prestigious option, the one for serious daemons like named and sendmail, and saw inetd activation as at most a secondary feature. If you wrote a daemon that only worked with inetd activation, you'd practically be proclaiming that you saw your program as a low-importance thing. This obviously reinforces itself, to the degree that I'm surprised sshd even has an option to run under inetd.

(While some Linuxes are now using systemd socket activation for sshd, they aren't doing it via its '-i' option.)

PS: There are some services that do still generally run under inetd (or xinetd, often the modern replacement). For example, I'm not sure if the Amanda backup system even has an option to run its daemons as standalone things.


Comments on this page:

By nell at 2024-10-13 17:04:23:

As a practical matter, system administrators like to know that important daemons are up and running right away, and they don't have some hidden issue that will cause them to fail to start just when you want them. […] I suspect that authors of daemons perceived running standalone as the more serious and prestigious option, the one for serious daemons like named and sendmail

My thinking is that these two things could be closely related. Were I writing a daemon, it wouldn't be about "prestige"; I just don't like the idea of being unable to tell, till it's too late, that something isn't working. (I'm the type of person who also wouldn't want to enable memory over-commitment when anything important is involved; it's unfortunate that Linux apparently doesn't make this a per-process or per-cgroup attribute.) With systemd, portability to non-Linux systems would be another concern.

I'd actually rather not write code to open ports, daemonize, and so on. It'd require my program to keep otherwise-unnecessary privilege, for example. I'd much prefer to have some manager open a listening socket and pass it to the program, provided it does so without waiting for a client to connect. (One of the systemd blog posts says adding "WantedBy=multi-user.target" will do this, and that's probably what I'd do; the FD-receiving interface is sane and could be provided by a simple wrapper on non-Linux systems.)
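
As an illustration of how small the receiving side of that FD-passing interface is, here's a rough sketch without libsystemd (hard-wired to an example port, with most error handling omitted; sd_listen_fds() is the real version of the environment check):

    /* Rough sketch of the FD-receiving side of the LISTEN_FDS interface,
       with a fallback to opening our own listening socket when no fd was
       passed; error handling is mostly omitted. */
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    #define LISTEN_FDS_START 3          /* passed fds start at 3 by convention */

    static int get_listen_fd(int port)
    {
        const char *pid = getenv("LISTEN_PID");
        const char *fds = getenv("LISTEN_FDS");

        /* Only trust the passed fd if it was really meant for this process. */
        if (pid && fds && atoi(pid) == getpid() && atoi(fds) >= 1)
            return LISTEN_FDS_START;

        /* Otherwise behave like a standalone daemon and listen ourselves. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sin;
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        sin.sin_port = htons(port);
        bind(fd, (struct sockaddr *)&sin, sizeof(sin));
        listen(fd, 64);
        return fd;
    }

    int main(void)
    {
        int lfd = get_listen_fd(7777);  /* 7777 is just an example port */
        int conn = accept(lfd, NULL, NULL);
        write(conn, "hello\n", 6);      /* stand-in for the real service */
        close(conn);
        return 0;
    }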

By cks at 2024-10-13 23:24:25:

This comment sparks another thought, which should have been obvious to me: if you're writing a daemon, you're going to need and want some way to debug it before you package it all up and deploy it in some service activation framework. Unless you already have good tooling for emulating some sort of socket activation, this pushes towards a self-contained 'daemon' mode, since a debugging version of the daemon mode is just your code not daemonizing.

(And it's probably easier to run your program under a debugger that way, too, since you can directly start it and so on.)

By nell at 2024-10-14 15:14:13:

I guess that's a reasonable point. But for systemd users, it'd likely be almost trivial to run it out of the home directory via their systemd per-user instance, unless it depends on options such as "User=" and "SELinuxContext=" that aren't available in per-user instances. I don't imagine anything would stop a user from running their own inetd instance, either.

I don't think "good" tooling is really needed; the minimal wrapper wouldn't be much more than socket(…), bind(…), listen(…), setenv("LISTEN_FDS", …), and execve(), with some half-assed error handling. Or just use a shell script that exports the variable and runs socat.
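
To make that concrete, such a wrapper might look something like the following sketch (the port is hard-wired purely for illustration, error handling is minimal, and it uses execvp() rather than raw execve() for brevity):

    /* Rough sketch of the minimal wrapper: open a listening TCP socket,
       move it to fd 3, set LISTEN_PID/LISTEN_FDS, and exec the real daemon. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s daemon [args...]\n", argv[0]);
            return 1;
        }

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sin;
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        sin.sin_port = htons(7777);     /* example port, hard-wired */
        bind(fd, (struct sockaddr *)&sin, sizeof(sin));
        listen(fd, 64);

        /* The convention is that passed fds start at 3. */
        dup2(fd, 3);
        if (fd != 3)
            close(fd);

        /* exec() keeps our PID, so setting LISTEN_PID here is valid. */
        char pidbuf[32];
        snprintf(pidbuf, sizeof(pidbuf), "%ld", (long)getpid());
        setenv("LISTEN_PID", pidbuf, 1);
        setenv("LISTEN_FDS", "1", 1);

        execvp(argv[1], argv + 1);
        perror("execvp");
        return 1;
    }

Run as 'wrapper /path/to/daemon args...', this leaves the daemon holding the listening socket on fd 3 with the LISTEN_* variables set, much as systemd would.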

If you publish something with a "proper" socket-listener, people are gonna ask for stuff like AF_UNIX support or binding only to specific IP addresses, and then you're blowing up your argument-parsing and man-page complexity and are expected to write code of a certain quality...

By nell at 2024-10-14 18:45:39:

I'm seeing now that the "minimal wrapper" I described is already provided by systemd-socket-activate, which will simulate an inetd launch if the "--inetd" option is given, or a systemd launch otherwise. It doesn't seem to require that systemd be running; but, due to some epoll references, it's not likely to build for non-Linux systems without some work.

(socat doesn't seem able to pass the listening socket to a sub-process—only the connected socket—but references the systemd tool in its man page.)
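
For reference, typical invocations of that tool look roughly like this (7777 and ./mydaemon are just placeholders):

    # systemd style: pass the listening socket as fd 3, with $LISTEN_FDS set
    systemd-socket-activate -l 7777 ./mydaemon

    # inetd 'nowait' style: accept each connection and run a fresh process
    # with the connected socket on stdin/stdout
    systemd-socket-activate --inetd --accept -l 7777 ./mydaemon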
