I'm looking forward to using systemd's new IP access control features

October 12, 2017

These days, my reaction to hearing about new systemd features is usually somewhere between indifference and irritation (I'm going to avoid giving examples, for various reasons). The new IP access lists feature is a rare exception; as a sysadmin, I'm actually reasonably enthused about it. What makes systemd's version of IP access restrictions special and interesting is that they can be imposed per service, not just globally (and the fact that socket units can have different IP access restrictions than the services implementing them adds extra possibilities).

As a sysadmin, I not infrequently deal with services that either use random ports by default (such as many NFS-related programs) or have an irritating habit of opening up 'control' ports that provide extra access to themselves (looking at what processes are listening on what ports on a typical modern machine can be eye-opening and alarming, especially since many programs don't document their port usage). Dealing with this through general iptables rules is usually too much work to be worth it, even when things don't go wrong; you have to chase down programs, try to configure some of them to use specific ports, hope that the other ports you're blocking are fixed and aren't going to change, and so on.

Because systemd can do these IP access controls on a per-service basis, it promises a way out from all of this hassle. With per-service IP access controls, I can easily configure my NFS services so that regardless of what ports they decide to wander off and use, they're only going to be accessible to our NFS clients (or servers, for client machines). Other services can be locked down so that even if they go wild and decide to open up random control ports, nothing is going to happen because no one can talk to them. And the ability to set separate IP access controls on .socket units and .service units opens up the possibility of doing something close to per-port access control for specific services. CUPS already uses socket activation on our Ubuntu 16.04 machines, so we could configure the IPP port to be generally accessible but then lock down the CUPS .service unit and its daemon so we don't have to worry that someday it will sprout an accessible control port somewhere.
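As a rough sketch of how this could look (using the IPAddressDeny= and IPAddressAllow= directives that systemd 235 introduces; the drop-in path, unit name, and the 192.168.10.0/24 'our NFS clients' network are all made up for illustration), a drop-in to confine an NFS-related service might be:

	# /etc/systemd/system/rpc-statd.service.d/ip-access.conf  (hypothetical)
	[Service]
	# Drop IP traffic from everywhere...
	IPAddressDeny=any
	# ...except localhost and the (made-up) NFS client network.
	IPAddressAllow=localhost 192.168.10.0/24

Because this is attached to the service unit itself, it doesn't matter what ports the service wanders off and uses; the access list travels with the service.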

(There are also uses for denying outbound traffic to some or many destinations but only for some services. This is much harder to do with iptables, and sometimes not possible at all.)
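Since the same IPAddressDeny= and IPAddressAllow= lists apply to packets a service sends as well as receives, the outbound case looks much the same in a sketch (again with an invented unit name and an invented internal network):

	# /etc/systemd/system/some-phone-home-daemon.service.d/ip-access.conf  (hypothetical)
	[Service]
	IPAddressDeny=any
	IPAddressAllow=localhost 10.0.0.0/8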


Comments on this page:

From 193.219.181.222 at 2017-10-12 10:03:39:

I also have mixed feelings about this, but less due to "feature creep" and more due to "they're gonna reinvent netfilter, poorly". Still seems like cgroup-based iptables or nft rules would be a better fit and provide more flexibility. (I think the cgroup match was merged sometime this year?)

At the same time, I agree that it's still much nicer than port-based rules. (Just don't forget that nfsd itself is generally a kernel thread, not a service...)

By James (trs80) at 2017-10-12 11:37:00:

Isn't this just TCP wrappers?

By cks at 2017-10-12 12:20:48:

TCP wrappers requires active support from the program(s) involved, and not all programs support it (especially today). It also only confines inbound requests, not outbound ones.

The Fedora 26 manpage suggests that cgroup support in iptables/nftables is limited at the moment, especially for current cgroups (instead of cgroup2 cgroups). It's not clear how much inbound matching you can do, for example, and inbound matching is what I care most about. Still, being able to match on cgroups is potentially useful and I'll have to remember that it's there.
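(For illustration, and with the caveat that I haven't tested this: the classid-based version of the match looks something like 'iptables -A OUTPUT -m cgroup --cgroup 0x00100001 -j REJECT', where the classid comes from the cgroup v1 net_cls controller; the newer cgroup2 form matches on a cgroup path with --path instead.)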

Written on 12 October 2017.