The easy way to wind up with multiple subnets on a single (V)LAN segment
In theory, a nice proper network is supposed to have only a single IP(v4) subnet running over any given network segment. But this is not actually fully required; if you're sufficiently perverse you can run multiple subnets over the same physical network. Some people are now asking how on earth you would ever get into such a crazy situation. Well, sit back, I have a story for you.
Suppose that you use private subnets and in particular you put each group in your organization in its own subnet (around here, a group is a research group). Often when you start doing this, you will give a group a /24 (because that's the simple approach). But the thing about groups is that sometimes they, well, grow. Some energetic professor will get a grant for some equipment here and another one will get a grant for a cluster there and before you know it, a /24 just isn't enough space.
In a perfectly ideal world you would have allocated all those initial /24s such that they could grow substantially, at least into /22s. In a less than ideal world you just allocated the /24s sequentially and there is no room to expand them. No problem; private IP address space is capacious, so you can just start over by giving a group, say, a /16. But this new /16 is not contiguous with the group's old /24; you can't just expand the segment's subnet mask and be done.
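The non-contiguity problem is easy to see with a little address arithmetic. A sketch using Python's `ipaddress` module, with hypothetical subnets standing in for a real allocation:

```python
import ipaddress

# Hypothetical allocations: the group's original sequentially-assigned /24
# and the new, much larger /16 they are being given.
old = ipaddress.ip_network("10.0.5.0/24")
new = ipaddress.ip_network("10.12.0.0/16")

# The old /24 does not sit inside the new /16, so no amount of widening
# the /24's netmask will merge the two into one subnet.
print(old.subnet_of(new))  # False: the subnets are not contiguous

# The smallest prefix covering both here would be a /8, which would also
# swallow every other group's private subnet.
covering = ipaddress.ip_network("10.0.0.0/8")
print(old.subnet_of(covering) and new.subnet_of(covering))  # True, but far too big
```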
In an ideal world you could arrange with the group to have a flag day for the changeover between their old subnet and their new subnet; on that day, all machines would go down, all IP addresses would be changed, and the group would completely migrate from their /24 to their /16. In the real world the group is going to laugh politely at you when you propose this, because they (rightfully) have much more important things to do with their time than go through a large disruption just for your convenience. Instead you're forced into the obvious step: you just add the new /16 to their existing network segment alongside the old /24. The group will put new machines in the /16 and migrate old machines from the old /24 to the new /16 at their convenience (generally very slowly).
This does have some small drawbacks. The largest one is that all of the group's traffic between their two subnets is making an unnecessary round trip through your router (and if your router is a firewall, through your firewall rules; you'll wind up making a special exemption for their internal traffic). Hopefully there won't be much of it; if there is, you can sometimes use it to motivate the group into moving some machines.
(The obvious workaround for heavy-traffic machines is to give them a second IP address alias on the other subnet, so they know that they can reach it directly.)
Sidebar: easy IP address assignment in the new /16
We generally suggest to groups that the low /24 of the new /16 be reserved for a one to one mapping of machines in the old /24. Groups don't have to convert (or dual-home) machines this way (often they renumber machines during a move in order to (re)organize their network), but it makes a quick conversion or dual-homing easy: just change the network part without changing the host part of the IP.
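This 'keep the host number, change the network' convention is simple enough to express in a few lines. A sketch with Python's `ipaddress` module, using hypothetical subnets (the function name and addresses are illustrative, not from any real setup):

```python
import ipaddress

def map_to_new_subnet(old_ip, new_net):
    """Map a host in the old /24 to the same host number in the
    low /24 of the new /16 (the convention suggested above)."""
    # The last octet of the old address is the host number.
    host = int(ipaddress.ip_address(old_ip)) & 0xFF
    return ipaddress.ip_address(int(new_net.network_address) + host)

# Hypothetical old /24 host and new /16 allocation.
new16 = ipaddress.ip_network("10.12.0.0/16")
print(map_to_new_subnet("10.0.5.37", new16))  # 10.12.0.37
```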
The systemd dependency problem
When I wrote about what systemd got right I also mentioned in passing that it wasn't without flaws. It's time (and really past time) to start doing some elaboration on that, and I'm going to start with systemd's problem with documenting dependencies.
Systemd is fundamentally a dependency-based init mechanism; it starts things and orders startup based on what service needs what other service (among other things this determines what can start in parallel). All of this is well and good but it means that in order to write good systemd units you need documentation on how startup is structured, which is to say the standard dependencies involved. Your unit almost certainly can't run at arbitrary times, so in order to make it run at the right time (neither too early nor unnecessarily late) you need to know what to make it depend on. You may also need to know how to tell systemd that your unit provides a particular generic service like, for example, DNS lookups.
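To make this concrete, here is a sketch of the kind of unit a sysadmin typically wants to write; the service name and binary path are hypothetical, and the directives are the standard ones from systemd.unit(5) and systemd.special(7):

```ini
# /etc/systemd/system/mydaemon.service -- hypothetical local service.
[Unit]
Description=Example local daemon
# Don't start until the network is actually configured (this only works
# if something like NetworkManager-wait-online.service is enabled).
Wants=network-online.target
After=network-online.target
# If the daemon does DNS lookups at startup, also order it after the
# point where name resolution is supposed to be available.
After=nss-lookup.target

[Service]
ExecStart=/usr/local/sbin/mydaemon

[Install]
WantedBy=multi-user.target
```

The hard part is knowing which of these targets exist, which ones actually mean what you hope they mean, and which ones are no-ops on your particular distribution; that is exactly the documentation gap.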
I'm going to be blunt: systemd in Fedora is falling down on this today. Part of this is systemd's fault and part of this is Fedora's fault (since at least part of how systemd units are ordered is up to the distribution and thus up to the distribution to document). Today it is fairly hard for a sysadmin who wants to write a good unit that works right to find out what they are supposed to depend on under what circumstances (and I speak from experience). Often it isn't obvious that you're missing something and that as a result your unit is only working through coincidence on your particular system (and may stop working right if something changes how systemd orders things).
(This problem is in fact so difficult that some Fedora-supplied packages get it wrong. For example their package for unbound does not assert that it provides DNS lookups, which means that even if other units properly say that they want DNS lookups they may well start before unbound does because systemd doesn't know any better.)
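For the record, the mechanism unbound's unit is missing is the one described in systemd.special(7): a service that provides name resolution is supposed to pull in nss-lookup.target and order itself before it, so that units ordered After=nss-lookup.target really do wait for a working resolver. A sketch of the additions:

```ini
# Additions a local resolver service (such as unbound) should carry in
# its [Unit] section so that 'After=nss-lookup.target' elsewhere works.
[Unit]
# Pull the target into the startup transaction...
Wants=nss-lookup.target
# ...and make sure this service is started before the target is reached.
Before=nss-lookup.target
```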
Systemd provides some documentation for this in the systemd.special(7) manpage but it's relatively sketchy and incomplete (from my perspective); it lists things but doesn't provide much guidance on how you want to structure your units and what you want to depend on in practice. Also, the section on depending on the network is actively enraging for many server administrators. It's very nice for systemd to tell developers that they should make their programs flexible in the face of networks appearing and disappearing, but on servers programs generally aren't, and system administrators have to deal with many, many programs that have not been updated to behave the way systemd wants. Worse, sometimes there is simply no rational way to do such an update.
(A related lack is that the systemd documentation does not clearly spell out how to tell it that your unit implements a particular target. Apparently the way to do this is to specify both Before=something.target and Wants=something.target; this is charmingly indirect, to put it one way.)
Related to this lack of documentation is a lack of tools for determining service dependency relations; systemd provides neither a 'what requires this' nor a 'what is required by this' query operation. Both are important in practice, especially if you're trying to audit your system to ensure that its behavior is predictable and correct (ie, that you have the dependencies right so that everything deterministically starts when it should). Note that looking at the actual boot order is not sufficient for this because you don't know if the boot order is a product of actual dependencies or just how systemd decided to do things this time around.
(This collection of issues bit me in my recent upgrade to Fedora 18. Units that had been starting perfectly fine in Fedora 17 suddenly started not working; it turned out that they were missing dependencies and had only been working in Fedora 17 by coincidence. Trying to properly depend on DNS lookups being ready to go led me to discover the issue with unbound's own ordering.)
Related to this is the issue of missing dependencies. Systemd's selection of standard dependency targets and things that implement them is relatively sparse. Systemd provides neither tools nor good documentation for adding more, including targets that server administrators would like in practice. This is especially striking in the case of networking targets; if systemd is going to throw us to the wolves (as it does today), I would like it to provide some tools to at least help us implement our own meaningful targets (and yes, we're going to need them in practice). Even more useful would be standard fine-grained targets that systemd automatically notices and advertises (for example, 'IP address X has been assigned to an interface').
(I suspect that the best way to do this would be for systemd to support dependencies on DBus information, since I believe that such information is already broadcast across DBus for interested parties.)
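In the absence of such standard fine-grained targets, the hand-rolled version looks something like the following sketch: a custom target plus a oneshot service that waits for the condition and pulls the target in. The target name and the wait-for-ip helper are entirely hypothetical, stand-ins for whatever a site would actually build:

```ini
# /etc/systemd/system/ip-10.12.0.1.target -- hypothetical local target
# meaning 'this specific IP address has been assigned'.
[Unit]
Description=IP address 10.12.0.1 is assigned

# Paired with a oneshot service, e.g.
# /etc/systemd/system/wait-ip-10.12.0.1.service:
#
# [Unit]
# Before=ip-10.12.0.1.target
# Wants=ip-10.12.0.1.target
# [Service]
# Type=oneshot
# ExecStart=/usr/local/sbin/wait-for-ip 10.12.0.1   (hypothetical helper)
```

Units that need that address up then say After=ip-10.12.0.1.target. This works, but it's exactly the kind of boilerplate that systemd itself is better positioned to provide.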