Wandering Thoughts archives

2017-07-31

Using policy based routing to isolate a testing interface on Linux

The other day I needed to do some network bandwidth tests to and from one of our sandbox networks and wound up wanting to use a spare second network port on an already-installed test server that was fully set up on our main network. This calls for policy based routing to force our test traffic to flow only over the sandbox network, so we avoid various sorts of asymmetric routing situations (eg). I've used Linux's policy based routing and written about it here before, but surprisingly not in this specific situation; it's all been in different and more complicated ones.

So here is what I need for a simple isolated testing interface, with commentary so that when I need this again I don't just have the commands, I can also re-learn what they're doing and why I need them.

  • First we need to bring up the interface itself. For quick testing I just use raw ip commands:

    ip link set eno2 up
    ip addr add dev eno2 172.21.1.200/16
    

  • We need a routing table for this interface's routes and a routing policy rule that forces use of them for traffic to and from our IP address on eno2.

    ip route add 172.21.0.0/16 dev eno2 table 22
    ip route add default via 172.21.254.254 table 22
    
    ip rule add from 172.21.1.200 iif lo table 22 priority 6001
    

    We need the local network route for good reasons. The choice of table number is arbitrary.

By itself this is good enough for most testing. Other hosts can connect to your 172.21.1.200 IP and that traffic will always flow over eno2, as will outgoing connections that you specifically bind to the 172.21.1.200 IP address using things like ping's -I argument or Netcat's -s argument. You can also talk directly to things on 172.21/16 without having to explicitly bind to 172.21.1.200 first (ie you can do 'ping 172.21.254.254' instead of needing 'ping -I 172.21.1.200 172.21.254.254').
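
(If you want to double check where a particular bit of traffic will go, 'ip route get' will tell you what the kernel would do for a given destination and source; a couple of illustrative checks, using the addresses from above:)

    # ask the kernel how it would route traffic sourced from our eno2 address
    ip route get 172.21.254.254 from 172.21.1.200
    # and actually push some packets over eno2
    ping -c 3 -I 172.21.1.200 172.21.254.254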

However, there is one situation where traffic will flow over the wrong network, which is if another host in 172.21/16 attempts to talk to your public IP (or if you try to talk to 172.21/16 while specifically using your public IP). Their outbound traffic will come in on eno1, but because your machine knows that it can talk to them directly on eno2 it will just send its return traffic that way (probably with odd ARP requests). What we want is to use the direct connection to 172.21/16 in only two cases. First, when the source IP is set to 172.21.1.200 in some way; this is already covered. Second, when we're generating outgoing traffic locally and we have not explicitly picked a source IP; this allows us to do just 'ping 172.21.254.254' and have it flow over eno2 the way we expect. There are a number of ways we could do this, but it turns out that the simplest way goes as follows.

  • Remove the global routing table entry for eno2:

    ip route del 172.21.0.0/16 dev eno2
    

    (This route in the normal routing table was added automatically when we configured our address on eno2.)

  • Add a new routing table with the local network route to 172.21/16 and use it for outgoing packets that have no source IP assigned yet:

    ip route add 172.21.0.0/16 dev eno2 src 172.21.1.200 table 23
    
    ip rule add from 0.0.0.0 iif lo lookup 23 priority 6000
    

    The nominal IP address 0.0.0.0 is INADDR_ANY (cf). INADDR_ANY is what the socket API uses for 'I haven't set a source IP', and so it's both convenient and sensible that the kernel reuses it during routing as 'no source IP assigned yet' and lets us match on it in our rules.

(Since our two rules here should be non-conflicting, we theoretically could use the same priority number. I'm not sure I fully trust that in this situation, though.)

You can configure up any number of isolated testing interfaces following this procedure. Every isolated interface needs its own separate table of routes, but table 23 and its direct local routes are shared between all of them.
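
(As an illustration of this, here is roughly what a second isolated testing interface would look like. The interface name, network, and gateway below are invented for the example; the point is that the new interface gets its own table 24, while its direct local route goes into the shared table 23, where the existing priority 6000 rule already covers it.)

    ip link set eno3 up
    ip addr add dev eno3 172.22.1.200/16
    # drop the automatically added route from the main table, as before
    ip route del 172.22.0.0/16 dev eno3

    # this interface's own routes and its own rule
    ip route add 172.22.0.0/16 dev eno3 table 24
    ip route add default via 172.22.254.254 table 24
    ip rule add from 172.22.1.200 iif lo table 24 priority 6002

    # its direct local route joins eno2's in the shared table 23
    ip route add 172.22.0.0/16 dev eno3 src 172.22.1.200 table 23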

IsolatingTestingInterface written at 22:59:19

2017-07-16

Why upstreams can't document their program's behavior for us

In reaction to SELinux's problem of keeping up with app development, one obvious suggestion is to have upstreams do this work instead. A variant of this idea is what DrScriptt suggested in a comment on that entry:

I would be interested in up stream app developers publishing things about their application, including what it should be doing. [...]

Setting aside the practical issue that upstream developers are not interested in spending their time on this, I happen to believe that there are serious and probably unsolvable problems with this idea even in theory.

The first issue is that the behavior of a sophisticated modern application (which is what we most care about confining well) is actually a composite of at least four different sources of behavior and behavior changes: the program itself, the libraries it uses, how a particular distribution configures and builds both of these, and how individual systems are configured. Oh, and as covered, this is really not 'the program' and 'the libraries', but 'the version of the program and the libraries used by a particular distribution' (or whatever versions were used when the app was built locally).

In most Linux systems, even simple looking operations can go very deep here. Does your program call gethostbyname()? If so, what files it will access and what network resources it attempts to contact cannot be predicted in advance without knowing how nsswitch.conf (and other things) are configured on the specific system it's running on. The only useful thing that the upstream developers can possibly tell you is 'this calls gethostbyname(), you figure out what that means'. The same is true for calls like getpwuid() or getpwnam(), as well as any number of other things.
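
(One way to see this concretely is to watch what the NSS 'hosts' machinery actually does on a given system; getent goes through the same lookup paths that gethostbyname() does. The hostname here is just an example.)

    # which files get opened and which servers get contacted is determined
    # by this system's nsswitch.conf (and resolv.conf, nscd, and so on),
    # not by the calling program
    strace -f -e trace=open,openat,connect getent hosts www.example.com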

The other significant issue is that when prepared by an upstream, this information is essentially a form of code comments. Without a way for upstreams to test and verify the information, it's more or less guaranteed to be incomplete and sometimes outright wrong (just as comments are incomplete and periodically wrong). So we're asking upstreams to create security sensitive documentation that can be predicted in advance to be partly incorrect, and we'd also like it to be detailed and comprehensive (since we want to use this information as the basis for a fine-grained policy on things like what files the app will be allowed access to).

(I'm completely ignoring the very large question of what format this information would be in. I don't think there's any current machine-readable format that would do, which means either trying to invent a new one or having people eventually translate ad-hoc human readable documentation into SELinux policies and other things. Don't expect the documentation to be written with specification-level rigor, either; if nothing else, producing that grade of documentation is fairly expensive and time-consuming.)

AppBehaviorDocsProblem written at 01:18:05

2017-07-14

SELinux's problem of keeping up with general Linux development

Fedora 26 was released on Tuesday, so today I did my usual thing of doing a stock install of it in a virtual machine as a test, to see how it looks and so on. Predictable things ensued with SELinux. In the resulting Twitter conversation, I came to a realization:

It seems possible that the rate of change in what programs legitimately do is higher than the rate at which SELinux policies can be fixed.

Most people who talk about SELinux policy problems, myself included, usually implicitly treat developing SELinux policies as a static thing: if only one could understand the program's behavior well enough, one could write a fully correct policy and be done with it, and the only problem is that fully understanding program behavior is very hard.

However, this is not actually true. In reality, programs not infrequently change their (legitimate) behavior over time as new versions are developed and released. There are all sorts of ways this can happen; there are new features in the program, changes to how the program itself works, changes in how libraries the program uses work, changes in what libraries the program uses, and so on. When these changes in behavior happen (at whatever level and for whatever reason), the SELinux policies need to be changed to match them in order for things to still work.

In effect, the people developing SELinux policies are in a race with the people developing the actual programs, libraries, and so on. In order to end up with a working set of policies, the SELinux people have to be able to fix them faster than upstream development can break them. It would certainly be nice if the SELinux people could win this race, but I don't think it's at all guaranteed. Certainly with enough churn in enough projects, you could wind up in a situation where the SELinux people simply can't work fast enough to produce a full set of working policies.

As a corollary, this predicts that SELinux should work better in a distribution environment that rigidly limits change in program and library versions than in one that allows relatively wide freedom for changes. If you lock down your release and refuse to change anything unless you absolutely have to, you have a much higher chance of the SELinux policy developers catching up to the (lack of) changes in the rest of the system.

This is a potentially more pessimistic view of SELinux's inherent complexity than I had before. Of course I don't know if SELinux policy development currently is in this kind of race in any important way. It's certainly possible that SELinux policy developers aren't having any problems keeping up with upstream changes, and what's really causing them these problems is the inherent complexity of the job even for a static target.

One answer to this issue is to try to change who does the work. However, for various reasons beyond the scope of this entry, I don't think that having upstreams maintain SELinux policies for their projects is going to work very well even in theory. In practice it's clearly not going to happen (cf) for good reasons. As is traditional in the open source world, the people who care about some issue get to be the ones to do the work to make it happen, and right now SELinux is far from a universal issue.

(Since I'm totally indifferent about whether SELinux works, I'm not going to be filing any bugs here. Interested parties who care can peruse some logs I extracted.)

SELinuxCatchupProblem written at 01:19:14

2017-07-10

Ubuntu's 'Daily Build' images aren't for us

In response to my wish for easily updating the packages on Ubuntu ISO images, Aneurin Price brought up jigdo. In researching what Jigdo is, I wound up running into a tantalizing mention of Ubuntu daily builds (perhaps from here). This sent me off to Internet searches and eventually I wound up on Ubuntu's page for the Ubuntu Server 16.04.2 LTS (Xenial Xerus) Daily Build. This looked like exactly what we wanted, already pre-built for us (which is perfectly fine by me, I'm happy to have someone else do the work of putting in all of the latest package updates for us).

However, when I went looking around Ubuntu's site I couldn't find any real mention of these daily builds, including such things as what they were for, how long they got updated, and so on. That made me a bit nervous, so I pulled down the latest 'current' 16.04 server build and took a look inside the ISO image. Unfortunately I must report that it turns out to not be suitable for what we want, ironically because it has packages that are too fresh. Well, a package; all I looked at was the kernel image. At the moment, the current daily ISO has a kernel package that is marked as being '4.4.0-85', while the latest officially announced and released Ubuntu kernel is 4.4.0-83. We may like having current updates in our install ISOs, but we draw the line at future updates that are still presumably in testing and haven't been officially released (and may never be, if some problem is found or they're replaced by even newer ones).
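
(Poking around inside an ISO image to check this sort of thing is straightforward; the image name and the pool path here are just illustrative, since the exact layout can vary.)

    mount -o loop,ro xenial-server-amd64.iso /mnt
    # the installer's .debs live under pool/, including the kernel packages
    find /mnt/pool -name 'linux-image-*.deb'
    umount /mnt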

To be clear, I'm not blaming Ubuntu. They do daily builds for not-yet-released Ubuntu versions, which are obviously 'don't use these on anything you care about', so there is no particular reason why the daily builds for released Ubuntu versions would be any different (and there are perfectly good reasons for wanting the very latest test packages in a bundle). I was just hopeful when I found this site, so now I'm reporting a negative result.

PS: I'm just guessing as to why this image has kernel 4.4.0-85. As mentioned, I haven't found much information from Ubuntu on what these daily builds are about (for already-released versions), and I don't know too much about how potential updates flow through Ubuntu's work processes and so on. I did find this page on their kernel workflow and its link to this report page, and also this bug that's tracking 4.4.0-85 and its changelog.

UbuntuDailyISOsNotForUs written at 01:32:52

2017-07-09

Why we're not currently interested in PXE-based Linux installs

In theory, burning Ubuntu install DVDs (or writing USB sticks) and then booting servers from them in order to do installs is an old-fashioned and unnecessary thing. One perfectly functional modern way is to PXE-boot your basic installer image, go through whatever questions your Ubuntu install process needs to ask, and then likely have the installer get more or less everything over the network from regular Ubuntu package repositories (or perhaps a local mirror). Assuming that it works, you might as well enable the Ubuntu update repositories as well as the basic ones, so that you get the latest versions of packages right from the start (which would deal with my wish for easily updated Ubuntu ISO images).

We don't do any sort of PXE or network installs, though, and we probably never will. There are a number of reasons for this. To start with, PXE network booting probably requires a certain amount of irritating extra setup work for each such machine to be installed, for example to add its Ethernet address to a DHCP server (which requires actually getting said Ethernet address). Ways around this are not particularly appealing, because they either require running an open DHCP server on our primary production network (where most of our servers go) or contriving an entire second 'install network' sandbox and assuming that most machines to be installed will have a second network port. It also requires us to run a TFTP server somewhere to maintain and serve up PXE images.

(This might be a bit different if we used DHCP for our servers, but we don't; all of our servers have static IPs.)
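
(To make the per-machine fiddling concrete, if you handled both the DHCP and TFTP side with something like dnsmasq, the setup would look roughly like the following. The MAC address, IP, and paths are invented, and this is a sketch rather than a tested configuration.)

    # hypothetical /etc/dnsmasq.d/pxe-install.conf
    enable-tftp
    # tftp-root holds pxelinux.0 and the installer files
    tftp-root=/srv/tftp
    # only answer DHCP for hosts listed explicitly below
    dhcp-range=172.30.0.0,static
    dhcp-boot=pxelinux.0
    # one of these lines per machine to be installed
    dhcp-host=00:11:22:33:44:55,172.30.0.50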

Next, I consider it a feature that you can do the initial install of a machine without needing to do much network traffic, because it means that we can install a bunch of machines in parallel at more or less full speed. All you need is a bunch of prepared media (and enough DVD readers, if we're using DVDs). As a purely pragmatic thing this also vastly speeds up my virtual machine installs, since my 'DVD' is actually an ISO image on relatively fast local disks. Even a local Ubuntu mirror doesn't fully help here unless we give it a 10G network connection and a beefy, fast disk system (and we're not going to do that).

(We actually have a local Ubuntu mirror that we do package upgrades and extra package installs from in the postinstall phase of our normal install process. I've seen some signs that it may be a chokepoint when several machines are going through their postinstall process at once, although I'd need to take measurements to be sure.)

Finally, I also consider it a feature that a server won't boot into the installer unless there is physical media plugged into it. Even with an installer that does nothing until you interact with it (and we definitely don't believe in fully automated netboot installs), there are plenty of ways for this to go wrong. All you need is for the machine to decide to prioritize PXE-booting higher than its local drives one day and whoops, your server is sitting dead in the installer until you can come by in person to fix that. On the flipside, having a dedicated 'install network' sandbox does deal with this problem; a machine can't PXE boot unless it's physically connected to that network, and you'd obviously disconnect machines after the install has finished.

(I'm going to assume that the Ubuntu network install process can deal with PXE-booting from one network port but configuring your real IP address on another one and then not configuring the PXE boot port at all in the installed system. This may be overly generous.)

The ultimate reason probably comes down to how often we install machines. If we were (re)installing lots of servers reasonably often, it might be worth dealing with all of these issues so that we didn't have to wrangle media (and DVD readers) all the time and we'd get a faster install overall under at least some circumstances. Our work in learning all about PXE booting, over-the-network Ubuntu installs, and so on, and building and maintaining the necessary infrastructure would have a real payoff. But on average we don't install machines all that often. Our server population is mostly static, with new arrivals being relatively rare and reinstalls of existing servers being uncommon. This raises the cost and hassles of a PXE netboot environment and very much reduces the payoff from setting one up.

(I was recently installing a bunch of machines, but that's a relatively rare occurrence.)

WhyNotPXEInstalls written at 01:11:30

2017-07-08

I wish you could easily update the packages on Ubuntu ISO images

Our system for installing Ubuntu machines starts from a somewhat customized Ubuntu ISO image (generally burned onto a DVD, although I want to experiment with making it work on a USB stick) and proceeds through some post-install customization scripts. One of the things that these scripts do is apply all of the accumulated Ubuntu updates to the system. In the beginning, when an Ubuntu LTS release is fresh and bright and new, this update process doesn't need to do much and goes quite fast. As time goes by, this changes. With 16.04 about a year old by now, applying updates requires a significant amount of time on real hardware (especially on servers without SSDs).

Ubuntu does create periodic point updates for their releases, with updated ISO images; for 16.04, the most recent is 16.04.2, created in mid-February. But there's still a decent number of updates that have accumulated since then. What I wish for is a straightforward way for third parties (such as us) to create an ISO image that included all of the latest updates, and to do so any time they felt like it. If we could do this, we'd probably respin our install images on a regular basis, which would be good for other reasons as well (for example, getting regular practice with the build procedure, which is currently something we only do once every two years as a new LTS release comes out).

There is an Ubuntu wiki page on Install CD customization, with a section on adding extra packages, but the procedure is daunting and it's not clear if it's what you do if you're updating packages instead of adding new ones. Plus, there's no mention of a tool that will figure out and perhaps fetch all of the current updates for the set of packages on the ISO image (I suspect that such a tool exists, since it's so obvious a need). As a practical matter it's not worth our time to fight our way through the resulting collection of issues and work, since all we'd be doing is somewhat speeding up our installs (and we don't do that many installs).

Sidebar: Why this is an extra pain with Ubuntu (and Debian)

The short version is that it is because of how Debian and thus Ubuntu have chosen to implement package security. In the RPM world, what gets signed is the individual package and any collection of these packages is implicitly trusted. In the Debian and Ubuntu world, what generally gets signed is the repository metadata that describes a pool of packages. Since the metadata contains the cryptographic checksums of all of the packages, the packages are implicitly protected by the metadata's signature (see, for example, Debian's page on secure apt).

There are some good reasons to want signed repository metadata (also), but in practice it creates a real pain point for including extra packages or updating the packages. In the RPM world, any arbitrary collection of signed packages is perfectly good, so you can arbitrarily update an ISO image with new official packages (which will all be signed), or include extra ones. But in the Debian and Ubuntu world, changing the set of packages means that you need new signed metadata, and that means that you need a new key to sign it with (and then you need to get the system to accept your key).
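
(For a sense of what 'new signed metadata' means in practice, this is roughly the shape of it once you've unpacked the ISO into a working directory and have a GPG key of your own; the paths and key name are made up, and building an actual bootable ISO involves more steps than this.)

    cd iso-root
    # regenerate the package index for the updated pool of .debs
    apt-ftparchive packages pool > dists/xenial/main/binary-amd64/Packages
    gzip -kf dists/xenial/main/binary-amd64/Packages
    # regenerate the Release file, which carries the checksums of the indexes...
    apt-ftparchive release dists/xenial > dists/xenial/Release
    # ...and sign it with our own key, which installs must then be told to trust
    gpg --default-key 'Our Local Key' --clearsign -o dists/xenial/InRelease dists/xenial/Release
    gpg --default-key 'Our Local Key' -abs -o dists/xenial/Release.gpg dists/xenial/Release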

UbuntuISOPackageUpdate written at 00:07:16
