Using policy based routing to isolate a testing interface on Linux
The other day I needed to do some network bandwidth tests to and from one of our sandbox networks and wound up wanting to use a spare second network port on an already-installed test server that was fully set up on our main network. This calls for policy based routing to force our test traffic to flow only over the sandbox network, so we avoid various sorts of asymmetric routing situations (eg). I've used Linux's policy based routing and written about it here before, but surprisingly not in this specific situation; it's all been in different and more complicated ones.
So here is what I need for a simple isolated testing interface, with commentary so that when I need this again I don't have just the commands, I also can re-learn what they're doing and why I need them.
- First we need to bring up the interface itself. For quick
  testing I just use raw ip commands:

    ip link set eno2 up
    ip addr add dev eno2 172.21.1.200/16
- We need a routing table for this interface's routes and a
  routing policy rule that forces use of them for traffic to
  and from our IP address on it:

    ip route add 172.21.0.0/16 dev eno2 table 22
    ip route add default via 172.21.254.254 table 22
    ip rule add from 172.21.1.200 iif lo table 22 priority 6001

  We need the local network route in table 22 so that traffic using
  this table reaches other hosts on 172.21/16 directly, instead of
  being bounced through the gateway. The choice of table number is
  arbitrary.
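As a quick sanity check you can list what you've just set up (a sketch using the interface, addresses, and table number from this example):

```shell
# Our policy rule should appear, listed at its priority:
ip rule show
# Table 22 should contain the local network route and the default route:
ip route show table 22
# And the interface itself should be up with our address on it:
ip -4 addr show dev eno2
```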
By itself this is good enough for most testing. Other hosts can
connect to your 172.21.1.200 IP and that traffic will always flow
over eno2, as will outgoing connections that you specifically
bind to the 172.21.1.200 IP address using things like ping's -I
argument or Netcat's -s argument. You can also talk directly to
things on 172.21/16 without having to explicitly bind to 172.21.1.200
first (ie you can do 'ping 172.21.254.254' instead of needing
'ping -I 172.21.1.200 172.21.254.254').
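One way to see which path the kernel will actually pick is 'ip route get', which reports the result of a route lookup for a hypothetical packet (a sketch; the exact output format varies between iproute2 versions):

```shell
# A lookup with our test address as the source should match the
# priority 6001 rule and come back 'dev eno2' via table 22's routes.
ip route get 172.21.254.254 from 172.21.1.200
# The same lookup with no source specified; at this stage of the
# setup it uses the normal routing table's connected route.
ip route get 172.21.254.254
```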
However, there is one situation where traffic will flow over the
wrong network, which is if another host in 172.21/16 attempts to
talk to your public IP (or if you try to talk to 172.21/16 while
specifically using your public IP). Their outbound traffic will
come in on eno1, but because your machine knows that it can talk
to them directly on eno2 it will just send its return traffic
that way (probably with odd ARP requests).
What we want is to use the direct connection to 172.21/16 in only
two cases. First, when the source IP is set to 172.21.1.200 in some
way; this is already covered. Second, when we're generating outgoing
traffic locally and we have not explicitly picked a source IP; this
allows us to do just 'ping 172.21.254.254' and have it flow over
eno2 the way we expect. There are a number of ways we could do
this, but it turns out that the simplest way goes as follows.
- Remove the global routing table entry for 172.21/16:

    ip route del 172.21.0.0/16 dev eno2

  (This route in the normal routing table was added automatically
  when we configured our address on eno2.)
- Add a new routing table with the local network route to 172.21/16
  and use it for outgoing packets that have no source IP assigned:

    ip route add 172.21.0.0/16 dev eno2 src 172.21.1.200 table 23
    ip rule add from 0.0.0.0 iif lo lookup 23 priority 6000

  The nominal IP address 0.0.0.0 (INADDR_ANY) is what the socket
  API uses for 'I haven't set a source IP', and so it's both
  convenient and sensible that the kernel reuses it during routing
  as 'no source IP assigned yet' and lets us match on it in our
  rules.
(Since our two rules here should be non-conflicting, we theoretically could use the same priority number. I'm not sure I fully trust that in this situation, though.)
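After this step, a route lookup with no source address should resolve through table 23 and pick up 172.21.1.200 as its source (a sketch; exact output formatting depends on your iproute2 version):

```shell
# With no 'from', the lookup matches the priority 6000 rule; the
# output should include 'dev eno2 src 172.21.1.200'.
ip route get 172.21.254.254
# A destination outside 172.21/16 still falls through to the main
# routing table and goes out the normal way.
ip route get 8.8.8.8
```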
You can configure up any number of isolated testing interfaces following this procedure. Every isolated interface needs its own separate table of its own routes, but table 23 and its direct local routes are shared between all of them.
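Pulled together in one place, the whole procedure looks like this (a sketch; the shell variables are my own invention to parameterize the interface, addresses, and table numbers from this entry, and the /16 prefix length is hard-coded to match):

```shell
#!/bin/sh
# Set up an isolated testing interface via policy based routing,
# following the steps described above. Must be run as root.
IFACE=eno2            # the spare test interface
ADDR=172.21.1.200     # our address on the sandbox network
NET=172.21.0.0/16     # the sandbox network
GW=172.21.254.254     # the sandbox network's gateway
TBL=22                # per-interface routing table (arbitrary number)
SRCTBL=23             # table shared by all isolated interfaces

# Bring the interface up with our address, then remove the connected
# route that this automatically added to the normal routing table.
ip link set "$IFACE" up
ip addr add dev "$IFACE" "$ADDR/16"
ip route del "$NET" dev "$IFACE"

# Per-interface table: local network route plus default route, used
# for traffic to and from our address on this interface.
ip route add "$NET" dev "$IFACE" table "$TBL"
ip route add default via "$GW" table "$TBL"
ip rule add from "$ADDR" iif lo table "$TBL" priority 6001

# Shared table for locally generated traffic that has no source IP
# assigned yet, so a plain 'ping' to the sandbox network works.
ip route add "$NET" dev "$IFACE" src "$ADDR" table "$SRCTBL"
ip rule add from 0.0.0.0 iif lo lookup "$SRCTBL" priority 6000
```

For a second isolated interface you would repeat everything except the last two lines with a new table number, and only add another 'ip route ... table 23' entry for its network.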
Link: How does "the" X11 clipboard work?
X11: How does “the” clipboard work? (via) is a technical walk through the modern X11 selection system, one that winds up discussing things at the level of the X protocol and Xlib, with helpful code examples. I learned some quite useful things in the process, for example how to use xclip to query a selection to find out what formats it's available in.
(Technical details about X selections are relevant to me because I use a program that deals with them and which I'd like to see do so more conveniently.)
Modern web page design and superstition
In yesterday's entry I said some deeply cynical things about people who design web pages with permanently present brand headers and sharing-links footers (or just permanent brand-related footers in general). I will condense these cynical things to the following statement:
Your page design, complete with its intrusive elements and all, shows what you really care about.
As the logic goes, if you actually cared about the people reading your content, you wouldn't have constantly present, distracting impediments to their reading. You wouldn't have things that got in the way or obscured parts of the text. If you do have articles that are actually overrun with branding and sharing links and so on, the conclusion to draw is the same as when a page of writing on a 'news' site is overrun by a clutter of ads. In both cases, the content is simply bait and the real reason the page exists is the ads or the branding.
Although it might be hard to believe, I'm actually kind of an optimist. So my optimist side says that while this cynical view of modern page design is plausible, I don't think it's universally true. Instead I think that what is going on some of the time is a combination of blindness and superstition. Or to put it concretely, I believe that most people putting together page design don't do it from first principles; instead, much as with programming, most people copy significant design elements from whatever web page design trend is currently the big, common thing.
(This includes both actual web designers and people who are just putting together some web pages. The latter are much more likely to just copy common design elements for obvious reasons.)
Obviously you don't copy design elements that you have no use for, but most people do have an interest in social media sharing and have some sort of organization or web site identity even if it's not a 'brand' as such (just 'this is the website of <X>' is enough, really). Then we have the massive design push in this direction from big, popular content farm sites that are doing this for entirely cynical reasons, like Medium. You see a lot of big web sites doing this, it's at least more or less applicable to you (and may help boost your writing and site, and who doesn't want that), so you replicate these permanent headers and footers in your site and your designs because it's become just how sites are done. In some cases, it may be made easier due to things like canned design templates that either let you easily turn these on or simply come with them already built in (no doubt partly because that's what a lot of people ask for). Neither you nor other people involved in this ever sit down to think about whether it's a good idea; it's enough that it's a popular design trend that has become pretty much 'how pages should look on modern sites'.
(I'm sure there's a spectrum running between the two extremes. I do drop by some websites where I suspect that social media shares are part of what keeps the site going but I also believe that the person running the site is genuinely well-intentioned.)
I consider this the optimistic take because it means I don't have to believe a fairly large number of people are deeply cynical and are primarily writing interesting articles and operating websites in order to drive branding. Instead they do care about what they seem to and are just more or less reflexively copying from similar sites, perhaps encouraged by positive results for things like social media sharing.