
2016-07-17

A good solution to our Unbound caching problem that sadly won't work

In response to my entry on our Unbound caching problem with local zones, Jean Paul Galea left a comment with the good suggestion of running two copies of Unbound with different caching policies. One instance, with normal caching, would be used to resolve everything but our local zones; the second instance, with no caching, would simply forward queries to either the authoritative server for our local zones or the general resolver instance, depending on what the query was for.

(Everything would be running on a single host, so the extra hops queries and replies take would be very fast.)
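
To make the shape of this concrete, here is a minimal sketch of what the frontend instance's unbound.conf might look like. All of the addresses, port numbers, and the 'cs.toronto.edu' zone name are made-up placeholders, and using zero-sized caches is just my assumption about how you'd approximate 'no caching' here.

  # frontend instance (all addresses and zone names are hypothetical)
  server:
      interface: 192.0.2.53
      do-not-query-localhost: no    # we forward to the other instance over localhost
      msg-cache-size: 0             # assumption: zero-sized caches as a stand-in
      rrset-cache-size: 0           #   for 'do no caching'

  # queries for our local zones go straight to our authoritative server
  forward-zone:
      name: "cs.toronto.edu."
      forward-addr: 192.0.2.10

  # everything else goes to the general caching resolver instance
  forward-zone:
      name: "."
      forward-addr: 127.0.0.1@5353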

In many organizational situations, this is an excellent solution. Even in ours, at first glance it looks like it should work perfectly, because the issue we'd have is pretty subtle. I need to set the stage by describing a bit of our networking.

In our internal networks we have some machines with RFC 1918 addresses that need to be publicly reachable, for example so that research groups can expose a web server on a machine that they run in their sandbox. This is no problem; our firewalls can do 'bidirectional NAT' to expose each such machine on its own public IP. However, this requires that external people see a different IP address for the machine's official name than internal people do, because internal people are behind the BINAT step. This too is no problem, as we have a full 'split horizon' DNS setup.

So let's imagine that a research group buys a domain name for some project or conference and has the DNS hosted externally. In that domain's DNS, they want to CNAME some name to an existing BINAT'd server that they have. Now have someone internally do a lookup on that name, say 'www.iconf16.org':

  1. the frontend Unbound sees that this is a query for an external name, not one of our own zones, so it sends it to the general resolver Unbound.
  2. the general resolver Unbound issues a query to the iconf16.org nameservers and gets back a CNAME to somehost.cs.toronto.edu.
  3. the general resolver must now look up somehost.cs itself and will wind up caching the result, which is exactly what we want to avoid.

This problem happens because DNS resolution is not segmented. Once we hand an outside query to the general resolver, there's no guarantee that it stays an outside query and there's no mechanism I know of to make the resolving Unbound stop further resolution and hot-potato the CNAME back to the frontend Unbound. We can set the resolving Unbound instance up so that it gives correct answers here, but since there's no per-zone cache controls we can't make it not cache the answers.
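
For what it's worth, getting the resolving instance to hand back the internal (split horizon) view of our zones is the easy part; it's the caching that we can't turn off on a per-zone basis. A sketch, again with made-up zone names and addresses:

  # general resolver instance: ask the internal authoritative server for
  # our local zones so that the split-horizon internal view is returned
  stub-zone:
      name: "cs.toronto.edu."
      stub-addr: 192.0.2.10
  # but there is no per-zone cache control, so answers reached this way
  # (eg via an outside CNAME) are still cached under their normal TTLs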

This situation can come up even without split horizon DNS (although split horizon makes it more acute). All you need is for outside people to be able to legitimately CNAME things to your hosts for names in DNS zones that you don't control and may not even know about. If this is forbidden by policy, then you win (and I think you can enforce this by configuring the resolving Unbound to fail all queries involving your local zones).
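
If you do have such a policy, the enforcement I have in mind would be something like the following on the resolving instance (the zone name is a placeholder):

  server:
      # refuse anything in our local zones outright; if an outside CNAME
      # points into them, the query fails instead of being resolved and cached
      local-zone: "cs.toronto.edu." refuse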

sysadmin/UnboundZoneRefreshProblemII written at 23:05:07

DNS resolution cannot be segmented (and what I mean by that)

Many protocols involve some sort of namespace for resources. For example, in DNS this is the names to be resolved, and in HTTP it's URLs (and distinct hosts). One of the questions you can ask about such protocols is this:

When a request enters a particular part of the namespace, can handling it ever require the server to go back outside that part of the namespace?

If the answer is 'no, handling the request can never escape', let's say that the protocol can be segmented. You can divide the namespace up into segments, have different segments handled by different servers, and each server only ever deals with its own area; it will never have to reach over to a part of the namespace that's really handled by another server.

General DNS resolution for clients cannot be segmented this way, even if you only consider the answers that have to be returned to clients and ignore NS records and associated issues. The culprit is CNAME records, which both jump to arbitrary bits of the DNS namespace and force that information to be returned to clients. In a way, CNAME records act similarly to symlinks in Unix filesystems. The overall Unix filesystem is normally segmented (for example at mount points), but symlinks escape that; they mean that looking at /a/b/c/d can actually wind up in /x/y/z.
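
As a concrete illustration with made-up names, an externally hosted zone might contain a record like this, which immediately drags resolution into an entirely different part of the DNS namespace:

  ; a CNAME in a hypothetical externally hosted iconf16.org zone
  www.iconf16.org.    IN    CNAME    somehost.cs.toronto.edu.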

(NS records can force outside lookups but they don't have to be returned to clients, so you can sort of pretend that their information doesn't exist.)

Contrasting DNS with HTTP is interesting here. HTTP has redirects, which are its equivalent of CNAMEs and symlinks, but it still can be segmented because it explicitly pushes responsibility for handling the jump between segments all the way back to the original client. It's as if resolving DNS servers just returned the CNAME and left it up to client libraries to issue a new DNS request for information on the CNAME's destination.

(HTTP servers can opt to handle some redirects internally, but even then there are HTTP redirects which must be handled by the client. Clients don't ever get to slack on this, which means that servers can count on clients supporting redirects. Well, usually.)

I think this protocol design decision makes sense for DNS, especially at the time that DNS was created, but I'm not going to try to justify it here.

tech/DNSResolutionIsNotSegmented written at 01:03:58

