An IPv6 dilemma for us: 'sandbox' machine DNS

September 1, 2014

In our current IPv4 network layout, we have a number of internal 'sandbox' networks for various purposes. These networks all use RFC 1918 private address space and with our split horizon DNS they have entirely internal names (and we have PTR resolution for them and so on). In a so far hypothetical IPv6 future, we would presumably give all of those sandbox machines public IPv6 addresses, because why not (they'd stay behind a firewall, of course). Except that this exposes a little question: what public DNS names do we give them? Especially, what's the result of doing a reverse lookup on one of their IPv6 addresses?
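(To make the reverse lookup question concrete, here's a quick sketch of what an IPv6 reverse name looks like, using Python's ipaddress module; the 2001:db8:: address is just a documentation-prefix placeholder, not one of our addresses.)

    import ipaddress

    # Reverse DNS for IPv6 works by expanding the address to 32 nibbles,
    # reversing them, and looking the result up under ip6.arpa.
    addr = ipaddress.IPv6Address("2001:db8::10")
    print(addr.reverse_pointer)
    # 32 reversed nibble labels, ending in ...8.b.d.0.1.0.0.2.ip6.arpa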

(Despite our split horizon DNS, we do have one RFC 1918 IP address that we've been forced to leak out.)

We can't expose our internal names for these machines because they're not valid global DNS names; they live in an entirely synthetic and private top level zone. We probably don't want to leave their IPv6 addresses without any reverse mapping, because that's unfriendly (on various levels) and is likely to trigger various anti-abuse precautions on remote machines that they try to talk to. I think the only plausible answer is that we must expose reverse and forward mappings under our organizational zone (probably under a subzone, to avoid dealing with name collision issues). One variant of this would be to expose only completely generic, autogenerated name mappings, e.g. 'ipv6-NNN.GROUP.etc' or the like; this would satisfy things that need reverse mappings with minimal work and no leakage of internal names.
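(As an illustration of the generic mapping idea, here's one possible way such names could be derived; the subzone, the group name, and the choice of taking 'NNN' from the low 16 bits of the address are all made-up details for illustration, not a real scheme of ours.)

    import ipaddress

    EXTERNAL_SUBZONE = "sandbox.example.org"   # hypothetical external subzone

    def generic_name(addr, group):
        # Derive a stable 'ipv6-NNN.GROUP.<subzone>' style name from the
        # address by using its low 16 bits as the numeric suffix.
        a = ipaddress.IPv6Address(addr)
        return "ipv6-%d.%s.%s" % (int(a) & 0xFFFF, group, EXTERNAL_SUBZONE)

    print(generic_name("2001:db8:0:1::42", "cs-sandbox"))
    # -> ipv6-66.cs-sandbox.sandbox.example.org  (0x42 is 66)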

If we expose the real names of machines through IPv6 DNS, people will start using those names, for example for granting access to things. This is fine, except that of course these names only work for IPv6. This too is probably okay because most of these machines don't actually have externally visible IPv4 addresses anyway (they get NAT'd to a firewall IP when they talk to the outside world, and of course the NAT IP address is shared between many internal machines).

(There are some machines that are publicly accessible through bidirectional NAT. These machines already have a public name to attach an IPv6 address to, and we could make the reverse lookup work as well.)

Overall, I think the simplest solution is to have completely generic autogenerated IPv6 reverse and forward zones that are only visible in our external DNS view and then add IPv6 forward and reverse DNS for appropriate sandboxes to our existing internal zones. This does the minimal amount of work to pacify external things that want reverse DNS while preserving the existing internal names for machines even when you're using IPv6 with them.

The fly in this ointment is that I have no idea whether the BIND that OpenBSD ships can easily and efficiently autogenerate IPv6 reverse and forward names, given that there are vastly more of them than in a typical autogenerated IPv4 range. If it's a problem, I suppose we can have a script that autogenerates the public IPv6 names for any IPv6 address we add to internal DNS.
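(Here's a rough sketch of what such a script might look like, emitting BIND-style AAAA and PTR records for the external view; the zone name, the naming scheme, and the hardcoded example addresses are all assumptions for illustration, not how we actually do it.)

    import ipaddress

    EXTERNAL_SUBZONE = "sandbox.example.org"   # hypothetical external subzone

    def external_records(addr, group):
        # For one internal IPv6 address, produce the matching external
        # forward (AAAA) and reverse (PTR) records in zone-file syntax.
        a = ipaddress.IPv6Address(addr)
        name = "ipv6-%d.%s.%s" % (int(a) & 0xFFFF, group, EXTERNAL_SUBZONE)
        forward = "%s. IN AAAA %s" % (name, a.compressed)
        reverse = "%s. IN PTR %s." % (a.reverse_pointer, name)
        return forward, reverse

    # hypothetical addresses that would really come from our internal DNS data
    for a in ("2001:db8:0:1::10", "2001:db8:0:1::11"):
        fwd, rev = external_records(a, "cs-sandbox")
        print(fwd)
        print(rev)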
