Wandering Thoughts archives

2014-12-28

How I think DNSSec will have to be used in the real world

Back when I started reading about DNSSec, it seemed popular to assume that DNSSec would work for clients like this: if a particular DNS lookup failed DNSSec checks, the DNS resolver would tell you that the name couldn't be resolved. In light of my personal DNSSec experience and the general security false positives problem, I no longer accept this as a workable approach.

The reality of the world is that there are almost certainly going to be two common reasons for DNSSec failures: DNSSec screwups by the origin domain, and mandatory interposed DNS resolvers that tinker with the answers people get back. Neither is an attack as such, at least not one that users will see as an attack, and so real users will not find it acceptable for DNS lookups to fail in either situation.

Instead, I think that DNSSec results will have to be used as a reputation signal; good DNSSec results are best, while bad DNSSec results are a bit dubious. Many and perhaps most applications will ignore these reputation signals and continue to accept even bad DNSSec results. Some applications will use the DNSSec trust signal as one of a number of heuristic inputs; a browser might shift to treating such resources as less trustworthy, for example. Only a few extremely high security applications will refuse to go on entirely if the DNSSec trust results come back bad (and browsers, as they will usually be configured, are not among them).
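
To make this concrete, here's a rough sketch (Python with the dnspython library, which is purely my illustration) of what using DNSSec as a reputation signal could look like inside an application. It assumes the local resolver is run permissively, so answers still come back but only validated ones carry the AD ('authenticated data') flag; dnspython 2.x spells the lookup call resolve(), older versions call it query().

    import dns.flags
    import dns.resolver

    def lookup_with_trust(name):
        # Ask the local resolver and set the DO bit so it does DNSSEC
        # processing; a validating resolver sets the AD flag on answers
        # that validated cleanly.
        resolver = dns.resolver.Resolver()
        resolver.use_edns(0, dns.flags.DO, 4096)
        answer = resolver.resolve(name, "A")
        validated = bool(answer.response.flags & dns.flags.AD)
        # Hand back the addresses regardless; 'validated' is just one
        # input into whatever trust heuristics the application uses.
        return [rr.address for rr in answer], validated

    addrs, trusted = lookup_with_trust("www.example.org")
    if not trusted:
        print("warning: answer not DNSSEC-validated; treating it as less trustworthy")

An extremely high security application would stop when validated comes back false; a browser, as usually configured, would at most downgrade its trust somewhat.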

(Possibly this is already obvious to people who deal with DNSSec on a regular basis. On the other hand, it doesn't seem to be how Fedora 20's copy of Unbound comes configured out of the box.)

I'm sure that this sloppy approach will enrage a certain number of believers in secure DNS and DNSSec. They will no doubt tell us that the DNS lookup failures are a cost worth paying for secure DNS and that anyways, it's the fault of the people making configuration mistakes and so on, and then grind their teeth at my unwillingness to go along with this. I do not have a pleasant, soft way to put this, so I will just say it straight out: these people are wrong, as usual.

Sidebar: the case of intercepting DNS servers

At one level, a nosy ISP or wireless network that forces all of your DNS lookups to go through its DNS servers and then mangles the results is definitely an attacker. Preventing this sort of mass interception is likely one reason DNSSec exists, just like it's one reason HTTPS exists. However, from a pragmatic perspective it is not what most people will consider an attack; to put it bluntly, most people will want their DNS lookups to still work even in the face of their ISP or their wireless network messing with them a bit, because people would rather accept the messing than not have Internet access at all.

(If none of your DNS lookups work, you don't really have Internet access. Or at least most people don't.)

DNSSecRealWorldUsage written at 03:36:37; Add Comment

2014-12-25

DNSSec in the real world: my experience with DNSSec

In the abstract, I like the idea of secure DNS, because really, who wouldn't? I've read enough criticism of DNSSec the protocol to think that it's not great and maybe should be replaced by something less ungainly, and I've been convinced that it is not the right way to get TLS certificate information, but those are relatively moderate issues (from some jaundiced perspectives).

That's the theory. In practice, things are rather different. In practice the only thing DNSSec has ever done for me is prevent me from seeing websites, generally US government websites (I seem to especially trip over the NIH and NASA's APOD). My normal caching resolver is Unbound, which does some amount of DNSSec checking and enforcing. When I set it up initially, these checks kept stopping me from resolving hostnames, so I turned them down and turned them down, to the point where I've now done my best to turn them off entirely. But apparently my best isn't good enough, and so periodically Unbound refuses to resolve a hostname; I kill it and start my old dnscache instance, and the name immediately resolves.
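
For the record, the sort of Unbound knobs involved look roughly like this (a sketch of unbound.conf directives for illustration, not a copy of my actual configuration; defaults and file locations vary between distributions):

    server:
        # Return answers even when validation says they're bogus; they
        # simply won't carry the AD ('authenticated data') flag.
        val-permissive-mode: yes

        # Skip the DNSSEC chain of trust entirely for specific domains
        # that keep causing trouble.
        domain-insecure: "nasa.gov"

        # The blunt instrument: dropping the validator module from the
        # default "validator iterator" turns validation off completely.
        # module-config: "iterator"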

At one level this is not particularly surprising. DNSSec creates a whole new collection of ways for DNS resolution to screw up, either because people fumble their DNSSec implementation (even temporarily) or because your local resolver can't get a DNSSec reply that it needs in order to consider the answer secure. It's not surprising that this happens every so often.

At another level this is utterly fatal to DNSSec, because of the security false positives problem. For many people, actual DNS interception is vanishingly rare and they perform a very large number of DNS lookups. If the actual signal is very rare, even a very low noise rate will totally swamp it. In other words, every or almost every DNSSec failure people get will be a false positive, and people simply will not tolerate this. As has happened with me, DNSSec will become one of those things that you turn off because it's stopping you from doing things on the Internet that you want to do.
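
To put illustrative numbers on this (they're invented to show the shape of the problem, not measurements of anything):

    # Illustrative base-rate arithmetic: a tiny rate of innocent DNSSEC
    # failures swamps a much rarer rate of genuine interception.
    attack_rate = 1e-7    # assume 1 in 10 million lookups is a real attack
    mishap_rate = 1e-4    # assume 1 in 10,000 lookups fails from screwups

    false_positive_fraction = mishap_rate / (attack_rate + mishap_rate)
    print(f"{false_positive_fraction:.2%} of DNSSEC failures are false positives")
    # prints roughly 99.90%, i.e. essentially every failure a user actually
    # sees is a false positive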

(And in turn this means that vendors cannot make DNSSec something that end users can't turn off. Forcing something that in practice screws your users is an extremely bad idea and it gets you extremely bad reactions.)

MyDNSSecExperience written at 02:02:54; Add Comment

2014-12-20

Unsurprisingly, laptops make bad to terrible desktops

In response to my entry on the security problem for public clients, Jeff Kaufman suggested laptops as an option on the grounds that they already integrate everything into one physical unit. Unfortunately, I don't think this is workable. The core problem is that laptops make terrible desktops, especially in a setting where relatively untrusted people have access to them. This shouldn't surprise anyone, since laptops aren't designed to be desktops.

A typical university desktop is cheap, has a relatively large screen (17" is the minimum entry point these days and it often goes larger), is turned on essentially all the time, and must be physically secured in place. It should not have any easily detached fiddly bits that can be removed and lost, because sooner or later they will be. Ideally it should be possible to set it up in a relatively ergonomic way. Some desktops need more than basic computing power; for example, the desktops of a computing lab are often reasonably capable (because the practical alternative is buying a few very capable servers and those are often really expensive). Partly because of this it's an advantage if things are at least somewhat modular.

None of these are attributes of laptops, especially in combination (for example, there are cheap laptops, but they're cheap partly because they have really small screens). Your typical relatively inexpensive laptop is relatively slow, has a small screen, has historically often not been designed to run anywhere near all the time, is entirely non-modular, has external fiddly bits like power adapters, is not particularly ergonomic, and is often hard to secure to a table. None of this is surprising, because it's all part of the laptop tradeoff; you're getting convenient, lightweight portability for periodic roaming use and giving up a bunch of other stuff for it. You can use a laptop as a desktop and many people do, but that doesn't make laptops ideal for the job.

(One sign that laptops are nowhere near ideal desktops is all of the aftermarket products designed to make them work better at that job, starting with laptop stands.)

If you don't care very much about offering a decently good environment, this is actually okay. A bunch of cheap laptops with cable locks in an attended environment suffice to let people do quick things like check their email on the fly, and they might be overall cheaper than custom sourced kiosk machines in enclosures with decent sized screens and so on. And this setup certainly encourages people not to linger. But if you want to offer an attractive environment I don't think that doing this with laptops is viable, especially if you have to worry about people walking off with them. At least for university provided public clients, I think it's desktops or bust.

(Whether or not universities should still try to offer client computing to people is another can of worms entirely that calls for another entry.)

(I'm not exposed enough to modern laptops to know how happy they are about being on and running for hours and hours on end. My relatively old laptop spins up its fans if left sitting powered on for very long and I'm not certain I'd want to let it sit that way for days or weeks on end; if nothing else that might burn out the fans in much shorter order. And they're not exactly really quiet.)

LaptopsBadDesktops written at 00:26:12; Add Comment

2014-12-08

Why I don't believe in generic TLS terminator programs

In some security circles it's popular to terminate TLS connections with standalone generic programs such as stunnel (cf). The stated reason for this boils down to 'separation of concerns'; since TLS is an encrypted TCP session, we can split TLS termination from actually understanding the data streams being transported over it. A weakness in the TLS terminator doesn't compromise the actual application and vice versa. I've seen people harsh on protocols that entangle the two issues, such as SMTP with STARTTLS.

I'm afraid that I don't like (or believe in) generic TLS terminator programs, though. The problem is their pragmatic impact; in practice you are giving up some important things when you use them. Specifically, what you're giving up is easy knowledge of the origin IP address of the connection. A generic TLS terminator turns a TLS stream into the underlying data stream but by definition doesn't understand anything about the structure of that data stream (that's what makes it generic). This lack of understanding means it has no way to pass the origin IP address along to whatever is handling the actual data stream; to do so would require it to modify or augment the data stream somehow, and it has no idea how to do that.
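
To see why, here's a toy generic terminator sketched in Python (the certificate paths, ports, and backend address are all made up; real terminators like stunnel are far more careful, but they have the same basic shape):

    import socket
    import ssl
    import threading

    # A toy generic TLS terminator: accept TLS on port 8443 and relay the
    # decrypted byte stream to a backend on localhost:8080, with no idea
    # what protocol those bytes are.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")

    def relay(src, dst):
        # Copy bytes in one direction until the source side closes.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    listener = socket.create_server(("", 8443))
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        while True:
            client, addr = tls_listener.accept()   # addr is the real origin...
            backend = socket.create_connection(("127.0.0.1", 8080))
            # ...but the backend's own accept() only ever sees this
            # terminator's address, and the terminator can't tell it about
            # 'addr' without understanding the application protocol.
            threading.Thread(target=relay, args=(client, backend)).start()
            threading.Thread(target=relay, args=(backend, client)).start()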

You can of course log enough information to be able to reconstruct this information after the fact, which means that in theory you can recover it during the live session with suitably modified backend software. But this requires customization in both the TLS terminator and the backend software, which means that your generic TLS terminator is no longer a drop in part.

(Projects such as titus can apparently get around this with what is presumably a variant of NAT. This adds a lot of complexity to the process and requires additional privileges.)

I consider losing the origin IP address for connections to be a significant issue. There are lots of situations where you really want to know this information, which means that a generic TLS terminator that strips it is not suitable for unqualified general use; before you tell someone 'here, use this to support TLS' you need to ask them about how they use IP origin information and so on. As a result I tend to consider generic TLS terminators as suitable mostly for casual use, because it's exactly in casual uses that you don't really care about IP origin information.

(You can make a general TLS terminator that inserts this information at the start of the data stream, but then it's no longer transparent to the application; the application has to recognize this new information before the normal start of protocol and so on.)
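
For what it's worth, this is essentially what HAProxy's 'PROXY protocol' does: the terminator prepends a single text line such as 'PROXY TCP4 <client ip> <server ip> <client port> <server port>' before the real data, and the backend has to be taught to expect it. Here's a sketch of the backend side of the version 1 text form (my illustration, not anything from the software mentioned above):

    def read_proxy_header(conn):
        # Read the single text line a PROXY-protocol-v1 terminator prepends,
        # e.g. b"PROXY TCP4 203.0.113.7 10.0.0.1 54321 443\r\n", and return
        # the claimed client address. Everything after the CRLF is the real
        # application protocol.
        line = b""
        while not line.endswith(b"\r\n"):
            ch = conn.recv(1)
            if not ch:
                raise ConnectionError("connection closed before PROXY header")
            line += ch
            if len(line) > 107:   # the v1 header is capped at 107 bytes
                raise ValueError("overlong PROXY header")
        fields = line.decode("ascii").split()
        if fields[0] != "PROXY":
            raise ValueError("no PROXY header; terminator not configured for it?")
        if fields[1] == "UNKNOWN":
            return None
        return fields[2], int(fields[4])   # client IP and client port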

(These issues are of course closely related to why I don't like HTTP as a frontend to backend transport mechanism, as you have the same loss of connection information.)

NoGenericTLSTerminators written at 02:32:55; Add Comment

2014-12-03

Security capabilities and reading process memory

Today, reading Google's Project Zero explanation of an interesting IE sandbox escape taught me something that I probably should have known already but hadn't thought through and realized. So let's start at the beginning, with capability-based security. You can read about it in more detail in that entry, but the very short version is that capabilities are tokens that give you access rights and are generally subject to no further security checks. If you've got the token, you have access to whatever it is in whatever way the token authorizes. Modern Unix file descriptors are a form of capabilities; for example, a privileged program can open a file that a less privileged one doesn't have access to and pass the file descriptor to it to give it access to that file. The less privileged program only has as much access to the file (read, write, both, etc) as is encoded in the file descriptor; if the file descriptor is read only, that's it.
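
As a concrete illustration, here's a sketch of passing an open file descriptor between processes over a Unix domain socket (Python 3.9 or later for socket.send_fds() and recv_fds(); the file name is just a stand-in for something the receiving side couldn't open on its own):

    import os
    import socket

    # Sketch: a 'privileged' parent opens a file read-only and hands the
    # open descriptor to an 'unprivileged' child over a Unix socketpair.
    parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

    if os.fork() == 0:
        # Child: it never opens the file itself; all it gets is the
        # descriptor, with only the read access encoded in it.
        parent_sock.close()
        msg, fds, flags, addr = socket.recv_fds(child_sock, 1024, 1)
        with os.fdopen(fds[0], "r") as f:
            print("child read:", f.readline().rstrip())
        os._exit(0)
    else:
        # Parent: open the file and pass the descriptor along.
        child_sock.close()
        fd = os.open("/etc/hostname", os.O_RDONLY)   # stand-in for a protected file
        socket.send_fds(parent_sock, [b"here you go"], [fd])
        os.close(fd)
        os.wait()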

When you design a capability system, you have to answer some questions. First, do programs hold capabilities themselves in their own memory space (for example, as unforgeable or unguessable blobs of cryptographic data), or do they merely hold handles that reference them, with the kernel holding the real capability? Unix file descriptors are an example of the second option, as the file descriptor number at user level is just a reference to the real kernel information. Second, are the capabilities or handles held by the process bound to the process, or are they process independent? Unix file descriptors are bound to processes, and so your FD 10 is not my FD 10.

My impression is that many strong capability based systems answer that user processes hold as much of the capability token as possible (up to 'all of it') and that the token is not bound to the process. This is an attractive idea from the system design perspective because it means that the kernel doesn't have to have much involvement in storing or passing around capability tokens. Again, contrast this with Unix, where the kernel has to do a whole pile of work when process A passes a file descriptor to process B. The sheer work involved might well bog down a Unix system that tried to make really heavy use of file descriptor passing to implement various sorts of security.

What I had not realized until I read the Google article is that in systems where processes hold capability tokens that are not bound to the particular process, being able to read a process's memory is a privilege escalation attack. If you can read the memory of a process with a capability token, you can read the token's value and then use it yourself. The system's kernel doesn't know any better, and in fact it not knowing is a significant point of the whole design.

This matters for two reasons. First, reading process memory is often an explicit feature of systems to allow for debugging (it's hard to do it otherwise). Second, in practice there have been all sorts of attacks that cause processes to leak some amount of memory to the attacker and often these disclosure leaks are considered relatively harmless (although not always; you may have heard about Heartbleed). In a capability based system, guarding against both of these issues clearly has to be a major security concern (and I'm not sure how you handle debugging).

(In many systems you probably don't need to use the capabilities in the process that read them out of someone's memory. If they really are process independent, you can read them out in one process, transfer them through the Internet, and pass them back in to or with a completely separate process that you establish in some other manner. All that matters is that the process can give the kernel the magic token.)

CapabilitiesAndReadingMemory written at 01:11:22; Add Comment

