Wandering Thoughts archives

2010-11-29

Why low quality encryption is not better than no encryption

Today I was considering reconfiguring a program to stop supporting less secure SSL connections and found myself falling into the old refrain of 'well, maybe there's some client that only works with the old stuff; isn't it better for them to have some encryption rather than no encryption?'

This is a superficially attractive thought, and it's certainly an easy refrain to fall into. It's also wrong.

From the semi-mathematical security perspective, it sounds good; having weak encryption defeats some attackers and makes some attacks just that little bit more difficult. But that's the wrong perspective, because security is not math; security is people, and people are really bad at understanding degrees of security. When you say 'low quality encryption', they miss the 'low quality' and just see the 'encryption'.

(Well, sort of. It is not so much that they miss the low quality, it is that almost everyone lacks the knowledge to understand how low the low quality is and how vulnerable they are as a result. 'Encryption' is something that people understand, while 'low quality' is a meaningless buzzword.)

The end result is that in practice, your low quality encryption winds up creating an unwarranted feeling of security in both people and programs (because programs are written by people). In short, they put more trust in it than they should (i.e. any trust whatsoever). For the purposes of actual practical security, they would be better off with no encryption at all, because then they would understand that they weren't protected (or at least have a realistic chance to).

The much shorter version of this is: if the connection is not really secure, you should not pretend that it is. Pretending just hurts people in the end.

Hopefully I will remember this logic the next time I go through this exercise and proceed to the bit where I actually turn off 'known to be insecure' stuff. (I didn't get to the actual configuration bit today; I just convinced myself that I should.)
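
As a rough sketch of the sort of thing I mean (using Python's ssl module purely as an illustration; the program I was actually looking at isn't Python and the file names here are made up), 'turning off the known to be insecure stuff' amounts to refusing old protocol versions and weak ciphers outright instead of offering them as a fallback:

    import ssl

    # A minimal sketch with a modern Python 'ssl' module; file names are hypothetical.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)   # SSLv2 and SSLv3 are refused by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2    # refuse old TLS versions as well
    ctx.set_ciphers("HIGH:!aNULL:!MD5:!RC4:!3DES")  # drop known-weak cipher suites
    ctx.load_cert_chain("server.crt", "server.key")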

LowQualEncryptionBad written at 23:19:20

Why 10G Ethernet is not a near-term issue for us

In my entry on the lifetime of our fileserver infrastructure, I said that 10G Ethernet wasn't something that I expected to force hardware changes on us in the next three years. The reason for this is pretty simple: it is too expensive now.

Well, sort of. What actually matters is what happens because 10G Ethernet is so expensive, namely that it is vanishingly rare today. Rarity matters in several ways. First and most obviously, rarity by default means low or no demand for something; with no 10G ports shipping on the machines that our users deploy, there is no demand for us to supply them with 10G connections, 10G switches, and so on. In fact there would be basically no point in trying to support 10G connection speeds early even if we had the money, because we can get better and cheaper hardware by waiting until we actually need it.

Second, there is inevitably going to be a learning process with 10G, where software and configurations have to be tuned to get the speed and then take advantage of it. This learning process will not really get under way until 10G is relatively pervasive, not just here but in developer hardware, and that is not going to happen while 10G is expensive.

(The learning process matters because until it's complete, 10G hardware will be less attractive than it looks because you can't actually get all the nice performance that it theoretically offers.)

10G hardware will inevitably come down in price over time. However, both of these issues have relatively long lead times, which means that there is a (substantial) delay between when 10G hardware becomes cheap and when it starts driving demand for 10G infrastructure that we can sensibly meet. Since 10G hardware is not cheap now and does not seem like it will be cheap in the near future, I feel safe in saying that 10G will not be a driver (or a risk) for hardware changes over the next three years.

(Note that this is a different issue than whether or not we would buy 10G hardware for fileservers that we were building from scratch. I certainly hope that in less than three years, 10G is cheap enough that we're putting it in new servers and switches. But that it's available as a nice feature isn't enough to force us into a significant hardware upgrade of our current fileservers.)

10GEthernetDemand written at 00:41:43

2010-11-20

Why I avoid DSA when I have a choice

From Nate Lawson's most recent entry:

Most public key systems fail catastrophically if you ignore any of their requirements. You can decrypt RSA messages if the padding is not random, for example. With DSA, many implementation mistakes expose the signer's private key.

(emphasis mine.)

Even small implementation mistakes are dangerous to crypto systems, but there are degrees of danger. Most of the time, 'all' that happens is that a bad implementation doesn't deliver either the encryption or the endpoint authentication that you thought you had; an attacker can decrypt your messages or impersonate a host. This is still bad, but it is not totally catastrophic.

DSA is not like that. As Nate Lawson has covered, a mistake by a DSA implementation that you use can directly give away your private key. It doesn't matter if your key was securely generated, and it doesn't matter if you only used the bad implementation briefly; your key is bad now, and you need to generate and propagate a new one (assuming that you even realize this has happened).
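
One classic implementation mistake is mishandling the per-signature nonce k; if an implementation ever reuses the same k for two signatures, anyone who sees both signatures can solve for your private key with simple modular algebra. As a toy sketch (in Python, with made-up names; h1 and h2 are the message hashes and q is the group order):

    # DSA: s_i = k^-1 * (h_i + x*r) mod q.  With the same k (and therefore
    # the same r) in two signatures, subtracting the two equations isolates
    # k, and back-substituting into either one recovers the private key x.
    def recover_dsa_key(r, s1, h1, s2, h2, q):
        inv = lambda a: pow(a, -1, q)       # modular inverse mod the group order
        k = (h1 - h2) * inv(s1 - s2) % q
        x = (s1 * k - h1) * inv(r) % q
        return k, x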

I have no opinion on whether RSA is theoretically stronger or weaker than DSA. I generate RSA keys instead of DSA keys regardless of the relative theoretical merits, because all of the theoretical security in the world doesn't matter when every implementor has to get everything right or give away the house; they won't (and haven't).

Sidebar: when it is theoretically less dangerous to use DSA

In order to disclose a private key, a weak DSA implementation must actually have it. Thus, it is theoretically safe to use a local DSA key to authenticate yourself to a remote party if you trust your local implementation but don't entirely trust the other end. The most obvious case for this is personal SSH keys.

Still, I wouldn't do it. Why take chances if you don't have to?

WhyAvoidDSA written at 22:41:18

2010-11-18

Thinking about how long our infrastructure will last

Something that I've been mulling over recently is when we need to start thinking about turning over our fileserver infrastructure or, to put it another way, how long we can make the infrastructure last. Since this could get very theoretical, I'm going to nail down a specific question: is it reasonable to assume that our current infrastructure will last at least five years from our initial deployment? We don't seem likely to run into capacity limits, so one major aspect is how long the hardware and software will last.

(Our initial deployment was about two years ago, depending on how you count things; we started production in September 2008 but only finished the migration from the old fileservers some time in early 2009.)

On the hardware front, our servers are not running into CPU constraints or other hardware limits that would push us towards replacing them with better machines. This leaves the lifetime of the mechanical parts in them (such as fans), and we have both spares and similar servers that have already been running for four years, so the odds are good. The SATA data disks in our backends are more problematic. They're under relatively active load, and asking for five years or more from consumer grade SATA drives may be a lot. While we have spares, we don't have complete replacements for all of the disks, which exposes us to a second order risk if we have to buy new drives later: long term technology changes.

SATA drives are not going away any time soon, but they seem likely to change a lot as vendors move to SATA drives with 4k sectors. It's possible that our current stack of software will not perform very well with such drives, given that other environments have already run into problems. If that happens, we could be forced into software changes.

(I don't think 10G Ethernet is a risk here for reasons beyond the scope of this entry.)

On the software front, our software is both out of date and basically frozen (we have very little interest in changing a working environment). However, we aren't going to be able to keep it frozen forever; the likely triggers for forced major software changes would be the end of security updates for the frontends or significant hardware changes (such as 4k sector drives). Both are currently unknowns, but it seems at least possible that we could avoid problems for three more years.

(The backends run RHEL 5, which will have security updates through early 2014 as per here. The practical accessibility of Solaris 10 security updates for the frontends is currently quite uncertain, thanks to Oracle.)

One obvious conclusion here is that we should get a 4k sector SATA drive or two in order to test how well our current environment deals with such drives. That way we can at least be aware in advance, even if we aren't prepared.
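
As a minimal sketch of the simplest thing to check on the Linux side (the disk and partition names here are hypothetical): verify that partitions start on 4k boundaries, since a misaligned partition forces read-modify-write cycles on 4k sector drives even when the filesystem itself is doing aligned I/O.

    # Does a given partition start on a 4096-byte boundary?
    # /sys/block/<disk>/<disk><N>/start is in 512-byte sectors on Linux.
    def partition_aligned(disk="sdb", part=1):
        with open("/sys/block/%s/%s%d/start" % (disk, disk, part)) as f:
            start_sector = int(f.read())
        return (start_sector * 512) % 4096 == 0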

FileserverInfrastructureDuration written at 00:45:23

2010-11-14

The ordering of SSL chain certificates

SSL certificates for hosts are usually not directly signed by your CA's trust root certificate, the certificate that is in your browser, your mail client, or whatever. Instead there is generally at least one intermediate certificate (sometimes several), and in order for clients to accept your host certificate you need to send them not just the host certificate but also all of the intermediate certificates in the chain of signatures between you and the CA trust root.

How you configure this depends on the server software, with two general approaches. Apache (well, mod_ssl) lets you specify a certificate chain file separate from your certificate itself; you put your certificate in SSLCertificateFile and any intermediate certificates in SSLCertificateChainFile. Exim doesn't have a separate chain file; instead you put both your host certificate and all of the intermediate certificates in the tls_certificate file.

All of which raises a question: if you're putting several certificates in one file, what's the right order for them and does it matter?

The correct order turns out to be the host certificate first, then the certificate that signs it, then the certificate that signs the previous certificate, and so on for as many levels as you need. Basically, you go from the most specific certificate to the least specific certificate, with each certificate verifying the one before it. Certificates are plain ASCII (under a variety of file extensions; .pem and .crt are common) and can just be joined together with cat.
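
As a minimal sketch (the file names are made up, and this is just the equivalent of using cat):

    # Host certificate first, then each intermediate in signing order,
    # ending with the certificate closest to the CA trust root.
    chain = ["www.example.org.crt", "intermediate2.crt", "intermediate1.crt"]
    with open("combined.pem", "w") as out:
        for name in chain:
            with open(name) as cert:
                out.write(cert.read())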

(This tends not to be clearly documented in the instructions for various software (which tend to assume that you are already an SSL expert), but it can be dug out of the TLS RFC with enough determination.)

In practice the order doesn't seem to matter. As you might expect, common clients will accept and verify both out-of-order certificate chains and certificate chains with unnecessary and unused certificates.

(Clients like browsers and IMAP mail clients have a strong motivation to do so, given that server operators get this wrong with reasonable frequency. Other clients may be more picky and paranoid, generally to no real advantage.)

(This is the kind of entry that I write so that I have a chance of remembering this the next time I care about it.)

Sidebar: why you might have unused certificates in a chain file

Suppose (not entirely hypothetically) that your SSL certificate vendor issues certificates that are signed by a number of different intermediate certificates, depending on the specific circumstances where you got them. If you want to deploy certificates without having to look up exactly which intermediate certificate your CA used, the easy thing to do is to throw them all into a single universal certificate chain file. Then you just install the server certificate and the chain file (concatenating the two of them together for things like Exim) and are done with it.

SSLChainOrder written at 00:32:00

2010-11-07

When you should care about security

Recently I wrote about http to https redirection and mentioned in passing something about caring or not caring about security. I figure I should expand on that a bit.

First off: as I mentioned, caring about encryption is not quite the same thing as caring about security. End to end encryption frustrates many sorts of eavesdroppers and is one of the ways of preventing tampering with your traffic. But as lots of people have learned the hard way over the years, encryption by itself does not create security as such.

I think you should care about security when you have something important to protect. What's important? My view is that money is clearly important, significant passwords are important, and email is likely to be important. Other things are not necessarily so important.

(Whether a password is significant or not depends on how much it guards access to. For example, I consider our users' passwords to be important, since knowing such a password gives you access to a user's files and all of our services. Some of our users probably disagree with my view.)

The basic reason to think about when you care about security is that security is almost always fundamentally inconvenient. Being secure means being less friendly and more of a hassle, both for you and for your users. Before you blindly pay this price, you need to decide whether it's called for at all.

(You also need to be honest about it. You may think that your website or service is vitally important and so you should be highly secure, even when this inconveniences your users, but you need to make sure that your users agree with this view; otherwise they get grumpy or even quietly bypass your security. And there is an important difference between allowing users to be secure and forcing them to be secure.)

WhenSecurity written at 01:26:02

