2014-12-24
The security effects of tearing down my GRE tunnel on IPSec failure
When I described how I'd gotten IKE to work for my IPSec setup, I said that I was now tearing down my GRE tunnel when the IKE daemon declared an IPSec shutdown (or negotiation failure) and that this decision has some security implications. Today I want to talk about those implications and why I'm comfortable with them at the moment.
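(To make the mechanics concrete, here's a minimal sketch of the sort of tunnel-teardown hook I mean, assuming a Linux host with iproute2. The tunnel name, the endpoint addresses, and the way the IKE daemon invokes it are all hypothetical placeholders for illustration, not my actual configuration.)
    #!/usr/bin/env python3
    # Hypothetical hook sketch: tear down (or recreate) a GRE tunnel when
    # the IKE daemon reports that the IPSec connection is down or up.
    # Assumes Linux with iproute2; names and addresses are placeholders.
    import subprocess
    import sys

    TUNNEL = "gre1"            # hypothetical GRE tunnel interface
    LOCAL = "192.0.2.10"       # placeholder home endpoint
    REMOTE = "198.51.100.20"   # placeholder work endpoint

    def tunnel_up():
        # Create the GRE tunnel and bring it up; routes to the work
        # networks would be added here as well.
        subprocess.run(["ip", "tunnel", "add", TUNNEL, "mode", "gre",
                        "local", LOCAL, "remote", REMOTE], check=False)
        subprocess.run(["ip", "link", "set", TUNNEL, "up"], check=False)

    def tunnel_down():
        # Deleting the tunnel also removes the routes through it, which is
        # where the security implications come from: traffic to public work
        # IPs quietly falls back to the ordinary default route.
        subprocess.run(["ip", "link", "set", TUNNEL, "down"], check=False)
        subprocess.run(["ip", "tunnel", "del", TUNNEL], check=False)

    if __name__ == "__main__":
        # Assume the IKE daemon calls this with 'up' or 'down'.
        if len(sys.argv) > 1 and sys.argv[1] == "up":
            tunnel_up()
        else:
            tunnel_down()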
First off, let's talk about why I have security issues here and normal people don't. The difference is that normal people use IPSec-based GRE tunnels to reach private resources that cannot be reached from the outside world. Assuming that unprotected GRE traffic is blocked in various ways, a non-functional GRE tunnel is essentially the same as one that doesn't exist; in each case, your packets are not getting to their destination. Tearing down the GRE tunnel probably helps slightly in that your connection attempts now get 'network unreachable' responses instead of stalling for a long time.
Some of the IPs I reach through my IPSec and GRE setup are private IPs that are not reachable from outside. But some of them are public IPs, where I actually can reach them when my GRE tunnel is down. This is where the security impact comes in: when my GRE tunnel goes down, my new connections are transparently diverted over the unencrypted Internet. Connections that were previously only subject to snooping inside the university (where they traveled from my internal GRE endpoint to their target) are now subject to snooping anywhere along the path from my home machine to the university. I may not even realize that my IPSec link is down and this is happening (although in practice I almost certainly will).
But does this exposure actually matter? There are two ways it could: traffic intelligence and connections that are now unencrypted. Traffic intelligence comes about because an outside listener can now see what university IPs I connect to (and with what protocols), where before all they saw was a mostly undifferentiated IPSec stream. I don't worry too much about traffic intelligence for various reasons. As for newly plaintext connections, obviously this doesn't happen if the connection itself has its own encryption; for example, my ssh connections are encrypted whether they travel through IPSec or go straight over the Internet. My exposure there is limited to whatever I do from home to university IPs that uses plaintext protocols, and honestly there isn't much (especially things that would keep working without an inside IP address). I think it's basically HTTP website visits to work websites, and I don't think that those are all that sensitive (our sensitive websites are all HTTPS, of course).
The result is that so far, I think I'm okay with this. Almost everything I do from home to work is just ssh connections, and those are encrypted whether or not they go over IPSec. But I'm not completely confident and I can't help but be a little bit nervous about whether I have some exposure I just haven't thought about and realized.
(It's not mail submission, for multiple reasons including that our user mail submission server doesn't take connections from outside IPs; if my GRE tunnel is down, I can't connect to it at all from my home machine.)
2014-12-17
The potential end of public clients at the university?
Recently, another department asked our campus-wide sysadmin mailing list for ideas on how to deal with keyloggers, after having found one. They soon clarified that they meant physical keyloggers, because that's what they'd found. As I read the ensuing discussion I had an increasing sinking feeling that the answer was basically 'you can't' (which was pretty much the consensus answer; no one had really good ideas and several people knew things that looked attractive but didn't fully work). And that makes me pretty unhappy, because it means that I'm not sure public clients are viable any more.
Here at the university there's long been a tradition and habit of various sorts of public client machines, ranging from workstations in computer labs in various departments to terminals in libraries. All of these uses depend crucially on the machines being at least non-malicious, where we can assure users that using the machine in front of them is not going to give them massive problems like compromised passwords and everything that ensues from that.
(A machine being non-malicious is different from it being secure, although secure machines are usually non-malicious as well. A secure machine is doing only what you think it should be, while a non-malicious machine is at least not screwing its user. A machine that does what the user wants instead of what you want is insecure but hopefully not malicious (and if it is malicious, well, the user did it to themselves, which is admittedly not a great comfort).)
Keyloggers, whether software or physical, are one way to create malicious machines. Once upon a time they were hard to get, expensive, and limited. These days, well, not so much, based on some hardware projects I've heard of; I'm pretty sure you could build a relatively transparent USB keylogger with tens of megabytes of logging capacity as an undergrad final project with inexpensive off-the-shelf parts. Probably you can already buy fully functional ones for cheap on eBay. What was once a pretty rare and exclusive preserve is now available to anyone who is bored and sufficiently nasty to go fishing. As this incident illustrates, some number of our users probably will do so (and it's only going to get worse as this stuff gets easier to get and use).
If we can't feasibly keep public machines from being made malicious, it's hard to see how we can keep offering and operating them at all. I'm now far from convinced that this is possible in most settings. Pessimistically, it seems like we may have reached the era where it's much safer to tell people to bring their own laptops, tablets, or phones (which they often will anyways, and will prefer using).
(I'm not even convinced it's a good idea to have university provided machines in graduate student offices, many of which are shared and in practice are often open for people who look like they belong to stroll through and fiddle briefly with a desktop.)
PS: Note that keyloggers are at the easy end of the scale of damage you can do with nasty USB hardware. There's much worse possible, but of course people really want to be able to plug their own USB sticks and so on into your public machines.
Sidebar: Possible versus feasible here
I'm pretty sure that you could build a kiosk style hardware enclosure that would make a desktop's actual USB ports and so on completely inaccessible, so that people couldn't unplug the keyboard and plug in their keylogger. I'm equally confident that this would be a relatively costly piece of custom design and construction that would also consume a bunch of extra physical space (and the physical space needed for public machines is often a big limiting factor on how many seats you can fit in).
2014-12-10
How to delay your fileserver replacement project by six months or so
This is not exactly an embarrassing confession, because I think we made the right decisions for the long term, but it is at least an illustration of how a project can get significantly delayed one little bit at a time. The story starts back in early January, when we had basically finalized the broad details of our new fileserver environment; we had the hardware picked out and we knew we'd run OmniOS on the fileservers and our current iSCSI target software on some distribution of Linux. But what Linux?
At first the obvious answer was CentOS 6, since that would get us a nice long support period and RHEL 5 had been trouble-free on our existing iSCSI backends. Then I really didn't like RHEL/CentOS 6 and didn't want to use it here for something we'd have to deal with for four or five years to come (especially since it was already long in the tooth). So we switched our plans to Ubuntu, since we already run it everywhere else, and in relatively short order I had a version of our iSCSI backend setup running on Ubuntu 12.04. This was probably complete some time in late February, based on circumstantial evidence.
Eliding some rationale, Ubuntu 12.04 was an awkward thing to settle on in March or so of this year because Ubuntu 14.04 was just around the corner. Given that we hadn't built and fully tested the production installation, we might well have wound up deploying 12.04 iSCSI backends after 14.04 had actually come out. Since we didn't feel in a big rush at the time, we decided it was worthwhile to wait for 14.04 to be released and for us to spin up the 14.04 version of our local install system, which we expected to have done not too long after the 14.04 release. As it happened it was June before I picked the new fileserver project up again, and I turned out to dislike Ubuntu 14.04 too.
By the time we knew we didn't really want to use Ubuntu 14.04, RHEL 7 was out (it came out June 10th). While we couldn't use it directly for local reasons, we thought that CentOS 7 was probably going to be released soon and that we could at least wait a few weeks to see. CentOS 7 was released on July 7th and I immediately got to work, finally getting us back on track to where we probably could have been at the end of January if we'd stuck with CentOS 6.
(Part of the reason that we were willing to wait for CentOS 7 was that I actually built a RHEL 7 test install and everything worked. That not only proved that CentOS 7 was viable, it meant that we had an emergency fallback if CentOS 7 was delayed too long; we could go into at least initial production with RHEL 7 instead. I believe I did builds with CentOS 7 beta spins as well.)
Each of these decisions was locally sensible and delayed things only a moderate bit, but the cumulative effects delayed us by five or six months. I don't have any great lesson to point out here, but I do think I'm going to try to remember this in the future.
2014-12-05
How security sensitive is information about your network architecture?
One of the breathless things that I've seen said recently about the recent Sony Pictures intrusion is that having their network layout and infrastructure setup disclosed publicly is really terrible and will force Sony Pictures to change it. This doesn't entirely make sense to me; I'm hard pressed to see how network layout information and so on is terribly security sensitive in a sensibly run environment. Switch and router and database passwords, certainly; but just the network architecture?
(This information is clearly business sensitive, but that's a different thing.)
There is clearly one case where this is terrible for security, namely if you've left holes and back doors in your infrastructure. But this is badly designed infrastructure in the first place that you just tried to protect with security through obscurity (call this the ostrich approach; if people don't see it, it's still secure). It's not that disclosure has made your infrastructure insecure; the disclosure has just revealed that it already was.
Beyond that, having full information on your network architecture will certainly make an attacker's work easier. Rather than having to fumble around exploring the networks and risking discovery through mistakes, they can just go straight to whatever they're interested in. But making an attacker's job somewhat easier is a far cry from total disaster. If things are secure to start with this doesn't by itself enable the attacker to compromise systems or get credentials (although it'll probably make the job easier).
Or in short: if your network architecture isn't fundamentally insecure to start with, I don't see how disclosing it is a fatal weakness. I suppose there are situations where you're simply forced to run your systems in ways that are fundamentally insecure because the software and protocols you're using don't allow any better, and you have to allow enough access to the systems that people could exploit this if they knew about it and wanted to.
(I may well be missing things here. I'm aware that I work in an unusually open environment, which is that way partly because this is the culture of academia and partly due to pragmatics. As I've put it before, part of our threat model has to be inside the building.)
(Also, probably I should be remembering my old comments on the trade press here.)
2014-12-04
Log retention versus log analysis, or really logs versus log analysis
In a comment on my entry on keeping your logs longer, oz wrote:
yea but. keeping logs longer is not particularly interesting if you have no heavy duty tools to chew through them. [...]
Unsurprisingly, I disagree with this.
Certainly in an ideal world we would have good log analysis tools that we use to process raw logs into monitoring, metrics data, and other ongoing uses of the raw pieces we're gathering and retaining. However ongoing processing of logs is far from the only reason to have them. Another important use is to go back through the data you already have in order to answer (new) questions, and this can be done without having to process the logs through heavy duty tools. Many questions can be answered with basic Unix tools such as grep and awk, and these can be very important post-facto ad-hoc questions.
A lack of good tools may limit the sophistication of the questions you can ask (at least with moderate effort) and the volume of questions you can deal with, but it doesn't make logs totally useless. Far from it, in fact. In addition, given that logs are the raw starting point, you can always keep logs now and build processing for them later, either on an 'as you have time' or an 'as you have the need' basis. As a result I feel that this is a 'the perfect is the enemy of the good' situation, unless your log volume is so big that you can't just keep raw logs and do anything with them.
(And on modern machines you can get quite far with plain text, Unix tools, and some patience, even with quite large log files.)
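(To give a concrete flavour of what I mean, here's a quick sketch of the kind of after-the-fact ad-hoc question you can answer from retained raw logs; the log file location and message format here are just assumptions for illustration.)
    #!/usr/bin/env python3
    # Sketch of an ad-hoc, after-the-fact question answered from raw logs:
    # 'which hosts had the most failed ssh password attempts?'
    # The log path and message format are assumptions; adjust to your syslog setup.
    import collections
    import glob
    import gzip
    import re

    FAILED = re.compile(r"Failed password for .* from (\S+)")
    counts = collections.Counter()

    # Assume retained syslog auth logs, possibly gzip-compressed rotations.
    for path in glob.glob("/var/log/auth.log*"):
        opener = gzip.open if path.endswith(".gz") else open
        with opener(path, "rt", errors="replace") as f:
            for line in f:
                m = FAILED.search(line)
                if m:
                    counts[m.group(1)] += 1

    for host, n in counts.most_common(20):
        print(n, host)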
If you want the really short version: having information is almost always better than not having information, even if you're not doing anything with it right now.