Wandering Thoughts archives

2008-11-28

Why rootkits targeted at Red Hat Enterprise would make me especially nervous

A while back, I wrote in passing that I would be especially nervous if I ran across a rootkit that specifically targeted Red Hat Enterprise systems (for example, to the extent of corrupting the RPM database checksums). Today I feel like elaborating on that.

(Right off the bat I'll say that it's not because we use RHEL here. Our use of RHEL is small and so far none of it is in machines that are particularly exposed to users or the world.)

What would make me nervous is that the population running RHEL is both rather small and rather selective; it is almost entirely commercial enterprises with real money to throw around (RHEL not being cheap). This means that a RHEL-specific rootkit is aimed squarely at companies, which in turn means that it is likely being used by people who intend to exploit companies of reasonable size and prosperity, not just whatever random machines they can get their hands on.

(I admit that the existence of CentOS may throw a spanner into this theory, although it depends on how specific the targeting is.)

Crackers are up to no good generally, but people who target reasonably sized companies are up to a whole new level of no good and are correspondingly much more serious about the whole business. Since there is more work and money (and risk) involved, my belief is that it is much more likely that the people involved will be dangerously skilled attackers, people who are at least as clever and knowledgeable as I am. (And probably significantly more clever; they likely do this for a living, with serious money involved.)

This doesn't mean that we're being targeted by such people. But it does mean that when they build rootkits, they're likely to build very good, thorough, sophisticated, and hard to detect rootkits. So if there's ever a RHEL rootkit that's going around, I'm going to have to assume that it's a really good one, skip all my usual measures, and go straight to the painful things. And that assumes that we detect the compromise in the first place, which is probably unlikely.

RHELTargetingConcern written at 00:01:23

2008-11-20

Combining dual identity routing and isolated interfaces revisited

Back in DualIdentityIsolation, I described how I set up a dual identity machine so that it had isolated interfaces. In it I wrote:

Alas, I am now left with a mystery: according to the policy routing rules, it looks like a packet from IP1 to an address on that subnet should get routed via the gateway (and similarly for the other networks), [....]

You know what? I should have paid more attention to the mystery, because as it turns out such packets were getting routed via the gateway. I just didn't notice because I looked at the wrong thing when I wrote the original entry, and it worked most of the time; gateways are generally perfectly happy to accept packets for the local network and throw them back on the network.

(The one case where the gateway is not so obliging is when your gateway is also a firewall, and the firewall has filtering rules that wind up rejecting your forwarded packets. This is what happened to me yesterday, forcing me to look into the issue, although in retrospect some slightly odd things had been happening for a while. This goes to illustrate that you really should look into vaguely peculiar things, because they might be a sign of something important.)
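One way to check what the policy routing is actually doing is 'ip route get', which reports the route the kernel would pick for a given source and destination. Here is a sketch, with made-up addresses standing in for IP1 and a host on its subnet:

# Before the fix below, this reports the packet going 'via' the gateway
# instead of straight out the interface.
ip route get 192.168.1.5 from 192.168.1.10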

To fix this problem we need to add an additional rule to each table from the original entry, more or less like so:

ip route add NET1 dev eth0 src IP1 table 20

(And similarly for the other two IP addresses.)
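For concreteness, here is roughly what one interface's table winds up looking like, with made-up addresses and gateway standing in for the real ones (this is a sketch, not a copy of my actual rules):

# Traffic sourced from this identity's IP consults table 20:
ip rule add from 192.168.1.10 table 20
# Its default route goes out eth0 via its own gateway:
ip route add default via 192.168.1.1 dev eth0 src 192.168.1.10 table 20
# The new rule from this entry: reach the local subnet directly instead
# of bouncing packets off the gateway:
ip route add 192.168.1.0/24 dev eth0 src 192.168.1.10 table 20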

The 'src IP1' bit is probably unnecessary (in fact, thinking about it, it almost has to be), but I wasn't sure when I set up my rules and after this mistake I wasn't in any mood to take chances. So you get the version of the rules that I am sure works, instead of a version that I merely think should work.

DualIdentityIsolationII written at 23:03:02

2008-11-16

Checking systems with RPM verification (part 2)

There are at least two tricks beyond basic RPM verification that can be useful in some situations.

The first is that RPM's verification doesn't have to use the system database; instead it can get the MD5 checksums from a .rpm itself. This is done with 'rpm -V -p <rpm>'. Naturally the .rpm had better be the same version that's installed, and of course prelinking is still an issue if you don't trust the system that much.
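For example, to check a package's installed files against a freshly downloaded copy of the same .rpm (the package name here is just an illustration):

# Any output lines are files that differ from what the .rpm says they should be.
rpm -V -p coreutils-5.97-19.el5.x86_64.rpm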

The second is RPM's --root argument, which causes it to do all its work against a filesystem tree located somewhere other than /. This is useful if you want to verify something that you're not currently running, whether for disaster recovery or forensic analysis. This can be combined with the first trick to verify a system where you're dubious about both the system infrastructure and the version of the RPM database that the system has.
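As a sketch, assuming the suspect system's disks are mounted at /mnt/suspect (the mount point and package name are made up):

# Verify the suspect tree against its own RPM database:
rpm --root /mnt/suspect -Va
# Combined with the first trick, to sidestep that database as well (check
# first how rpm interprets the package path when --root is in effect):
rpm --root /mnt/suspect -V -p coreutils-5.97-19.el5.x86_64.rpm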

If you have to do this in the presence of prelinking, I think that for most people the best thing you can do is to un-prelink the system and then verify what's left (with __prelink_undo_cmd set to something like /bin/cat, just in case). A really clever attacker might still be able to hide things, but fortunately those are rare.
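As a sketch, assuming you trust the prelink binary you run (or run one from known-good media) and that rpm's --define macro override behaves the way I expect here:

# Undo prelinking across the whole system first:
prelink -ua
# Then verify, overriding the prelink-undo helper so rpm never runs the
# system's (possibly compromised) prelink itself:
rpm -Va --define '__prelink_undo_cmd /bin/cat'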

(If I had to do better than this, I'd create a custom version of prelink that has a --root argument and then hard-code it as the __prelink_undo_cmd when I was doing the verification.)

As a general disclaimer for people thinking about going to this much work: note that you should always bear in mind the basic principle of analyzing compromised machines. You can't really trust anything running on the machine itself; to the extent that you do, you are gambling that your attacker is not clever. In many cases you will win this gamble, because I don't think that very many cracker toolkits are exploiting RPM prelinking or bothering to compromise RPM and its database of checksums. However, it only takes one cracker to put together a toolkit that does it, and sooner or later someone will.

(Your odds are significantly worse if you believe that someone is specifically targeting RPM-based distributions with their attacks. I would be especially nervous about a package targeted specifically at Red Hat Enterprise Linux machines.)

RPMVerificationII written at 23:38:03

2008-11-07

What the timestamps in Ubuntu kernel messages mean

Ubuntu kernels are configured so that they include cryptic time information in kernel messages. An example looks like:

[8804849.737776] Kernel BUG at fs/nfs/file.c:321

(In the general tradition of Linux kernel messages, this is just enough to show you that there is useful stuff present without actually giving you any clear idea of how to interpret it to get actual information.)

It turns out that the timestamp is seconds and microseconds since the system booted. If the system is still running, the simple way to relate it to wall clock time is to look at /proc/uptime to get the current number of seconds since the system booted, subtract the message's timestamp to find out how many seconds ago the message happened, and then work back from the current time. ('date -d "now - NNN seconds"' may be useful here.)
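As an illustration, using the timestamp from the example message above:

# Current number of seconds since boot:
UPTIME=$(cut -d. -f1 /proc/uptime)
# Seconds-since-boot timestamp from the kernel message:
TS=8804849
# Convert 'that many seconds ago' into wall clock time:
date -d "now - $((UPTIME - TS)) seconds"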

(If the system is not still running you will have to work forward from its boot time, which hopefully got recorded somewhere. I believe that adjusting the system clock does not adjust the 'seconds since boot' counters; beware of this if you had to make significant time adjustments to your system.)
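For the dead-system case, a sketch that assumes you know the boot time and that GNU date's relative date parsing cooperates (the boot time here is invented):

date -d "2008-07-25 14:30:00 + 8804849 seconds"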

This configuration setting is controlled by the kernel config option PRINTK_TIME. Well, sort of. The config option sets the default value for this setting, which you can override in two ways:

  • at boot time, with 'printk.time=1' (or 0, depending on preferences; 'y', 'Y', 'n', and 'N' are also accepted)
  • dynamically, by writing a suitable value to /sys/module/printk/parameters/time.

(I got this information from here.)
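For example, to flip timestamps on for the running system (you need to be root, and the sysfs path is the one given above):

echo 1 >/sys/module/printk/parameters/time
# Check the current setting:
cat /sys/module/printk/parameters/time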

PrintkTimestampMeaning written at 01:20:19

