2006-02-23
Checking systems with RPM verification (part 1)
I spent part of Monday poking through a Fedora Core system that had
been partially compromised, and I was reminded yet again how one of my
favorite RPM features isn't as widely known as it could be: namely, that
RPM keeps a handy database of the MD5 checksum of every file it has
installed (along with a pile of other information). The rpm command's
-V option uses this database to verify the actual files on the system
against what the database says they should be, which makes it a handy
system integrity checker.
The quick way to dump this information is 'rpm -Va', but this just
gives a big file list. I use a little script I call check-rpmv to
group the output by RPM, which makes it easier to sort through. In the
hopes of avoiding rewriting check-rpmv from scratch yet again on yet
another system where I don't have my usual tools handy, here it is:
#!/bin/sh
n=`mktemp /tmp/checkrpmv.XXXXX`
for i in `rpm -qa | sort`; do
	rpm -V $i >$n
	if test -s $n; then
		echo $i:
		sed 's/^/\t/' <$n
	fi
done
rm -f $n
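As a refresher (since I always forget), each rpm -V output line starts
with a string of test results: a '.' means that test passed and a letter
means it failed. The letters are S (size), M (mode), 5 (MD5 checksum),
D (device), L (symlink), U (user), G (group), and T (mtime), and a 'c'
before the filename marks a configuration file. A quick sketch of picking
out config files with changed checksums (the sample line here is made up
for illustration):

```shell
# Parse one rpm -V style line; the sample line is invented.
line='S.5....T  c /etc/ssh/sshd_config'
set -- $line
flags=$1; ftype=$2; path=$3
# The third result character is the MD5 checksum test.
case "$flags" in
	??5*) [ "$ftype" = c ] && echo "changed config file: $path" ;;
esac
```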
Now, it's important to note that basic RPM verification is only a
semi-casual check if you're dealing with a cracked machine, since the
database (and rpm itself) is just sitting there on the system where an
intruder could have tampered with them. In the case on Monday we were
reasonably sure the crackers hadn't gotten root, so it wasn't worth
doing a bare-metal-upwards forensics check.
(Even if you suspect a root compromise, RPM verification is a useful and quick first pass. Especially as most crackers are just not all that clever and thorough.)
The other big thing I like RPM verification for is as a tool for hunting down how a system has been customized, since it will point out what configuration files have been changed and so on. Even if it's your own system, having your memory checked can be comforting (especially just before an upgrade).
2006-02-21
An annoyance with $PATH on Red Hat
In the spirit of equal opportunity annoyance:
One of the things that never fails to irritate me with Red Hat is that
when you do '/bin/su root' on a
stock system, you get a shell whose $PATH doesn't include /sbin or
/usr/sbin.
This trips me up all the time, and every time it's an nngh moment. And
it's not as if Red Hat doesn't already have a ~root/.bashrc that does
quite a lot, so they could perfectly well also have it fix up $PATH to
have everything it would if you just logged in as root.
(Yes, yes, 'use sudo'. Frankly, no; if I'm starting up a general root
shell, I'm going to be honest about it. And I don't like 'su - root',
because that has other effects.)
On my own systems, I've long since customized root's .bashrc
so that when I su, it switches to my preferred shell; other bits then ensure that I
get a sensible root environment (including a sanitized $PATH). This may
be evil, but it is convenient.
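For illustration, here's a minimal sketch (not Red Hat's actual .bashrc,
and the fixup_path function name is my own) of the sort of $PATH fixup I
mean, adding the sbin directories only if they're missing:

```shell
# Prepend /sbin and /usr/sbin to a PATH value unless /sbin is
# already present; prints the fixed-up value.
fixup_path() {
	case ":$1:" in
		*:/sbin:*) echo "$1" ;;			# already there, leave it be
		*) echo "/sbin:/usr/sbin:$1" ;;
	esac
}
PATH=$(fixup_path "$PATH")
```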
(There's an argument that Red Hat's default root .bashrc is already evil
since it aliases cp, mv, and rm to their -i forms, but I'm not
going to go there. Especially since I do it myself.)
Sidebar: a pop usage conundrum
In the above, is the right usage 'a nngh moment' or 'an nngh moment'?
2006-02-16
An interesting IDE to SATA migration problem
A coworker here ran into a novel problem migrating a system from an IDE disk to a SATA disk: he ran out of partitions. He had 16 partitions on the IDE disk, and on Linux you can only have 15 partitions on a SATA drive. (Fortunately he found a partition he didn't really need.)
On the surface, this limitation is because SATA drives on Linux are considered to be SCSI disks, and all SCSI disks can only have 15 partitions. Of course, that just leads to the next question: why does SCSI have this limitation?
The simple answer is that there's a tradeoff between how many drives you can have and how many partitions each of them can have. Linux doesn't support very many IDE drives (or didn't initially), so it could afford to let each of them have lots of partitions. However, when SCSI support was set up, people expected to have lots of SCSI drives, so each of them could only have so many partitions.
(The tradeoff happens because Linux spent years using 16-bit device numbers, which are statically assigned to various things. So there were only so many to go around, period. Of course not all of the allocations are sensible, as you can see here.)
If you have a fully populated /dev, you can see the tradeoff in
action; just do 'ls -l /dev/sd? /dev/hd?' and watch how the major and
minor numbers jump around. SCSI squeezes 16 disks into one major number
(8), while IDE has just two in its first major number (3); hdc and
hdd actually hang out in major 22 instead. Since the kernel restricts
IDE drives to 63 partitions, two drives only need half of major 3's
minor numbers; what the other half is reserved for is one of those
small mysteries.
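The arithmetic behind the tradeoff is simple enough to sketch. With
8-bit minor numbers (256 minors per major), each SCSI disk's 16 minors
(the whole disk plus 15 partitions) lets 16 disks fit in a major, while
each IDE disk's 64 minors (the whole disk plus 63 partitions) would
allow only four:

```shell
# Assumed layout: 8 bits of minor number, i.e. 256 minors per major.
# SCSI disks get 16 minors each (whole disk + 15 partitions);
# IDE disks get 64 minors each (whole disk + 63 partitions).
echo $((256 / 16))	# disks per SCSI major: 16
echo $((256 / 64))	# disks that would fit per IDE major: 4
```

(Major 3 only actually holds two IDE disks, hence the unused half.)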
(I will hazard a guess that major numbers tended to be assigned in roughly the order Linux started acquiring support for the hardware, which may say interesting things about the popularity of secondary IDE controllers versus SCSI controllers in Linux's early days.)