2006-08-29
An interesting filesystem corruption problem
Today we had a fun problem created by the combination of entirely
rational find optimizations and a corrupted filesystem.
An important Linux server took some kind of hit that turned some files into directories (with contents, presumably stolen from some other poor directory). We found some but were pretty sure there were others lurking out there too, and wanted to do our best to find them. (If only to figure out what we needed to restore from the last good backups.)
As it happens, most of the actual files on this filesystem have some sort of extension, and pretty much all directories don't. So, I made the obvious attempt:
find /hier -name '*.*' -type d -print
Much to my surprise, this didn't report anything, not even the files
we already knew about in /hier/foo/bar.
Okay, first guess: I happened to know that find optimizes directory
traversals based on its knowledge of directory link counts. On a
classic Unix filesystem, a directory's link count is 2 plus the number
of its subdirectories (one link for its name in its parent, one for
its own '.', and one for each child's '..'), so once find has seen
st_nlink - 2 subdirectories it can treat every remaining entry as a
non-directory without bothering to stat() it. If the count is off,
find will miss seeing directories. A quick check showed that
/hier/foo/bar had the wrong link count (it had only two links, despite
now having subdirectories). Usefully, find has a '-noleaf' option to
turn this off (it's usually used to deal with non-Unix filesystems
that don't necessarily follow this directory link count convention).
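Checking this by hand is simple enough: count a directory's
subdirectories and compare 2 + count against its own st_nlink. A
minimal sketch (the function name is mine, and it only looks at one
directory rather than walking a tree):

```c
#define _DEFAULT_SOURCE
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Count the subdirectories of 'path'.  On a classic Unix filesystem
   the directory's own st_nlink should be exactly 2 + this count; if
   it isn't, find's leaf optimization will go wrong there (unless you
   use -noleaf). */
long count_subdirs(const char *path)
{
    DIR *d = opendir(path);
    struct dirent *de;
    struct stat st;
    char buf[4096];
    long subdirs = 0;

    if (d == NULL)
        return -1;
    while ((de = readdir(d)) != NULL) {
        /* '.' and '..' don't count as subdirectories. */
        if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
            continue;
        snprintf(buf, sizeof buf, "%s/%s", path, de->d_name);
        if (lstat(buf, &st) == 0 && S_ISDIR(st.st_mode))
            subdirs++;
    }
    closedir(d);
    return subdirs;
}
```

(Note that some modern non-classic filesystems deliberately don't
follow the 2 + subdirectories convention at all, which is exactly what
-noleaf exists for.)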
But that didn't work either. Fortunately I happened to know about the
other optimization that find gets on modern Unixes: directory entries
have a field called 'd_type', which holds the type of the file
(although not its permissions), so find can skip stat()ing entries
entirely. If files had gotten corrupted into directories, it would
make sense that the d_type information in their directory entries
would still show their old type and make find skip them.
A quick d_type dumper program showed that this was indeed the
case. This also gave us a good way to hunt these files down: walk the
filesystem, looking for entries with a mismatch between d_type and
what stat(2) returned.
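A sketch of such a checker for a single directory (the function name
and output format are mine; the real hunt would recurse over the whole
tree). It skips DT_UNKNOWN, because some filesystems legitimately
never fill d_type in at all, and that isn't corruption:

```c
#define _DEFAULT_SOURCE
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Return how many entries in 'path' have a d_type that disagrees with
   what lstat() says the file actually is, printing each one.  Entries
   with DT_UNKNOWN are skipped; a filesystem that doesn't fill in
   d_type is not corrupt, just unhelpful. */
long dtype_mismatches(const char *path)
{
    DIR *d = opendir(path);
    struct dirent *de;
    struct stat st;
    char buf[4096];
    long bad = 0;

    if (d == NULL)
        return -1;
    while ((de = readdir(d)) != NULL) {
        if (de->d_type == DT_UNKNOWN)
            continue;
        snprintf(buf, sizeof buf, "%s/%s", path, de->d_name);
        if (lstat(buf, &st) != 0)
            continue;
        /* DTTOIF() maps a d_type value to the S_IFMT bits of st_mode,
           so the two type representations can be compared directly. */
        if (DTTOIF(de->d_type) != (st.st_mode & S_IFMT)) {
            printf("mismatch: %s (d_type %d)\n", buf, (int)de->d_type);
            bad++;
        }
    }
    closedir(d);
    return bad;
}
```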
In retrospect, I have to thank find for biting us with these
optimizations; it led me to a better way to find the problem spots than
I otherwise would have had.
(And writing a brute force file tree walker, even in C, turns out to be not as much work as I thought it would be.)
This is of course a great example of leaky abstractions and how
knowing the low-level details can really matter. If I hadn't been well
read enough about Unix geek stuff, I wouldn't have known about either
find optimization and things would have been a lot more hairy. (I
might have found -noleaf with sufficient study of the manpage, but
that wouldn't have been enough.)