Why you want a filesystem consistency checker

June 6, 2007

Filesystem consistency checkers have historically had three overlapping purposes:

  1. to patch up the damage done when a machine was shut down partway through modifications to the filesystem.

  2. to find and fix up problems caused by corrupted disk blocks (whether from a dying disk, a controller error that scribbled random data across a track, or something else).

  3. to check for and repair structural errors created by operating system bugs.

The first purpose is mostly or entirely obsolete these days, because people have moved to journaled filesystems that never allow themselves to get into an inconsistent state in the first place. The other two reasons remain valid, because systems are fallible at all levels.

In theory there is no need for the filesystem consistency checker to be a separate program, especially since the kernel filesystem code has to do some consistency checking itself. In practice, system administrators find a standalone program more reassuring, partly because it gives them more control over what can be a nervous process (especially if you suspect that you have problems of the third sort).

It is worth noting explicitly that no amount of block checksumming can protect you against the third sort of problem. Checksums only tell you that the data that made it to disk is the data the operating system thought it was putting there; they can't tell you whether the data itself is completely correct, and so they can't protect against logic errors.
