How you can abruptly lose your filesystem on a software RAID mirror
We almost certainly just completely lost a software RAID mirror with no advance warning (we'll know for sure when we get a chance tomorrow to power-cycle the machine in the hopes that this revives a drive). This comes as very much of a surprise to us, as we thought that this was not supposed to be possible short of a simultaneous two-drive failure out of the blue, which should be an extremely rare event. So here is what happened, as best we can reconstruct it right now.
In December, both sides of the software RAID mirror were operating normally (at least as far as we know; unfortunately the filesystem we've lost here is /var). Starting around January 4th, one of the two disks began sporadically returning read errors to the software RAID code, which caused the software RAID to redirect reads to the other side of the mirror but not otherwise complain to us about the read errors beyond logging some kernel messages. Since nothing showed up about these read errors in /proc/mdstat, mdadm's monitoring never sent us email about it.
(It's possible that SMART errors were also reported on the drive, but we don't know; smartd monitoring turns out not to be installed by default on CentOS 7 and we never noticed that it was missing until it was too late.)
In the morning of January 27th, the other disk failed outright in a way that caused Linux to mark it as dead. The kernel software RAID code noticed this, of course, and duly marked it as failed. This transferred all IO load to the first disk, the one that had been seeing periodic errors since January 4th. It immediately fell over too; although the kernel has not marked it as explicitly dead, it now fails all IO. Our mirrored filesystem is dead unless we can somehow get one or the other of the drives to talk to us.
The fatal failure here is that nothing told us about the software RAID code having to redirect reads from one side of the mirror to the other due to IO errors. Sure, this information shows up in kernel messages, but so does a ton of other stuff; the kernel message log is the unstructured dumping ground for all sorts of things, and as a result almost nothing attempts to parse it for information (at least not in a standard, regular installation).
Well, let me amend that. It appears that this information is actually available through sysfs, but nothing actually monitors it (in particular, mdadm doesn't). There is an errors file in /sys/block/mdNN/md/dev-sdXX/ that contains a persistent counter of corrected read errors (this information is apparently stored in the device's software RAID superblock), so things like mdadm's monitoring could track it and tell you when there were problems. It just doesn't.
(So if you have software RAID arrays, I suggest that you put together something that monitors all of your errors files for increases and alerts you prominently.)
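As an illustration of the sort of thing I mean, here is a minimal sketch in Python that walks the errors files under /sys/block/md*/md/dev-*/, compares each counter to what it saw on the previous run, and complains when any of them have gone up. The details are my own invention (the state file location is arbitrary, and a real version would want to send email or feed your alerting system instead of just printing); treat it as a starting point, not a finished tool.

```
#!/usr/bin/env python3
"""Sketch: watch software RAID 'errors' counters in sysfs for increases.

Assumes the standard md sysfs layout (/sys/block/mdN/md/dev-XXX/errors)
and keeps its previous readings in a JSON state file. Run it from cron
or similar; a nonzero exit and output on stdout signal trouble.
"""

import glob
import json
import os
import sys

# Arbitrary location for remembered counter values (an assumption).
STATE_FILE = "/var/lib/md-errors-monitor/state.json"


def read_counters():
    """Return {'md0:dev-sda1': count, ...} for every array member."""
    counters = {}
    for path in glob.glob("/sys/block/md*/md/dev-*/errors"):
        parts = path.split("/")
        key = f"{parts[3]}:{parts[5]}"   # e.g. 'md0:dev-sda1'
        with open(path) as f:
            counters[key] = int(f.read().strip())
    return counters


def load_state():
    try:
        with open(STATE_FILE) as f:
            return json.load(f)
    except (FileNotFoundError, ValueError):
        return {}


def save_state(counters):
    os.makedirs(os.path.dirname(STATE_FILE), exist_ok=True)
    with open(STATE_FILE, "w") as f:
        json.dump(counters, f)


def main():
    old = load_state()
    new = read_counters()
    problems = []
    for key, count in sorted(new.items()):
        previous = old.get(key, 0)
        if count > previous:
            problems.append(f"{key}: corrected read errors {previous} -> {count}")
    save_state(new)
    if problems:
        print("WARNING: software RAID corrected-error counters increased:")
        for line in problems:
            print("  " + line)
        sys.exit(1)   # nonzero exit so cron wrappers and the like notice


if __name__ == "__main__":
    main()
```

Run from cron, this gives you exactly the warning we never got here: a loud notice the first time a mirror member starts needing its reads rescued by the other side.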