Hardware RAID and the problem of (not) observing disk IO

April 28, 2017

We normally don't use machines with hardware RAID controllers; we far prefer software RAID. But in the grand university tradition of not having any money, sometimes we inherit machines with them and have to make them work. Which brings me to my tweet:

This hardware RAID controller's initialization of an idle RAID-6 array of 2 TB disks has been running for more than 24 hours now.

At one level this is probably typical; it likely took about as long for Linux's software RAID to initialize a similar RAID-6 array on another similar machine recently. I suspect that both of them are (or were) doing it in one of the slow ways.

But on another level, the problem is that I have no idea if anything is wrong here. One reason for that is that as far as I know, there's no way on this hardware RAID controller for me to see IO performance information for the individual disks. All I get is aggregate performance (or staring at the disk activity lights, except that on this piece of hardware they're basically invisible). The result is that I have no idea what my disks are actually doing and how fast they're doing that. Is the initialization going slowly because it's seek-heavy, or because the hardware RAID controller is only doing IO slowly (to preserve IO bandwidth for non-existent other requests), or because the disks just aren't going that fast, or some other reason? I can't tell. It's an opaque black box.
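For contrast, on Linux software RAID the per-disk numbers are sitting right there in /proc/diskstats (which is what iostat reads). As a toy sketch of the observability the hardware controller is hiding, here is a parser for the documented /proc/diskstats layout that turns two samples into per-disk utilization; the sample lines and device names are made up for illustration:

```python
# Toy sketch of per-disk observability via /proc/diskstats.
# Per device, the 10th stat field after the device name is cumulative
# milliseconds spent doing IO, so the delta between two samples
# divided by the sampling interval is how busy that disk was.

def busy_ms(diskstats_text):
    """Map device name -> cumulative ms spent doing IO."""
    result = {}
    for line in diskstats_text.splitlines():
        fields = line.split()
        if len(fields) < 13:
            continue
        # fields: major, minor, name, then the stat counters
        result[fields[2]] = int(fields[12])
    return result

def per_disk_util(sample1, sample2, interval_ms):
    """Per-disk utilization (0.0 to 1.0) between two samples."""
    before, after = busy_ms(sample1), busy_ms(sample2)
    return {d: (after[d] - before[d]) / interval_ms
            for d in after if d in before}

# Two fabricated samples one second apart: sdb is pegged at 100%
# busy while sda is only half busy.
t0 = ("8 0 sda 100 0 800 50 200 0 1600 90 0 500 140\n"
      "8 16 sdb 100 0 800 50 200 0 1600 90 0 500 140\n")
t1 = ("8 0 sda 160 0 1280 80 260 0 2080 150 0 1000 230\n"
      "8 16 sdb 130 0 1040 65 230 0 1840 120 0 1500 185\n")

print(per_disk_util(t0, t1, 1000))  # {'sda': 0.5, 'sdb': 1.0}
```

In real use you would read /proc/diskstats twice with a sleep in between instead of feeding in canned strings. The point is that this per-member view is exactly what an opaque hardware controller denies you.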

In a world where disks either work perfectly as specified or fail outright, this might be okay. You could measure (or know) the raw disk performance, and then use the observed array performance to derive more or less what load each disk should be seeing and how it must be performing. Obviously there are a lot of uncertainties and assumptions in that; we're assuming that IO is divided evenly over the drives, for example. But this is not that world; in this world, disks can quietly have their performance degrade. In an array with evenly distributed IO, this will have the effect of 'smearing' a single degraded disk's bad performance across the entire array. Instead of seeing that one disk has become a lot slower, it looks like all your disks have slowed down somewhat. And if you suspect that you have such a quietly degraded drive, well, good luck finding it.
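To make the smearing concrete, here's a back-of-the-envelope model with entirely made-up numbers: six disks nominally capable of 150 MB/s each, IO divided evenly, so a batch of IO finishes only when the slowest disk finishes its share:

```python
# Toy model (illustrative numbers, not measurements) of how one
# quietly degraded disk "smears" across an array when all you can
# see is the aggregate.

def aggregate_throughput(disk_speeds_mb_s):
    """Aggregate MB/s when each disk gets an equal share of the IO:
    the batch completes at the pace of the slowest member."""
    return min(disk_speeds_mb_s) * len(disk_speeds_mb_s)

healthy = [150] * 6
degraded = [150] * 5 + [30]   # one disk quietly slowed to 30 MB/s

agg = aggregate_throughput(degraded)
implied_per_disk = agg / len(degraded)

print(aggregate_throughput(healthy))  # 900 MB/s when all is well
print(agg)                            # 180 MB/s with one slow disk
print(implied_per_disk)               # 30.0 "per disk": from the
                                      # aggregate alone, it looks
                                      # like every disk slowed down
```

Five of the six disks are perfectly healthy, but dividing the aggregate by the disk count tells you nothing of the sort; only per-member numbers would point the finger at the right drive.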

I don't know if really good hardware RAID has this sort of observability built into it; I've only had one exposure to theoretically enterprise-level RAID, which had its issues. But I'm pretty sure that garden variety hardware RAID doesn't, and it's become clear to me that that is a pretty big strike against garden variety hardware RAID. Without low level observability it's very close to impossible to diagnose any number of performance problems. Black box hardware RAID that you don't have to think about is all very good until it turns into a badly performing black box, and then all you can do is throw it away and start again.


Comments on this page:

By Chris Smallwood at 2017-04-28 09:32:36:

Depends on the quality of the HW RAID controller and which software you have loaded. The functionality you mention is present on the host processor, but the OEM needs to expose that functionality via its control/monitoring software. Many people just load the controller's drivers and never bother installing the rest of the software, and thus don't get to see the per-member IO characteristics; or the device manufacturer never bothers to expose the controller's functionality to the host, saving that for more "Premium" devices with higher price tags.

