Hardware RAID versus software RAID

August 15, 2006

Here's a good argument-starting question: which is faster and/or better, hardware RAID or software RAID?

A lot of conventional wisdom says hardware RAID, but I mostly disagree. Let's ask the inverse question: how (or when) can hardware RAID be faster than software RAID?

If you're doing RAID-5, the hardware could do the XOR parity computation faster than your CPU can. But on modern systems XOR performance is pretty much constrained by memory bandwidth, not CPU power. It's possible for dedicated hardware to have more memory bandwidth than the host CPU does (graphics cards do), but it's not cheap.
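
To make the memory-bandwidth point concrete, here is roughly what RAID-5 parity amounts to; this is a toy sketch in Python, not how any real implementation is written:

    # Toy sketch of RAID-5 parity: the parity block is the byte-wise
    # XOR of the data blocks in a stripe. The work is a single pass
    # over memory, so it's bounded by how fast you can stream the
    # blocks through RAM, not by how fast the CPU can XOR.
    def parity(blocks):
        """XOR equal-sized data blocks together."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                result[i] ^= byte
        return bytes(result)

    # Reconstruction is the same operation: XOR the surviving blocks
    # with the parity block to recover the missing data block.
    data = [b"disk one", b"disk two", b"disk 3!!"]
    p = parity(data)
    assert parity([data[0], data[1], p]) == data[2]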

If you're doing RAID-1, hardware RAID can reduce the bus bandwidth needed for writes: instead of N DMA transfers, one to each of the N disks, there is a single DMA transfer to the controller, which fans the data out to the disks itself. But for this to make your system faster, you need to be saturating the PCI bus with write traffic, which is not exactly common. (In theory you might see this with RAID-5 too.)
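
Some back-of-the-envelope arithmetic shows how hard this is to hit; the numbers below are assumptions for illustration (classic 32-bit, 33 MHz PCI and an invented write rate), not measurements:

    # Rough sketch of RAID-1 write traffic over the bus; all numbers
    # here are illustrative assumptions.
    PCI_BANDWIDTH = 133   # MB/s, theoretical peak of 32-bit/33 MHz PCI
    WRITE_RATE = 50       # MB/s of application writes, assumed
    MIRRORS = 2           # two-way RAID-1

    # Software RAID: the host DMAs each write once per disk.
    sw_traffic = WRITE_RATE * MIRRORS    # 100 MB/s over the bus
    # Hardware RAID: one DMA to the controller, which fans out itself.
    hw_traffic = WRITE_RATE              # 50 MB/s over the bus

    print(sw_traffic / PCI_BANDWIDTH)    # ~0.75 of the bus
    print(hw_traffic / PCI_BANDWIDTH)    # ~0.38 of the bus

Unless you're already running close to the bus's limit, halving the DMA traffic doesn't buy you any visible speed.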

I believe that's it. (Additions and corrections welcome.)

The usual retort to all this is that while hardware RAID may be no faster than software RAID, at least you're offloading the work from your main CPU, so the system's overall speed goes up. However, for this to matter your system needs to be both CPU constrained and doing significant write IO (if you only have insignificant write IO, the extra CPU usage for software RAID will also be small). This is not exactly common either.

There is one downside for software RAID: the operating system has to be running in order to use it, which can complicate early boot. But software RAID also has a lot of upsides, including hardware independence. (You're dependent on software instead, but you pretty much are anyway; only a few crazy people try moving filesystems between different operating systems.)

The one wildcard I can see in hardware RAID's favour is virtualization, which might deliver a future of heavily used hardware running close to both CPU and IO saturation.

(This entry is brought to you by the Tivoli Storage Manager documentation I was plowing through today, which made me grind my teeth by tossing off a 'hardware RAID is better than software RAID' bit in passing.)


Comments on this page:

By Dan.Astoorian at 2006-08-16 12:12:34:

Asking whether hardware RAID is better/worse than software RAID isn't particularly meaningful unless the question is better qualified. I've seen good and bad implementations of both.

For example, hardware RAID with a battery-backed write cache can improve latency statistics dramatically by reporting the data as committed once it hits the cache. Can a comparable optimization be implemented with software RAID? You can plug the server into a UPS and try to ensure that the cache gets flushed before the battery runs out, but there are other failure modes to consider too. Is an OS crash that prevents the cache from being flushed a more likely event than a hardware RAID controller failure that could corrupt the cache? Is a UPS more reliable than the battery on a hardware RAID?

For that matter, if you build a Linux server with a bunch of attached disks, build RAID1/RAID5 devices on it using the md driver, and export them using iSCSI target drivers, have you just built a software RAID or a hardware RAID?

--Dan Astoorian

By cks at 2006-08-16 16:45:41:

In the case of the Linux md RAID being exported via iSCSI target drivers, I'd say you've probably effectively built a hardware RAID device, as the system probably isn't doing anything besides the RAID stuff.

Hardware RAID and SAN controllers and the like aren't magical, after all; a lot of that 'hardware' is actually 'software on a dedicated machine'.

By Stuart Remphrey at 2018-12-28 22:59:27:

(responding to an old post, but a pet peeve of mine also!)

IMHO the main difference is (was) the battery backup: it allows low-latency writes and covers the RAID-5 "write hole", where data may be written before the parity is recalculated and updated, leaving data and parity inconsistent in between (this doesn't affect ZFS RAID-Z).
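
A minimal sketch of the write hole described here (the two-step ordering and the names are illustrative, not any particular implementation):

    # Toy sketch of the RAID-5 "write hole": data and parity are
    # written in separate steps, so a crash between them leaves the
    # stripe internally inconsistent.
    def xor_blocks(blocks):
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                result[i] ^= byte
        return bytes(result)

    def update_block(stripe, idx, new_block, crash_before_parity=False):
        stripe["data"][idx] = new_block                # step 1: write data
        if crash_before_parity:
            return                                     # power fails here
        stripe["parity"] = xor_blocks(stripe["data"])  # step 2: parity

    stripe = {"data": [b"aaaa", b"bbbb"], "parity": None}
    stripe["parity"] = xor_blocks(stripe["data"])
    update_block(stripe, 0, b"cccc", crash_before_parity=True)
    # The parity no longer matches the data; rebuilding a lost disk
    # from this stripe would silently return garbage. A battery-backed
    # cache (or ZFS RAID-Z's copy-on-write design) closes this window.
    assert stripe["parity"] != xor_blocks(stripe["data"])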

Since the original blog entry, the introduction of high-endurance flash has helped, as "s/w" implementations can use it to cache writes, which further blurs the h/w versus s/w boundary!
