Why does anyone buy iSCSI TOE network cards?

February 14, 2008

We're (still) looking for a new SAN, so I have recently been doing some work with a homebrew iSCSI testing environment built from ordinary servers. The experience leaves me with a simple question: why does anyone buy special iSCSI accelerator network cards?

(These are sometimes called 'iSCSI HBA cards'.)

I ask because my ordinary test environment, with commodity server NICs and no tuning besides 9000-byte jumbo frames, can basically saturate gigabit Ethernet doing bulk reads over iSCSI. Through the filesystem, no less. Neither end is hurting for CPU; the initiator machine shows 75% idle and the target machine shows 40% idle.

(I don't care about the target machine's CPU all that much, since it has no other role in life besides being an iSCSI target; what really matters is initiator CPU usage.)
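
For concreteness, here is roughly how one might reproduce this kind of test; the mount point, file name, and sizes are made-up illustrations, not my exact setup:

  # Bulk read through the filesystem on the initiator; the file needs
  # to be big enough that it isn't just served back from cache:
  $ dd if=/iscsi-fs/bigfile of=/dev/null bs=1024k count=8192
  # Sustained gigabit works out to roughly 110-120 MBytes/sec once you
  # allow for protocol overhead. Meanwhile, watch the 'id' (CPU idle)
  # column on each machine:
  $ vmstat 5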

To me this smells an awful lot like the hardware RAID situation, where you have to be both CPU-constrained and doing a large volume of IO for an iSCSI HBA to make any sense. My suspicion is that this is not very common, and thus that a lot of people are being scared into buying iSCSI accelerators that they don't need.

(The one possible exception I can think of is if you are using iSCSI to supply disk space to virtualized servers. Still, I can't help but think that such a machine is maybe a bit overcommitted.)

Sidebar: my test environment

If for no other use than my future reference:

  • the initiator machine is a SunFire X2100 M2 running Solaris 10 U4 with ZFS and the native Solaris iSCSI initiator.
  • the target machine is an HP DL320s running Red Hat Enterprise Linux 5 (in 64-bit mode) with the IET iSCSI target driver. The iSCSI-exported storage is LVM over software RAID 6 across eleven 80 GB SATA disks (sketched below).

(The HP's native hardware RAID 6 turned out to have abysmal performance, so I fell back to Linux software RAID, which performs fine.)
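
As a sketch of how that Linux storage stack goes together (the device names, sizes, and target name here are invented for illustration):

  # Software RAID 6 across eleven SATA disks:
  mdadm --create /dev/md0 --level=6 --raid-devices=11 /dev/sd[b-l]
  # LVM on top of the RAID array:
  pvcreate /dev/md0
  vgcreate tgtvg /dev/md0
  lvcreate -L 500G -n lun0 tgtvg
  # Export the logical volume via IET by listing it in /etc/ietd.conf:
  #   Target iqn.2008-02.example.com:test.lun0
  #       Lun 0 Path=/dev/tgtvg/lun0,Type=fileio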

The Solaris machine is using an Intel gigabit NIC (via a Sun expansion card); the HP machine is using one of the onboard Broadcom BCM5714 ports. Linux enabled RX and TX checksumming and scatter-gather support on the Broadcom, and I don't know what Solaris did. The machines are connected through a Netgear switch.
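
On Linux you can inspect and toggle these offload settings with ethtool; 'eth0' here is a stand-in for whichever port the target actually uses:

  $ ethtool -k eth0                      # show current offload settings
  $ ethtool -K eth0 rx on tx on sg on    # checksum offload plus SG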

(There is a rant about switches that claim to support jumbo frames but don't that could go here.)
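
One quick way to find out whether a switch really passes jumbo frames end to end is a don't-fragment ping sized just under the 9000-byte MTU (8972 bytes of data, leaving room for the 20-byte IP and 8-byte ICMP headers); the address is a placeholder:

  # On Linux; if the switch silently eats jumbo frames, you get no
  # replies (or fragmentation errors) instead of answers:
  $ ping -M do -s 8972 -c 4 192.168.100.10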


Comments on this page:

From 198.137.214.33 at 2008-02-15 16:52:49:

Bootability. -Alex (http://alexharden.org/)

By cks at 2008-02-18 19:00:48:

Given that iSCSI is a complex protocol with a lot of configuration on top of TCP (itself not a simple protocol), I'm not sure how you can have a reasonable 'iSCSI HBA' card that can boot a system from iSCSI storage. Such a card would either be PC-only and have a huge boot-time ROM or be something awfully close to a separate computer running its own code.

(And do people actually boot machines from iSCSI storage instead of from local disks?)

From 131.251.5.84 at 2008-02-21 05:45:48:

Nope, bootable iSCSI LUNs make a lot of sense. The card is usually configurable with a firmware utility, sometimes even an open source one.

Also there are OSes with non-existent (or really shitty) iSCSI support that cope a lot better when they think they're talking to a standard SCSI HBA.

From 164.116.253.111 at 2008-03-26 11:48:13:

You were using the HP iSCSI Accelerator?

I was searching for info on it and came across this page, but I see that the supported OSes for it are listed in the following answer from the FAQ:

"It supports Windows 2000, Windows 2003 32-bit and ES."

By cks at 2008-03-26 16:59:29:

I don't have any experience with iSCSI accelerator cards. The HP DL320s I'm (partly) testing with is doing iSCSI through a regular Broadcom NIC, which it can drive at gigabit data rates (assuming that your disks can go that fast).

Given that there are so many iSCSI parameters to set and fiddle with, the thought of having all of that on the card scares me.

(I suppose dedicated iSCSI HBAs do have one potential advantage: they limit how much damage a compromised initiator machine can do on the iSCSI network, since they don't give it a real network connection there.)

From 81.18.0.241 at 2008-07-31 04:44:08:

On the issue of the HP hardware RAID performing poorly: do you have a battery-backed accelerator board installed? If you don't, the RAID controller will have very poor performance, because write caching is turned off on the drives to ensure data integrity.

By cks at 2008-07-31 09:06:13:

The HP has whatever hardware a default-configured HP DL320s comes with, and all of the IO was streaming writes. I certainly expect a default-configuration hardware RAID to perform adequately (and at least non-catastrophically) on the easiest IO load to handle.

(I really doubt that the Linux kernel is asking the hardware to synchronize the cache to disk after every stripe write or something. If the HP has decided to do it anyways, that's a hardware issue.)
