The FreeBSD iSCSI initiator is not ready for serious use (as of 9.1)
Our ZFS fileservers use iSCSI to talk to the backend disk storage, so if FreeBSD is to be a viable Solaris replacement for us, its iSCSI initiator implementation needs to be up to the level of Solaris's (or Linux's). I recently did some basic testing to see if it looked like it was, and I'm afraid that it's not; as of FreeBSD 9.1, the iSCSI initiator seems suitable only for experimentation. In my testing I ran into two significant issues and one major issue, after which I stopped looking for further problems because it seemed pointless.
The first issue is clear in the FreeBSD iscsi.conf(5) manpage, which is simply full of 'not implemented yet' notes for various iSCSI connection parameters. Unfortunately, a number of these are (potentially) important for good performance. This is the least significant issue, since I wouldn't really care about it if everything else worked (instead it would just be a vague caution sign).
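To make this concrete, here is a rough sketch of what a target entry in iscsi.conf looks like. The nickname, address, IQN, and numbers are invented for illustration, and I'm quoting parameter names from memory, so check the manpage for the exact spellings (and for which of them are actually implemented):

  # illustrative iscsi.conf entry; names and values are made up
  fs1-disk0 {
     targetaddress = 192.168.10.20
     targetname    = iqn.2000-01.com.example:backend.disk0
     # negotiation parameters that matter for streaming performance
     maxRecvDataSegmentLength = 262144
     firstBurstLength         = 131072
     maxBurstLength           = 1048576
  }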
The second issue is how the iSCSI initiator is managed and connections are established. Basically, FreeBSD provides absolutely no support for this; in particular, there is no boot time daemon that you run to connect to all of your configured iSCSI targets. Instead you get to somehow run one instance of iscontrol(8) per target, restarting any that die because their error recovery is (as the manual says) not 'fully compliant' or due to other issues. iscontrol has at least some bad limitations; in my experimentation, it would not start against a target that had no LUNs advertised (which is a valid configuration). I did not test its behavior if all LUNs of a target got deleted while it was running, but I wouldn't be surprised if it also exited (which would be a bad problem for us).
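To give a feel for what this means in practice, here is a minimal sketch of the kind of thing you wind up doing by hand, say from /etc/rc.local; the nicknames are invented, and this doesn't even attempt the 'restart it when it dies' part, which you also get to script yourself:

  # start one iscontrol per configured target at boot, by hand
  iscontrol -c /etc/iscsi.conf -n fs1-disk0
  iscontrol -c /etc/iscsi.conf -n fs1-disk1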
Both of these pale against the major issue, which was performance or the lack thereof. I'm not going to quote numbers for reasons I'll discuss later, but it was bad at all levels: in ZFS, in UFS, and just banging on the raw iSCSI disk. Streaming read performance was roughly 3/5ths of what Linux got in the same environment. Untuned streaming write performance was one tenth of the streaming read performance, but with drastically increased iSCSI parameters (not needed for Linux) FreeBSD managed to pull that up to 2/3rds of its read performance and 2/5ths of what Linux could get. Web searches turned up other people reporting catastrophically bad iSCSI write speeds (I suspect that they did not tune iSCSI parameters, which reduces it to merely bad), so I don't think that this is just me or just my test environment.
This level of (non)performance is a complete non-starter for us. It's so bad that there is no point in me spending more time to go beyond my quick experiment and basic tests. Under other circumstances I might have looked at the code and dug into things further to see if I could find some fixable defect, but I don't feel that there's any point here. The other issues make it clear to me that no one has run the FreeBSD iSCSI initiator in production (at least no one sane), and I have no desire to be the first person on the block to find all of the other problems it may have.
(The situation with iscontrol alone makes it clear that no one has exposed this to real usage, because no sane sysadmin would tolerate running their entire iSCSI initiator connection handling that way. I don't object to separate iscontrol instances; I do object to no master daemon and no integration with the FreeBSD startup system.)
(Also, you don't want to know how FreeBSD handles, or in this case doesn't handle, the various iSCSI dynamic discovery methods.)
All of this leaves me disappointed. I wanted FreeBSD to be a viable competitor and alternative, something that we could really consider. Now our options are much narrower.
(Well, I can always hope that the FreeBSD iSCSI initiator improves drastically in the next, oh, year, since we're not about to replace our current Solaris fileserver infrastructure right away. We've only just started to think about a replacement project; it may be two or three years before we actually need to make a choice and deploy.)
Sidebar: my test environment and cautions
At this point I will say it out loud: I was not testing FreeBSD on physical hardware. I discovered all of this during very basic tests in a virtual machine. Normally this would make even me question my results, but I did a number of things to validate them. First, I tested (streaming) TCP bandwidth between the FreeBSD VM and the iSCSI backend (which is on real hardware) and got figures of close to the raw wire bandwidth; I can be reasonably sure that the FreeBSD VM was not having its network bandwidth choked by the virtualization system. Second, I also ran a Linux VM in the same virtualization environment and measured its performance (network and iSCSI). As noted above, it did significantly better than FreeBSD did (despite actually having less RAM allocated to it).
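(For the first check, measuring streaming TCP bandwidth doesn't take anything fancy; something along the lines of iperf is enough, with the server on the iSCSI backend and the client on the FreeBSD VM. The hostname below is just a placeholder.)

  # on the iSCSI backend (real hardware)
  iperf -s
  # on the FreeBSD VM: a 30-second streaming test
  iperf -c backend-host -t 30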
It's always possible that FreeBSD iSCSI is choking on something about the virtualization environment that doesn't affect its raw TCP speed or Linux. My current view is that the odds of this are sufficiently low (for various reasons) that it is not worth the hassle of spinning up a physical FreeBSD machine just to be sure.
(Partly this is because I found other people on the Internet also complaining about the FreeBSD iSCSI write speeds. If I were the sole person having problems, I would suspect myself instead of FreeBSD.)
I suppose the one quick test I should do is to feed the FreeBSD VM a whole lot more memory to see if that suddenly improves both read and write speeds. But even if FreeBSD had Linux-level read and write performance, the other significant issues would probably sink it here.