The FreeBSD iSCSI initiator is not ready for serious use (as of 9.1)

March 21, 2013

Our ZFS fileservers use iSCSI to talk to the backend disk storage, so if FreeBSD is to be a viable Solaris replacement for us its iSCSI initiator implementation needs to be up to the level of Solaris's (or Linux's). I recently did some basic testing to see if it looked like it was, and I'm afraid that it's not; as of FreeBSD 9.1, the iSCSI initiator seems suitable only for experimentation. In my testing I ran into two significant issues and one major issue, after which I stopped looking for further problems because it seemed pointless.

The first issue is clear in the FreeBSD iscsi.conf(5) manpage, which is simply full of 'not implemented yet' notes for various iSCSI connection parameters. Unfortunately a number of these are (potentially) important for good performance. This is the least significant issue, since I wouldn't really care about it if everything else worked (instead it would just be a vague caution sign).

The second issue is how the iSCSI initiator is managed and how connections are established. Basically, FreeBSD provides absolutely no support for this; in particular there is no boot time daemon that you run to connect to all of your configured iSCSI targets. Instead you get to somehow run one instance of iscontrol(8) per target, restarting any that die because their error recovery is (as the manual says) not 'fully compliant' or due to other issues. iscontrol also has some real limitations; in my experimentation, it would not start against a target that advertised no LUNs (which is valid). I did not test its behavior if all LUNs of a target were deleted while it was running, but I wouldn't be surprised if it also exited (which would be a bad problem for us).
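(Lacking a master daemon, the best you could do is wrap each iscontrol instance in a restart loop yourself. A minimal sketch of such a wrapper; the iscontrol invocation, the retry cap, and the one-second backoff are all my assumptions, not anything FreeBSD ships:)

```shell
#!/bin/sh
# Sketch of a per-target restart wrapper; nothing like this comes with
# FreeBSD 9.1, which is part of the problem.
# restart_loop MAX CMD...: rerun CMD each time it exits, up to MAX restarts.
restart_loop() {
    max="$1"; shift
    count=0
    while [ "$count" -lt "$max" ]; do
        "$@"            # real use: a foreground iscontrol for one target
        count=$((count + 1))
        sleep 1         # avoid spinning if the target is unreachable
    done
    echo "gave up on '$*' after $count restarts"
}
```

(In real use you would want one such wrapper per target, looping forever and logging; the retry cap here just keeps the sketch finite. None of this substitutes for proper startup-system integration.)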

Both of these pale against the major issue, which was performance or the lack thereof. I'm not going to quote numbers for reasons I'll discuss later, but it was bad at all levels: in ZFS, in UFS, and just banging on the raw iSCSI disk. Streaming read performance was roughly 3/5ths of what Linux got in the same environment. Untuned streaming write performance was one tenth of the streaming read performance, but with drastically increased iSCSI parameters (not needed for Linux) FreeBSD managed to pull that up to 2/3rds of its read performance and 2/5ths of what Linux could get. Web searches turned up other people reporting catastrophically bad iSCSI write speeds (I suspect that they did not tune iSCSI parameters, which reduces it to merely bad), so I don't think that this is just me or just my test environment.
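(For what it's worth, the 'drastically increased iSCSI parameters' were things like the burst and data segment sizes that iscsi.conf(5) lets you set per target. A hypothetical fragment to show the shape of it; the nickname, addresses, and values here are all made up, and you should check iscsi.conf(5) for which parameters are actually implemented in your release:)

```
# Hypothetical /etc/iscsi.conf entry for iscontrol(8); values illustrative.
testtarget {
        targetaddress            = 192.168.1.10
        targetname               = iqn.2013-03.com.example:disk0
        maxRecvDataSegmentLength = 262144
        maxBurstLength           = 1048576
        firstBurstLength         = 262144
}
```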

This level of (non)performance is a complete non-starter for us. It's so bad that there is no point in me spending more time to go beyond my quick experiment and basic tests. Under other circumstances I might have looked at the code and dug into things further to see if I could find some fixable defect, but I don't feel that there's any point here. The other issues make it clear to me that no one has run the FreeBSD iSCSI initiator in production (at least no one sane), and I have no desire to be the first person on the block to find all of the other problems it may have.

(The situation with iscontrol alone makes it clear that no one has exposed this to real usage, because no sane sysadmin would tolerate running their entire iSCSI initiator connection handling that way. I don't object to separate iscontrol instances; I do object to no master daemon and no integration with the FreeBSD startup system.)

(Also, you don't want to know how FreeBSD handles or in this case doesn't handle the various iSCSI dynamic discovery methods.)

All of this leaves me disappointed. I wanted FreeBSD to be a viable competitor and alternative, something that we could really consider. Now our options are much narrower.

(Well, I can always hope that the FreeBSD iSCSI initiator improves drastically in the next, oh, year, since we're not about to replace our current Solaris fileserver infrastructure right away. We've only just started to think about a replacement project; it may be two or three years before we actually need to make a choice and deploy.)

Sidebar: my test environment and cautions

At this point I will say it out loud: I was not testing FreeBSD on physical hardware. I discovered all of this during very basic tests in a virtual machine. Normally this would make even me question my results, but I did a number of things to validate them. First, I tested (streaming) TCP bandwidth between the FreeBSD VM and the iSCSI backend (which is on real hardware) and got figures of close to the raw wire bandwidth; I can be reasonably sure that the FreeBSD VM was not having its network bandwidth choked by the virtualization system. Second, I also ran a Linux VM in the same virtualization environment and measured its performance (network and iSCSI). As noted above, it did significantly better than FreeBSD did (despite actually having less RAM allocated to it).

It's always possible that FreeBSD iSCSI is choking on something about the virtualization environment that doesn't affect its raw TCP speed or Linux. My current view is that the odds of this are sufficiently low (for various reasons) that it is not worth the hassle of spinning up a physical FreeBSD machine just to be sure.

(Partly this is because I found other people on the Internet also complaining about the FreeBSD iSCSI write speeds. If I was the sole person having problems, I would suspect myself instead of FreeBSD.)

I suppose the one quick test I should do is to feed the FreeBSD VM a whole lot more memory to see if that suddenly improves both read and write speeds a whole lot. But even if FreeBSD had Linux-level read and write performance, the other significant issues would probably sink it here.

Comments on this page:

By cks at 2013-03-21 12:46:53:

Because I don't feel like updating the actual entry: I tried giving the FreeBSD VM 6 GB of memory instead of 1 GB. It made no difference to the results with tuned parameters.

From at 2013-03-21 16:49:00:

There's already something in the works. Maybe it's usable by the time FreeBSD 10.0 is released (later this year)... see and for further details.

By cks at 2013-03-21 18:31:12:

Unfortunately that's an iSCSI target (the server side) not an iSCSI initiator (the client side). A FreeBSD ZFS fileserver would be an iSCSI client and so needs a good initiator; a target is mostly or entirely irrelevant to our needs.

From at 2013-03-21 23:43:24:

Yes, it is a sore spot. IIRC, even apart from the target work, there are initiator improvements planned too.

From at 2013-03-22 02:13:45:


They will also work on the Initiator:

"Once target support is robust, FreeBSD's existing iSCSI initiator will be updated to use many of the components developed for the target. This will improve initiator performance and add modern features such as InitialR2T."

By cks at 2013-03-22 11:18:47:

Ah oops, you're right; I missed that bit (in the first link). I hope that they improve the other aspects of the iSCSI initiator as well as its performance. But overall this news makes me happy and I'll have to keep an eye on future FreeBSD releases; thanks for letting me know.

From at 2013-04-26 09:27:30:

At, there is a link to the current (ok, kind of current; work on RDMA support required a lot of changes and those are not included in that diff) patch, including the initiator side. It's still unoptimised, so performance is supposed to suck, but it shows the direction.

From at 2013-04-29 18:52:09:

Edward is currently implementing a native in-kernel iSCSI stack (both target and initiator) for this increasingly popular block storage protocol. "Although there are a number of iSCSI target implementations that support FreeBSD, the project lacks a high performance and reliable in-kernel target. As iSCSI gains favor, this stack will be a key element in maintaining FreeBSD's competitive position in enterprise and open-source deployments" said Justin T. Gibbs, president of the FreeBSD Foundation. The project is expected to be completed in October 2013.

 - Alex

By Kunal Thaggarse at 2016-02-09 08:01:22:

Try the new iscsid that's built into the kernel. It's much better now, and the fact that it's built into the kernel shows how much faith the developers have in it, enough to build it in rather than load it as a module like the old iscontrol initiator.

Written on 21 March 2013.


Last modified: Thu Mar 21 00:37:53 2013