== The IO scheduler improvements I saw

In the spirit of sharing actual numbers and details for things that I left a bit unclear in an [[earlier entry CFQAndiSCSITargets]], here is more:

First, we switched to the _deadline_ IO scheduler (from the default _cfq_). I did brief tests with the _noop_ scheduler and found it basically no different from _deadline_ for my test setup, and _deadline_ may have some advantages for us with more realistic IO loads.

My IO tests were sequential read and write IO, performed directly on a test [[fileserver ../solaris/ZFSFileserverSetup]], which uses a single [[iSCSI backend LinuxISCSITargets]]. On a ZFS pool that effectively is a stripe of two mirror pairs, switching the backend to _deadline_ increased a single sequential read from about 175 MBytes/sec to about 200 MBytes/sec. Two sequential reads of separate files were more dramatic; aggregate performance jumped by somewhere around 50 MBytes/sec. In both cases, this was close to saturating both gigabit connections between the iSCSI backend and the fileserver.

(Since all of these data rates are well over the 115 MBytes/sec or so that NFS clients can get out of our Solaris fileservers, this may not make a significant difference in client performance.)

I measured no speed increase for a single sequential writer, but it was already more or less going at what I believe is the raw disk write speed. (According to the IET mailing list, other people have seen much more dramatic increases in write speeds.)

I didn't try to do systematic tests; for our purposes, it was enough that _deadline_ IO scheduling had a visible performance effect and didn't seem to have any downsides. I didn't need to know the specific contours of all of the improvements we might possibly get before I could recommend deployment on the production machines.
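As a side note, the scheduler switch itself is just a write to each disk's sysfs scheduler file; reading the file back lists the available schedulers, with the active one in square brackets. Here is a minimal Python sketch of the idea, not the exact commands we used; the 'sdb' device name is purely a placeholder for whatever disks your iSCSI backend actually has, and the write needs root:

    #!/usr/bin/python3
    # Show and switch a block device's IO scheduler through sysfs.
    # 'sdb' is only an illustrative default device name.
    import sys

    def sched_file(dev):
        return "/sys/block/%s/queue/scheduler" % dev

    def current(dev):
        # Reads back something like "noop [deadline] cfq";
        # the active scheduler is the bracketed one.
        with open(sched_file(dev)) as f:
            return f.read().strip()

    def set_scheduler(dev, name):
        # Writing a scheduler's name (as root) switches that device
        # to it immediately.
        with open(sched_file(dev), "w") as f:
            f.write(name + "\n")

    dev = sys.argv[1] if len(sys.argv) > 1 else "sdb"
    print("before:", current(dev))
    set_scheduler(dev, "deadline")
    print("after: ", current(dev))

This doesn't persist across reboots, so in practice you either do it from an init or udev script or make _deadline_ the default for everything with the kernel's elevator= boot parameter.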