The IO scheduler improvements I saw
In the spirit of sharing actual numbers and details for things that I left a bit unclear in an earlier entry, here is more:
First, we switched to the deadline IO scheduler (from the default cfq). I did brief tests with the noop scheduler and found it basically no different from deadline for my test setup, and deadline may have some advantages for us with more realistic IO loads.
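(For anyone who wants to try this: on Linux the active scheduler for a disk is exposed in sysfs and can be changed on the fly. Here is a minimal sketch in Python of checking and switching it; the device name is purely illustrative, not one of our actual disks.)

    # Minimal sketch: check and switch a block device's IO scheduler
    # through sysfs.  The device name ('sdb') is just an example.
    DEV = "sdb"
    SCHED = "/sys/block/%s/queue/scheduler" % DEV

    # The scheduler file lists the available schedulers with the active
    # one in brackets, eg 'noop [deadline] cfq'.
    with open(SCHED) as f:
        print("before:", f.read().strip())

    # Writing a scheduler's name selects it for this device (needs root).
    with open(SCHED, "w") as f:
        f.write("deadline")

    with open(SCHED) as f:
        print("after:", f.read().strip())

This change doesn't persist across reboots, so making it stick on production machines means something like an rc.local entry or the elevator= kernel boot argument.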
My IO tests were sequential read and write IO, performed directly on a test fileserver, which uses a single iSCSI backend. On a ZFS pool that is effectively a stripe of two mirror pairs, switching the backend to deadline increased a single sequential read from about 175 MBytes/sec to about 200 MBytes/sec. Two sequential reads of separate files were more dramatic; aggregate performance jumped by somewhere around 50 MBytes/sec. In both cases, this was close to saturating both gigabit connections between the iSCSI backend and the fileserver.
(Since all of these data rates are well over the 115 MBytes/sec or so that NFS clients can get out of our Solaris fileservers, this may not make a significant difference in client performance.)
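(The entry doesn't say what actually generated the test IO, so take this as an illustration only: the sort of measurement involved is just reading a large file sequentially and timing it. The file name and chunk size here are made up for the sketch.)

    import time

    # Hypothetical large test file on the ZFS pool; it should be well
    # over the machine's RAM so caching doesn't inflate the numbers.
    PATH = "/testpool/bigfile"
    CHUNK = 1024 * 1024          # read in 1 MByte chunks

    start = time.time()
    total = 0
    with open(PATH, "rb", buffering=0) as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    elapsed = time.time() - start

    mb = total / (1024.0 * 1024.0)
    print("read %.0f MBytes in %.1f seconds: %.1f MBytes/sec"
          % (mb, elapsed, mb / elapsed))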
I measured no speed increase for a single sequential writer, but it was already more or less going at what I believe is the raw disk write speed. (According to the IET mailing list, other people have seen much more dramatic increases in write speeds.)
I didn't try to do systematic tests; for our purposes, it was enough that deadline IO scheduling had a visible performance effect and didn't seem to have any downsides. I didn't need to know the specific contours of all of the improvements we might possibly get before I could recommend deployment on the production machines.