xiostat: accurate Linux disk IO statistics
Update, September 21 2011:
xiostat is now obsolete. Please use
MXiostat instead. The remainder of this page is retained for historical interest.
The Linux iostat command turns out not to report completely accurate
and complete IO statistics (covered in more detail here). xiostat.py is the replacement program I wrote
to give us a faithful recounting of the actual kernel disk IO statistics.
It works on 2.6 kernels and on Red Hat's 2.4 kernels that have disk IO stats
added (such as Red Hat Enterprise Linux 3).
The program itself is xiostat.py, and its usage is:
xiostat.py [-q] [-c COUNT] [DEV [DELAY]]
Since the default device to report on is
sde1 (for peculiar local
reasons), the DEV argument is in practice mandatory.
-q omits the
field headers that are printed every 24 lines or so,
-c is how many
iterations to stop after (by default
xiostat.py runs forever), and
DELAY is how many seconds to delay between each iteration (default 1).
Feedback, fixes, etc, can be directed to ChrisSiebenmann.
Note that one significant difference between xiostat and iostat is that
xiostat only reports on a single device at a time. While
this could be fixed with more work (that I don't have time for right
now), for now if you want to monitor multiple devices at the same
time you need to run multiple copies of xiostat.py.
It's known to work on 2.6 kernels (and thus Debian Sarge, Fedora Core 2+, etc) and Red Hat Enterprise 4; it should work on RHEL 3 as well, but I don't have a machine to test on. It will not work on 2.4 kernels unless they have the Red Hat disk statistics patch.
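On 2.6 kernels the raw counters come from /proc/diskstats. As a minimal sketch of mine (not the actual xiostat.py code), here is how one whole-disk line can be pulled apart; the field names follow the kernel's documented counter order (on these older kernels, partition lines have only four counters instead of eleven):

```python
# Counter names in the order they appear on a whole-disk line
# of a 2.6 kernel's /proc/diskstats, after "major minor name".
FIELDS = ("rio", "rmerge", "rsect", "ruse",
          "wio", "wmerge", "wsect", "wuse",
          "running", "use", "aveq")

def parse_diskstats(text, dev):
    """Return a dict of raw counters for device 'dev', or None."""
    for line in text.splitlines():
        parts = line.split()
        # Whole-disk lines have: major minor name + 11 counters.
        if len(parts) >= 14 and parts[2] == dev:
            return dict(zip(FIELDS, map(int, parts[3:14])))
    return None

# A made-up sample line for illustration:
sample = "   8    0 sda 120 30 1200 400 80 20 800 300 2 500 700"
stats = parse_diskstats(sample, "sda")
```

In real use you would read the text from /proc/diskstats on each iteration and diff successive samples, since all of these counters (except running) are cumulative since boot.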
Xiostat prints almost all fields as amounts per second, regardless of how long DELAY is set to. Time-based fields are printed in milliseconds.
|act||Instantaneous count of outstanding IO requests right now (not a per second field)|
|rio||Read requests completed|
|rmerge||Read requests merged into existing requests|
|rsect||Read sectors submitted|
|rwait||Average time for read requests to complete|
|rgrp||Average sectors per read request|
|w*||As for the r* fields, but for writes instead of reads|
|agrp||Average sectors per request (across both reads and writes)|
|aveq||Average queue size, ie the average number of outstanding IO requests|
|await||Average time for all requests to complete (across both reads and writes)|
|util||Percentage of the time that there was at least one IO request pending|
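To make the averages concrete, here is an illustrative sketch (mine, not the code that xiostat.py actually uses) of how such fields can be derived from two successive samples of the raw kernel counters; the counter names (rio, ruse, use, aveq, etc) follow the kernel's own field order, and the time counters are in milliseconds:

```python
def rates(prev, cur, interval):
    """Derive xiostat-style fields from two cumulative counter
    samples taken 'interval' seconds apart. Time counters
    (ruse, wuse, use, aveq) are in milliseconds."""
    d = dict((k, cur[k] - prev[k]) for k in prev)
    nr, nw = d["rio"], d["wio"]
    nall = nr + nw
    return {
        "act": cur["running"],             # instantaneous, not per second
        "rio": d["rio"] / float(interval),
        "rmerge": d["rmerge"] / float(interval),
        "rsect": d["rsect"] / float(interval),
        "rwait": d["ruse"] / float(nr) if nr else 0.0,  # ms per read
        "rgrp": d["rsect"] / float(nr) if nr else 0.0,  # sectors per read
        "agrp": (d["rsect"] + d["wsect"]) / float(nall) if nall else 0.0,
        "await": (d["ruse"] + d["wuse"]) / float(nall) if nall else 0.0,
        # aveq accumulates request-milliseconds, so dividing by the
        # interval in milliseconds gives the average queue depth:
        "aveq": d["aveq"] / (interval * 1000.0),
        # 'use' counts the ms during which at least one request
        # was pending, which gives the utilization percentage:
        "util": 100.0 * d["use"] / (interval * 1000.0),
    }
```

The w* fields fall out the same way as the r* fields shown here.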
For fine details, you should read my writeup of what information the kernel makes available.
In the process of the work that caused me to write xiostat, I
wound up adding some extra kernel statistics to monitor actual device
service and activity times (to a Red Hat Enterprise 3 kernel). The
remnants of the code to handle these additional fields are still in
xiostat.py, because I haven't had the energy to clean them up
and validate the changes. Hence the mysterious mentions of fields like
'rduse' and 'waveq'.
There is also some experimental code to parse
/etc/fstab and the 2.4
LVM information to try to let people specify filesystems instead of
devices. I don't believe this works well on 2.6, so you don't want to use it.
xiostat.py is licensed under the GPL. It's not explicitly labeled as such in the code, due to lack of time et al. (Someday I will fix this, but not today.)