Why I wrote my own bulk IO performance measurement tool
So, you might wonder why I wrote my own program to measure bulk IO
performance (as mentioned in this entry). After all, there are quite a lot of
programs to do this already, many of them very sophisticated and
some of them (like
dd) available everywhere.
The short answer is that I wanted a program that gave me just the results I wanted, directly measuring what I wanted measured, and where I understood exactly what it was doing and how it was doing it. I did not want to measure lots of parameters or post-process something's output; I was only interested in the bandwidth of streaming bulk IO (both read and write), and I wanted immediately usable numbers with a minimum of fuss.
(This makes it essentially a micro-benchmark.)
The problem with all of the other benchmarking programs is exactly that all of the ones that are easy to find are the sophisticated ones. They measure a great many things, or they require a bunch of configuration, and I am not sure exactly what they are doing at a low level (and with disk IO, sometimes the low level details matter a lot; consider the rewrite issue).
(The problem with GNU
dd is that it is not everywhere, especially
the modern version that will tell you IO bandwidth numbers instead
of making you work them out yourself.)
My program also has some additional features that have turned out to be handy. The most useful one is that it will periodically report its running IO rate, which has been very useful for spotting stuttering IO. (This happens more often than one would like, even for reads.)
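The core of such periodic reporting is simple enough to sketch. Here is a minimal illustration (in Python, simplified and hypothetical; it is not my actual program, and the function name and parameters are invented for this sketch) of a streaming write measurement that prints its running rate as it goes:

```python
import os
import time

def measure_write(path, total_mb=64, block_kb=256, report_every=1.0):
    """Write total_mb of data to path in block_kb-sized chunks,
    printing the running write rate every report_every seconds.
    Returns the overall rate in MB/s."""
    block = b"\0" * (block_kb * 1024)
    nblocks = (total_mb * 1024) // block_kb
    start = last_report = time.monotonic()
    written = 0
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        for _ in range(nblocks):
            written += os.write(fd, block)
            now = time.monotonic()
            if now - last_report >= report_every:
                # The running rate; a sudden dip here is what makes
                # stuttering IO visible as it happens.
                rate = (written / (1024 * 1024)) / (now - start)
                print(f"running: {rate:.1f} MB/s")
                last_report = now
        # Force the data out so the measurement includes the actual
        # disk write, not just filling the page cache.
        os.fsync(fd)
    finally:
        os.close(fd)
    elapsed = time.monotonic() - start
    return (written / (1024 * 1024)) / elapsed
```

A real tool would want direct or synchronous IO and a matching read-side measurement, but even this much is enough to see IO rates sag and recover over the course of a run.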