A DTrace script to help figure out what process IO is slow

September 5, 2014

I recently made public a DTrace script I wrote that gives you per-file-descriptor IO breakdowns for a particular process. I think it's both an interesting, useful tool and probably not quite the right approach to diagnosing this sort of problem, so I want to talk about both the problem and what the script tells you. To start with, the problem.

Suppose, not entirely hypothetically, that you have a relatively complex multi-process setup with data flowing between the various processes and the whole thing is (too) slow. Somewhere in the whole assemblage is a bottleneck. Basic monitoring tools for things like disk IO and network bandwidth give you aggregate figures for the entire assemblage, so they can only point out the obvious bottlenecks (total disk IO, total network bandwidth, etc.). What we'd like to do here is peer inside the multi-process assemblage to see which data flows are fast and which are slow. This per-data-flow breakdown is why the script shows IO on a per file descriptor basis.

What the DTrace script's output looks like is this:

s fd   7w:   10 MB/s  waiting ms: 241 / 1000   ( 10 KB avg *   955)
p fd   8r:   10 MB/s  waiting ms:  39 / 1000   ( 10 KB avg *   955)
s fd  11w:    0 MB/s  waiting ms:   0 / 1000   (  5 KB avg *     2)
p fd  17r:    0 MB/s  waiting ms:   0 / 1000   (  5 KB avg *     2)
s fd  19w:   12 MB/s  waiting ms: 354 / 1000   ( 10 KB avg *  1206)
p fd  21r:   12 MB/s  waiting ms:  43 / 1000   ( 10 KB avg *  1206)
  fd 999r:   22 MB/s  waiting ms:  83 / 1000   ( 10 KB avg *  2164)
  fd 999w:   22 MB/s  waiting ms: 595 / 1000   ( 10 KB avg *  2164)
IO waits:  read:  83 ms  write: 595 ms  total: 679 ms 

(These are per-second figures averaged over ten seconds, and file descriptor 999 stands for the total read and write activity. If you don't already know what each file descriptor is connected to, pfiles will tell you.)
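I'm not reproducing the full fdrwmon.d here, but the core technique is simple enough to sketch. What follows is a minimal sketch, not the real script: it assumes the process does all of its IO through plain read() and write() (no readv(), pread(), and friends), it reports every ten seconds, and it lumps the totals under a made-up pseudo-fd of 999.

#!/usr/sbin/dtrace -s
/* Minimal sketch of per-fd IO accounting; not the real fdrwmon.d. */
#pragma D option quiet

syscall::read:entry, syscall::write:entry
/pid == $target/
{
        /* remember which fd this is and when the syscall started */
        self->fd = arg0;
        self->start = timestamp;
}

syscall::read:return, syscall::write:return
/self->start && arg0 > 0/
{
        /* accumulate bytes, wait time, and call counts per fd,
           plus a 999 pseudo-fd for the overall totals */
        @bytes[probefunc, self->fd] = sum(arg0);
        @waitns[probefunc, self->fd] = sum(timestamp - self->start);
        @calls[probefunc, self->fd] = count();
        @bytes[probefunc, 999] = sum(arg0);
        @waitns[probefunc, 999] = sum(timestamp - self->start);
        @calls[probefunc, 999] = count();
}

syscall::read:return, syscall::write:return
/self->start/
{
        self->fd = 0;
        self->start = 0;
}

tick-10sec
{
        /* report per-second averages over the ten-second window */
        normalize(@bytes, 10 * 1024 * 1024);    /* MB/s */
        normalize(@waitns, 10 * 1000000);       /* waiting ms per second */
        printa("%-5s fd %3d: %@d MB/s  waiting ms: %@d / 1000  (%@d calls)\n",
            @bytes, @waitns, @calls);
        trunc(@bytes); trunc(@waitns); trunc(@calls);
}

You'd run something like this as 'dtrace -s sketch.d -p PID'. The real script's output above also reports average IO sizes and some extra per-fd annotations, which this sketch leaves out.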

Right away we can tell a fair amount from this output about what the process is doing: it's clearly copying two streams of data from inputs to outputs (with a third stream not doing much). It's also spending much more of its IO wait time writing the data than waiting for more input to show up, although the picture here is somewhat misleading because the process is also making pollsys() calls and I wasn't tracking the time spent waiting in those (or the time spent in other syscalls).

(The limited measurement is partly an artifact of what I needed to diagnose our problem.)
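(As an aside, if you did want to see the pollsys() wait time as well, my assumption is that it would only take another pair of probes along the lines of the sketch above, plus reporting the aggregation in the tick-10sec clause the same way; the real fdrwmon.d doesn't do this.)

syscall::pollsys:entry
/pid == $target/
{
        self->pstart = timestamp;
}

syscall::pollsys:return
/self->pstart/
{
        /* total time the target spent blocked in pollsys() */
        @pollns["pollsys wait"] = sum(timestamp - self->pstart);
        self->pstart = 0;
}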

What I'm not sure about with this DTrace script is whether it's the most useful and informative way to peer into this problem. Its output points straight at network writes being the bottleneck (for reasons that I don't know), but that discovery feels indirect and kind of happenstance, visible only because I decided to track how long IO on each file descriptor took. In particular it feels like there are things I ought to be measuring here that would give me more useful and pointed information, but I can't think of what else to measure. It's as if I'm not asking quite the right questions.

(I've looked at Brendan Gregg's Off-CPU Analysis; an off-CPU flame graph analysis actually kind of pointed in the direction of network writes too, but it was hard to interpret and hard to get much out of. Wanting some degree of confirmation and more visibility into this is what led me to write fdrwmon.d.)
