Some observed read latencies for SSDs and NVMe drives under Fedora 32

February 14, 2021

In a recent entry I wished for some information on the single request latency for SSDs and NVMe drives. You can't get it from common IOPS figures, although per Randall's comments on this entry, some places publish 'QD1' (queue depth 1) measurements that can be reverse engineered into more or less the latencies they saw. I don't feel that I can do a good enough job of actually measuring this on our own hardware, but as it happens I can report on what I see in the 'wild' on my generally lightly loaded work and home Fedora 32 machines. These observations are available because I've long run a Prometheus install on both my home and my work machine, with the Cloudflare ebpf exporter set up to collect kernel block IO information.

I'm going to take the last week as sufficiently representative and only look at reads, and I'll start with my office workstation. My office workstation currently has two Kingston A2000 500 GB NVMe drives (both on PCIe x4 interfaces after some struggles) and two WD Blue 2 TB SSDs on SATA 6.0 Gbps interfaces. Assuming that I'm extracting the information correctly from Prometheus, over the past week things look like this:
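The percentiles below are what Prometheus's histogram_quantile() reports over the exporter's cumulative latency histogram buckets (the exact metric name depends on how the ebpf exporter is configured; the biolatency example produces something like 'bio_latency_seconds_bucket'). As a minimal sketch of how such a quantile estimate is derived from cumulative buckets (my own illustration of the interpolation, not Prometheus's actual code):

```python
import math

def histogram_quantile(q, buckets):
    """Estimate the q-th quantile (0 < q < 1) from cumulative histogram
    buckets, using linear interpolation within the bucket the quantile
    falls into, much as Prometheus's histogram_quantile() does.

    buckets: list of (upper_bound, cumulative_count) pairs in ascending
    bound order, ending with (math.inf, total_count)."""
    total = buckets[-1][1]
    if total == 0:
        return math.nan
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            if math.isinf(bound):
                # Quantile falls in the +Inf bucket; the best we can
                # report is the last finite bucket boundary.
                return prev_bound
            # Interpolate linearly between the bucket's boundaries.
            frac = (rank - prev_count) / (count - prev_count)
            return prev_bound + (bound - prev_bound) * frac
        prev_bound, prev_count = bound, count
    return prev_bound
```

One consequence of this interpolation is that the reported percentiles are only as precise as the bucket boundaries, which is worth keeping in mind when reading the numbers below.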

  • The WD Blue SSDs have a median ('p50') for reads that's typically around 1 millisecond. Their p90 read is 29 milliseconds, and their p99 read is around 62 milliseconds.

  • The Kingston A2000 NVMes have a median ('p50') for reads that's typically around 100 to 200 microseconds; their p90 read is around 240 microseconds, and their p99 read is around 255 microseconds.

My home machine has two Crucial MX300 750 GB SSDs (and some hard drives) on SATA 6.0 Gbps interfaces. As before I'm only going to look at reads. The median ('p50') is around 180 microseconds, the p90 is very spiky but often sits around 3 milliseconds, and the p99 read is up to 4 milliseconds most of the time, with some jumps to 16 milliseconds.

There are many possible explanations for all of these differences. One of them is that my office workstation is more idle than my home machine; it's possible that either the SSDs or the NVMes or both are not infrequently going into an idle state that slows their responses. Another possibility is that my office machine does enough extra read IO (because it's backed up by our central backup system) to perturb my overall data, even though the backups only last an hour or so every day.

(In the past seven days, the disks on my office workstation have roughly eight to nine times the total read IO as the disks on my home machine. I had no idea the difference was that large until I looked at my Prometheus metrics data for both, which is a good reason to collect it.)

I was going to include information about writes but they're even more puzzling than the read numbers, and I've concluded that the drives are probably all cooking the books in their own ways.
