Wandering Thoughts archives

2021-02-14

Some observed read latencies for SSDs and NVMe drives under Fedora 32

In a recent entry I wished for some information on the single-request latency of SSDs and NVMe drives. You can't get it from common IOPS figures, although per Randall's comments on this entry, some places publish 'QD1' (queue depth 1) measurements that can be more or less reverse engineered into the latencies they saw. I don't feel that I can do a good enough job of actually measuring this on our own hardware, but as it happens I can report on what I see in the 'wild' on my generally lightly loaded work and home Fedora 32 machines. These observations are available because I've long run a Prometheus install on both my home and my work machine, with the Cloudflare ebpf exporter set up to collect kernel block IO information.
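
To illustrate how the latency percentiles I talk about below can be pulled back out of Prometheus, here is a minimal sketch that asks the Prometheus HTTP API for read latency quantiles. The histogram metric and label names used here (bio_latency_seconds, device, operation) are assumptions for illustration; the real names depend on how the ebpf exporter is configured.

  # A minimal sketch, assuming the ebpf exporter exposes a block IO latency
  # histogram called 'bio_latency_seconds' with 'device' and 'operation'
  # labels; the real metric and label names depend on the exporter config.
  import requests

  PROM = "http://localhost:9090/api/v1/query"

  def read_latency_quantile(q, device):
      # histogram_quantile() estimates a quantile from the histogram's
      # cumulative buckets; rate() over [7d] averages over the whole week.
      expr = (
          f'histogram_quantile({q}, '
          f'sum by (le) (rate(bio_latency_seconds_bucket'
          f'{{device="{device}",operation="read"}}[7d])))'
      )
      resp = requests.get(PROM, params={"query": expr}, timeout=30)
      resp.raise_for_status()
      result = resp.json()["data"]["result"]
      return float(result[0]["value"][1]) if result else None

  for q in (0.5, 0.9, 0.99):
      print(q, read_latency_quantile(q, "nvme0n1"))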

I'm going to take the last week as sufficiently representative and only look at reads, and I'll start with my office workstation. My office workstation currently has two Kingston A2000 500 GB NVMe drives (both on PCIe x4 interfaces after some struggles) and two WD Blue 2 TB SSDs on SATA 6.0 Gbps interfaces. Assuming that I'm extracting the information correctly from Prometheus, over the past week things look like this:

  • The WD Blue SSDs have a median ('p50') for reads that's typically around 1 millisecond. Their p90 read is 29 milliseconds, and their p99 read is around 62 milliseconds.

  • The Kingston A2000 NVMes have a median ('p50') for reads that's typically around 100 to 200 microseconds; their p90 read is around 240 microseconds, and their p99 read is around 255 microseconds.

My home machine has two Crucial MX300 750 GB SSDs (and some hard drives) on SATA 6.0 Gbps interfaces. As before, I'm only going to look at reads. The median ('p50') is around 180 microseconds, the p90 is very spiky but often sits around 3 milliseconds, and the p99 read is up to 4 milliseconds most of the time, with occasional jumps to 16 milliseconds.

There are many possible explanations for all of these differences. One of them is that my office workstation is more idle than my home machine; it's possible that either the SSDs or the NVMes (or both) are not infrequently dropping into an idle state that slows their responses. Another possibility is that my office machine does enough additional read IO (because it's backed up by our central backup system) that it perturbs my overall data, even though the backups only last an hour or so every day.

(In the past seven days, the disks on my office workstation have roughly eight to nine times the total read IO as the disks on my home machine. I had no idea the difference was that large until I looked at my Prometheus metrics data for both, which is a good reason to collect it.)
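
For what it's worth, that sort of total read IO comparison is simple to make with a query against Prometheus. Here is a rough sketch of one way to do it, assuming the standard node exporter disk read bytes counter (which may or may not be exactly what my dashboards actually use):

  # A sketch of comparing total read IO across machines over the past week,
  # assuming the standard node exporter counter node_disk_read_bytes_total
  # is being scraped from both of them.
  import requests

  PROM = "http://localhost:9090/api/v1/query"
  # increase() gives each counter's growth over 7 days; summing by instance
  # totals it per machine so the two can be compared directly.
  expr = "sum by (instance) (increase(node_disk_read_bytes_total[7d]))"

  resp = requests.get(PROM, params={"query": expr}, timeout=30)
  resp.raise_for_status()
  for sample in resp.json()["data"]["result"]:
      gib = float(sample["value"][1]) / 2**30
      print(f'{sample["metric"]["instance"]}: {gib:.1f} GiB read in 7 days')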

I was going to include information about writes, but the write numbers are even more puzzling than the read numbers, and I've concluded that the drives are probably all cooking the books in their own ways.

linux/SSDSomeSeenReadLatencies written at 23:55:16

Link: What was the original reason for the design of AT&T assembly syntax?

This quite informative Stack Overflow answer (via) answers the question in the title, or at least provides a great deal of context that I didn't know. It turns out that the reason AT&T syntax puts the destination register second (instead of first, the way Intel syntax does) almost certainly stretches all the way back to how PDP-11s encoded instructions.

(The AT&T assembly syntax, commonly used on Unix systems but not uncommonly disliked (via), is a general cross-platform syntax that AT&T and Unix used, for the most part, across a range of platforms. The specific x86 version of AT&T syntax is yet another adaptation of this general syntax. More information on the differences between AT&T and Intel syntax for x86 can be found on, e.g., Wikipedia.)

links/ATTAssemblySyntaxOrigin written at 12:41:15

