I don't usually think about IO transfer times
All of this started when I was thinking about the differences between iSCSI disk IO and normal disk IO to regular disks. In light of our experiences I wound up thinking 'well, one of them is that iSCSI disk IO takes appreciable time to actually transfer between the 'disk' that is the iSCSI target and the host that uses it'. After all, a 128 Kbyte iSCSI IO will take around a millisecond just to get between the target and the initiator (in the best case). Then some useful part of my brain poked me, asking what the SATA data transmission rate is. The answer is of course that it's usually 3 Gbits/sec, or only (roughly) three times faster than gigabit Ethernet. Suddenly the difference between the two looks much smaller than I was thinking.
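To put some concrete numbers on this, here is a quick back-of-the-envelope sketch of raw transmission times. This deliberately ignores all protocol overhead (Ethernet framing, TCP and iSCSI headers, SATA's 8b/10b encoding), so real transfers are somewhat slower than this:

```python
KB = 1024

def transfer_ms(size_bytes, bits_per_sec):
    """Milliseconds to push size_bytes over a link at bits_per_sec,
    ignoring all protocol overhead."""
    return size_bytes * 8 / bits_per_sec * 1000

links = {
    "gigabit Ethernet":    1_000_000_000,
    "SATA 1.5 Gb (raw)":   1_500_000_000,
    "SATA 3 Gb (raw)":     3_000_000_000,
}

for name, rate in links.items():
    # How long does one 128 KB IO spend purely in transit?
    print(f"{name}: {transfer_ms(128 * KB, rate):.2f} ms per 128 KB")
```

This gives roughly 1.05 ms for gigabit Ethernet and 0.35 ms for raw 3 Gb SATA, which is where the 'only about three times faster' comparison comes from.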
(It's even less of a difference if your system winds up falling back to 1.5 Gbits/sec basic SATA. Per Wikipedia, signaling overhead makes it only somewhat faster than gigabit Ethernet.)
What's really going on is that an inaccurate mental model of hard drive IO had settled into my head. In this model transmission speeds were so much higher than platter read and write speeds that they could be ignored. I was thinking that the time-consuming bits of disk IO were seeking and then a bit of reading the data, while sending the data over the wire was trivial (at least for old-style spinning rust; I knew that SSDs were fast enough that transfer time mattered for them). This model may have been accurate at some point but it is dangerously close to being incorrect today; modern 3 Gb SATA is only around three times as fast as the platter read speeds that I usually deal with. Transmitting data between the drive and the computer can now take an appreciable amount of time.
This has a knock-on effect on the impact of 'over-reading', such as always reading a 128 KB block when the user asked for less. My traditional view has been that this was basically free (for local disks): disk drives usually read an entire track at a time anyway, and transmitting the extra data cost nothing because transmission was so fast in general. But since transmission speed does matter, this is not necessarily the case in real life; the extra transmission time alone may make a difference.
(Of course SATA has the edge over iSCSI, even in our environment, because systems generally have more SATA channels than they have iSCSI gigabit network links. A disk generally gets that 3 Gbits/sec to itself while with iSCSI all N disks are sharing one gigabit connection.)
Sidebar: this is actually worse in our environment
We're not just using SATA; we're using SATA port multipliers. My understanding of port multipliers is that they don't increase the link bandwidth, so we actually have 3 Gbits/sec being shared between four or five disks (depending on the specific chassis). This is enough disks that I'd expect simultaneous full-bore IO from all of the disks at once to run into the channel's bandwidth limit.
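A quick sketch of that math (the ~100 MB/sec platter streaming rate is an assumed ballpark figure for one disk, and I'm assuming the usual 8b/10b encoding overhead on the link):

```python
# One 3 Gbit/sec SATA link shared behind a port multiplier.
link_raw_bits = 3_000_000_000
# 8b/10b encoding leaves ~2.4 Gbit/sec of payload, i.e. ~300 MB/sec.
link_effective_bytes = link_raw_bits * 8 / 10 / 8

platter_mb_sec = 100  # assumed sustained streaming rate for one disk

for disks in (1, 4, 5):
    share_mb = link_effective_bytes / disks / 1_000_000
    print(f"{disks} disk(s): ~{share_mb:.0f} MB/sec each "
          f"(platter can stream ~{platter_mb_sec} MB/sec)")
```

With four or five disks each getting only 60-75 MB/sec of the shared link, simultaneous streaming from all of them should indeed hit the channel limit rather than the platters.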
(I guess it's time to go test that. I did some tests before but I should probably revisit them with the benefit of more thinking about things. And looking back it's striking that I didn't think to do the math on the SATA channel bandwidth limit at the time of those tests; I just handwaved things with the assumption that the channel would be more than fast enough.)