Some network speeds and network-related speeds we see in mid 2022
We are not anywhere near a bleeding-edge environment. We still mostly use 1G networking, with 10G to a minority of machines, and a mixture of mostly SATA SSDs with some HDDs for storage. A few very recent machines have NVMe disks as their system disks. So here are the speeds that we see and that I think of as 'normal' here in mid 2022 in our environment, with something of a focus on where the limiting factors are.
On 1G connections, anything can get wire bandwidth for streaming TCP traffic (or should be able to; if it can't, you have some sort of problem). On 10G connections, a path between Linux machines without a firewall in the middle should readily run over 900 Mbytes/sec for a TCP connection without any specific tuning (and without Ethernet jumbo frames). We haven't tried to measure our OpenBSD firewalls recently, but I don't think they can move traffic this fast yet. SSH connections aren't this fast; we can count on hitting 1G wire bandwidth, but generally nowhere near 10G TCP bandwidth with a single SSH connection.
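(As an aside, the theoretical ceiling for TCP over 10G Ethernet with standard 1500-byte frames can be worked out from the per-frame overheads. Here's a back-of-the-envelope sketch; the overhead byte counts are standard Ethernet/IPv4/TCP figures, but real hardware and network stacks will land somewhat below the theoretical number, which is why '900+ Mbytes/sec without tuning' is a plausible practical result.)

```python
# Rough theoretical TCP goodput on 10G Ethernet with standard 1500-byte MTU.
# Real-world results will be somewhat lower than this number.

LINE_RATE_BITS = 10_000_000_000     # 10 Gbit/sec nominal line rate

MTU = 1500                          # standard Ethernet MTU, no jumbo frames
ETH_OVERHEAD = 7 + 1 + 14 + 4 + 12  # preamble, SFD, header, FCS, interframe gap
IP_TCP_HEADERS = 20 + 20            # IPv4 + TCP headers, no TCP options

wire_bytes_per_frame = MTU + ETH_OVERHEAD   # 1538 bytes on the wire per frame
payload_per_frame = MTU - IP_TCP_HEADERS    # 1460 bytes of TCP payload

goodput_bytes = LINE_RATE_BITS / 8 * payload_per_frame / wire_bytes_per_frame
print(f"theoretical 10G TCP goodput: {goodput_bytes / 1e6:.0f} Mbytes/sec")
# prints roughly 1187 Mbytes/sec
```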
(A machine with enough CPUs can improve the aggregate speed with multiple SSH connections in parallel.)
A single HDD will do sustained reads and writes somewhere between 100 Mbytes/sec and 240 Mbytes/sec; I generally assume 150 Mbytes/sec for most of our drives, although the very recent ones can go faster. However, there can be performance surprises in sustained HDD IO. A single HDD is generally fast enough to saturate a 1G network connection, but it takes quite a number of them operating in parallel, in some fashion, to reach the practical limit of 10G network speeds. Similarly, it would take a lot of HDDs operating in parallel to hit PCIe controller bandwidth limits.
(You're most likely to get enough HDDs in parallel to reach 10G TCP speeds in a RAID6 or RAID5 array, especially if you want them to write at those speeds.)
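To put a rough number on 'quite a number in parallel': at my usual assumed 150 Mbytes/sec per drive, and an assumed practical 10G TCP rate of around 1100 Mbytes/sec (an assumption on my part, not a measured figure), the arithmetic looks like this:

```python
import math

# Back-of-the-envelope count of how many HDDs it takes, operating in
# parallel, to keep up with a single 10G TCP stream.

HDD_RATE = 150       # Mbytes/sec, typical assumed sustained HDD rate
TCP_10G_RATE = 1100  # Mbytes/sec, assumed practical 10G TCP throughput

drives_needed = math.ceil(TCP_10G_RATE / HDD_RATE)
print(f"HDDs in parallel to match 10G TCP: {drives_needed}")
# prints 8
```

Eight or so drives reading or writing at full speed at once is exactly the sort of thing you get from a reasonably wide RAID array, per the parenthetical above.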
A single SATA SSD will do sustained reads in the general area of 500 Mbytes/sec. Sustained SSD write performance is more uncertain, but I've observed rates over 400 Mbytes/sec for over an hour on some systems. A single SSD doing streaming reads is slower than a 10G TCP network connection but probably faster than a 10G SSH connection; however, even a simple pair of mirrored SSDs will likely provide enough read bandwidth to saturate a 10G TCP connection. Our experience is that two (SATA) SSDs don't hit PCIe bandwidth limits, but you can apparently hit them if you put enough SSDs on a single controller (our Linux fileservers seem to have an aggregate PCIe bandwidth limit for their drives on a SATA controller).
A single decent NVMe drive will definitely read fast enough to saturate a 10G TCP connection, even on a PCIe x2 link. As with SATA SSDs, I consider the sustained write performance to be more uncertain, and I don't have much data on it so far (we have no NVMe drives in situations where they would see sustained writes). However, generally the claimed sustained write performance numbers for NVMe drives are pretty good; if they hold up in real life, even a single NVMe drive should be able to write data at the full speed of a 10G TCP connection.
In a 1G network environment, the network is our limiting factor. In our 10G network environment doing transfers through SSH, SSH is probably the limiting factor. If you do direct TCP over 10G or use a high-bandwidth SSH setup, HDD performance may be your limiting factor, but SATA SSD performance is probably not the limit for reads (it might be for writes, since a mirrored pair of SATA SSDs only writes at the speed of one). It's likely that NVMe drives will make even 10G TCP performance your limiting factor for both reads and writes.
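One way to summarize all of this is that a transfer runs at the speed of its slowest component, so finding the limiting factor is just taking the minimum over the path. Here's a sketch of that view; the numbers are the rough Mbytes/sec figures from this entry, and the SSH-over-10G and NVMe rates in particular are assumptions on my part rather than measurements:

```python
# A transfer path runs at the speed of its slowest component. Rates are
# approximate Mbytes/sec; several are assumptions, per the comments.

rates = {
    "1G TCP": 118,                 # wire speed for 1G Ethernet
    "10G TCP": 1100,               # assumed practical 10G TCP, no tuning
    "SSH over 10G": 400,           # assumed single-connection SSH rate
    "single HDD": 150,
    "SATA SSD read": 500,
    "mirrored SATA SSD read": 1000,
    "NVMe read (PCIe x2)": 1600,   # assumed; enough to saturate 10G TCP
}

def bottleneck(*components):
    # The slowest component in the path sets the overall transfer speed.
    return min(components, key=lambda name: rates[name])

print(bottleneck("10G TCP", "SSH over 10G", "SATA SSD read"))
# prints SSH over 10G
print(bottleneck("1G TCP", "single HDD"))
# prints 1G TCP
```

With these numbers, SSH is the limit on a 10G SSH path and the network is the limit on any 1G path, matching the summary above.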
(However it will be years before we're using NVMe drives in any significant amounts, especially for things where bandwidth matters, unless a surprising amount of money rains on us out of the sky.)