== 10G Ethernet is a sea change for my assumptions

We're soon going to migrate a bunch of filesystems to an SSD-based [[new fileserver ../solaris/ZFSFileserverSetupII]], all at once. Such migrations force us to do full backups of the migrated filesystems (to the backup system they appear as new filesystems), so a big move means a sudden surge in backup volume. As part of working out how to handle this surge, I had the obvious thought: we should upgrade the backup server that will handle the migrated filesystems to 10G Ethernet now. The 10G transfer speeds plus the source data being on SSDs would make it relatively simple to back up even this big migration overnight during our regular backup period.

Except I realized that this probably wasn't going to be the case. [[Our backup system ../sysadmin/DiskBackupSystem]] writes backups to disk, specifically to ordinary SATA disks that are not aggregated together in any sort of striped setup, and an ordinary SATA disk might write at 160 Mbytes per second on a good day. This is only slightly faster than 1G Ethernet and certainly nowhere near the speeds we can reasonably expect from 10G Ethernet in our environment. We can read data off the SSD-based fileserver and send it over the network to the backup server very fast, but that doesn't do us anywhere near as much good as it looks when the whole thing is then brought to a screeching halt by the inconvenient need to write the data to disk on the backup server. 10G will probably help the backup servers a bit, but it isn't going to be anywhere near a great speedup.

What this points out is that my reflexive assumptions are calibrated all wrong for 10G Ethernet. I'm used to thinking of the network as slower than the disks, often drastically so, but this is no longer even vaguely true. Even so-so 10G Ethernet performance (say 400 to 500 Mbytes/sec) utterly crushes single-disk bandwidth for anything except SSDs. If we get good 10G speeds, we'll be crushing even moderate multi-disk bandwidth (and that's assuming we get full-speed streaming IO rates and aren't seek limited). Suddenly the disks are the clear limiting factor, not the network. In fact even a single SSD can't keep up with 10G Ethernet at full speed; we can see this from the mere fact that SATA interfaces themselves currently max out at 6 Gbits/sec on any system we're likely to use.

(I'd run into this before even for 1G Ethernet, eg [[here IOTransferTimeAssumption]], but it evidently hadn't really sunk into my head.)

PS: I don't know what this means for our backup servers and any possible 10G networking in their future. 10G is likely to improve things somewhat, but the dual 10G-T Intel cards we use don't grow on trees and maybe it's not quite cost-effective for them right now. Or maybe the real answer is working out how to give them striped staging disks for faster write speeds.
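
To put rough numbers on just how lopsided this is, here's a quick back-of-the-envelope comparison in Python. The link efficiency factors and the 1 TB example size are illustrative assumptions, not measurements from our environment:

    # Back-of-the-envelope bandwidth and transfer-time comparison.
    # The efficiency factors and the 1 TB example size are assumptions
    # for illustration, not measured figures.

    GBIT = 1000 ** 3 / 8      # 1 Gbit/sec expressed in bytes/sec
    MBYTE = 1000 ** 2

    rates = {
        "single SATA disk (good day)":     160 * MBYTE,
        "1G Ethernet (~95% of wire)":      1 * GBIT * 0.95,
        "10G Ethernet (so-so, 450 MB/s)":  450 * MBYTE,
        "10G Ethernet (~95% of wire)":     10 * GBIT * 0.95,
        "SATA 3 link limit (8b/10b)":      6 * GBIT * 0.8,
    }

    tbyte = 1000 ** 4         # a hypothetical 1 TB surge of new full backups

    for name, bps in rates.items():
        print(f"{name:32s} {bps / MBYTE:6.0f} MB/s  ~{tbyte / bps / 3600:4.1f} hours/TB")

Even at so-so 10G speeds the single staging disk is the bottleneck by roughly a factor of three, and at anything close to full 10G wire speed even the SATA interface itself (about 600 Mbytes/sec after 8b/10b encoding overhead) can't keep up.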