== SSD versus NVMe for basic servers today (in early 2021)

I was recently reading Russell Coker's [[Storage Trends 2021 https://etbe.coker.com.au/2021/04/12/storage-trends-2021/]] ([[via https://planet.debian.org/]]). As part of the entry, Coker wrote:

> Last year NVMe prices were very comparable for SSD prices, I was
> hoping that trend would continue and SSDs would go away. [...]

Later Coker notes, about the current situation:

> It seems that NVMe is only really suitable for workstation storage and
> for cache etc on a server. So SATA SSDs will be around for a while.

Locally, [[we https://support.cs.toronto.edu/]] have an assortment of servers, mostly basic 1U ones, which have either two 3.5" disk bays, four 3.5" disk bays, or rarely a bunch of disk bays (such as [[our fileservers ../linux/ZFSFileserverSetupIII]]). None of these machines can natively take NVMe drives as far as I know, not even our 'current' Dell server generation (which is not entirely current any more). They will all take SSDs, though, possibly with 3.5" to 2.5" adapters of various sorts. So for us, SSDs fading away in favour of NVMe would not be a good thing, not until we turn over all our server inventory to ones using NVMe drives. Which raises the question of where those NVMe servers are and why they aren't more common.

For servers that want more than four drive bays, such as [[our fileservers]], my impression is that one limiting factor for considering an NVMe based server has generally been PCIe lanes. If you want eight or ten or sixteen NVMe drives (or more), the numbers add up fast if you want them all to run at x4 (our 16-bay fileservers would require 64 PCIe lanes). You can get a ton of PCIe lanes, but it requires going out of your way, in CPU and perhaps in CPU maker (to AMD, which server vendors seem to have been slow to embrace). You can get such servers ([[Let's Encrypt got some https://letsencrypt.org/2021/01/21/next-gen-database-servers.html]]), but I think they're currently relatively specialized and expensive. With such a high cost for large NVMe, most people who don't have to have NVMe's performance would rather buy SATA or SAS based systems like [[our fileservers]]. There's a little sketch of this lane arithmetic at the end of this entry.

(To really get NVMe speeds, these PCIe lanes must come directly from the CPU; otherwise they will get choked down to whatever the link speed is between the CPU and the chipset.)

Garden variety two drive and four drive NVMe systems would only need eight or sixteen PCIe lanes, which I believe is relatively widely available even if you're saving an x8 for the server's single PCIe expansion card slot. But then you have to physically get your NVMe drives into the system. People who operate servers really like drive carriers, especially hot-swappable ones. Unfortunately I don't think there's a common standard for this for NVMe drives ([[at one point there was U.2, but it's mostly vanished NVMeAndTechChange]]). In theory a server vendor could develop a carrier system that would let them mount [[M.2 https://en.wikipedia.org/wiki/M.2]] drives, perhaps without being hot swappable, but so far I don't think any major vendor has done the work to develop one.

(The M.2 form factor is where the NVMe volume is, due to consumer drives, so basic commodity 1U servers need to follow along. The Dell storage server model that Let's Encrypt got seems to use U.2 NVMe drives, which will presumably cost you 'enterprise' prices, along with the rest of the server.)
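(As a rough illustration of the PCIe lane arithmetic above, here's a small sketch. The specific lane budget and the x8 reserved for an expansion slot are assumptions for illustration, not the specs of any particular server or CPU.)

    # Rough sketch of the PCIe lane arithmetic for NVMe drive bays.
    # The lane budget numbers are illustrative assumptions, not the
    # specs of any particular server or CPU.

    def lanes_for_drives(drives, lanes_per_drive=4):
        """PCIe lanes needed to run every drive at full width (x4 by default)."""
        return drives * lanes_per_drive

    # Hypothetical CPU-provided lane budget, minus an x8 reserved for the
    # server's single PCIe expansion card slot.
    cpu_lanes = 48        # assumption; this varies a lot by CPU
    expansion_slot = 8
    available = cpu_lanes - expansion_slot

    for drives in (2, 4, 8, 10, 16):
        needed = lanes_for_drives(drives)
        fits = "fits" if needed <= available else "doesn't fit"
        print(f"{drives:2d} NVMe drives at x4 need {needed:2d} lanes "
              f"({fits} in {available} available lanes)")

(With these assumed numbers, two and four drive systems fit easily, but sixteen drives at x4 need 64 lanes and blow well past the budget, which is the basic problem for many-bay NVMe servers.)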
All of this seems to give us a situation where SATA remains the universal solvent of storage, especially for basic 1U servers. You can fit four 3.5" SATA drive bays into the front panel of a 1U server, which covers a lot of potential needs for people like us. We can go with two SSDs, four SSDs, two SSDs and two big HDs, and so on.

(NVMe drives over 2 TB seem relatively thin on the ground at the moment, although SSDs only go up one step to 4 TB if you want plenty of options. Over that, right now you're mostly looking at 3.5" spinning rust, which is another reason to keep 1U servers using 3.5" SATA drive bays.)