SSD versus NVMe for basic servers today (in early 2021)

April 12, 2021

I was recently reading Russell Coker's Storage Trends 2021 (via). As part of the entry, Coker wrote:

Last year NVMe prices were very comparable for SSD prices, I was hoping that trend would continue and SSDs would go away. [...]

Later Coker notes, about the current situation:

It seems that NVMe is only really suitable for workstation storage and for cache etc on a server. So SATA SSDs will be around for a while.

Locally, we have an assortment of servers, mostly basic 1U ones, which have either two 3.5" disk bays, four 3.5" disk bays, or rarely a bunch of disk bays (such as our fileservers). None of these machines can natively take NVMe drives as far as I know, not even our 'current' Dell server generation (which is not entirely current any more). They will all take SSDs, though, possibly with 3.5" to 2.5" adapters of various sorts. So for us, SSDs fading away in favour of NVMe would not be a good thing, not until we turn over all our server inventory to ones using NVMe drives. Which raises the question of where those NVMe servers are and why they aren't more common.

For servers that want more than four drive bays, such as our fileservers, my impression is that one limiting factor for considering an NVMe-based server has generally been PCIe lanes. If you want eight or ten or sixteen NVMe drives (or more), the numbers add up fast if you want them all to run at x4 (our 16-bay fileservers would require 64 PCIe lanes). You can get a ton of PCIe lanes, but it requires going out of your way in CPU and perhaps in CPU maker (to AMD, which server vendors seem to have been slow to embrace). You can get such servers (Let's Encrypt got some), but I think they're currently relatively specialized and expensive. With such a high cost for large NVMe configurations, most people who don't have to have NVMe's performance would rather buy SATA- or SAS-based systems like our fileservers.
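
As a very rough illustration of how quickly the lane math adds up, here is a minimal back-of-the-envelope sketch in Python. The drive counts and CPU lane budgets are illustrative assumptions (roughly 48 usable lanes for a mainstream server CPU, 128 for a high lane count part such as some AMD Epyc models), not the specs of any particular server:

    # Back-of-the-envelope PCIe lane budgeting for NVMe drives.
    # The lane budgets below are illustrative assumptions, not real server specs.
    LANES_PER_NVME = 4  # a full-speed NVMe drive wants a x4 link

    cpu_lane_budgets = {
        "mainstream server CPU": 48,
        "high lane count CPU (e.g. some AMD Epyc parts)": 128,
    }

    for drives in (2, 4, 8, 10, 16):
        lanes_needed = drives * LANES_PER_NVME
        print(f"{drives:2d} NVMe drives at x4 need {lanes_needed} PCIe lanes")
        for cpu, budget in cpu_lane_budgets.items():
            verdict = "fits" if lanes_needed <= budget else "does not fit"
            print(f"    {verdict} in the {budget} lanes of a {cpu}")

Running this makes the problem obvious: a 16-bay all-NVMe box wants 64 lanes for the drives alone, before you spend any lanes on networking or expansion slots.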

(To really get NVMe speeds, these PCIe lanes must come directly from the CPU; otherwise they will get choked down to whatever the link speed is between the CPU and the chipset.)
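
To put rough numbers on this, here is a small sketch using approximate PCIe 3.0 throughput and an assumed chipset uplink equivalent to a x4 link; both figures are assumptions and real servers vary:

    # Rough bandwidth arithmetic for NVMe drives hung off the chipset.
    # The per-lane figure is approximate and the x4-equivalent uplink is an
    # assumption; real chipsets and servers vary.
    PCIE3_GBPS_PER_LANE = 0.985   # roughly 985 MB/s per PCIe 3.0 lane after encoding
    DRIVE_LANES = 4               # each NVMe drive gets a x4 link
    UPLINK_LANES = 4              # assumed x4-equivalent CPU-to-chipset link

    per_drive = DRIVE_LANES * PCIE3_GBPS_PER_LANE
    uplink = UPLINK_LANES * PCIE3_GBPS_PER_LANE

    for drives in (1, 2, 4):
        wanted = drives * per_drive
        usable = min(wanted, uplink)
        print(f"{drives} drive(s) behind the chipset: want ~{wanted:.1f} GB/s, "
              f"get at most ~{usable:.1f} GB/s in total")

Even two NVMe drives behind the chipset end up sharing roughly one drive's worth of bandwidth back to the CPU.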

Garden-variety two-drive and four-drive NVMe systems would only need eight or sixteen PCIe lanes, which I believe are relatively widely available even if you're saving an x8 for the server's single PCIe expansion card slot. But then you have to physically get your NVMe drives into the system. People who operate servers really like drive carriers, especially hot-swappable ones. Unfortunately I don't think there's a common standard for this for NVMe drives (at one point there was U.2, but it's mostly vanished). In theory a server vendor could develop a carrier system that would let them mount M.2 drives, perhaps without being hot-swappable, but so far I don't think any major vendor has done the work to develop one.

(The M.2 form factor is where the NVMe volume is, thanks to consumer drives, so basic commodity 1U servers need to follow along. The Dell storage server model that Let's Encrypt got seems to use U.2 NVMe drives, which will presumably cost you 'enterprise' prices, along with the rest of the server.)

All of this seems to give us a situation where SATA remains the universal solvent of storage, especially for basic 1U servers. You can fit four 3.5" SATA drive bays into the front panel of a 1U server, which covers a lot of potential needs for people like us. We can go with two SSDs, four SSDs, two SSDs and two big HDs, and so on.

(NVMe drives over 2 TB seem relatively thin on the ground at the moment, although SATA SSDs only go one step further, to 4 TB, if you want plenty of options. Over that, right now you're mostly looking at 3.5" spinning rust, which is another reason to keep 1U servers using 3.5" SATA drive bays.)


Comments on this page:

By Andrew at 2021-04-12 23:43:05:

I object to "SSD vs NVMe" — they're NVMe SSDs.

For any nontrivial use like running databases, you need enterprise drives like the Intel DC series, where the performance stays consistent and doesn't fall off a cliff after some use, whether due to poor free space management, substandard QLC flash, running out of SLC cache, deduplication, or any of the many tricks consumer SSD makers use to make their drives look better than they really are in benchmarks.

The premium for enterprise drives for consistent performance and firmware testing is generally much higher than any premium due to the interface.

From 83.220.236.220 at 2021-04-13 06:40:39:

Do you mean NVMe vs AHCI?

It seems that the motherboard in your fileservers, the X11SPH-nCTF, does in fact support NVMe per the product page:

8. 2x Port NVMe PCI-E 3.0 x4 via OCuLink

I had to look up "OCuLink":

So in 2012, word started spreading that PCI-SIG was developing a standard cabled protocol for PCIe devices off the motherboard. And this standard would be free and unencumbered by corporate overlords, Apple and Intel.

Basically, just like Thunderbolt can carry PCIe over a cable, so can OCuLink, but it is more vendor-neutral via the PCI-SIG, in contrast with Thunderbolt, which is controlled by Intel (and had heavier license fees in 2017).

It didn't take off a lot because most vendors went with either the M.2 or U.2 interfaces:

However, in 2015 SFF-8639 was officially renamed "U.2" for four-lane PCIe storage applications. This has become somewhat more popular. So, in a way, U.2 is a cousin of OCuLink and some devices might even use OCuLink protocol over the U.2 connector!

U.2 drives look like conventional 2.5” SSD’s. So they might take off in servers and datacenter applications. But lately, most PCIe storage implementations are leaning towards the compact M.2 interface instead. And on the pro-sumer side, it’s almost as difficult to find a U.2 motherboard or drive as it was to find SATA Express!

  • Ibid

By Andy Smith at 2021-04-13 09:01:51:

I was interested to see the introduction of some 1U servers with 4x 3.5" hot swap SATA and 4x 2.5" hot swap NVMe: https://www.tyan.com/Barebones_GC68B7126_B7126G68V4E4HR

– Andy Smith

By -dsr- at 2021-04-13 09:07:46:

From my point of view, U.2 hasn't vanished -- U.2 is finally moving down from "if you have to ask if you can afford it" territory to the point where it's plausible that your next expensive fileserver will have lots of U.2 slots.

As you point out, having N NVMe SSDs implies 2-4x N PCIe lanes. Right now, that means buying AMD Epyc CPUs, which have lots of PCIe lanes, or the top end of Intel's CPUs. If you don't have a brand new system centered around that, you are unlikely to have more than 4 NVMe SSDs - two on the motherboard and 2 on an add-in PCIe adapter card, or else 2-4 in U.2 slots.

I have just ordered 2 development servers with 24 U.2 slots each. The premium for U.2 versions of NVMe SSDs over M.2 NVMe SSDs is not large; still, more than half the cost of each system is in disks, and I am not fully populating the slots yet.

In about 2 years, I expect "workstation" class systems to come with 2-4 U.2 slots, 2 M.2 slots, and 4 SATA3 interfaces for legacy storage.
