On servers maybe moving to M.2 NVMe drives for their system drives

December 2, 2021

We've been looking into getting some new servers (partly because a number of our existing Dell R210 IIs are starting to fail). Although we haven't run into this yet ourselves, one of the things we've heard in the process of this investigation is that various lines of basic servers are trying to move to M.2 NVMe system disks instead of 3.5" or 2.5" disks. People are generally unhappy about this, for a number of reasons, including that these are not fancy hot-swappable M.2 NVMe, just basic motherboard plug-in M.2.

(Not all basic servers have hot swap drive bays even as it stands, though. Our Dell R210 IIs have fixed disks, for example, and we didn't consider that a fatal flaw at the time.)

My first knee-jerk reaction was that server vendors were doing this in order to get better markups from more expensive M.2 NVMe drives (and I just assumed that M.2 NVMe drives were more expensive than 2.5" SATA SSDs, which doesn't always seem to be the case). However, now I think they have a somewhat different profit-focused motive, which is to lower their manufacturing costs (they may or may not lower their prices; they would certainly like to pocket the cost savings as profit).

As far as I know, basic motherboard M.2 NVMe is fairly straightforward on a parts basis. I believe you break out some PCIe lanes that the chipset already supplies to a new connector, add some small mounting hardware, and that's about it. The M.2 NVMe drive PCB that you've received as a complete part then plugs straight into that motherboard connector during server assembly.

(If the chipset doesn't have two spare sets of x4 PCIe lanes (or maybe just one set), you don't try to make the server an M.2 NVMe one.)
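One way to see how direct this is: on Linux, each NVMe drive shows up as its own PCIe device, with no SATA or SAS controller sitting between it and the chipset. Here's a minimal sketch (my addition, assuming Linux's usual sysfs layout) that lists the NVMe controllers in a machine along with the PCIe address each one sits at; 'lspci' and the nvme-cli 'nvme list' command will show you much the same information.

    #!/usr/bin/env python3
    # Sketch: list NVMe controllers and the PCIe device each one sits
    # behind, showing that an M.2 NVMe drive is just a PCIe device.
    # Assumes Linux and the usual /sys/class/nvme sysfs layout.
    from pathlib import Path

    def nvme_controllers():
        for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
            # /sys/class/nvme/nvmeN/device is a symlink to the PCIe
            # device directory (e.g. .../0000:01:00.0).
            pci_addr = (ctrl / "device").resolve().name
            model_file = ctrl / "model"
            model = model_file.read_text().strip() if model_file.exists() else "?"
            yield ctrl.name, pci_addr, model

    if __name__ == "__main__":
        for name, pci_addr, model in nvme_controllers():
            print(f"{name}: PCIe {pci_addr} ({model})")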

By contrast, 2.5" or 3.5" drives require more physical parts and work inside the server. Each drive bay needs a power cable and a SATA or SAS cable (which go into their own set of motherboard connectors), and then you need some physical mounting hardware in the server itself, either as hot-swap drive bays on the front or internal mounts. During physical enclosure design you'll have extra objects in the way of the airflow through the server and you'll need to figure out the wiring harness setup. During server assembly you'll have extra work to wire up that extra wiring harness, mount whatever drives people have bought into the drive bays, and put the drive bays in the machine.

All of this is probably not a huge extra cost, especially at scale. But it's an extra cost, and I suspect that server vendors (especially of inexpensive basic servers) are all for getting rid of it, whether or not their customers really like or want M.2 NVMe system disks. If M.2 NVMe ends up being the least expensive SSD form factor, as I suspect will happen, server vendors get another cost savings (or an opportunity to undercut other server vendors on pricing).


Comments on this page:

By Graziano at 2021-12-03 04:09:44:

There is another issue: size. In my previous job (working on HPC too), we had a few storage clusters (Ceph). The storage nodes were computers with SAS boxes attached to them, for a total of three or four rack units each. At some point we started to throw away that junk and replace it with NVMe-based nodes. We ended up with the same storage capacity, much better performance, only one rack unit in size (and much lower power consumption).

Since those drives usually also hold the swap partition, the huge benefits of NVMe far outweigh the lack of hot-swap capability.

It may be worth researching the EDSFF interface:

In the not too distant future (2022) we are going to see a rapid transition away from two beloved SSD form factors in servers. Both the M.2 and 2.5″ SSD form factors have been around for many years. As we transition into the PCIe Gen5 SSD era, EDSFF is going to be what many of STH’s community will want. Instead of M.2 and 2.5″ we are going to have E1.S, E1.L, E3.S, and E3.L along with a mm designation that means something different than it does with M.2 SSDs.

Though I'm sure M.2 will stick around for a while just because of current volumes and the large 'legacy' installed base.

By cks at 2021-12-03 13:57:13:

I don't really believe the 2022 timeline presented for EDSFF even for large storage servers using NVMe, much less for basic servers that only have a few disks. For basic servers that want to minimize part count by mounting the system disks on the motherboard, EDSFF seems to have little advantage over M.2 and M.2 hardware (and design expertise) is widely available today.
