The problem of multiple NVMe drives in a PC desktop today

November 29, 2019

My office workstation currently has two 250 GB Samsung 850 EVO SSDs (and some HDs). These were decent SSDs for their era, but they're now several years old and 250 GB isn't very large, so as part of our general stocking up on 500 GB SSDs at good sale prices, I get to replace them. To my surprise, it turns out that decent 500 GB NVMe drives can now be had at roughly the same price as 500 GB SSDs (especially during sales), so I got approval to get two NVMe drives as replacements instead of two SSDs. Then I realized I had a little issue, because my motherboard has only one M.2 NVMe slot.

In general, if you want multiple NVMe drives in a desktop system, you're going to have problems that you wouldn't have with the same number of SSDs (or HDs). PC motherboards have been giving you lots of SATA ports for a long time now, but M.2 slots are much scarcer. I think that part of this is simply an issue of physical board space, since M.2 slots need a lot more space than SATA ports do, but part of it also seems to be that M.2 drives consume a lot more PCIe lanes than SATA ports do. An M.2 slot needs at least two lanes and really you want it to have four, and even today there are only so many PCIe lanes to go around, at least on common desktops.
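As a rough sketch of why the lane count matters, here is the usual back-of-the-envelope arithmetic (the per-lane figure is the standard approximation for PCIe 3.0, not something from this post):

```python
# PCIe 3.0 runs at 8 GT/s with 128b/130b encoding, which works out to
# roughly 985 MB/s of usable bandwidth per lane; SATA 3 tops out around
# 550 MB/s for the entire port.  These are approximations.
PCIE3_MBPS_PER_LANE = 985
SATA3_MBPS = 550

def m2_bandwidth_mbps(lanes):
    """Approximate usable bandwidth of an M.2 slot with this many lanes."""
    return lanes * PCIE3_MBPS_PER_LANE

print(m2_bandwidth_mbps(2))  # 1970: even an x2 slot comfortably beats SATA
print(m2_bandwidth_mbps(4))  # 3940: x4 is what fast NVMe drives really want
```

So even a two-lane M.2 slot outruns a SATA port several times over, which is part of why each M.2 slot is so much more expensive in PCIe lanes than a SATA port is.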

(I suspect that this is partly segmentation on the part of Intel and to a lesser extent AMD. They know that server people increasingly want lots of PCIe lanes, so if they restrict that to expensive CPUs and chipsets, they can sell more of them. Unusual desktop people like me get to lose out.)

I'm solving my immediate problem by getting a PCIe M.2 adapter card, because fortunately my office desktop has an unused PCIe x4 card slot right now. But this still leaves me with potential issues in the long run. I mirror my drives, so I'll be mirroring these two NVMe drives, and when I replace a drive in such a mirror I prefer to run all three drives at once for a while rather than break the mirror's redundancy to swap in a new drive. With NVMe drives, that would require two add-on cards on my current office machine and I believe it would drop my GPU from x16 to x8 in the process (not that I need the GPU bandwidth, since I run basic X).
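With Linux software RAID (one way to run such a mirror; the post doesn't say what is in use here), temporarily running three drives looks roughly like the following sketch. The device names are made up for illustration:

```shell
# Sketch: grow a two-way md mirror to three-way while the new drive
# resyncs, run all three for a while, then shrink back to two.
# /dev/md0, /dev/nvme2n1p1, and /dev/sda1 are hypothetical names.
mdadm /dev/md0 --add /dev/nvme2n1p1
mdadm --grow /dev/md0 --raid-devices=3
# ... wait for the resync to finish and run all three drives a while ...
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
mdadm --grow /dev/md0 --raid-devices=2
```

The point of this dance is that the mirror keeps full redundancy the whole time; the cost is that you need a third drive connected, which with NVMe means a third M.2 slot or adapter card.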

(And if I wanted to put a 10G-T Ethernet card into my desktop for testing, that too would need another x4-capable slot, and I'd have my GPU drop to x8 to get the PCIe lanes. Including the GPU slot, my motherboard has only three x4-capable card slots.)
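The lane budget behind all of this can be sketched in a few lines. This is a hypothetical model of a common mainstream setup (16 CPU lanes for card slots, with the x16 GPU slot bifurcating to x8/x4/x4 once other CPU-attached slots are populated); the exact split is board-specific:

```python
# Hypothetical lane-budget sketch for a mainstream 2019 desktop CPU.
# Many boards split the GPU's x16 to x8/x4/x4 as soon as the secondary
# CPU-attached slots are occupied; treat these numbers as illustrative.
def slot_widths(cards_in_secondary_slots):
    """Return (gpu_lanes, per_extra_card_lanes) under an x8/x4/x4 split."""
    if cards_in_secondary_slots == 0:
        return (16, 0)
    return (8, 4)

print(slot_widths(0))  # (16, 0): GPU alone keeps the full x16
print(slot_widths(2))  # (8, 4): two M.2 adapter cards drop the GPU to x8
```

This is why adding a second adapter card (or a 10G-T card) costs the GPU half its lanes: the lanes have to come from somewhere, and on common desktops the GPU slot is the only place with lanes to spare.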

One practical issue here is that apparently PCIe M.2 adapter cards can vary somewhat in quality and the resulting NVMe IO rates you get, and it's hard to know whether or not you're going to wind up with a decent one. Based on the low prices for cards with a single M.2 slot and the wide collection of brands I'd never heard of, this is a low margin area dominated by products that I will politely call 'inexpensive'. The modern buying experience for such products is not generally a positive one (good luck locating decent reviews, for example).

(Also, apparently not all motherboards will boot from an NVMe drive on an adapter card in a PCIe slot. This isn't really an issue for me (if the NVMe drive in my motherboard's M.2 slot fails, I'll move the adapter drive to the motherboard), but it might be for some people.)

Hopefully all of this will get better in the future. If there is a large movement toward M.2 (and I think there may be), there will be demand for more M.2 capacity from CPUs and chipsets and more M.2 slots on desktop motherboards, and eventually the vendors will start delivering on that (somehow). This might be M.2 slots on the motherboard, or maybe more x8 and x16 PCIe slots and then adapter cards (and BIOSes that will boot from them).

Comments on this page:

From at 2019-11-29 02:39:55:

One practical issue here is that apparently PCIe M.2 adapter cards can vary somewhat in quality and the resulting NVMe IO rates you get

Hmm, that's a bit surprising to me – isn't it literally just a physical adapter from M.2 pins to PCIe 2x2 pins, with no circuitry of its own?

I've seen some 2x M.2 "RAID" adapters which have their own controller chips and as a result provide far lower speeds, but the most basic 1-slot adapter seems like it should be the same everywhere.

By cks at 2019-11-30 17:29:06:

I don't know any details about this; my information comes from Amazon review comments on one adapter, and they may have been inaccurate. There's a significant variety in adapter pricing (one not explained by inclusion of features such as claimed better heat sinking and so on), but that could just be branding. Detailed photos of one (old) card I could find make it look entirely passive, with just PCB traces and some resistors and diodes and stuff, but another one clearly has an 8-pin chip of some sort as well.

By Miksa at 2019-12-17 09:40:17:

GPU dropping from x16 to x8 should be a non-issue. I have read that only the latest Nvidia GeForce 2080 Ti is able to saturate a PCIe 3.0 x8 slot, and even then the performance penalty is minimal.
