The problem of multiple NVMe drives in a PC desktop today
My office workstation currently has two 250 GB Samsung 850 EVO SSDs (and some HDs). These were decent SSDs for their era, but they're now several years old and 250 GB isn't very large, so as part of our general stocking up on 500 GB SSDs at good sale prices, I get to replace them. To my surprise, it turns out that decent 500 GB NVMe drives can now be had at roughly the same price as 500 GB SATA SSDs (especially during sales), so I got approval to buy two NVMe drives as replacements instead. Then I realized I had a little issue: my motherboard has only one M.2 NVMe slot.
In general, if you want multiple NVMe drives in a desktop system, you're going to have problems that you wouldn't have with the same number of SATA SSDs (or HDs). PC motherboards have offered plenty of SATA ports for a long time now, but M.2 slots are much scarcer. Part of this is probably simple physical board space, since an M.2 slot takes up far more room than a SATA port, but part of it also seems to be that M.2 drives consume far more PCIe lanes than SATA ports do. An M.2 slot needs at least two lanes and really you want it to have four, and even today there are only so many PCIe lanes to go around, at least on common desktops; a mainstream Intel desktop CPU provides just 16 PCIe lanes directly, with everything else hanging off the chipset and sharing a single x4-equivalent link back to the CPU.
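To make the lane arithmetic concrete, here's a back-of-the-envelope budget in Python. The numbers are illustrative assumptions (16 CPU lanes, an x16 GPU slot, two x4 M.2 slots), not a description of any particular motherboard:

    # Back-of-the-envelope PCIe lane budget for a hypothetical mainstream
    # desktop CPU; all of these numbers are illustrative assumptions.
    CPU_LANES = 16            # lanes coming directly from the CPU

    gpu = 16                  # a GPU in the x16 slot wants all of them
    m2_slots = 2 * 4          # two M.2 NVMe slots at x4 each

    demand = gpu + m2_slots
    print(f"demand: {demand} lanes, supply: {CPU_LANES} lanes")
    if demand > CPU_LANES:
        # Something has to give; typically the x16 GPU slot drops to
        # x8 so the freed lanes can feed the other slots.
        print(f"over budget by {demand - CPU_LANES}; GPU likely drops to x8")

Even before you add a second M.2 slot, the CPU's lanes are fully spoken for by the GPU, which is why stuffing more x4 devices into a desktop so often comes at the GPU's expense.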
(I suspect that this is partly segmentation on the part of Intel and to a lesser extent AMD. They know that server people increasingly want lots of PCIe lanes, so if they restrict that to expensive CPUs and chipsets, they can sell more of them. Unusual desktop people like me get to lose out.)
I'm solving my immediate problem by getting a PCIe M.2 adapter card, because fortunately my office desktop has an unused PCIe x4 card slot right now. But this still leaves me with potential issues in the long run. I mirror my drives, so I'll be mirroring these two NVMe drives, and when I replace a drive in such a mirror I prefer to run all three drives at once for a while rather than break the mirror's redundancy to swap in the new drive. With NVMe drives, that would require two add-on cards in my current office machine, and I believe it would drop my GPU from x16 to x8 in the process (not that I need the GPU bandwidth, since I run basic X).
(And if I wanted to put a 10G-T Ethernet card into my desktop for testing, that too would need another x4-capable slot, and I'd have my GPU drop to x8 to get the PCIe lanes. Including the GPU slot, my motherboard has only three x4-capable card slots.)
One practical issue here is that PCIe M.2 adapter cards apparently vary somewhat in quality and in the NVMe IO rates you get from them, and it's hard to know in advance whether you're going to wind up with a decent one. Based on the low prices for cards with a single M.2 slot and the wide collection of brands I'd never heard of, this is a low-margin area dominated by products that I will politely call 'inexpensive'. The modern buying experience for such products is not generally a positive one (good luck locating decent reviews, for example).
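One thing you can at least do after the fact (on Linux) is check the negotiated PCIe link for each NVMe drive in sysfs. This is a minimal sketch that assumes the standard Linux sysfs layout; if the adapter or slot is only giving the drive x2, or a lower link speed than the drive supports, it will show up here:

    #!/usr/bin/env python3
    # Report the negotiated PCIe link for each NVMe controller.
    # A minimal sketch assuming the standard Linux sysfs layout.
    import glob
    import os

    def read_attr(dev, name):
        # sysfs attributes are small text files; report "unknown" if absent.
        try:
            with open(os.path.join(dev, name)) as f:
                return f.read().strip()
        except OSError:
            return "unknown"

    for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
        pci = os.path.join(ctrl, "device")  # symlink to the PCI device
        cur = f"x{read_attr(pci, 'current_link_width')} at {read_attr(pci, 'current_link_speed')}"
        best = f"x{read_attr(pci, 'max_link_width')} at {read_attr(pci, 'max_link_speed')}"
        print(f"{os.path.basename(ctrl)}: running {cur} (device max: {best})")

A drive that negotiates only x2 behind a cheap adapter will still work, just at roughly half its potential bandwidth, which is exactly the kind of degradation that's otherwise easy not to notice.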
(Also, apparently not all motherboards will boot from an NVMe drive on an adapter card in a PCIe slot. This isn't really an issue for me, since if the NVMe drive in my motherboard's M.2 slot fails I can move the adapter card's drive into the motherboard slot, but it might be for some people.)
Hopefully all of this will get better in the future. If there is a large movement toward M.2 NVMe drives (and I think there may be), there will be demand for more PCIe lanes from CPUs and chipsets and for more M.2 slots on desktop motherboards, and eventually the vendors will start delivering on that (somehow). This might take the form of more M.2 slots on the motherboard, or more x8 and x16 PCIe slots combined with adapter cards (and BIOSes that will boot from them).