The next (or coming) way to connect SSDs to your system

June 27, 2015

Modern SSDs have a problem: flash chips are so fast that they outpace even high-speed SATA and SAS links. In the enterprise market the workaround for this is SSD 'drives' that are PCIe cards, but this has all sorts of drawbacks as a general solution. Since the companies involved here are not stupid, they've known this for some time and have come up with a new interconnection system, NVMe aka NVM Express.

The Wikipedia page is a bit confusing to outsiders, but as far as I can tell NVMe is essentially a standard for how PCIe SSDs should present themselves to the host system. NVMe devices advertise a specific PCI device class and promise a common set of registers, control operations, and so on; as a result, any NVMe device can be driven by a single common driver instead of each company's devices needing their own.

(Most PCI and PCIe devices need specific drivers because there's no standard for how they're controlled; each different device has its own unique collection of registers, operations, and so on. This gives us, eg, a zillion different PCI(e) Ethernet device drivers.)
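This common device class is what makes generic discovery possible. As a minimal sketch of the idea (assuming a Linux system where sysfs exposes PCI class codes; the helper names here are my own, not from any real tool), an NVMe controller carries the 24-bit PCI class code 0x010802: base class 0x01 (mass storage), subclass 0x08 (non-volatile memory), programming interface 0x02 (NVM Express):

```python
import glob

# PCI class code for an NVMe controller: base class 0x01 (mass storage),
# subclass 0x08 (non-volatile memory), prog-if 0x02 (NVM Express).
NVME_CLASS = 0x010802

def is_nvme_class(class_code):
    """Return True if a 24-bit PCI class code identifies an NVMe controller."""
    return class_code == NVME_CLASS

def find_nvme_devices(sysfs_root="/sys/bus/pci/devices"):
    """Scan sysfs for PCI devices whose class code marks them as NVMe."""
    found = []
    for path in glob.glob(sysfs_root + "/*/class"):
        with open(path) as f:
            code = int(f.read().strip(), 16)  # file holds e.g. "0x010802"
        if is_nvme_class(code):
            # path is .../devices/<pci-address>/class; keep the address part
            found.append(path.rsplit("/", 2)[1])
    return found
```

A single driver keyed on that class code (plus the standardized register layout behind it) is all the kernel needs, which is exactly why there's one nvme driver instead of one per vendor.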

If this were all that NVMe was, it would be kind of boring, because it would be restricted to actual PCIe card SSDs and those are never going to be really popular. But NVMe also has an associated physical connector standard called U.2 that lets you pull PCIe out over a cable to a conventional-ish SSD drive. This means that you can have a 2.5" form factor SSD mounted somewhere and cabled up that is an NVMe drive and thus is actually a PCIe device on one of your PCIe buses. Assuming everything works and U.2 ports appear in sufficient quantity on motherboards, this seems likely to compete with SATA for connecting SSDs in general, not just in expensive enterprise setups.

(U.2 used to be called SFF-8639 until this month. As you can tell, the ink is barely dry on much of this stuff.)

If I'm reading the tea leaves right, U.2 is somewhat less convenient than ordinary SATA because it requires cables and connectors that are a bit more than twice as big. This is going to impact port density and wiring density, but there are plenty of ordinary machines which have enough motherboard real estate and enough space for cables that this probably isn't a big concern. On the other hand I do expect a bunch of small motherboard and high density servers to deliberately stay with SATA or SAS for the higher achievable port density.

(PCIe and thus NVMe can also be connected up with a less popular connector standard called M.2. This is apparently intended for plugging bare-board SSDs directly into your motherboard instead of cabling things to mounts elsewhere, although I've read some things suggesting it can be coerced into working with cables.)

Does this all matter to ordinary people weighing the SSD inflection point? Maybe. My view is that it does matter in the long term for computer hardware companies, but that's going to take another entry to explain.
