This week the National Association of Broadcasters (NAB) show is in full swing in Las Vegas, and the Ethernet Technology Summit (ETS) is running in Santa Clara, California. Today in the United States also happens to be Tax Day, when you must file your return and pay any extra taxes owed to the US Government. That makes it a great time to show off a new solution that aims to eliminate latency “taxes” from flash storage: it’s called NVMe Over Fabrics.
What Is NVMe and Why Would I Want It?
First, a brief history of NVMe (Non-Volatile Memory Express): Traditionally, flash storage is connected through SAS or SATA disk interfaces, or through a PCIe slot with proprietary drivers. SAS and SATA are proven solutions, but they (and their associated SCSI protocol layer) were designed for spinning disks, not flash. NVMe standardizes a flash-optimized command set for accessing flash devices over a PCIe bus, eliminating the SCSI latency tax. NVMe devices are shipping now with native drivers for Linux, Windows, and VMware.
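On Linux, the native NVMe driver exposes each namespace as a block device named /dev/nvme&lt;controller&gt;n&lt;namespace&gt; (for example, /dev/nvme0n1). As a minimal illustration of what that looks like from software (a sketch only; the helper names here are invented for this example and are not part of any NVMe specification or tool), a script could enumerate and parse those device names:

```python
import glob
import re

# Linux's native NVMe driver names namespace block devices
# /dev/nvme<controller>n<namespace>, e.g. /dev/nvme0n1.
NVME_NS_PATTERN = re.compile(r"^nvme(\d+)n(\d+)$")

def parse_nvme_name(name):
    """Return (controller, namespace) for an NVMe namespace device name,
    or None if the name is not an NVMe namespace device."""
    m = NVME_NS_PATTERN.match(name)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

def list_nvme_namespaces(dev_dir="/dev"):
    """Enumerate NVMe namespace block devices visible under dev_dir."""
    found = []
    for path in glob.glob(dev_dir + "/nvme*"):
        parsed = parse_nvme_name(path.rsplit("/", 1)[-1])
        if parsed is not None:
            found.append((path, parsed))
    return sorted(found)
```

Note that /dev/nvme0 (without the nN suffix) is the controller character device, not a namespace, so the pattern deliberately skips it.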
What Does NVMe Flash Have to Do with Fabrics?
NVMe is fabulous for connecting to flash inside a server or storage controller. But how do I share it, provide high availability, or fail over if it’s captive inside just one server? What if I need more NVMe devices than fit easily in one server? The history of enterprise storage shows customers want to share fast, reliable storage over a network, but the NVMe command set and PCIe bus are NOT friendly to sharing or to building a fabric outside the box. So NVM Express, Inc. members proposed a new NVMe Over Fabrics standard in September 2014 to enable remote access to NVMe devices over RDMA fabrics, which eliminate the network latency tax of TCP and multiple data copies. Mellanox joined the technical working group and is helping shape the standard, which is expected at the end of 2015.
But You Can See It Now!
Naturally, some customers want to see or use NVMe Over Fabrics now, Now, NOW!!! So Mangstor built some very fast NVMe cards and a pre-1.0-standard driver that supports it today. They leverage Mellanox ConnectX-3 HCAs and SwitchX-2-based switches, both of which support FDR 56Gb InfiniBand as well as 40Gb or 56Gb Ethernet. Using four of their fabulously fast NMX-16 PCIe cards (4TB each) and 2x56GbE ports to the storage server, they get >10GB/s read throughput (>8GB/s write throughput) and >2.5M random read IOPS at a 4KB block size. Latency in the best cases is the SAME as local flash access (and otherwise only 6-8µs more than local), proving there is little to no latency tax from the RDMA network. These are amazing performance numbers even for flash, and well suited to high-performance databases and post-production video processing.
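As a quick sanity check, the throughput and IOPS figures quoted above are consistent with each other and with the link bandwidth (this is plain arithmetic on the numbers in the text, not an independent measurement):

```python
# Figures quoted in the demo description.
BLOCK_SIZE = 4 * 1024          # 4KB random-read block size, in bytes
IOPS = 2.5e6                   # >2.5M random read IOPS

# IOPS x block size gives the implied read throughput.
read_throughput = IOPS * BLOCK_SIZE          # bytes per second
print(read_throughput / 1e9)                 # ~10.24 GB/s, consistent with ">10GB/s"

# Two 56Gb/s ports give the storage server this much raw link bandwidth.
link_bandwidth = 2 * 56e9 / 8                # bytes per second
print(link_bandwidth / 1e9)                  # 14.0 GB/s, so >10GB/s fits on the wire
```

So 2.5M 4KB reads per second works out to roughly 10.2 GB/s, matching the quoted throughput, and the 2x56GbE links leave headroom above it.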
See the Demo, but Don’t Forget to File!
So if you’re in Las Vegas or Silicon Valley today or tomorrow (April 15-16), see this amazing demo in the Echostreams booth (#SL15405) at NAB or the Mellanox booth (#304) at ETS.
We’ll help you avoid the SCSI tax and TCP tax, but you still have to pay your US Federal taxes!