Where Are We on This NVMe Thing?
Back in April 2015, during the Ethernet Technology Summit conference, my colleague Rob Davis wrote a great blog about NVMe Over Fabrics. He outlined the basics of what NVMe is and why Mellanox is collaborating on a standard to access NVMe devices over networks (over fabrics). We had two demos in our booth, from two vendors: Mangstor and Saratoga Speed.
The numbers from those demos were pretty impressive, but in the technology world, nothing stands still. One must always strive to be faster, cheaper, more reliable, and/or more efficient.
The Story Gets Better
Today, four months after the Ethernet Technology Summit, the Flash Memory Summit kicked off in Santa Clara, California. Mellanox issued a press release highlighting that we now have NINE vendors showing TWELVE demos of flash (and other non-volatile memory) being accessed over high-speed Mellanox networks at 40, 56, or even 100Gb/s. Mangstor and Saratoga Speed are both back with faster, more impressive demos, and we have additional demos from Apeiron, HGST, Memblaze, Micron, NetApp, PMC-Sierra, and Samsung. Here’s a quick summary:
It’s Secret and Stealthy, But You Can See their Demo
Apeiron Data is a somewhat stealthy startup, but they’re showing some very innovative NVMe solutions for accelerating big data. See their Apeiron Data Fabric (booth #819).
NVMe Over Fabrics at 100Gb/s
As mentioned, Mangstor is back with an upgraded NVMe Over Fabrics solution. Their NX6320 flash storage array now supports the Mellanox ConnectX-4 adapter for 100Gb Ethernet and can deliver 14 million IOPS. It’s rumored that another configuration using multiple Mangstor arrays can hit 50GB/s (yes, 50 GigaBytes per second) of throughput. Is the rumor true? Visit their booth to find out (booth #649).
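For a sense of scale, a single 100Gb/s link tops out at 12.5 GB/s, so hitting 50 GB/s necessarily spans multiple arrays and links, which is exactly why the rumored configuration uses several Mangstor arrays. A quick back-of-the-envelope sketch (the link count is illustrative and ignores protocol and encoding overhead):

```python
import math

# Back-of-the-envelope: how many 100Gb/s links does 50 GB/s of
# throughput require? (Illustrative only; real links lose some
# capacity to protocol and encoding overhead.)

def gbps_to_gbytes(gbps):
    """Convert a line rate in gigabits/s to gigabytes/s."""
    return gbps / 8.0

link_rate = gbps_to_gbytes(100)              # one 100GbE link: 12.5 GB/s
target = 50.0                                # rumored throughput, in GB/s
links_needed = math.ceil(target / link_rate)

print(link_rate)     # 12.5
print(links_needed)  # 4
```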
Micron has a demo of NVMe Over Fabrics supporting millions of IOPS with very low latency. It also uses the Mellanox ConnectX-4 adapter, running at 56Gb/s. They don’t have a booth, but they do have a keynote Wednesday at 11:40am.
Figure 1: Mellanox ConnectX-4 100Gb/s adapter supports fast NVMe Over Fabrics performance with both Mangstor and Micron
Super Fast Non-Volatile Memory with Amazingly Low Latency
HGST has an amazing demo using Phase Change Memory (PCM). PCM is persistent but much faster than NAND flash, while being much more affordable than DRAM. And if you set up access properly over Mellanox InfiniBand, total access latency across the network can be as low as 2us (2 microseconds). This TechRadar Pro article explains it nicely (booth #647).
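The reason fabric latency suddenly matters here is simple arithmetic: with slower NAND media, a microsecond of network time disappears in the noise, but with PCM it becomes a large share of the total. A rough sketch, using assumed round numbers (not measurements from this demo):

```python
# Illustrative latency budget: why fabric latency matters far more
# for PCM than for NAND. All three figures below are rough
# assumptions for illustration, not measured values.

NAND_READ_US = 90.0   # assumed NAND flash read latency, microseconds
PCM_READ_US = 1.0     # assumed PCM read latency, microseconds
FABRIC_US = 1.0       # assumed added latency of a low-latency RDMA fabric

def network_overhead_pct(media_us, fabric_us=FABRIC_US):
    """Fabric latency as a percentage of total remote access time."""
    return 100.0 * fabric_us / (media_us + fabric_us)

print(f"NAND: {network_overhead_pct(NAND_READ_US):.1f}% of total")  # ~1.1%
print(f"PCM:  {network_overhead_pct(PCM_READ_US):.1f}% of total")   # 50.0%
```

With PCM, the assumed 1us of media time plus 1us of fabric time gives the 2us total the article cites, and half of that budget is the network, so a slow fabric would dominate.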
Figure 2: HGST demos super-fast Phase Change Memory with super low-latency InfiniBand
Peerless Demos of NVMe with RDMA
PMC-Sierra also has some exciting technology to show; in fact, two demos. The first shows that throughput to an NVMe device over an RDMA network connection matches using that NVMe device locally, and that latency for remote access is only a few microseconds (6-7us) higher than for local access. The second shows how using PeerDirect RDMA to access the NVRAM device directly allows higher throughput and lower latency than traditional RDMA. Both demos feature the PMC-Sierra FlashTec NVRAM drive and the Mellanox ConnectX-3 NIC running the routable RoCE protocol. Full details are described in this PMC-Sierra blog.
Figure 3: PMC-Sierra demonstrates RDMA with Mellanox PeerDirect
iSER Makes Progress Too
As a standards-based block storage protocol that leverages iSCSI, iSER (iSCSI Extensions for RDMA) is ideal for customers and applications that want the traditional SCSI layer rather than a raw NVMe block device, and that want a mature storage protocol (or can’t wait for NVMe Over Fabrics to be standardized). Three vendors are showing iSER-capable flash arrays: NetApp with their EF-560 (booth #511), Samsung with their NVMe scale-out array (booth #307), and Saratoga Speed with their Altamont XP (booth #517).
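Part of iSER’s appeal is that, on a Linux initiator, it rides the standard open-iscsi tooling; switching from iSCSI-over-TCP to iSER is essentially a transport setting. A hedged sketch (the portal address and target IQN below are placeholders, not details from any of these vendors’ arrays; consult your array’s documentation):

```shell
# Discover targets on the array (placeholder portal address)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Switch the node's transport from the default TCP to iSER
iscsiadm -m node -T iqn.2015-08.com.example:flash-array \
         -o update -n iface.transport_name -v iser

# Log in; the session now runs over RDMA
iscsiadm -m node -T iqn.2015-08.com.example:flash-array --login
```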
Ceph on Flash
Ceph is traditionally known for big object storage and moderately high sequential throughput, not for IOPS-intensive workloads. But now both Samsung (booth #307) and another vendor (see if you can find them) are showing high-IOPS all-flash Ceph solutions with optimized Ceph code.
Figure 4: Using all flash with Mellanox 40GbE keeps Cephie the octopus happy.
What Can We Conclude?
Seeing all the excitement over NVMe Over Fabrics, iSER, Ceph with flash, and other flash demos that use Mellanox networking, we can draw a few conclusions:
If you’re coming to Flash Memory Summit 2015, visit our booth (#817), then go visit our partners to see these amazing demos! In our booth we will also be showing the new ConnectX-4 Lx adapter, which is optimized for 25Gb Ethernet, and our Spectrum switch, the fastest switch for 25, 40, 50, or 100Gb Ethernet. Booths are open Wednesday from 12-7pm and Thursday from 10am-2pm.