50 Shades of Flash—Solutions That Won’t Tie Up Your Storage

 

Where Are We on This NVMe Thing?

Back in April 2015, during the Ethernet Technology Summit conference, my colleague Rob Davis wrote a great blog about NVMe Over Fabrics. He outlined the basics of what NVMe is and why Mellanox is collaborating on a standard to access NVMe devices over networks (over fabrics). We had two demos from two vendors in our booth:

  • Mangstor’s NX-Series array with NVMe Over Fabrics, using Mellanox 56GbE RoCE (or FDR InfiniBand), demonstrated >10GB/s read throughput and >2.5 million 4KB random read IOPS.
  • Saratoga Speed’s Altamont XP-L with iSER (iSCSI Extensions for RDMA), using Mellanox 56GbE RoCE to reach 11.6GB/s read throughput and 2.7 million 4KB sequential read IOPS.

These numbers were pretty impressive, but in the technology world, nothing stands still. One must always strive to be faster, cheaper, more reliable, and/or more efficient.

 

The Story Gets Better

Today, four months after the Ethernet Technology Summit, the Flash Memory Summit kicked off in Santa Clara, California. Mellanox issued a press release highlighting the fact that we now have NINE vendors showing TWELVE demos of flash (or other non-volatile memory) being accessed using high-speed Mellanox networks at 40, 56, or even 100Gb/s. Mangstor and Saratoga Speed are both back with faster, more impressive demos, and we have other demos from Apeiron, HGST, Memblaze, Micron, NetApp, PMC-Sierra, and Samsung. Here’s a quick summary:

 

It’s Secret and Stealthy, But You Can See Their Demo

Apeiron Data is a somewhat stealthy startup, but they’re showing some very innovative NVMe solutions for accelerating big data. See their Apeiron Data Fabric (booth #819).

 

NVMe Over Fabrics at 100Gb/s

As mentioned, Mangstor is back with an upgraded NVMe Over Fabrics solution. Their NX6320 flash storage array now supports Mellanox ConnectX-4 for 100Gb Ethernet and can do 14M (million) IOPS. It’s rumored that another configuration using multiple Mangstor arrays can hit 50GB/s (yes, 50 gigabytes per second) of throughput. Is this rumor true? Visit their booth to find out (booth #649).

Micron has a demo of NVMe Over Fabrics supporting millions of IOPS with very low latency. It also uses the Mellanox ConnectX-4 adapter, running at 56Gb/s. They don’t have a booth, but they do have a keynote Wednesday at 11:40am.

Figure 1: Mellanox ConnectX-4 100Gb/s adapter supports fast NVMe Over Fabrics performance with both Mangstor and Micron

 

Super Fast Non-Volatile Memory with Amazingly Low Latency

HGST has an amazing demo using Phase Change Memory (PCM). PCM is persistent but much faster than NAND flash, while being much more affordable than DRAM. And if you set up access properly with Mellanox InfiniBand, you can make total access latency across the network as low as 2us (2 microseconds); that is possible because an RDMA operation over InfiniBand completes in roughly a microsecond, so reaching the PCM remotely adds very little on top of the media access itself. This TechRadar Pro article explains it nicely (booth #647).


Figure 2: HGST demos super-fast Phase Change Memory with super low-latency InfiniBand

 

Peerless Demos of NVMe with RDMA

PMC-Sierra also has some exciting technology to show, in fact two demos. The first shows that throughput to an NVMe device over an RDMA network connection is the same as using that NVMe device locally, and that latency for remote access is only a few microseconds (6-7us) higher than local access. The second shows how using PeerDirect RDMA to access the NVRAM device directly allows higher throughput and lower latency than traditional RDMA. Both demos feature the PMC-Sierra Flashtec NVRAM drive and the Mellanox ConnectX-3 NIC supporting the routable RoCE protocol. Full details are described in this PMC-Sierra blog.


Figure 3: PMC-Sierra demonstrates RDMA with Mellanox PeerDirect
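
If you want to get a feel for that kind of local-versus-remote comparison on your own hardware, a crude way to do it is to time individual 4KB reads against a local NVMe drive and against a network-attached block device. The sketch below is just an illustration of the idea, not the tooling PMC-Sierra used; the device paths and sizes are made-up placeholders, and it assumes Linux with Python 3.7 or later.

    import mmap, os, random, statistics, time

    BLOCK = 4096          # 4KB reads, matching the I/O size quoted above
    SAMPLES = 10000

    def read_latencies(path, dev_bytes, samples=SAMPLES):
        """Time individual 4KB O_DIRECT reads at random offsets; returns microseconds."""
        fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
        buf = mmap.mmap(-1, BLOCK)    # page-aligned buffer, as O_DIRECT requires
        lat_us = []
        try:
            for _ in range(samples):
                off = random.randrange(dev_bytes // BLOCK) * BLOCK
                t0 = time.perf_counter()
                os.preadv(fd, [buf], off)
                lat_us.append((time.perf_counter() - t0) * 1e6)
        finally:
            os.close(fd)
        return lat_us

    if __name__ == "__main__":
        # Hypothetical device paths and sizes -- substitute your own local NVMe
        # drive and your RDMA-attached (iSER or NVMe Over Fabrics) block device.
        targets = [("local NVMe", "/dev/nvme0n1", 100 * 2**30),
                   ("remote over RDMA", "/dev/sdb", 100 * 2**30)]
        for name, path, size in targets:
            lats = read_latencies(path, size)
            print("%-18s median %6.1f us   p99 %6.1f us" %
                  (name, statistics.median(lats),
                   sorted(lats)[int(len(lats) * 0.99)]))

Single-threaded timing like this measures latency, not peak throughput, which is exactly the dimension where the remote-versus-local gap shows up.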

 

iSER Makes Progress Too

As a standards-based block storage protocol that leverages iSCSI, iSER (iSCSI Extensions for RDMA) is ideal for applications and customers that want the traditional SCSI layer rather than a raw NVMe block interface, or that want a mature storage protocol today (and can’t wait for NVMe Over Fabrics to be standardized). Three vendors are showing iSER-capable flash arrays: NetApp with their EF-560 (booth #511), Samsung with their NVMe scale-out array (booth #307), and Saratoga Speed with their Altamont XP (booth #517).

 

Ceph on Flash

Ceph is traditionally known for big object storage and moderately high sequential throughput, not for IOPS-intensive workloads. But now both Samsung (booth #307) and another vendor (see if you can find them) are showing high-IOPS all-flash Ceph solutions with optimized Ceph code.


Figure 4: Using all flash with Mellanox 40GbE keeps Cephie the octopus happy.
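
To see what an IOPS-style Ceph workload looks like at the API level, here is a minimal sketch that uses the standard python-rados bindings to write a stream of small 4KB objects and report the rate. The pool name and configuration path are assumptions, and the single-threaded loop carries no queue depth, so treat the number it prints as a floor rather than a benchmark of what these all-flash demos achieve.

    import time
    import rados   # python3-rados bindings that ship with Ceph

    POOL = "flashpool"            # hypothetical all-flash pool name
    OBJECTS = 5000
    PAYLOAD = b"\0" * 4096        # 4KB objects: the small-I/O pattern Ceph
                                  # has not traditionally been tuned for

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx(POOL)
    try:
        start = time.perf_counter()
        for i in range(OBJECTS):
            ioctx.write_full("obj-%d" % i, PAYLOAD)
        elapsed = time.perf_counter() - start
        print("%.0f 4KB object writes per second" % (OBJECTS / elapsed))
    finally:
        ioctx.close()
        cluster.shutdown()

Real clients drive many of these operations in parallel; the point of the all-flash (plus fast network) configurations is that the cluster keeps scaling as that parallelism goes up instead of bottlenecking on spinning disks.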

 

What Can We Conclude?

Seeing all the excitement over NVMe Over Fabrics, iSER, Ceph with flash, and other flash demos that use Mellanox networking, we can draw a few conclusions:

  • Non-volatile memory (NVMEM) is getting faster. Instead of just replacing hard drives, it’s ready to displace DRAM by letting customers build servers with more NVMEM and less DRAM, increasing application performance while lowering costs;
  • A fast, low latency network is absolutely essential to supporting fast non-volatile storage, whether using flash or something faster. RDMA is hugely beneficial, if not outright required;
  • Mellanox looks like the most popular choice for demonstrating NVMe Over Fabrics, peer-to-peer PCIe device access, RDMA, or 100Gb/s networking. We must be doing something right!

If you’re coming to Flash Memory Summit 2015, visit our booth (#817), then go visit our partners to see these amazing demos! We will also be showing in our booth the new ConnectX-4 Lx adapter, which is optimized for 25Gb Ethernet, and our Spectrum switch, which is the fastest switch for 25, 40, 50, or 100Gb Ethernet. Booths are open Wednesday from 12-7pm and Thursday from 10am-2pm.

 


About John F. Kim

John Kim is Director of Storage Marketing at Mellanox Technologies, where he helps storage customers and vendors benefit from high-performance interconnects and RDMA (Remote Direct Memory Access). After starting his high-tech career on an IT helpdesk, John worked in enterprise software and networked storage, with many years of solution marketing, product management, and alliances at enterprise software companies, followed by 12 years at NetApp and EMC. Follow him on Twitter: @Tier1Storage
