25 Is the New 10, 50 Is the New 40, 100 Is the New Amazing

 
Adapters, Ethernet, Storage, Switches

(This blog was inspired by an insightful article in EE Times, written by my colleague, Chloe Jian Ma.)

The latest buzz about Ethernet is that 25GbE is coming. Scratch that, it’s already here and THE hot topic in the Ethernet world, with multiple vendors sampling 25GbE wares and Mellanox already shipping an end-to-end solution with adapters, switches and cables that support 25, 50, and 100GbE speeds. Analysts predict 25GbE sales will ramp faster than any previous Ethernet speed.

Why? What's driving this shift?

Figure 1: Analysts predict 25/40/50/100GbE adapters will reach 57% of a $1.8 billion high-speed Ethernet adapter market by 2020. (Based on Crehan Research data published January 2016.)

These new speeds are so hot that, like the ageless celebrities you just saw on the Oscar Night red carpet, we say “25 is the new 10 and 50 is the new 40.” But whoa! Sure, everyone wants to look younger for the camera, but no 25-year-old actor wants to look 10. More importantly, why would anyone want 25GbE or 50GbE when we already have 40GbE and 100GbE?

The answer is about performance and cost at cloud scale.

 

| Speed | Lanes x Gb/s per Lane | Fibers/Wires* | Ratified by | First Product Avail. |
|-------|-----------------------|---------------|-------------|----------------------|
| 1GbE | 1 x 1 | 2 | IEEE in 1999 | 2000 |
| 10GbE | 1 x 10 | 2 | IEEE in 2002 | 2003 |
| 25GbE | 1 x 25 | 2 | 25GE Consortium in 2014, IEEE in process | late 2015 |
| 40GbE | 4 x 10 | 8* | IEEE in 2010 | 2011 |
| 50GbE | 2 x 25 | 4 | 25GE Consortium in 2014, IEEE in process | late 2015 |
| 100GbE | 10 x 10 | 20* | IEEE in 2010 | 2011 (switches only) |
| 100GbE | 4 x 25 | 8* | IEEE in 2010 | early 2015 |

*Optical networking can combine multiple lanes into one fiber with wavelength division multiplexing (WDM), so a 40GbE or 100GbE (4×25) fiber cable could use either 8 fibers without WDM or 2 fibers with WDM.

Figure 2: Chart showing Ethernet speed standards including the number of lanes and wires/fibers typically needed.

 

Performance

First, servers are getting faster. A mid-range x86 server can easily drive 20Gb/s of network throughput, while high-end servers can push 40, 50, or even 80Gb/s.

 

Second, storage is getting faster: NVMe SSDs today support 3GB/s (24Gb/s) of sequential read throughput each, and vendors are sampling NVMe SSDs that support over 4GB/s (32Gb/s). A recent Mellanox demo with Microsoft Windows Storage Spaces showed that one server with three NVMe SSDs could sustain 95Gb/s of throughput over a Mellanox 100GbE link using the Windows SMB Direct protocol. (See the Windows 100GbE demo on the Mellanox Microsoft solutions page.)
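
As a quick sanity check on those demo numbers, here is a back-of-the-envelope sketch; the 32Gb/s per-drive figure is just the 4GB/s NVMe throughput mentioned above, and the rest is simple arithmetic:

```python
# Rough check: can three NVMe SSDs actually fill a 100GbE pipe?
gbps_per_nvme = 4 * 8      # 4 GB/s sequential read per drive ~= 32 Gb/s
drives = 3

aggregate_gbps = drives * gbps_per_nvme
print(f"{drives} NVMe SSDs supply ~{aggregate_gbps} Gb/s")  # ~96 Gb/s
# The demo measured 95 Gb/s over a single 100GbE link, so the link,
# not the storage, was close to being the bottleneck.
```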

 

Third (here's the other Oscars tie-in): more movies and TV shows are being shot and edited at 4K (3840×2160 or 4096×2160) resolution. Uncompressed 4K video needs 12 to 26Gb/s of bandwidth per stream (depending on color depth and frame rate) to capture, edit, and render, so both 10GbE and 8Gb Fibre Channel are too slow. Some studios are already testing 6K and 8K video and expect 8K resolution to be commonplace by 2020.
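
To see where that 12 to 26Gb/s range comes from, here is a rough calculation. The formula (width x height x bits per pixel x frame rate) is standard for uncompressed video; the specific color depths and frame rates below are illustrative assumptions:

```python
# Uncompressed video bandwidth: pixels/frame x bits/pixel x frames/second.
def video_gbps(width, height, bits_per_pixel, fps):
    return width * height * bits_per_pixel * fps / 1e9

# Illustrative 4K settings (color depth and frame rate are assumptions):
print(f"{video_gbps(3840, 2160, 30, 50):.1f} Gb/s")  # 10-bit RGB at 50 fps, ~12.4
print(f"{video_gbps(4096, 2160, 48, 60):.1f} Gb/s")  # 16-bit RGB at 60 fps, ~25.5
# Both exceed 10GbE; the high end nearly saturates a 25GbE link.
```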

 

At this point someone is obliged to point out that not all servers are high-end, not all applications read from three NVMe SSDs at maximum speed, and not every film is shot and viewed in 4K resolution. All true, but even today's low-end and midrange servers with 3 SAS SSDs or a dozen fast HDDs (or 20 slow HDDs) can easily overwhelm a single 10GbE connection. That makes 25GbE ideal for many new servers, SSDs, and ultra-high-definition video workflows.
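
The same arithmetic holds for those more modest configurations; the per-device throughput figures below are typical-order assumptions, not measurements:

```python
# Aggregate sequential-read throughput for modest storage configs (Gb/s).
# Per-device figures are rough, typical-order assumptions.
configs = {
    "3 SAS SSDs":    3 * 8.0,   # ~1 GB/s each
    "12 fast HDDs": 12 * 1.6,   # ~200 MB/s each
    "20 slow HDDs": 20 * 0.8,   # ~100 MB/s each
}
for name, gbps in configs.items():
    print(f"{name}: ~{gbps:.0f} Gb/s -> "
          f"{'overwhelms' if gbps > 10 else 'fits in'} one 10GbE link")
```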

 

Cost

Obviously, if higher bandwidth didn't cost more, everyone would use it; in reality, cost matters too. Many mid-range application servers and clients today need more than 10Gb/s but less than 25Gb/s.

But why not just use 2x10GbE or 4x10GbE? Because it's less expensive to use 25GbE, which provides 2.5x the bandwidth over the same number of wires and switch ports but costs only about 1.5x more per NIC port than 10GbE. This gives 25GbE a 40% lower cost per unit of bandwidth than 10GbE. Analysts forecast 25GbE will reach the same price per port as 10GbE in just three years, by 2019, giving 25GbE an even better cost/bandwidth advantage.
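
The 40% figure falls straight out of the ratios quoted above:

```python
# Cost per Gb/s, using the ~1.5x per-port price ratio quoted above.
price_10, bw_10 = 1.0, 10    # 10GbE normalized to a price of 1.0
price_25, bw_25 = 1.5, 25    # ~1.5x the port price, 2.5x the bandwidth

savings = 1 - (price_25 / bw_25) / (price_10 / bw_10)
print(f"25GbE cost/bandwidth is {savings:.0%} lower than 10GbE")  # 40%
# At price parity (forecast for 2019), the savings grows to 60%.
```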

 


Figure 3: Forecast of 25GbE and 10GbE server Ethernet adapter pricing. Source: Chloe Jian Ma article in EE Times with graphic based on Crehan Research data published July 2015.

 

What if you need more than 25Gb/s? You can always use 40GbE (on 8 wires/fibers), which is growing rapidly in cloud and enterprise data centers. But there is also a 50GbE option, which carries 25% more data than 40GbE. The adapter ports cost about the same as 40GbE, making the cost/bandwidth 20% lower than for 40GbE.

Part of the cost savings comes simply from using one NIC and one switch port instead of two, but another important reason is the way 25, 50, and 100GbE do signaling. 10GbE and 25GbE both use a single lane (two wires or fibers), 50GbE uses two lanes, and 40GbE and 100GbE use four lanes (eight wires or fibers in most cases). The more lanes used, the greater the power consumption of the adapter and switch, the fewer ports you can squeeze into one switch, and the more expensive the cable. For example, a 100GbE switch port can support one connection at 40 or 100GbE but two connections at 25 or 50GbE. Because 25, 50, and 100GbE send and receive more data per lane than 10 or 40GbE, they make networking more efficient and less expensive.
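
Here is that lane math in one place, taken from the standards table above; the wire counts assume copper or non-WDM fiber (two wires or fibers per lane):

```python
# Lane counts and per-lane signaling rates for each Ethernet speed,
# from the standards table above (wires assume no WDM: 2 per lane).
speeds = {  # name: (lanes, Gb/s per lane)
    "10GbE":  (1, 10),
    "25GbE":  (1, 25),
    "40GbE":  (4, 10),
    "50GbE":  (2, 25),
    "100GbE": (4, 25),
}
for name, (lanes, per_lane) in speeds.items():
    print(f"{name}: {lanes} lane(s) x {per_lane} Gb/s, {2 * lanes} wires/fibers")
# Fewer, faster lanes mean less power, cheaper cables, and denser switches:
# a 4-lane 100GbE switch port can instead serve two 2-lane 50GbE connections.
```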


Figure 4: New upgrade paths for server and storage networks take advantage of faster signaling speeds per lane. Source: Chloe Jian Ma blog post published in EE Times in October 2015.

 

Networking at Cloud Scale

Large enterprises may deploy thousands of servers per year, while large cloud and Web 2.0 customers deploy thousands of servers per month and may roll out two or three new data centers each year. At these quantities, small cost savings on each network connection add up to large numbers, and additional factors come into play.

 

Besides the lower cost of the adapter ports themselves, one 25GbE port consumes less power and needs half the cabling and half the switch ports of two 10GbE ports; reducing the number of switch ports saves still more power plus rack space. It's also easier to manage one 25 or 50GbE connection than to bond two or four 10GbE connections per server, and you reduce the number of inter-switch links (ISLs) by 60% when using 100GbE instead of 40GbE connections between switches.
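
The 60% ISL reduction is just the per-link capacity ratio at work; a quick check, assuming a hypothetical 400Gb/s of traffic between two switches:

```python
# Inter-switch links needed for a hypothetical 400 Gb/s of aggregate
# traffic between two switches (the traffic figure is illustrative).
import math

traffic_gbps = 400
links_40  = math.ceil(traffic_gbps / 40)    # 10 x 40GbE ISLs
links_100 = math.ceil(traffic_gbps / 100)   #  4 x 100GbE ISLs
print(f"{links_40} ISLs at 40GbE vs {links_100} at 100GbE: "
      f"{1 - links_100 / links_40:.0%} fewer links")  # 60% fewer
```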

 

As a bonus, 25GbE can run over existing fiber optic cable plant designed for 10GbE, and 50 or 100GbE can run over existing fiber optic cable plant designed for 40GbE, just by changing the transceivers. This is a tremendous cost savings for large data centers that already use fiber cabling for 10 and 40GbE. And for hyperscale customers who use the Open Compute Project (OCP), these special server, rack, and NIC designs deliver additional capital cost, power, and space savings, as well as the ability to have four servers share one NIC using a special Mellanox feature called Multi-Host.

 


Figure 5: Mellanox ConnectX-4 Lx Multi-Host adapter in the OCP Yosemite chassis allows four servers to share one 100GbE NIC.

 

Saving $100 per server on adapters and another $300 per server in cabling and switch ports is only mildly interesting for customers deploying 20 servers per quarter ($8000 savings) but immensely appealing for those deploying 20,000 servers at a time ($8 million savings), plus the ongoing savings in power, cooling, and rack space.

 

The Bottom Line About the Bottom Line

The key message is that most new servers and the flash storage inside them require connections faster than 10GbE. Likewise, storage systems with 20+ spinning disks or 3+ SSDs (even the slower SAS/SATA SSDs) often need 20Gb/s or more of bandwidth. This makes 25GbE ideal for new server deployments, 40 or 50GbE ideal for storage, and 100GbE the most efficient link for aggregating switch traffic or bridging datacenter racks and rows. 25, 50, and 100GbE offer not only faster performance but lower cost and easier scaling for large datacenters.

 

Mellanox is happy to support all these solutions: we are the clear market leader in 40GbE adapters and the first vendor to ship 25, 50, and 100GbE adapters. In fact, Mellanox is the only vendor to offer an end-to-end solution for 10, 25, 40, 50, and 100GbE, including adapters, switches, cables, and transceivers, and is a leading supplier of OCP-compatible network cards.

 

You can see the Mellanox 25, 50, and 100GbE solutions, including our OCP adapters, next week at the 2016 US OCP Summit, March 9-10 in San Jose.

 


Figure 6: The Mellanox Spectrum SN2700 can support 32 ports of 40 or 100GbE and up to 64 ports of 25 or 50GbE.

 


About John F. Kim

John Kim is Director of Storage Marketing at Mellanox Technologies, where he helps storage customers and vendors benefit from high-performance interconnects and RDMA (Remote Direct Memory Access). After starting his high-tech career on an IT helpdesk, John worked in enterprise software and networked storage, with many years of solution marketing, product management, and alliances at enterprise software companies, followed by 12 years at NetApp and EMC. Follow him on Twitter: @Tier1Storage
