10/40GbE Architecture Efficiency Maxed Out? It’s Time to Deploy 25/50/100GbE

 

In 2014, the IEEE rejected the idea of standardizing 25GbE and 50GbE over one lane and two lanes, respectively. In response, a group of technology leaders (including Mellanox, Google, Microsoft, Broadcom, and Arista) formed the 25Gb Ethernet Consortium to create an industry standard defining interoperable solutions. The Consortium has been so successful in its mission that many of the larger companies that had opposed standardizing 25GbE in the IEEE have since joined it and are now top-level promoters. The IEEE, too, has reversed its original position and has now standardized 25/50GbE.

However, now that 25/50GbE is an industry standard, it is interesting to look back and analyze whether the decision to form the Consortium was the right one.

[Figure 1]

There are many ways to approach such an analysis, but the best is to compare the efficiency that modern ultra-fast, ultra-scalable data centers achieve when running over a 10/40GbE architecture versus a 25/50/100GbE architecture. Here, too, many parameters can be analyzed, but the most important is the architecture’s ability to deliver (near) real-time data processing (serving the ever-growing “mobile world”) at the lowest possible total cost of ownership (TCO) per virtual machine (VM).

Of course, processing data in (near) real time requires higher performance, but it also needs cost-efficient storage, which means deploying scale-out, software-defined storage with flash-based disks. Doing so enables Ethernet-based networking and eliminates the need for an additional, separate network (such as Fibre Channel) dedicated to storage, thereby reducing overall deployment and maintenance costs.

To further reduce cost while still supporting the faster speeds that flash-based storage can deliver, it is more efficient to use a single 25GbE NIC than three 10GbE NICs. Running over 25GbE also reduces the number of switch ports and cables by a factor of three, so access to storage is accelerated at a lower system cost. A good example is the NexentaEdge high-performance scale-out block and object storage that Cambridge University has deployed for its OpenStack-based cloud. (The cabling arithmetic is sketched below.)

[Figure 2]
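To make the cabling arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The server count and the helper function are illustrative assumptions, not figures from the deployment above; the point is simply how one 25GbE port per server compares with three 10GbE ports in component count and aggregate bandwidth:

```python
# Back-of-envelope comparison: three 10GbE NIC ports per server versus a
# single 25GbE NIC port per server. The server count is an illustrative
# assumption; adjust it for your own deployment.

SERVERS = 100  # hypothetical deployment size


def fabric_counts(ports_per_server, port_speed_gbps):
    """Return (nic_ports, switch_ports, cables, aggregate_gbps).

    In this simple model, each NIC port consumes one switch port and one cable.
    """
    ports = SERVERS * ports_per_server
    return ports, ports, ports, ports * port_speed_gbps


for label, ports, speed in (("3 x 10GbE", 3, 10), ("1 x 25GbE", 1, 25)):
    nics, switch_ports, cables, bandwidth = fabric_counts(ports, speed)
    print(f"{label}: {nics} NIC ports, {switch_ports} switch ports, "
          f"{cables} cables, {bandwidth} Gb/s aggregate")
```

Under these assumptions, the 25GbE design uses one-third of the NIC ports, switch ports, and cables while giving up only one-sixth of the aggregate bandwidth, which is where the system-cost saving comes from.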

Building a bottleneck-free storage system is critical to achieving the highest possible efficiency across the varied workloads of a virtualized data center. (VDI performance issues, for example, begin in the storage infrastructure.) No less important, however, is reducing the cost per VM, which is best accomplished by maximizing the number of VMs that can run on a single server. With the growing number of cores per CPU and of CPUs per server, hundreds of VMs can run on a single server, cutting the cost per VM; but a faster network is essential to avoid becoming I/O bound. For example, a simple ROI analysis of a 5,000-desktop VDI deployment, comparing hardware CAPEX savings alone, shows that running over 25GbE cuts the cost per VM in half. Adding software costs and OPEX improves the ROI further.

[Table: hardware CAPEX comparison for the 5,000-desktop VDI deployment]
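The exact numbers depend on pricing, but the shape of the per-VM CAPEX calculation is easy to reproduce. The sketch below uses purely hypothetical unit costs and VM-density figures (none are taken from the table above) to show how relieving the I/O bottleneck with 25GbE can roughly halve the hardware cost per desktop:

```python
# Hypothetical per-VM CAPEX sketch for a 5,000-desktop VDI deployment.
# Every unit cost and density figure below is an illustrative placeholder,
# not vendor pricing or data from the table above.

DESKTOPS = 5000


def capex_per_vm(vms_per_server, nics_per_server, nic_cost, port_cost, server_cost):
    """Hardware CAPEX per desktop: servers plus their NICs and switch ports."""
    servers = -(-DESKTOPS // vms_per_server)  # ceiling division
    total = servers * (server_cost + nics_per_server * (nic_cost + port_cost))
    return total / DESKTOPS


# Assumed densities: servers are I/O bound at 100 VMs on 10GbE,
# while 25GbE lifts that ceiling to 200 VMs.
cost_10gbe = capex_per_vm(100, 3, nic_cost=150, port_cost=200, server_cost=10_000)
cost_25gbe = capex_per_vm(200, 1, nic_cost=300, port_cost=300, server_cost=10_000)

print(f"CAPEX per desktop over 10GbE: ${cost_10gbe:,.2f}")
print(f"CAPEX per desktop over 25GbE: ${cost_25gbe:,.2f}")
```

With these placeholder inputs, doubling VM density halves the server count, and the single 25GbE port per server more than offsets its higher unit cost against three 10GbE ports.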

The growth in computing power per server and the move to faster flash-based storage systems demand higher-performance networking. The old 10/40GbE-based architecture simply cannot hit the right density/price point, and the new 25/50/100GbE speeds are therefore the right choice to close the ROI gap.

As such, the move by Mellanox, Google, Microsoft, and others to form the 25Gb Ethernet Consortium and push ahead with 25/50GbE as a standard despite the IEEE’s initial short-sighted rejection now looks like an enlightened decision, not only because of the IEEE’s ultimate change of heart, but even more so because of the performance and efficiency gains that 25/50GbE bring to data centers.

About Motti Beck

Motti Beck is Sr. Director of Enterprise Market Development at Mellanox Technologies Inc. Before joining Mellanox, Motti was a founder of BindKey Technologies, an EDA startup that provided deep-submicron semiconductor verification solutions and was acquired by DuPont Photomasks, and of Butterfly Communications, a pioneering provider of Bluetooth solutions that was acquired by Texas Instruments. Prior to that, he was a business unit director at National Semiconductor. Motti holds a B.Sc. in computer engineering from the Technion – Israel Institute of Technology. Follow Motti on Twitter: @MottiBeck
