Before you break out the torches and pitchforks, let's play this out. Many of today's advanced HPC workloads, such as AI, Machine Learning, and Data Analytics, run very well over Ethernet. With Mellanox Spectrum™ 25/50/100GbE Ethernet switches and ConnectX® network adapters, there is a very compelling argument that Ethernet may be the best and most efficient networking solution for them. And it's becoming today's new reality.
Let's face it: 10GbE networks can't support the bandwidth needed by Artificial Intelligence and other demanding data-driven workloads. These newer technologies demand enormous computational capability and place huge demands on the network infrastructure. Twenty years ago, when 10GbE was first announced, it was considered overkill for most applications and workloads…and you had plenty of capacity on your 386 processor and your 1.44MB 3.5-inch floppy disk. Today's screaming-fast CPU speeds, multi-core processors, and the advent of GPUs demand increased network speeds to move data in and out. State-of-the-art NVMe storage is setting new performance standards, and NVMe over Fabrics is already taking hold in the industry for faster data transfer between the host computer and wickedly fast solid-state storage. With 200GbE speeds on the horizon, 10GbE is fading into the pile of yesterday's technology. I already have a spot reserved on my display shelf for a 10GbE network adapter, right next to my Palm Pilot and Nintendo 64 (both still work, by the way!).
Let's walk through a checklist of what already makes Mellanox Ethernet switches and adapters the best network solution on the market today:
For both deep learning training and inferencing, the accuracy of real-time decisions from today's most demanding cognitive computing applications depends on fast data delivery. Mellanox end-to-end Ethernet solutions meet and exceed the most demanding criteria and leave the competition in the dust. Just ask iFLYTEK, one of our customers currently using Mellanox Ethernet for speech recognition technology.
The chart below makes the performance advantage easy to see: distributed TensorFlow running over a Mellanox 100GbE network versus a 10GbE network, with both configurations taking advantage of RDMA. While distributed TensorFlow uses RDMA to eliminate processing bottlenecks, it is the Mellanox 100GbE network that, even with large-scale images, delivers the expected performance and exceptional scalability from the 32 NVIDIA Tesla P100 GPUs. For both 25GbE and 100GbE, it's evident that those still running 10GbE are falling short of the return on investment they thought they were achieving.
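For readers curious how a setup like this is wired together, here is a minimal sketch of pointing distributed TensorFlow 1.x at an RDMA-capable transport. The host names, ports, and cluster layout are illustrative assumptions, not the actual iFLYTEK or benchmark configuration; the `grpc+verbs` protocol requires a TensorFlow build compiled with verbs (RDMA) support.

```python
# Hypothetical two-worker, one-parameter-server cluster spec.
# All host names and ports below are made up for illustration.
cluster_spec = {
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222",
               "worker1.example.com:2222"],
}

# With a verbs-enabled TensorFlow 1.x build, selecting the
# "grpc+verbs" protocol moves tensor transfers between nodes
# over RDMA instead of plain TCP, which is where a 100GbE
# RDMA-capable fabric pays off:
#
#   import tensorflow as tf
#   cluster = tf.train.ClusterSpec(cluster_spec)
#   server = tf.train.Server(cluster,
#                            job_name="worker",
#                            task_index=0,
#                            protocol="grpc+verbs")
#   server.join()
```

The point of the sketch is that the application code barely changes: the same cluster spec works over TCP or RDMA, and the protocol selection decides whether the network's RDMA capability is actually used.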
Beyond the performance advantages, the economic benefits of running AI workloads over Mellanox 25/50/100GbE are substantial. Spectrum switches and ConnectX network adapters deliver unbeatable performance at an even more unbeatable price point, yielding an outstanding ROI. With flexible port counts and cable options allowing up to 64 fully redundant 10/25/50GbE ports in a 1U rack space, Mellanox end-to-end Ethernet solutions are a game changer for state-of-the-art data centers that want to maximize the value of their data.