Mellanox Takes Supercomputing 16 by Storm

 

Although activities have been going on for a few days now, Supercomputing 16 officially opened last night in chilly Salt Lake City, Utah, with Mellanox Technologies leading the charge. I’ll be bringing you updates throughout the week, starting with today’s milestones and several key announcements. Today’s highlights include:

  • Big news from the TOP500 Supercomputers list: Mellanox InfiniBand was chosen by nearly four times more end-users than proprietary offerings in 2016. The list shows Mellanox accelerating the fastest supercomputer, connecting 65 percent of overall HPC systems, and connecting all of the 40G Ethernet systems as well as the first 100G Ethernet systems. Our InfiniBand solutions were chosen in nearly four times more end-user projects in 2016 than Omni-Path, and five times more end-user projects than other proprietary offerings, demonstrating an increase in both InfiniBand usage and market share. InfiniBand accelerates 65 percent of the total HPC systems on the list and 46 percent of the Petaflop infrastructures. Mellanox continues to connect the fastest supercomputer on the list, delivering the highest scalability, performance and efficiency.

Published twice a year and publicly available at www.top500.org, the TOP500 list ranks the world’s most powerful computer systems according to the Linpack benchmark rating system. A detailed TOP500 presentation can be found here.

  • Mellanox InfiniBand accelerates the new supercomputer at Australia’s National Computational Infrastructure (NCI)! We just announced that NCI has chosen Mellanox’s 100Gbit/s EDR InfiniBand interconnect for its new Lenovo NextScale supercomputer. The new system will deliver a 40 percent increase in NCI’s computational capacity starting in January 2017. The solution will also leverage Mellanox smart interconnect and In-Network Computing technology to maximize application performance, efficiency and scalability.
  • Mellanox is driving Virtual Reality to new levels of performance with an ultra-low-latency, long-distance demonstration over Mellanox 100Gb/s EDR InfiniBand at the Supercomputing Conference. Together with Scalable Graphics, Mellanox will showcase a solution that delivers the ultimate extended virtual reality experience for rapidly growing industry markets including computer aided engineering, oil and gas, manufacturing, medical, gaming and others. By leveraging the high throughput and low latency of Mellanox 100Gb/s ConnectX®-4 InfiniBand, the Scalable Graphics VR-Link Expander provides a near-zero-latency streaming solution for an optimal Virtual Reality experience even over long distances.
  • Just last Thursday, Mellanox announced the world’s first 200Gb/s InfiniBand data center interconnect solutions. Mellanox ConnectX-6 adapters, Quantum switches and LinkX cables and transceivers together provide a complete 200Gb/s HDR InfiniBand interconnect infrastructure for the next generation of high performance computing, machine learning, big data, cloud, web 2.0 and storage platforms. These 200Gb/s HDR InfiniBand solutions maintain Mellanox’s generation-ahead leadership while enabling customers and users to leverage an open, standards-based technology that maximizes application performance and scalability while minimizing overall data center total cost of ownership. Mellanox 200Gb/s HDR solutions will become generally available in 2017. To quote Mellanox’s CEO, Eyal Waldman:

“The ability to effectively utilize the exponential growth of data and to leverage data insights to gain that competitive advantage in real time is key for business success, homeland security, technology innovation, new research capabilities and beyond. The network is a critical enabler in today’s system designs that will propel the most demanding applications and drive the next life-changing discoveries,” said Eyal Waldman, president and CEO of Mellanox Technologies. “Mellanox is proud to announce the new 200Gb/s HDR InfiniBand solutions that will deliver the world’s highest data speeds and intelligent interconnect and empower the world of data in which we live. HDR InfiniBand sets a new level of performance and scalability records while delivering the next-generation of interconnects needs to our customers and partners.”

In addition, Mellanox received praise and support for the announcement from industry leaders including:

“Ten years ago, when Intersect360 Research began its business tracking the HPC market, InfiniBand had just become the predominant high-performance interconnect option for clusters, with Mellanox as the leading provider,” said Addison Snell, CEO of Intersect360 Research. “Over time, InfiniBand continued to grow, and today it is the leading high-performance storage interconnect for HPC systems as well. This is at a time when high data rate applications like analytics and machine learning are expanding rapidly, increasing the need for high-bandwidth, low-latency interconnects into even more markets. HDR InfiniBand is a big leap forward and Mellanox is making it a reality at a great time.”

  • Finally, last week, in tandem with powerhouses Tencent and IBM, we were part of a blockbuster announcement: together we were named the 2016 winners of Sort Benchmark’s annual global computing competition. Tencent broke records in the GraySort and MinuteSort categories, improving on last year’s overall results from Alibaba by up to five times and achieving more than one terabyte per second of sort performance. In addition, the per-node results improved by up to 33 times.

Using 512 OpenPOWER-based servers with NVMe-based storage and Mellanox ConnectX®-4 100Gbps Ethernet adapters, TencentCloud sorted a massive 100 terabytes of data in less than 99 seconds, using 85 percent fewer servers than the 3,377 used by last year’s winner. To achieve this, Tencent developed its own sort application and tuned it specifically for the benchmark. Combining the sort workload, NVMe storage and high-performance CPUs pushes the analytics boundary, so the latency and bandwidth of the network play a crucial part in achieving maximum performance. With advanced hardware-based stateless offloads and a flow steering engine, Mellanox’s ConnectX-4 adapter reduces the CPU overhead of packet processing and provides the lowest latency and highest bandwidth.
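For readers who like to sanity-check the headline numbers, the figures above hang together in a simple back-of-the-envelope calculation. The short Python sketch below reproduces the “more than one terabyte per second” and “85 percent fewer servers” claims from the reported values; the variable names are mine and purely illustrative.

```python
# Back-of-the-envelope check of the Sort Benchmark figures quoted above.
# All inputs are the numbers reported in this post; treat the output as illustrative.

data_sorted_tb = 100      # terabytes of data sorted
elapsed_s = 99            # reported time: just under 99 seconds
servers_2016 = 512        # OpenPOWER servers with ConnectX-4 100GbE (this year)
servers_2015 = 3377       # servers used by last year's winner

throughput_tb_per_s = data_sorted_tb / elapsed_s               # aggregate sort rate
per_server_gb_per_s = throughput_tb_per_s / servers_2016 * 1000
server_reduction = 1 - servers_2016 / servers_2015

print(f"Aggregate sort throughput: ~{throughput_tb_per_s:.2f} TB/s")   # ~1.01 TB/s
print(f"Per-server throughput:     ~{per_server_gb_per_s:.1f} GB/s")   # ~2.0 GB/s
print(f"Server reduction vs. 2015: ~{server_reduction:.0%}")           # ~85%
```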

Visit Mellanox Technologies at SC16 (November 14-17, 2016)

Visit Mellanox Technologies at SC16 (booth #2631) to learn more about the new 200G HDR InfiniBand solutions and to see the full suite of Mellanox’s end-to-end high-performance InfiniBand and Ethernet solutions.

More information on Mellanox’s booth and speaking activities at SC16 can be found here.

As you can see, we have a lot going on at Supercomputing. Stay tuned; more blogs and news are on the way.

About Scot Schultz

Scot Schultz is an HPC technology specialist with broad knowledge of operating systems, high-speed interconnects and processor technologies. Joining the Mellanox team in March 2013 as Director of HPC and Technical Computing, Schultz is a 25-year veteran of the computing industry. Prior to joining Mellanox, he spent 17 years at AMD in various engineering and leadership roles, most recently in strategic HPC technology ecosystem enablement. Scot was also instrumental in the growth and development of the OpenFabrics Alliance as co-chair of its board of directors. He currently maintains his role as Director of Educational Outreach and founding member of the HPC Advisory Council, and is active in various other industry organizations. Follow him on Twitter: @ScotSchultz
