It is that time of the year at Mellanox when we proudly present some of the coolest things our team has worked on! This time it will be at the Open Compute Project (OCP) Summit, held in the heart of Silicon Valley at the San Jose Convention Center on March 11-12, 2015. It is impressive to see how hyper-scale architecture has been revolutionized in just four years.
What started as a small project in the basement of Facebook's office in Palo Alto has come alive in the form of cutting-edge innovation in racks, servers, networking and storage. Some of these innovations from Mellanox will take center stage at the OCP Summit, accelerating the advancement of data center components, mainly servers and networking. Key highlights of the OCP event are:
ConnectX-4 and Multi-Host: Back in November, Mellanox announced the industry's first 100Gb/s interconnect adapter, pushing innovation in networking across HPC, cloud, Web 2.0, storage and enterprise applications. With throughput of 100 Gb/s, bidirectional throughput of 195 Gb/s, application latency of 610 nanoseconds and a message rate of 149.5 million messages per second, ConnectX-4 InfiniBand adapters provide the means to increase data center return on investment while reducing IT costs.
Today Mellanox took a step further by announcing Multi-Host Technology – a ground-breaking server disaggregation technology. Mellanox's Multi-Host technology enables direct connectivity of multiple heterogeneous hosts (x86, POWER, ARM, GPU, etc.) to a single network controller, keeping the hosts completely independent of each other while saving on switch ports, cables, real estate and power.
The Multi-Host breakthrough delivers network cost savings of 45% versus previous solutions. Multi-Host also yields significant additional server and CPU cost savings by enabling the use of lower-cost CPUs and the sharing of server infrastructure. Mellanox is showing a live demonstration of ConnectX-4 Multi-Host technology running on the new Facebook Yosemite platform in its booth at the OCP Summit.
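The switch-port and cabling savings described above follow from simple arithmetic: one multi-host adapter replaces several single-host NICs, each of which would otherwise consume its own switch port and cable. The sketch below is illustrative only, using an assumed 4-host configuration (as in a Yosemite-style sled); the counts are not published Mellanox figures.

```python
# Illustrative model: compare a conventional one-NIC-per-host design
# against a multi-host adapter shared by several hosts.
# The 4-host configuration is an assumption for illustration.

def fabric_resources(hosts, hosts_per_adapter):
    """Return (adapters, switch_ports, cables) needed to attach `hosts`.

    Each adapter is assumed to consume one switch port and one cable.
    """
    adapters = -(-hosts // hosts_per_adapter)  # ceiling division
    return adapters, adapters, adapters

conventional = fabric_resources(hosts=4, hosts_per_adapter=1)
multi_host = fabric_resources(hosts=4, hosts_per_adapter=4)

print("conventional (adapters, ports, cables):", conventional)  # (4, 4, 4)
print("multi-host   (adapters, ports, cables):", multi_host)    # (1, 1, 1)
```

In this toy model, four hosts sharing one adapter need a quarter of the switch ports and cables of the conventional design, which is the mechanism behind the savings the announcement describes.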
Switch Abstraction Interface (SAI): In addition to the Multi-Host demo, Mellanox is also demonstrating the first example of the Microsoft network operating system running on the Mellanox SwitchX-2 platform over the Switch Abstraction Interface. Not surprisingly, advanced Ethernet switch ASICs share considerable common functionality, and only a small (but important) subset of features differs between vendors. Until now, it was difficult for application and protocol stacks to operate seamlessly over different vendors' switching platforms.
To address this issue, Microsoft worked with the OCP community to define a common API layer called the Switch Abstraction Interface (SAI). SAI allows efficient integration of functionality such as port management, data forwarding, access control lists (ACLs), QoS, and switching and routing modes, making it easy to connect the hardware ASIC's software development kit (SDK) to a higher-level switching protocol stack.
In collaboration with Microsoft and the OCP community, Mellanox is demonstrating the industry’s first implementation of the Switch Abstraction Interface (SAI). Check out the blog on SAI by Amir Sheffer here.
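The idea behind SAI is the classic adapter pattern: the protocol stack codes against one vendor-neutral interface, and each vendor supplies a thin shim that maps it onto its ASIC SDK. The actual SAI is a C API defined by the OCP community; the Python sketch below only illustrates the pattern, and every class and method name in it is hypothetical, not part of SAI itself.

```python
# Minimal sketch of the abstraction pattern SAI embodies. All names here
# are hypothetical illustrations, NOT the real SAI C API.
from abc import ABC, abstractmethod


class SwitchAbstraction(ABC):
    """Vendor-neutral surface for common switch functionality."""

    @abstractmethod
    def set_port_admin_state(self, port: int, up: bool) -> None: ...

    @abstractmethod
    def add_acl_rule(self, match: dict, action: str) -> int: ...


class MellanoxSwitch(SwitchAbstraction):
    """Hypothetical shim translating the common API into vendor SDK calls."""

    def set_port_admin_state(self, port, up):
        print(f"sdk: port {port} admin {'up' if up else 'down'}")

    def add_acl_rule(self, match, action):
        print(f"sdk: install acl {match} -> {action}")
        return 1  # rule id


def bring_up(switch: SwitchAbstraction, ports):
    # The protocol stack only ever sees SwitchAbstraction, so the same
    # code runs unmodified on any ASIC that provides a shim.
    for p in ports:
        switch.set_port_admin_state(p, up=True)


bring_up(MellanoxSwitch(), ports=[1, 2])
```

Because the stack depends only on the abstract interface, swapping the underlying ASIC means swapping the shim, which is exactly the portability SAI was created to provide.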
Contribution of OpenOptics Technical Specification to OCP: As a continuing demonstration of its commitment to OCP, Mellanox, along with the other members of the multi-source agreement, also announced yesterday the contribution of the OpenOptics technical specification to the OCP project. The founders and supporters of the OpenOptics for Highly Scalable Interconnect Solution include Mellanox, Ciena, Oracle, Ranovus, Vertilas and Ghiasi Quantum.
The contribution of the technical specification enables multiple vendors to develop compatible solutions capable of delivering terabits per second of data over a single fiber. By using low-cost silicon photonics technologies, the OpenOptics Wavelength Division Multiplexing (WDM) specification lets data centers leverage up to 32 channels within a single fiber strand, thereby reducing the cost of hyper-scale data center networks.
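The terabit-class figure follows directly from multiplying the channel count by the per-channel line rate. The 32-channel count comes from the specification as described above; the per-channel rates in this back-of-the-envelope sketch are assumptions for illustration, not values from the OpenOptics document.

```python
# Back-of-the-envelope aggregate bandwidth for a 32-channel WDM fiber.
# Per-channel rates below are assumed for illustration.

def aggregate_gbps(channels, gbps_per_channel):
    """Total fiber capacity in Gb/s for `channels` WDM wavelengths."""
    return channels * gbps_per_channel

for rate in (25, 50):  # hypothetical per-channel rates in Gb/s
    total = aggregate_gbps(32, rate)
    print(f"32 channels x {rate} Gb/s = {total} Gb/s ({total / 1000:.1f} Tb/s)")
```

Even at a modest assumed 50 Gb/s per wavelength, 32 channels on one strand reach 1.6 Tb/s, which is why a multi-channel WDM spec is the natural path to terabit fibers.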
AMAX ClusterMax HPC – Powered by Mellanox ConnectX-3 Pro: Lastly, the OCP initiative is playing a major role in bringing highly scalable and power-efficient hardware to hyper-scale computing users. But it's not just the hyper-scale arena that is benefiting: thanks to the importance given to green computing and initiatives like the Green500, OCP has also stirred considerable interest in the HPC community, which can likewise leverage OCP's advancements in energy efficiency, power and cooling, and compute density.
With the launch of One Platform, AMAX provides a modular, building-block OCP solution that allows infrastructure to be easily re-purposed for multiple applications – PHAT-Data™ for Hadoop, CloudMax™ for OpenStack, StorMax™ for Ceph and ClusterMax™ for HPC. Powered by Mellanox's ConnectX-3 Pro InfiniBand solution, AMAX ClusterMax™ HPC provides an energy-efficient and cost-effective HPC solution.
This OCP-inspired rack solution integrates AMAX servers, storage and Mellanox's 56Gb/s InfiniBand solution. The 42U rack is designed for low CAPEX and OPEX, with emphasis on simple, efficient hardware designed for high density, serviceability and manageability. Check out AMAX's booth #C4 for the One Platform OCP solution.
We look forward to seeing you at our booth #D21!