All posts by John Biebelhausen

About John Biebelhausen

John Biebelhausen is Director, OEM Marketing at Mellanox Technologies, Inc. Before joining Mellanox, John had an extensive career as a marketing executive at a range of technology companies including IBM, Lenovo, Dell and Sharp Labs. John was a founder of multiple start-up companies pioneering Direct Relationship Marketing and SaaS applications. John holds a Master of Science in Finance from Colorado State University and a BBA in Economics and Finance from Kent State University.

Feeding the Data Beast with PCIe 4.0

Summary

Artificial Intelligence, Virtual Machines, containerization, and 5G mobile wireless networks are key drivers for next-generation high-performance systems. However, current servers with PCI Express (PCIe) 3.0 require wide busses to keep up with the latest Ethernet or InfiniBand speeds or with the performance demands of new NVMe solid-state drives (SSD).

For example, the bandwidth of an 8-lane PCIe 3.0 interface supports a single 40 Gigabit Ethernet connection but creates a bottleneck with dual ports and at greater speeds. Likewise, PCIe 3.0 is already seen as a speed limitation for SSDs. A faster solution is required, since widening PCIe 3.0 slots adds cost, power, circuit board layout complexity, and component-fanout challenges.

To meet the requirements of data-intensive applications, PCIe 4.0 doubles the bandwidth of servers, creating a superhighway that increases a server’s data handling capacity to 64 GBytes/s. This will dramatically enhance data access capabilities so that servers will be able to analyze more data more effectively for real-time insights, and access shared data more quickly to fully utilize NVMe drives. Other advantages include a smooth transition to 4K and 8K video, enabling four-times-sharper video footage, and a platform that can manage the traffic that will be created by new 5G mobile networks.

Why is PCIe 4.0 needed?

CPUs and GPUs are manipulating ever-increasing data sets, flash storage and NVMe drives are far faster than the spinning media of the past, media and entertainment companies must support higher-definition content, and interconnect speeds are quickly moving towards 200 Gb/s. All this means an improved local I/O mechanism is needed to keep system latency low and to prevent bandwidth bottlenecks. PCIe 4.0 provides faster speeds to handle the greater bandwidth that new, more powerful components and applications demand.

To explain this further, let’s look at a 40 Gb/s Ethernet adapter to understand why Gen 4 is needed. The industry standardizes on PCIe 3.0 x8 for a standard 40 Gb adapter. The max bandwidth of a single PCIe 3.0 lane is 985 MBytes/s (or 7.88 Gb/s). Multiply that by eight and a PCIe 3.0 x8 slot can provide 63.04 Gb/s, plenty to handle a single-port adapter but not enough for a dual-port adapter. Another example is the huge growth in M.2 NVMe SSDs that utilize PCI Express connectivity. The x4 M.2 NVMe SSDs using PCIe 3.0 peak at 3.94 GBytes/s, a number which will double to 7.88 GBytes/s with PCIe 4.0. It’s easy to see that most data centers will see a real benefit from PCIe 4.0. With Ethernet moving from the 100 Gb/s speeds currently available to 200 Gb/s at the end of this year, the additional bandwidth of PCIe 4.0 arrives just in time to stay ahead of the curve, as a x16 slot can provide enough bandwidth for a 200 Gb/s adapter.
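
The arithmetic above is easy to reproduce. Here is a minimal Python sketch using the same effective per-lane figures (985 MBytes/s for PCIe 3.0, doubling for PCIe 4.0); the function and printed examples are illustrative, not vendor specifications.

    # Back-of-the-envelope PCIe slot bandwidth (after 128b/130b encoding overhead).
    EFFECTIVE_MBYTES_PER_LANE = {3: 985, 4: 1969}   # per lane, per direction

    def slot_bandwidth_gbps(gen: int, lanes: int) -> float:
        """Usable slot bandwidth in Gbit/s for a PCIe generation and lane width."""
        return EFFECTIVE_MBYTES_PER_LANE[gen] * lanes * 8 / 1000

    print(f"Gen3 x8:  {slot_bandwidth_gbps(3, 8):.2f} Gb/s")    # 63.04 -- one 40GbE port fits, two do not
    print(f"Gen4 x16: {slot_bandwidth_gbps(4, 16):.2f} Gb/s")   # 252.03 -- headroom for a 200Gb/s adapter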

Conclusion

In today’s era of data-centric computing, to truly deliver superior performance, data center servers must be designed differently. They need advanced I/O buses with enhanced bandwidth and latency capabilities to deliver higher capacity for data-intensive workloads. PCIe 4.0 prepares a data center to handle the demands of specialized applications and provides a foundation for high-performance, efficient network connectivity supporting a wide range of link speeds and evolving network acceleration engines such as SmartNICs. In turn, SmartNICs will complement the performance boost by offering acceleration engines for specific functions like networking, security, and storage to relieve the CPU of the burden of I/O tasks.

The exponential growth of data demands not only the fastest throughput but also smarter networks. Mellanox intelligent interconnect solutions incorporate advanced acceleration engines that perform sophisticated processing algorithms on the data as it moves through the network. Intelligent network solutions greatly improve the performance and total infrastructure efficiency of data-intensive applications in the cloud such as cognitive computing, machine learning and Internet of Things (IoT). Mellanox stands out as the first company with adapters to support PCI Express 4.0, a milestone it achieved in 2017!

 


Mellanox announced liquid cooled adapters for Lenovo systems

Keeping it Fast and Cool with Liquid Cooling

We live in a time where everything is expected to be done quickly; we praise the fastest race cars, the fastest airplanes, the fastest swimmer, the fastest sprinter… the list goes on and on. There are countless movies, TV shows and advertisements that try to tap into the human obsession with speed. This same obsession holds for data center servers and their ever-increasing CPU speeds, demanding graphics GPUs, and the networking infrastructure used to connect these environments together. What do they all have in common other than increasing speed? The need to keep everything cool and performing in harmony.

Water cooling is increasingly used to deal with the special requirements of the data center.  Because data centers are often assigned the most convenient available space, rather than a space that is specially designed, servers may be contained in too small an area or one that cannot be adequately ventilated. Water cooling is sometimes referred to as liquid cooling, because various other substances are sometimes used instead of, or in addition to, water.

Mellanox Technologies, a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced the availability of liquid cooled HDR 200G Multi-Host InfiniBand adapters for the Lenovo ThinkSystem SD650 server platform, featuring Lenovo Neptune™ liquid cooling technologies. This platform is ideal for high performance computing (HPC), artificial intelligence (AI) and scalable cloud infrastructures, utilizing Mellanox In-Network Computing acceleration engines in an energy efficient best-of-class server. With this solution, customers can now experience the fastest networking speeds deployed in a shared configuration, allowing them to save on both CAPEX and OPEX by buying fewer networking components while still running workloads much faster. Additionally, Lenovo’s Neptune liquid cooling technologies reduce energy consumption, allowing customers to operate an extremely energy efficient high-performance data center.

In collaboration with Lenovo, Mellanox delivers a scalable, highly energy-efficient platform that achieves nearly 90% heat-removal efficiency, can reduce data center energy costs by nearly 40%, and takes full advantage of the best-of-breed capabilities of Mellanox InfiniBand, including the Mellanox smart acceleration engines, RDMA, GPUDirect, Multi-Host and more. HDR 200G InfiniBand provides the highest performance and scalability for HPC and AI workloads, and when combined with the Lenovo ThinkSystem SD650 server it creates a world-leading solution for compute and storage infrastructures.

As the world moves to the Exascale era of HPC, the interconnect fabric will be a key pillar of Lenovo’s “Exascale to Everyscale” initiative, which seeks to bring the innovation of Exascale to HPC systems of all sizes. The new Mellanox HDR 200G InfiniBand solution provides customers with faster networking speeds and, when paired with the industry-leading Lenovo ThinkSystem SD650 warm-water-cooled server, increases performance while reducing energy consumption, making it ideal for demanding HPC workloads.

The Mellanox HDR InfiniBand end-to-end solution, including ConnectX-6 adapters, Quantum switches, the upcoming HDR BlueField system-on-a-chip (SoC), and LinkX cables and transceivers, delivers the best performance and scalability for HPC, cloud, artificial intelligence, storage, and other applications, providing users with the capabilities to enhance their research, discoveries and product development.


Is the changing landscape of AI enabling new opportunities for IBM Power Servers?

No single processor architecture has ever served every server application well, because no two workloads are the same. For the past decade, Intel has all but dominated the server market, effectively shutting out AMD, the only other x86 server vendor, and leaving only HPC, mainframe, database and other specialized applications to different processor architectures. With the industry facing challenges from security vulnerabilities rooted in defective speculative-execution implementations, changing workload requirements, the slowing of Moore’s Law, and innovative new processor architectures, server OEMs and customers alike face a rapidly evolving landscape of choices. Many of the new processor vendors are targeting specific segments. IBM Power appears well positioned to benefit from the tremendous interest in AI.

In addition to the changing technical and market dynamics, Intel has also stumbled in both manufacturing process and architecture. It has struggled to transition to the 14nm and 10nm process nodes and now appears well behind Samsung and TSMC, the leading foundry service providers, in transitioning to the 7nm node. Intel has also purposely limited memory bandwidth in its Xeon processors to promote two-socket servers over single-socket configurations. This evolution in the industry has created a renewed opportunity for IBM with its Power architecture.

So why AI and why now?

AI automates constant learning and discovery through data. However, AI is different from hardware-driven robotic automation: instead of automating manual tasks, AI performs frequent, high-volume, computerized tasks reliably and without fatigue. For this type of automation, human inquiry is still essential to set up the system and ask the right questions.

AI adds intelligence to existing products. In most cases, AI will not be sold as an individual application. Instead, products you already use will be further enhanced with AI capabilities, much like Siri was added as a feature to a new generation of Apple products. Automation, conversational platforms, bots, and smart machines can be combined with large amounts of data to improve many technologies at home and in the workplace, from security intelligence to investment analysis.

AI adapts through progressive learning algorithms to let the data do the programming. AI finds structure and regularities in data so that the algorithm acquires a skill: The algorithm becomes a classifier or a predictor. So, just as the algorithm can teach itself how to play chess, it can teach itself what product to recommend next online. Moreover, the models adapt when given new data. Backpropagation is an AI technique that allows the model to adjust, through training and added data, when the first answer is not quite right.
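
As a deliberately tiny illustration of data doing the programming, the Python sketch below trains a one-layer classifier by gradient descent, nudging its weights whenever its current answer is not quite right; full backpropagation applies the same error-driven adjustment across many layers. The toy data, learning rate, and iteration count are illustrative assumptions.

    # A classifier that "lets the data do the programming": weights start at
    # zero and are adjusted, step by step, by the error on the training data.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))                 # toy feature data
    y = (X[:, 0] + X[:, 1] > 0).astype(float)     # toy labels to be learned

    w, b, lr = np.zeros(2), 0.0, 0.1              # weights, bias, learning rate
    for _ in range(500):
        p = 1 / (1 + np.exp(-(X @ w + b)))        # current predictions
        w -= lr * (X.T @ (p - y)) / len(y)        # adjust where answers were wrong
        b -= lr * np.mean(p - y)

    print("training accuracy:", np.mean((p > 0.5) == y))   # skill acquired from data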

AI analyzes more and broader data using neural networks that have many hidden layers. Building a fraud detection system with five hidden layers was almost impossible a few years ago. All that has changed with incredible computing power and big data. You need lots of data to train deep learning models because they learn directly from the data. The more data you can feed them, the more accurate they become.

AI achieves incredible accuracy through deep neural networks, something that was previously impossible. For example, your interactions with Alexa, Google Search, and Google Photos are all based on deep learning, and they keep getting more accurate the more we use them. In the medical field, AI techniques from deep learning, image classification, and object recognition can now be used to find cancer on MRIs with the same accuracy as highly trained radiologists.

AI gets the most out of data. When algorithms are self-learning, the data itself can become intellectual property. The answers are in the data; apply AI to get them out. Since the role of the information is now more critical than ever before, it can create a competitive advantage. If you have the best data in a competitive industry, even if everyone is applying similar techniques, the best data will win.

Mellanox AI Solutions accelerate many of the world’s leading artificial intelligence and machine learning platforms. Machine learning is a pillar of today’s technological world, offering solutions that enable better and more accurate decision making based on the enormous amounts of data being collected. Machine learning encompasses a wide range of applications, ranging from security, finance, and image and voice recognition, to self-driving cars, healthcare, and smart cities.

Mellanox adapters, switches, cables, and software implement the world’s fastest and most robust InfiniBand and Ethernet networking solutions for a complete, high-performance machine learning infrastructure. These capabilities ensure optimum application performance with:

  • Faster training and access to big data with throughput up to 200Gb/s (a quick wire-time sketch follows this list)
  • RDMA technology to accelerate machine learning frameworks such as TensorFlow, Paddle, Caffe and Apache Spark
  • As low as 1us application latency
  • GPUDirect® technology to accelerate GPU-to-GPU communications
  • SHARP™ technology to accelerate machine learning algorithms
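
To put the throughput bullet above in perspective, here is a back-of-the-envelope Python sketch of ideal wire time for moving a training set across the network; it ignores protocol overhead and storage bottlenecks, and the 1 TB dataset size is an arbitrary example.

    # Ideal time on the wire for a training set at common link speeds.
    def transfer_seconds(dataset_tb: float, link_gbps: float) -> float:
        bits = dataset_tb * 1e12 * 8              # dataset size in bits
        return bits / (link_gbps * 1e9)           # seconds at the given rate

    for gbps in (25, 100, 200):
        print(f"{gbps:>3} Gb/s: {transfer_seconds(1.0, gbps):5.0f} s per TB")
    # 25 Gb/s: 320 s, 100 Gb/s: 80 s, 200 Gb/s: 40 s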

IBM Power Expands Open Source Frameworks Capabilities

Hardware is only half the story. In addition to the hardware improvements and network acceleration from Mellanox, IBM Power has also developed tools expanding the capabilities of open source frameworks like TensorFlow and Caffe, as well as a machine-learning library called Snap ML that further reduces the time to train new models by leveraging generalized linear models rather than starting from scratch. Many of the libraries are being developed specifically for emerging applications like image and video recognition.

IBM Power Systems for AI

The IBM Power platform for AI

 

As workloads change, server customers are looking for more options to meet the performance requirements of a growing number of diverse workloads. In the case of AI, IBM Power has developed a niche: in combination with its partners NVIDIA, for GPUs, and Mellanox, for the world’s leading interconnect solutions, it provides a platform that offers the highest performance for AI.


Mellanox ConnectX-4 Lx Dual Port 25 Gb/s Ethernet Mezzanine Card

Dell PowerEdge Servers Get New Mellanox I/O Enhancement with ConnectX-4

With the exponential increase in data usage and the creation of new applications, the demand for the highest throughput, lowest latency, virtualization and sophisticated data acceleration engines continues to rise. The new Mellanox ConnectX-4 Lx EN with 10Gb/s and 25Gb/s Ethernet connectivity for Dell EMC PowerEdge Servers enables data centers to leverage the world’s leading interconnect adapter to increase operational efficiency, improve server utilization and maximize application productivity, all while reducing total cost of ownership (TCO).

The ConnectX-4 Lx EN rNDC Network Controller addresses virtualized infrastructure challenges, delivering best-in-class performance to a range of demanding markets and applications. It provides true hardware-based I/O isolation with unmatched scalability and efficiency, achieving the most cost-effective and flexible solution for Web 2.0, cloud, data analytics, database, and storage platforms.

Benefits

  • Highest performing adapter for applications requiring high bandwidth, low latency and high message rate
  • Industry leading throughput and latency for demanding Web 2.0, Cloud and Big Data applications
  • Cutting-edge performance in virtualized overlay networks
  • Efficient I/O consolidation, lowering data center costs and complexity
  • Virtualization acceleration
  • Power efficiency

In this digital economy, organizations are looking to technology as a competitive differentiator in driving higher engagement with customers, enabling new business models and staying ahead of the competition. CIOs are at the core of the transformation agenda, balancing between operational efficiency and forward-thinking projects. In both instances, servers are the bedrock of the modern software-defined data center and the key to building a flexible, efficient and cloud-enabled infrastructure. Dell EMC PowerEdge servers deliver a worry-free infrastructure that is secure and scalable, with no compromises.

Dell EMC PowerEdge servers provide a scalable business architecture, intelligent automation and integrated security for traditional workloads and applications through virtualization to cloud-native workloads. PowerEdge servers incorporate the embedded efficiencies of OpenManage systems management that enable IT pros to focus more time on strategic business objectives and spend less time on routine IT tasks. With open standards-based, x86 platforms, the PowerEdge portfolio of rack, tower and modular server infrastructure can help you quickly scale from the data center to the cloud. When teamed with the Mellanox ConnectX-4 Lx EN, you have a solution that offers the most cost-effective Ethernet adapter for 10 and 25Gb/s Ethernet speeds, enabling seamless networking, clustering, or storage. The Mellanox adapter reduces application runtime and offers the flexibility and scalability to make infrastructure run as efficiently and productively as possible.


IBM Power Systems Enhances I/O Portfolio with Mellanox, Delivering Superior Performance and Value for IBM POWER9 Servers

If anything is certain about the future, it’s that there will be more complexity, more data to manage and greater pressure to deliver instantly. The hardware you buy should meet today’s expectations and prepare you for whatever comes next. IBM Power Systems scale-out servers are powerful, flexible servers built to deliver value for diverse workloads and mission-critical applications in AIX, IBM i and Linux environments.

With the exponential growth of data being shared and stored by applications and social networks, the need for high-speed, high-performance compute and storage data centers is skyrocketing. Mellanox provides exceptional performance for the most demanding data centers, public and private clouds, Web 2.0 and Big Data applications, as well as High-Performance Computing (HPC) and storage systems, enabling today’s corporations to meet the demands of the data explosion. Mellanox provides an unmatched combination of 100Gb/s bandwidth, the lowest available latency, and specific hardware offloads, addressing both today’s and the next generation’s compute and storage data center demands.

I/O enhancements for IBM Power S924, S922, S914, H924, H922, and L922 servers

The PCIe3 Low Profile 2-port 100 Gb EDR InfiniBand Adapter x16 (#EC3E) from Mellanox is now available for IBM S922, L922, and H922 servers, and the PCIe3 2-port 100 Gb EDR InfiniBand Adapter x16 (#EC3F) from Mellanox is now available for IBM S924, S914, and H924 servers. These adapters provide high-speed connectivity with other servers or InfiniBand switches. The 100 Gb maximum per port assumes no other system or switch bottlenecks are present. The adapters support the InfiniBand Trade Association (IBTA) specification version 2.

The two 100 Gb ports have QSFP+ connections that support EDR cables, either EDR DAC or EDR optical. One adapter can support either or both cable types, and a cable can be attached to just one port if desired. Transceivers are included in the cables. IBM cable features EB50-EB54 (copper, shorter distance) and EB5A-EB5H (optical, longer distance) are supported, as are their Mellanox copper or optical equivalents. Other cables are not supported.
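
The cable-support rules above are simple enough to encode directly. The hypothetical Python helper below does so for illustration only; it expands the EB5A-EB5H range letter by letter, so consult IBM’s feature listings for the authoritative codes.

    # Hypothetical validator for the cable rules quoted above (illustrative only).
    COPPER = {"EB50", "EB51", "EB52", "EB53", "EB54"}            # shorter-distance DAC
    OPTICAL = {"EB5A", "EB5B", "EB5C", "EB5D",
               "EB5E", "EB5F", "EB5G", "EB5H"}                   # longer-distance optical

    def cable_supported(feature_code: str) -> str:
        if feature_code in COPPER:
            return "supported: EDR copper (DAC)"
        if feature_code in OPTICAL:
            return "supported: EDR optical"
        return "not supported on #EC3E/#EC3F"

    print(cable_supported("EB52"))   # supported: EDR copper (DAC)
    print(cable_supported("EB5H"))   # supported: EDR optical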

  • PCIe3 2-port 100 Gb EDR IB Adapter x16 (#EC3E, #EC3F) for IBM POWER9 scale-out servers.
  • PCIe3 1-port 100 Gb EDR IB Adapter x16 (#EC3U, #EC3T) for IBM POWER9 scale-out servers.


Ethernet Fabric from Mellanox Manages the World’s Top HPC and Artificial Intelligence Supercomputer at Oak Ridge National Laboratory

Mellanox EDR InfiniBand accelerates the new world’s top high-performance computing (HPC) and Artificial Intelligence (AI) system, named Summit, at the Oak Ridge National Laboratory. Summit delivers 200 petaflops of performance and leverages a dual-rail EDR InfiniBand network to provide an overall 200 gigabit per second throughput to each compute server. Performance matters, even for management networks. So, while Mellanox InfiniBand accelerates the front-end performance of this supercomputer, Mellanox Ethernet switches were tapped to provide the infrastructure for back-end management duties.

This Ethernet management network reliably connects all of Summit’s 4,600 nodes, providing the performance necessary for mission-critical management services such as booting the nodes, NFS file access, LDAP, and job-launch initiation. On the same physical network, a separate virtual network is used for less bandwidth-intensive tasks such as IPMI, console, syslog, health-checking and network time synchronization. A reliable management network is imperative for gathering important telemetry data, and it is this network that is used for periodic housekeeping duties like pushing firmware upgrades and applying security patches.

“We are proud to accelerate the world’s top HPC and AI supercomputer at the Oak Ridge National Laboratory, a result of a great collaboration over the last few years between Oak Ridge National Laboratory, IBM, NVIDIA and us,” said Eyal Waldman, president and CEO of Mellanox Technologies. “Our InfiniBand smart accelerations and offload technology delivers highest HPC and AI applications performance, scalability, and robustness. InfiniBand enables organizations to maximize their data center return-on-investment and improve their total cost of ownership and, as such, it connects many of the top HPC and AI infrastructures around the world. We look forward to be part and to accelerate new scientific discoveries and advances in AI development, to be performed and enabled by Summit.”

The need to analyze growing amounts of data, to support complex simulations, to overcome performance bottlenecks and to create intelligent data algorithms requires the ability to manage and carry out computational operations on the data as it is being transferred by the data center interconnect. Mellanox InfiniBand solutions incorporate In-Network Computing technology that performs data algorithms within the network devices, delivering ten times higher performance and enabling the era of “data-centric” data centers. Combined with the Mellanox Ethernet fabric for back-end infrastructure management, this delivers the most robust solution available.

“Summit HPC and AI-optimized infrastructure enables us to analyze massive amounts of data to better understand world phenomena, to enable new discoveries and to create advanced AI software,” said Buddy Bland, Program Director at Oak Ridge Leadership Computing Facility. “InfiniBand In-Network Computing technology is a critical new technology that helps Summit achieve our scientific and research goals. We are excited to see the fruits of our collaboration with Mellanox over the last several years through the development of the In-Network Computing technology, and look forward to take advantage of it for achieving highest performance and efficiency for our applications.”


Earth Shattering I/O Performance for IBM Power9 servers

The world’s most demanding applications are requiring increasing amounts of compute power to handle the resource-intensive demands of workloads such as artificial intelligence and machine learning. IBM, with the debut of its latest generation Power9 chip in its new server portfolio, aims to take an industry leadership role in shaping how next generation computing will unfold. IBM’s Power9 processor is the Swiss Army knife of machine learning acceleration, supporting an astronomical amount of I/O and bandwidth, roughly 10X that of anything out there today.

One of the most intriguing aspects of the new Power9 servers is their attention to the I/O subsystem. The Power9 chip and servers will be the first to adopt the next-generation I/O bus, PCI Express Gen4. IBM will have Gen4 before anyone else in the industry, and that will create a lot of value for clients. The new bus will have more bandwidth than all prior generations and allows Mellanox to be the industry leader in providing the highest performance Gen 4 interconnect solutions for the fastest IBM Power9 servers on the planet!

Another key new capability of IBM’s new Power9 architecture is found in the new Coherent Accelerator Processor Interface (CAPI) bus, which will attach to advanced generations of new storage class memories, along with new accelerators, such as field programmable gate arrays. This particular innovation is important, considering where computers are going and the increasing role accelerators and advanced memories will play. Recently, we at Mellanox announced the Innova-2 product family of FPGA-based smart network adapters. Innova-2 is the industry leading programmable adapter designed for a wide range of applications, including security, cloud, Big Data, deep learning, NFV and high performance computing.

The Innova-2 product line brings new levels of acceleration to Mellanox intelligent interconnect solutions and helps equip customers with new capabilities to develop their own innovative solutions, whether related to security, big-data analytics, deep learning training and inferencing, cloud or other applications. The solution also allows customers to achieve unprecedented performance and flexibility for the most demanding market needs.

The Innova-2 family of dual-port Ethernet and InfiniBand network adapters supports network speeds of 10, 25, 40, 50 and 100Gb/s, while the PCIe Gen4 and OpenCAPI (Open Coherent Accelerator Processor Interface) host connections offer low latency and high bandwidth. Innova-2 allows flexible usage models, with transparent accelerations using Bump-in-the-Wire or Look-Aside architectures.

In order for systems to truly deliver differentiated and enhanced performance, they must be designed differently. They need advanced I/O busses with better bandwidth and latency features to deliver performance to the workload. Power9 servers from IBM are showcasing advanced I/O busses, inventing new CAPI I/O busses and partnering with market leaders such as Mellanox to provide high-performance attachment of end-to-end smart interconnect solutions for data center servers and storage systems.


What Does it Mean to Summit?

Many consider a summit the highest point of attainment or aspiration. It’s that exhilarating feeling of knowing you have reached the pinnacle. Mount Everest is the highest summit on earth, a place at once awe-inspiring and unforgiving. It is where people go to live out their most challenging aspirations and dreams. The mountain remains the pinnacle of human achievement and a pursuit representative of the greatest passion and drive.

Summit – The Next Peak in HPC

The Summit supercomputer, one of the two CORAL systems, will deliver the next leap in leadership-class computing systems for open science. With Summit, the supercomputing community will be able to address, with greater complexity and higher fidelity, questions concerning who we are, our place on earth, and in our universe.

The Summit supercomputer will deliver more than five times the computational performance of its predecessor, Titan. Summit will have a hybrid architecture, and each compute node will contain dual IBM POWER9 CPUs and multiple NVIDIA Volta GPUs. Each node will have over half a terabyte of coherent memory (high bandwidth memory + DDR4) addressable by all CPUs and GPUs, plus 800GB of non-volatile RAM that can be used as a burst buffer or as extended memory. To provide a high rate of data throughput, the nodes will be connected in a non-blocking fat-tree topology using a dual-rail Mellanox EDR InfiniBand interconnect for both storage and inter-process communications traffic. This delivers 200Gb/s bandwidth between nodes and in-network computing acceleration for communications frameworks such as MPI and SHMEM/PGAS.
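
The headline node figures follow from simple arithmetic, sketched below in Python under ideal assumptions (full link utilization, no protocol overhead); the burst-buffer drain time is an illustrative calculation, not a published benchmark.

    # Quick arithmetic behind the Summit node figures quoted above.
    EDR_RAIL_GBPS = 100                    # one EDR InfiniBand rail
    RAILS_PER_NODE = 2                     # dual-rail connection per node

    node_bw_gbps = EDR_RAIL_GBPS * RAILS_PER_NODE
    print(f"per-node injection bandwidth: {node_bw_gbps} Gb/s")        # 200 Gb/s

    burst_buffer_gb = 800                  # non-volatile RAM per node
    drain_s = burst_buffer_gb * 8 / node_bw_gbps
    print(f"ideal burst-buffer drain time: {drain_s:.0f} s")           # 32 s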

 

The Astonishing Engineering of Summit

  • Summit is the result of a collaboration amongst Oak Ridge National Labs, IBM, Mellanox, NVIDIA and Red Hat.
  • Each of the nodes in Summit can deliver more than 40 teraflops of performance. The overall cluster’s peak performance will be 5x-10x faster than its predecessor, Titan.
  • The hybrid Summit supercomputer design will interconnect thousands of compute nodes, each containing both IBM POWER CPUs and NVIDIA GPUs. They will depend on Mellanox’s dual network EDR 100Gb/s InfiniBand-based solution to communicate with each other, providing one of the most advanced architectures of its kind for high-performance computing applications.
  • Summit is connected by more than 136 miles of cabling if each of the approximately 5000 cables were connected end-to-end.
  • Summit will utilize 15 megawatts of power of the 20 megawatts of electricity allocated to it. This would be enough power for 12,000 Southern homes with their air conditioners cranking!
  • Summit will be housed in a space about the size of 2 basketball courts in a special facility located at the Oak Ridge Leadership Computing Facility of ORNL.
  • Summit will be the fastest HPC system on the planet, but its hybrid architecture also positions it as the largest research platform for deep learning and artificial intelligence to date.
  • Upon completion, Summit will allow researchers in all fields of science unprecedented access to solving some of the world’s most pressing challenges.

Summit Time Lapse Phase 1 from OLCF on Vimeo.

The Era of Smart Networks

We live in a world of data. Many of the new products and services being developed depend directly on the ability to analyze the growing amount of data we collect, and to do it at a faster pace. Self-driving vehicles and smart cities are just two examples of how the new world of data is going to impact our lives. In response to these changing demands, data centers are undergoing a technology transition in which data is analyzed and processed not just in the server processors, but in the storage and in the data network itself.

The data center interconnect is starting to deliver in-network computing, providing the means to analyze data as it is being transferred within the data center. Furthermore, the use of programmable logic within network devices will further increase, enabling users to migrate more algorithms and intelligence to the network devices. Mellanox’s new Innova-2 is a great example: a high-performance network adapter with an embedded high-density FPGA that delivers flexibility and intelligence where you need it. Innova-2 brings powerful capabilities to the data center:

  • Ability to perform in-line data processing
  • Up to 100Gb/s network speeds
  • Flexible host interface – PCIe Gen4 or OpenCAPI
  • And many more…

Innova-2 is the industry leading programmable adapter designed for a wide range of applications, including security, cloud, Big Data, deep learning, NFV and High Performance Computing (HPC).

Innova-2 can be delivered either open, for customers’ specific applications, or pre-programmed for security applications with encryption acceleration such as IPsec, TLS/SSL and more. For security applications, Innova-2 delivers 6X higher performance while reducing total cost of ownership by 10X when compared to alternative options. For cloud infrastructures, Innova-2 enables virtualization and SDN offloads. Deep learning training and inferencing applications will be able to achieve higher performance and better system utilization by offloading algorithms onto the Innova-2 FPGA and the ConnectX® acceleration engines.

Innova-2 is based on an efficient combination of the state-of-the-art ConnectX-5 25/50/100Gb/s Ethernet and InfiniBand network adapter with a Xilinx® UltraScale™ FPGA accelerator. Innova-2 adapters deliver best-of-breed network and storage capabilities as well as hardware offloads for CPU-intensive applications.

The Innova-2 family of products includes dual-port Ethernet and InfiniBand network adapters with speeds of 10, 25, 40, 50 and up to 100Gb/s, supporting advanced host interfaces such as PCIe Gen4 and OpenCAPI (Open Coherent Accelerator Processor Interface). Innova-2 allows different usage models, with the possibility of transparent accelerations using Bump-in-the-Wire or Look-Aside architectures. The solution also fits any server with its standard PCIe half-height, half-length card form factor, enabling a wide variety of deployments in modern data centers.

Mellanox will be showcasing Innova-2 adapters at the OpenStack Summit, Nov 6-8, Sydney, Australia, and at SC17, Nov 13-16, booth #653, at the Colorado Convention Center. You can find more information about Innova-2 here.

Mellanox Delivers ConnectX-4 LX for the IBM z14

The world is in the midst of a digital transformation that is having a profound impact on individuals, business, and society. As businesses adapt to capitalize on digital, trust will be the currency that drives this new economy. The new IBM® z14™ (z14) mainframe is the core of trusted digital experiences. It enables the ultimate protection for your data and simplifies compliance to regulations. With z14, you can apply machine learning to your most valuable data to create deeper insights. And z14 is designed to be open and connected in the cloud, enabling massive transaction scale of high volume encrypted workloads at the lowest cost.

Supporting the infrastructure and massive transaction scale needed by the z14 is a core competency delivered by Mellanox. Newly added support for the Mellanox ConnectX®-4 Lx adapter, as the IBM 10GbE RoCE Express2 (FC #0412), offers the most cost-effective Ethernet adapter solution for 10Gb/s Ethernet speeds, enabling seamless networking, clustering, or storage. The adapter reduces application runtime and offers the flexibility and scalability to make infrastructure run as efficiently and productively as possible.

With the exponential increase in data usage and the creation of new applications, the demand for the highest throughput, lowest latency, virtualization and sophisticated data acceleration engines continues to rise. Mellanox ConnectX-4 Lx EN enables data centers powered by the IBM z14 to leverage the world’s leading interconnect adapter to increase operational efficiency, improve server utilization and maximize application productivity, all while reducing total cost of ownership (TCO).

The Mellanox ConnectX-4 Lx EN provides an unmatched combination of 1, 10, 25, 40, and 50GbE bandwidth, sub-microsecond latency and a 75 million packets per second message rate.
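
For a sense of scale, the Python sketch below converts that message rate into wire bandwidth for a few packet sizes. It is pure arithmetic under ideal assumptions; real adapters are capped by line rate for larger packets, so the 75 million packets per second figure matters most for small messages.

    # Wire bandwidth consumed at 75 million packets per second (illustrative).
    def gbps_at_rate(pps: float, packet_bytes: int) -> float:
        return pps * packet_bytes * 8 / 1e9

    for size in (64, 512, 1500):
        print(f"{size:>4}-byte packets: {gbps_at_rate(75e6, size):6.1f} Gb/s")
    # 64B -> 38.4 Gb/s; larger sizes exceed 50GbE line rate, which caps throughput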
