All posts by Itay Ozery

About Itay Ozery

Itay Ozery is a Senior Product Manager at Mellanox Technologies, driving strategic product management and product marketing initiatives for Mellanox’s cloud networking solutions. Before joining Mellanox, Itay was a Senior Sales Engineer at NICE Systems Ltd., a Nasdaq-listed corporation, where he led large-scale business and project initiatives in the fields of cyber security and intelligence. Prior to that, Itay held various positions for more than a decade in IT systems and networking with data centers and telecom service providers, where he acquired extensive experience in IT system and network engineering. Itay holds a B.A. in Marketing and Information Systems from the College of Management Academic Studies, Israel.

New Mellanox SmartNICs and I/O Processing Unit (IPU) solutions provide best-in-class data-center security, performance and efficiency

Mellanox Introduces Revolutionary SmartNICs for Making Secure Cloud Possible

Mellanox is very excited to introduce ConnectX-6 Dx and BlueField-2 SmartNICs and I/O Processing Unit (IPU) solutions, enabling the next generation of clouds, secure data-centers and storage platforms. ConnectX-6 Dx and BlueField-2, with their cutting-edge hardware acceleration engines powered by best-in-class software programmability, are set to revolutionize the way hyperscale giants, enterprises, and telecom providers build secure and highly efficient cloud data-centers. The new Mellanox SmartNICs will become available in the market later this year.

This is the first in a series of blogs supporting the launch; it focuses on the key security offerings of ConnectX-6 Dx SmartNICs and BlueField-2 IPU-based programmable SmartNICs.

How ConnectX-6 Dx and BlueField-2 SmartNICs Make Secure Cloud Possible

Security has become an immense challenge in cloud data-centers. The perimeters around data, which is scattered across the enterprise data-center and multiple service providers, are often broken. Factor in the added complexities of infrastructure virtualization and multiple layers of attack, and you get an enterprise that is extremely vulnerable to numerous attack vectors and potential breaches. Finally, the lack of visibility and control is error-prone, limiting the ability of service providers and enterprises to implement effective security strategies.

Mellanox ConnectX-6 Dx and BlueField-2 SmartNICs transform data-center security by introducing innovative hardware engines that enable cybersecurity solutions including scalable crypto, resilient next-generation firewalls, and more. The following illustration describes the security engines in ConnectX-6 Dx SmartNICs.

Security engines in Mellanox ConnectX-6 Dx SmartNICs

ConnectX-6 Dx SmartNICs deliver a wide range of security engines for accelerating cloud data-centers. These provide the highest performance by offloading network processing from the CPU, freeing it up for revenue-generating applications. ConnectX-6 Dx SmartNICs enable secure cloud use cases that were previously either impossible or too expensive to consider with conventional NIC solutions. As the best price/performance SmartNIC solution in the industry, ConnectX-6 Dx offers the perfect balance of purpose-built hardware acceleration, software programmability and advanced functionality.

The following illustration describes the security engines in BlueField-2 IPU-based programmable SmartNICs:

Security engines in Mellanox BlueField-2 IPU-based programmable SmartNICs

Mellanox BlueField-2 IPU-based programmable SmartNICs combine 64-bit Arm multi-core processing power with the advanced network and security offloads of ConnectX-6 Dx to accelerate a multitude of security applications at speeds of up to 200Gb/s Ethernet or InfiniBand. BlueField-2 offers high-performance, software-programmable networking capabilities for customizing and optimizing both control-path and data-path operations.

BlueField-2 SmartNICs also take bare-metal clouds to new levels of functionality previously unseen in the market, including software-defined networking capabilities, storage disaggregation and enhanced security.

Welcome to the Encrypted Data-Center

Following the path of hyperscale cloud giants Google and Facebook, Mellanox recognizes that encryption is a prominent approach to securing data-center connectivity, and in turn, customer data and privacy. At a time when east-west communications dwarf the amount of data going in and out of data-centers, encrypting traffic inside the data-center can feel like an impossible mission, since applying encryption in software makes performance and customer experience take a massive hit. By introducing purpose-built hardware accelerators for IPsec and TLS data-in-motion encryption and XTS-AES data-at-rest encryption, Mellanox ConnectX-6 Dx and BlueField-2 SmartNICs make the impossible possible! These new hardware engines offload encryption/decryption operations from the host’s CPU to the SmartNIC, unlocking unmatched network performance and efficiency for securing data-center connectivity, web application delivery, and data storage systems.

Let’s take a closer look at the advanced crypto acceleration engines: IPsec and TLS inline encryption offloads address various communication encryption use-cases. As inline offloads, IPsec and TLS can be leveraged in conjunction with additional SmartNIC offload capabilities. Notable examples of this type of application are deploying encrypted RoCE communication for secure node access to an NVMe storage device, and securing AI training operations. Both ConnectX-6 Dx and BlueField-2 outperform competing solutions by offering inline accelerated IPsec and TLS combined with best-in-class RoCE performance.
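
To ground the TLS side of this, here is a minimal sketch of kernel TLS (kTLS), the standard Linux interface that inline NIC TLS offload builds on: once the kernel owns record encryption, a capable NIC can take it over transparently. The sketch assumes Python 3.12+ (for ssl.OP_ENABLE_KTLS), OpenSSL 3.0+, and a kernel with the tls module loaded; it illustrates the general mechanism, not a Mellanox-specific API:

```python
import socket
import ssl

ctx = ssl.create_default_context()
# Ask OpenSSL to hand TLS record encryption to the kernel after the handshake.
ctx.options |= ssl.OP_ENABLE_KTLS

with socket.create_connection(("example.com", 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        # Application code is unchanged: record crypto now runs in the
        # kernel and, on supporting hardware, inline on the NIC.
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        print(tls.recv(200))
```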

Another interesting use-case is deploying encryption in transparent IPsec mode. In this scenario, the host sends and receives cleartext packets to and from the network, while the BlueField-2 SmartNIC performs the encryption and decryption, establishing a secure, high-performance IPsec tunnel that connects the host to the network. For clarity, transparent IPsec mode means the host is completely unaware of the added encryption, as it’s wholly implemented in the SmartNIC. Transparent IPsec is ideally positioned both for bare-metal clouds, where the host is not controlled by the cloud operator, and for legacy environments that require encryption; deploying BlueField-2 SmartNICs in those environments enables secure cloud connectivity with minimal impact on workloads and service availability.
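
To make the transparent-mode idea concrete, below is a minimal sketch of the kind of IPsec state the SmartNIC-side software would program, using standard Linux ip xfrm commands driven from Python. The addresses, subnets, SPI and key are placeholders; on BlueField-2 these commands would run on the Arm cores, so the host never sees them:

```python
import subprocess

LOCAL, REMOTE = "192.0.2.1", "192.0.2.2"  # placeholder tunnel endpoints
SPI = "0x1000"                            # placeholder security parameter index
KEY = "0x" + "22" * 20                    # placeholder 16-byte AES key + 4-byte salt

def sh(cmd: str) -> None:
    """Run one shell command, raising on failure (requires root)."""
    subprocess.run(cmd, shell=True, check=True)

# Outbound security association: AES-GCM ESP in tunnel mode.
sh(f"ip xfrm state add src {LOCAL} dst {REMOTE} proto esp spi {SPI} "
   f"mode tunnel aead 'rfc4106(gcm(aes))' {KEY} 128")

# Policy: traffic from the host subnet to the peer subnet uses the tunnel.
sh(f"ip xfrm policy add src 10.0.0.0/24 dst 10.0.1.0/24 dir out "
   f"tmpl src {LOCAL} dst {REMOTE} proto esp mode tunnel")
```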

Finally, BlueField-2 IPU-based programmable SmartNICs provide a complete set of encryption and key-infrastructure engines in hardware, including a true random number generator (TRNG), a built-in PKI engine, and a secure key store that holds session keys encrypted in memory, accessible only to the hardware crypto engine. The PKI engine accelerates public-key operations used by OpenSSL and similar open-source libraries. The solution may be integrated with a central key manager to generate, store and rotate encryption keys, improving scalability and operational agility.

Redefining Next-Generation Firewalls

In the age of cloud computing and software-defined everything (SDx), security functions have undergone a transformation and are now deployed at every host to provide visibility into, and enforcement of, a strict security policy. This transformation calls for network solutions that deliver maximum speed and agility.

ConnectX-6 Dx and BlueField-2 SmartNICs best address this challenge by delivering the latest generation of Mellanox ASAP2, accelerated switching and packet processing technology. At the heart of ASAP2 is the “eSwitch”: an embedded switch built into Mellanox SmartNICs. The beauty of the eSwitch lies in how it allows the SmartNICs to handle a large portion of packet processing operations in hardware, freeing up the host’s CPU and providing high-throughput connectivity for virtual machines and containers. ASAP2 technology supports a range of network offload capabilities for the Open vSwitch (OVS) datapath and Linux kernel TC, among other network stacks.
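
As a concrete taste of ASAP2 in an OVS deployment, hardware offload is typically enabled with one eSwitch mode change plus one OVS knob. A sketch with a placeholder PCI address; exact service names vary by distribution:

```python
import subprocess

def sh(cmd: str) -> None:
    subprocess.run(cmd, shell=True, check=True)

PF_PCI = "pci/0000:03:00.0"  # placeholder PCI address of the NIC's physical function

# Put the NIC's embedded switch (eSwitch) into switchdev mode so that
# VF representor netdevs appear and can be attached to OVS.
sh(f"devlink dev eswitch set {PF_PCI} mode switchdev")

# Tell OVS to push datapath flows down into the NIC hardware.
sh("ovs-vsctl set Open_vSwitch . other_config:hw-offload=true")
sh("systemctl restart openvswitch")  # service name differs across distros
```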

Leveraging the Mellanox SmartNICs, the hardware-based eSwitch can be programmed to classify packets according to key fields (IPv4, IPv6, TCP, UDP, VXLAN and more), and perform actions like allow, deny, sample, etc., at full wire-speed!  The following diagram illustrates the eSwitch flow-based classification and action model.

eSwitch flow-based classification and action model

An important enhancement was recently made to the Linux kernel for offloading the tracking of TCP connection states to the SmartNIC hardware. The connection tracking (CT) offload capability enables stateful connection-based filtering. On top of the existing ASAP2 capability of L3/L4 packet filtering in hardware, this presents breakthrough functionality for our customers and partners, allowing them to implement next-generation firewalls that leverage Mellanox SmartNICs to achieve unmatched performance, scale and efficiency.
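
The kernel’s tc flower interface gives a feel for what offloaded connection tracking looks like in practice. In this sketch (representor names are placeholders), untracked packets are sent through conntrack, new connections are committed, and established flows are forwarded; on ASAP2-capable hardware these rules are offloaded to the eSwitch:

```python
import subprocess

def sh(cmd: str) -> None:
    subprocess.run(cmd, shell=True, check=True)

DEV_IN, DEV_OUT = "ens1f0_0", "ens1f0_1"  # placeholder VF representors

sh(f"tc qdisc add dev {DEV_IN} ingress")

# Chain 0: untracked TCP packets go through conntrack, then to chain 1.
sh(f"tc filter add dev {DEV_IN} ingress prio 1 chain 0 proto ip "
   f"flower ip_proto tcp ct_state -trk action ct action goto chain 1")

# Chain 1: commit new connections and forward them.
sh(f"tc filter add dev {DEV_IN} ingress prio 1 chain 1 proto ip "
   f"flower ct_state +trk+new action ct commit "
   f"action mirred egress redirect dev {DEV_OUT}")

# Chain 1: established connections are forwarded directly, in hardware.
sh(f"tc filter add dev {DEV_IN} ingress prio 1 chain 1 proto ip "
   f"flower ct_state +trk+est "
   f"action mirred egress redirect dev {DEV_OUT}")
```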

Moreover, ConnectX-6 Dx and BlueField-2 maintain full backward compatibility while leveraging existing ASAP2 implementations, allowing customers and partners to benefit from enhanced capabilities, with a smooth transition path from previous ConnectX and BlueField generations.

Accelerate connection tracking

BlueField-2 Programmable SmartNICs Turn Zero-Trust to Hero-Trust

There is a saying in cybersecurity: “There are two types of organizations: Those that know they’ve been hacked, and those that don’t know it yet…”

Focusing on the security side of things: how can one protect the host from compromise if the potential attacker, the protected data, and the security function all share the same trust domain (the host)?! As zero-trust gains ground as a prominent cloud security model, enterprises and cloud service providers need to adapt their infrastructures to separate the security functions from the host to realize the full potential of zero-trust in the data-center.

Due to its unique form factor and features, a BlueField-2 SmartNIC installed in a host can act as a “computer-in-front-of-a-computer,” enabling security functions to run on its Arm cores, fully isolated from the host’s CPU and operating-system. This isolation is key in making BlueField-2 work best for zero-trust security solutions, as it delivers the needed separation of the security functions from the host while delivering unmatched performance. In the event a host has been compromised, the separation between the security functions and the compromised host helps stop the attack from spreading further throughout the data-center.

BlueField can act as a “computer-in-front-of-a-computer”

Mellanox BlueField-2 is also the perfect solution for enterprises that are reluctant to deploy security functions and/or agents directly on their computing infrastructures. Enterprises want and need visibility into workloads and the ability to enforce their security policies in the data-center. However, legacy applications, compliance regulations and DevOps processes often do not permit the deployment of agents. The resultant lack of visibility leaves enterprises with infrastructure silos where security policy enforcement cannot be applied. In these scenarios, deploying security agents onto BlueField-2, fully isolated from the host system, enables enterprises to gain visibility as well as enforce a consistent security policy across their infrastructures. In addition, the BlueField-2 programmable SmartNIC features a dedicated out-of-band management port, empowering security management tools to deploy and orchestrate security agents on the device over an isolated network. Deploying agents on BlueField also unlocks server performance and is ideal in bare-metal and Kubernetes environments.

BlueField-2 programmable SmartNICs turn zero-trust to hero-trust

Summary

Continuing Mellanox’s innovation in high-performance cloud fabrics, ConnectX-6 Dx and BlueField-2 make the impossible possible, bringing cutting-edge hardware acceleration engines with best-in-class software programmability to enable the next generation of clouds, secure data-centers and storage platforms. Stay tuned as we continue to bring new products to market in 2019 and beyond.

Thanks to Ariel Kit and Barbara Claman for their great contributions in drafting this blog.

To learn more about ConnectX-6 Dx and BlueField-2 SmartNICs and IPU solutions, check out these supporting resources:

Visit Mellanox at booth #1463 at VMworld 2019, San Francisco, CA on August 25-28 where you can learn more about the benefits of the Mellanox ConnectX-6 Dx and BlueField-2, the industry’s most advanced secure cloud SmartNICs.

Mellanox and Red Hat announce the next generation enterprise Linux OS for the hybrid-cloud era.

Mellanox Accelerates Red Hat Enterprise Linux 8 Networking for the Hybrid Cloud Era

A few weeks ago, Mellanox was excited to witness the standing ovation at Red Hat Summit as the Red Hat team released their next-generation enterprise Linux OS for general availability. Red Hat Enterprise Linux (RHEL) 8 brings numerous groundbreaking enhancements and innovations. Primarily, it is the operating system redesigned for the hybrid cloud era – built to support the workloads and operations that stretch from enterprise datacenters to multiple public clouds. The rise of Linux containers, DevOps automation and artificial intelligence (AI) calls for an enterprise-grade, developers’ choice operating-system that unlocks maximum performance, simplicity and agility.

Mellanox collaborates closely with Red Hat and has for several years contributed to Red Hat Enterprise Linux as well as to Red Hat OpenStack Platform and Red Hat OpenShift Platform. RHEL 8 is another great opportunity to share our product achievements for the benefit and success of our joint end-customers.

Red Hat’s Enterprise Linux 8 is refining hybrid cloud innovation

Blazing Fast Meets Out-of-Box Simplicity

RHEL 8 ships with a pre-packaged Mellanox driver that offers advanced networking capabilities powered by Mellanox’s ConnectX network adapter cards. The Mellanox inbox driver within RHEL supports a wide range of ConnectX product families, Ethernet and InfiniBand networking protocols, and speeds from 10 and 25 Gb/s up to 100 Gb/s. Most recently, we’ve added RHEL certification for our ConnectX-5 Socket-Direct adapter cards, uniquely designed to enhance network and application performance in multi-socket servers by connecting each CPU socket directly to the Mellanox NIC.

Red Hat Enterprise Linux 8 is powered by Mellanox ConnectX-5

The Mellanox inbox driver and the Fast Datapath feature in RHEL 8 further enable a broad set of network capabilities:

  • Single-root I/O virtualization, or SR-IOV, allowing virtual machines to directly access the NIC hardware. SR-IOV is a great way to achieve high network throughput while sharing a physical network adapter amongst multiple VMs (see the provisioning sketch after this list).
  • VXLAN overlay offload is enabled by default, allowing the VXLAN packet handling (encapsulation/decapsulation) to be offloaded to the ConnectX NIC rather than spending expensive CPU cycles to do so.
  • RDMA/RoCE is a transport protocol that allows for direct memory access from one computer into that of another, without involving either’s operating-system and CPU. The Mellanox inbox driver enables RDMA/RoCE communications over Mellanox ConnectX NICs in a RHEL environment, providing accelerated application performance for a range of workloads including NVMe over Fabric storage, machine learning (ML) and artificial intelligence (AI).
  • DPDK is a set of libraries and vendor NIC drivers for fast packet processing in the Linux user space. The Mellanox inbox driver includes Mellanox’s poll-mode driver and enables DPDK-based acceleration in a RHEL environment. Mellanox ConnectX NICs outperform any other NIC in the industry for DPDK performance.
  • DPDK-accelerated Open vSwitch (OVS) enables user-space datapath acceleration for OVS, delivering superior performance compared to OVS over the Linux kernel datapath.
  • OVS offload, currently in Tech Preview in RHEL 8, is a mode in which OVS datapath is fully offloaded to the NIC hardware, delivering breakthrough performance powered by Mellanox’s ASAP2 advanced switching and packet processing technology embedded in select ConnectX adapter cards.

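To make the SR-IOV item above concrete, here is a minimal sketch of how virtual functions are typically created through the standard Linux sysfs interface; the interface name and VF count are placeholders, and the commands require root:

```python
from pathlib import Path

IFACE = "ens1f0"  # placeholder name of a ConnectX interface
DEV = Path(f"/sys/class/net/{IFACE}/device")

# How many virtual functions the adapter can expose.
total = int((DEV / "sriov_totalvfs").read_text())
print(f"{IFACE} supports up to {total} virtual functions")

# Create four VFs; each can then be passed through to a VM for
# direct access to the NIC hardware.
(DEV / "sriov_numvfs").write_text("0")  # reset first when changing the count
(DEV / "sriov_numvfs").write_text("4")
```
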
The Mellanox inbox driver for RHEL is developed with an upstream-first mindset and is collaboratively integrated into and tested with RHEL by the Red Hat and Mellanox product engineering teams. In fact, this is what makes the integration of Red Hat’s open source software and Mellanox ConnectX adapters so easy to deploy and use, with a simplified out-of-box experience and clear escalation paths for customer support issues.

Enabling Hybrid Cloud Solutions

The Red Hat-Mellanox product collaboration efforts don’t stop at RHEL. As many of our mutual customers deploy Red Hat OpenStack Platform and Red Hat OpenShift Container Platform, the teams focus on delivering these value-add technologies in an orchestrated, cloud-driven manner. When the flagship Red Hat OpenStack Platform 13 came out in mid-2018, Mellanox was a pioneer vendor, introducing OVS hardware offload technology together with Red Hat. To date, Red Hat OpenStack Platform is a long-life platform of choice for telecom service providers’ SDN/NFV data-centers as well as enterprises. On the container and Kubernetes fronts, Red Hat OpenShift 4.1, which recently became generally available, adds support for Mellanox cloud-ready ConnectX-4 Lx and ConnectX-5, with more advanced networking features planned for subsequent releases.

Mellanox is a true advocate of open source solutions and a major contributor to the Linux kernel among other open infrastructure projects. We believe this open, upstream-first mindset is what makes our collaboration with Red Hat work best for our mutual customers and the entire IT ecosystem.

Stay tuned as we continue the high-performance networking path in 2019 and beyond!

To learn more about Mellanox’s ConnectX adapter product family, visit Mellanox.com.


Mellanox BlueField SmartNIC can virtualize bare metal Kubernetes to achieve higher ROI.

Provision Bare-Metal Kubernetes Like a Cloud Giant!

Starting out my career as an IT engineer in the early 2000s exposed me to early discussions about hypervisors and virtual machines, including how they could save time on server provisioning. I was intrigued by the way server virtualization disrupted enterprise IT over the years, delivering infrastructure efficiency and automation. By the time I moved into a business role in 2009, most workloads were running on highly distributed virtual environments, with just a handful of powerful bare-metal servers running high-speed SQL databases for performance-sensitive workloads.

A Shift to Bare-Metal Kubernetes

Today, the Kubernetes container orchestration platform is the de-facto driving force for agile delivery of cloud-native applications. Throughout the emergence and development of Kubernetes, most of its deployments have used virtual machines as the underlying infrastructure platform, hosted either on public clouds or in on-premise datacenters.

Lately, we see a growing trend of building new Kubernetes clusters from the ground up on bare-metal server infrastructures, eliminating the need to deploy hypervisors to abstract the physical hardware. We can largely attribute this shift to several key trends in the cloud-native ecosystem, including the rising demand for high-performance workloads such as big data analytics, machine learning and artificial intelligence. These are driving system architects and cloud operators to take the hypervisor out of the equation and achieve better application performance straight on metal. This demand has also been fueled by recent Kubernetes framework enhancements, including GPU-powered node enablement and CPU and memory resource management, which are collectively geared toward delivering superior performance and scale.

Another reason enterprises and service providers undergoing digital transformation are embracing bare-metal Kubernetes lies in the push toward deploying workloads at the network’s edge. To unleash the full potential of edge computing, the underlying infrastructure must be optimized for performance, ultra-low latency and resiliency. Bare-metal servers that provide direct hardware access, coupled with a leading-edge computing software stack, typically outperform hypervisor-based platforms at the edge.

While bare-metal Kubernetes clusters deliver on the promise of performance, they also reveal a myriad of challenges around security, data storage and operations. In this blog I will introduce the advanced Mellanox BlueField™ SmartNIC and how it empowers bare-metal Kubernetes clusters.

Introducing BlueField SmartNIC

Mellanox BlueField SmartNIC is the world’s leading, fully-programmable network adapter. Integrating the best-in-class Mellanox ConnectX® network adapter with a set of Arm processors makes BlueField SmartNIC capable of delivering powerful functionality for cloud data-centers, high-performance networking and storage applications. Also, the combination of programmable hardware acceleration engines with general-purpose software and advanced network capabilities turns BlueField into the ideal platform for bare-metal provisioning, storage virtualization, and more.

BlueField provides built-in functional isolation between the host CPU and BlueField’s Arm-based system, protecting each individual workload while providing flexible control and visibility at the server level, reducing risk and increasing efficiency.

BlueField Simplifies Bare-Metal Kubernetes Provisioning

While enterprises opt to deploy bare-metal Kubernetes to obtain direct access to the underlying hardware, they also need to install suitable device drivers to utilize it. Traditionally, customers don’t like to install drivers on their systems, and for good reason: installing drivers adds significant overhead to bare-metal provisioning and software management, as it requires customizing images to include the needed drivers. This overhead is dramatically reduced in hypervisor-based environments, primarily because hypervisors abstract the hardware, so it is unnecessary to install drivers in guest virtual machines.

To address those challenges, the Mellanox BlueField SmartNIC emulates a VirtIO network interface to the bare-metal host operating-system. Because VirtIO is a standard Linux network driver that ships in every distribution, network connectivity works without deploying vendor-specific device drivers. BlueField’s hardware-accelerated VirtIO emulation capability provides great performance, infrastructure efficiency and operational agility.
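
From the tenant’s perspective, the emulated device looks like any other VirtIO NIC claimed by the in-box virtio_net driver. A minimal sketch of how one might verify that from the host, assuming a placeholder interface name:

```python
import subprocess

IFACE = "eth0"  # placeholder: the interface backed by the SmartNIC's VirtIO emulation

# The standard in-box virtio_net driver should claim the device; no
# vendor-specific driver needs to be baked into the bare-metal image.
info = subprocess.run(["ethtool", "-i", IFACE], capture_output=True, text=True)
print(info.stdout)  # expect a line such as "driver: virtio_net"
```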

BlueField Makes Composable Kubernetes Storage a SNAP

Bare-metal cloud environments usually install storage media on every host to deliver the best application performance. This comes at a price for the cloud operator by limiting their ability to efficiently provision remote storage, which is easier to migrate and protect. Therein lies a conflict when designing a bare metal environment, between what is best for the application (local storage) and what is best and most easily composable for the cloud operator (networked storage). By leveraging Mellanox BlueField NVMe SNAP technology, cloud operators can now virtualize the bare metal Kubernetes storage with zero impact on the applications, in effect, creating a win-win situation for both. Bare-metal hosts continue to use their standard operating system’s NVMe PCIe driver, with little to no performance degradation, while the service provider is gaining a richer offering with greater efficiency – storage is now virtualized, thin-provisioned, backed up, and can be migrated between servers, providing savings in terms of both CAPEX and OPEX.

BlueField Enables Agentless Bare-Metal Kubernetes Security

Virtualized environments have evolved over the years to offer a range of integrated security services that are built on the foundation of a unified and distributed software control plane for compute and networking. A notable example of such a security service is micro-segmentation, which lets you enforce policies on the connectivity between workloads and application domains across the data-center. But deploying bare-metals for your Kubernetes cluster means you can no longer implement hypervisor-based micro-segmentation. There are ample security vendors offering competing agent-based solutions that can be deployed on bare-metal server infrastructures. The challenge here is two-fold: deploying security agents in an environment that was optimized for performance and DevOps automation is often undesirable, and in some cases deploying agents in certain workloads violates regulatory or compliance requirements and is thus not permitted. The Mellanox BlueField SmartNIC is perfectly positioned to enable agentless, high-performance security in bare-metal Kubernetes environments.

Due to its unique form factor and features, BlueField SmartNIC acts as a “computer-in-front-of-a-computer,” enabling applications to run on its CPU, fully isolated from the host’s CPU and operating-system. This isolation enables software agents to run on the SmartNIC when they cannot run on the host system, making BlueField work best for a range of cyber security solutions, including resilient micro-segmentation, stateful next-generation firewall, cloud-scale anti-DDoS, and more. Separating the security controls from the host, BlueField’s isolation capability also ensures that in the event a host has been compromised, the attack won’t spread further throughout the data-center.

Deploying BlueField SmartNICs in the datacenter, and specifically bare-metal environments, gives security teams enhanced visibility across cloud domains and enforces a consistent security policy in the enterprise, while offering unmatched performance.

BlueField Accelerates Containerized AI Applications

Kubernetes plays an important role in the emergent AI application ecosystem as new applications are built from the ground up as microservices. A key to what makes Kubernetes work best for AI is that it abstracts infrastructure management, enabling data scientists and software developers to focus their time and efforts on building effective AI-driven applications instead of on managing the infrastructure.

Mellanox BlueField SmartNIC offers in-hardware acceleration for Remote Direct Memory Access (RDMA/RoCE) communications, delivering best-in-class performance and usability. RDMA is a network technology that allows for direct memory access from one computer into that of another, without involving either’s operating-system and CPU. RDMA is especially useful in scenarios involving massively parallel computer clusters as it permits high-throughput, low-latency networking. Once an application performs an RDMA Read or Write request, the system delivers the application data directly to the network (zero-copy, fully offloaded by the network adapter), reducing latency and enabling fast message transfer. RDMA over Converged Ethernet (RoCE) is a network protocol that allows RDMA to run over an Ethernet network.
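
A simple way to experience RDMA’s kernel-bypass transfers is the standard perftest suite shipped with most RDMA-capable Linux distributions. A hedged sketch, with the device name and server address as placeholders:

```python
import subprocess

# On the server:  ib_write_bw -d mlx5_0
# On the client (below): run an RDMA Write bandwidth test against the server.
# The data path bypasses both hosts' kernels and CPUs entirely.
subprocess.run(["ib_write_bw", "-d", "mlx5_0", "192.0.2.10"], check=True)
```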

RDMA/RoCE are integrated today into the mainstream code of popular ML/AI frameworks, including TensorFlow, Microsoft Cognitive Toolkit, and others. Having RDMA/RoCE native support in AI frameworks enables applications using those same frameworks to take advantage of the predictable and scalable performance that RDMA delivers.

Mellanox has been working in the Linux and Kubernetes communities on a standardized solution to enable RDMA and RoCE transport technologies for containerized applications. The solution enables enterprises to run AI applications based on the various ML/AI frameworks on bare-metal Kubernetes, with Mellanox BlueField SmartNICs providing accelerated network performance.

Summary

As Kubernetes continues its path into mainstream commercial solutions in 5G wireless networks, autonomous vehicles, industrial IoT and more, enterprises and service providers will turn to bare-metal clouds to achieve higher ROI and lower TCO on their infrastructures. The Mellanox BlueField SmartNIC is uniquely positioned to transform bare-metal servers with the unmatched performance, security and operational agility needed to unleash the full potential of bare-metal infrastructures.

To learn more about Mellanox BlueField SmartNICs for bare-metal clouds and Kubernetes, watch this video: https://youtu.be/lQAN9SRviDQ, check out this solution brief, or visit www.mellanox.com/products/smartnic.

Visit Mellanox at KubeCon + CloudNativeCon Barcelona, Booth S33, where we will be showcasing our award-winning end-to-end Ethernet portfolio including intelligent and smart adapters, switches and cables.


Mellanox BlueField empowers zero-trust security solutions

Mellanox Turns Zero-Trust to HERO-Trust

From the early days of humanity, trust has been the foundation of social systems and thriving economies. With the development of information science and technology in the 20th century, trust has also played a key role in data security architectures, including cryptography and encryption, certificate creation and management, authentication, and public key infrastructures, among others. However, in the 21st century era of hybrid and multi-cloud computing, trust-based security models alone are incapable of protecting business and personal data.

Zero-Trust in the Datacenter

When it comes to protecting their data, enterprises embracing the cloud face a myriad of challenges. Scattered across the enterprise data-center and multiple service providers, the perimeters around data are often broken. The added complexities of infrastructure virtualization and multiple layers of attack make enterprises extremely vulnerable to numerous attack vectors and potential breaches. Finally, the lack of visibility and control is error-prone, limiting the ability of service providers and enterprises to implement effective security strategies.

The emergent zero-trust architecture model aims to address cloud security challenges. The zero-trust concept guides enterprises not to trust anyone (human or machine) around their applications and data, and calls for authentication and authorization of every connection attempt, even those originating from allegedly “trusted” sources. Zero-trust is picking up rapidly as many cybersecurity solutions and cloud service providers leverage this new concept to deliver cloud workload protection.

Is Zero-Trust Enough?

In the age of software-defined everything (SDx), networking and security controls have undergone a transformation, and are often delivered at the host-level, including virtual switches, virtual routers, security software agents and more. This transformation creates a twofold challenge—the first being the need to provision network solutions on every host to deliver maximum speed and agility; the second being the need to provision security controls to gain visibility into and enforce a strict security policy on every host. Yet by focusing on the security side of things, how can one protect the host from compromise if the potential attacker, protected data, and security controls all share the same trust domain (the host)?!

There is a saying in cybersecurity, “There are two types of organizations: Those that know they’ve been hacked, and those that don’t know it yet…” As zero-trust gains ground as a prominent cloud security model, enterprises and cloud service providers need to adapt their infrastructures to separate the security controls from the host to realize the full potential of zero-trust in the data-center.

Zero-Trust Approach with Mellanox BlueField Control Plane Isolation

A SmartNIC is a combination of a NIC and a CPU, integrated on the same device. In fact, a SmartNIC is a computer that runs a fully-functioning operating-system and applications, like any other computer in the data-center. Mellanox BlueField is an advanced programmable SmartNIC, delivering industry-leading performance, flexibility and efficiency that enable a wide range of cyber security applications, including resilient micro-segmentation, stateful next-generation firewall, cloud-scale anti-DDoS, and more.

Figure 1: Mellanox BlueField SmartNIC for Cybersecurity Applications

Mellanox BlueField SmartNIC integrates the world-leading Mellanox ConnectX® network adapter with a set of Arm processors, addressing performance and security concerns of modern data-centers. Due to its unique form factor and features, BlueField installed in a host acts as a “computer-in-front-of-a-computer,” enabling applications to run on its CPU, fully isolated from the host’s CPU and operating-system. This isolation is key in making BlueField work best for zero-trust security solutions, as it delivers the needed separation of the security controls from the host, while delivering unmatched performance.  In the event a host has been compromised, the separation between the security controls and the compromised host helps stop the attack from spreading further throughout the data-center.

Figure 2: BlueField Acts as “Computer in front of a Computer”

Mellanox BlueField also addresses those scenarios in which enterprises are reluctant to deploy security control agents directly on their computing infrastructures. Enterprises are looking to gain visibility into workloads and enforce their security policies in the data-center. However, legacy applications, compliance regulations and DevOps processes often do not allow for the deployment of agents. The resultant lack of visibility leaves enterprises with infrastructure silos where security policy enforcement cannot be applied. In these scenarios, the deployment of security control agents onto BlueField, fully isolated from the host system, enables enterprises to gain visibility as well as enforce a consistent security policy across their infrastructures. In addition, deploying agents on BlueField also unlocks server performance and is ideal in bare-metal and Kubernetes environments.

Finally, BlueField’s unique design empowers zero-trust security solutions, including “host-unaware” solutions in which the host transmits and receives data while BlueField acts as a bump-in-the-wire for encryption/decryption or any other type of manipulation. Additionally, a fundamental element of the zero-trust concept is establishing a highly secure access-management framework; BlueField can act as a secure platform for key management, delivering secure access management to the host and/or business applications.

Where Performance Matters

For scalable, high-performance workloads, the intelligent ConnectX Ethernet adapters and BlueField SmartNIC both offer accelerated connection tracking performance, powered by Mellanox’s ASAP2 switching and packet processing technology. ASAP2 leverages the adapter ASIC’s embedded switch capabilities to deliver the best of both worlds: the performance and efficiency of bare-metal server networking hardware with the flexibility of virtual switching software.

ConnectX Ethernet adapters and BlueField SmartNIC offer accelerated connection tracking performance through ASAP2


The fully programmable embedded switch (eSwitch) built into the intelligent ConnectX adapters and the BlueField SmartNIC enables both to handle a large portion of packet processing operations in hardware. Mellanox ASAP2 frees the CPU from the heavy compute required for connection tracking, offering superior performance to non-offloaded connection tracking solutions while delivering the highest total infrastructure efficiency, deployment flexibility and operational simplicity.

Summary

As 2019 continues to roll out, the zero-trust security model will become a priority to enterprise security teams. Why? Zero-trust is the most effective way to reduce risk and has been proven to be highly effective in heterogeneous cloud environments. In today’s software-defined data-centers, hyperconvergence doesn’t stop at the compute and storage functions; it also includes networking and security in the host software stack. Still, zero-trust by itself is not enough to protect business data and applications when the security controls share the same trusted domain as the attacker.

Mellanox BlueField SmartNIC is perfectly positioned to provide functional isolation that eliminates the risk of east-west attacks, enabling a range of cybersecurity applications with best-in-class network performance, and turning zero-trust to HERO-trust!

To learn more about Mellanox intelligent ConnectX and BlueField SmartNIC adapters for cybersecurity applications, visit Mellanox.com

Visit Mellanox at the RSA Conference, March 4-8 where we will be showcasing our award-winning end-to-end Ethernet portfolio including intelligent and smart adapters, switches and cables.


Mellanox Accelerates Apache Spark Performance with RDMA and RoCE Technologies

Looking back over the last decade, Apache Spark has truly disrupted big data processing and analytics in many ways. With its vibrant ecosystem, Spark, a high-performance analytics engine for big data processing, is the most active Apache open-source project. Key factors driving Spark enterprise adoption are unmatched performance, simple programming and general-purpose analytics over massive amounts of data.

Spark performance benchmarks indicate that Spark runs Big Data workloads 100x faster than Hadoop MapReduce for both batch and streaming data. This performance gain is primarily attributed to Spark’s in-memory computation approach to data processing and analysis – an approach that is very fast and efficient, enabling large-scale machine learning and data analytics. Spark also utilizes a new data model called resilient distributed datasets (RDDs). This is basically a data structure that is stored in-memory while being computed, thus eliminating expensive intermediate disk writes.

Figure 1: Apache Spark


To handle data processing and analysis at scale, Spark performs an operation known as the shuffle: a mechanism for re-distributing data so that it’s grouped differently across partitions. Typically, copying data across executors and machines makes the shuffle a complex and costly operation, since it involves disk I/O, data serialization, and network I/O. Therefore, data scientists and software professionals use various techniques to avoid data shuffling as much as possible in their application design and constructs. Still, shuffle operations are a necessity for most workloads, and they compromise performance.
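
A small PySpark illustration of shuffle avoidance: reduceByKey pre-aggregates values within each partition before anything crosses the network, whereas groupByKey ships every record through the shuffle:

```python
from pyspark import SparkContext

sc = SparkContext(appName="shuffle-demo")
pairs = sc.parallelize(["spark", "rdma", "spark", "shuffle", "rdma", "spark"]) \
          .map(lambda w: (w, 1))

# groupByKey shuffles every (word, 1) pair across the network before counting.
counts_slow = pairs.groupByKey().mapValues(len)

# reduceByKey combines within each partition first, so far less data is shuffled.
counts_fast = pairs.reduceByKey(lambda a, b: a + b)

print(sorted(counts_fast.collect()))  # [('rdma', 2), ('shuffle', 1), ('spark', 3)]
```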

An Introduction to Remote Direct Memory Access (RDMA)

Remote Direct Memory Access (RDMA) is a network technology that allows for direct memory access from one computer into that of another, without involving either one’s operating-system and CPU. RDMA is especially useful in scenarios involving massively parallel computer clusters as it permits high-throughput, low-latency networking. Once an application performs an RDMA Read or Write request, the system delivers the application data directly to the network (zero-copy, fully offloaded by the network adapter), reducing latency and enabling fast message transfer. RDMA over Converged Ethernet (RoCE) is a network protocol that allows RDMA to run over an Ethernet network.

Mellanox recently announced a release of the open-source SparkRDMA plugin, geared toward accelerating Spark’s shuffle operations.

Figure 2: Illustration of RDMA zero copy network transport


Mellanox Technologies has been a leading pioneer of the popular RDMA and RDMA over Converged Ethernet (RoCE) networking technologies, starting in the high-performance computing (HPC) industry. In fact, Mellanox has just released its 8th generation of RDMA/RoCE-capable products, including the intelligent ConnectX adapter cards and BlueField SmartNICs, which both have built-in RDMA and RoCE capabilities and deliver best-in-class performance and usability.

How does RDMA accelerate Spark workloads?

RDMA today is integrated into the mainstream code of popular machine learning (ML) and artificial intelligence (AI) frameworks, namely TensorFlow, MXNet and Caffe2. Recently, Mellanox announced the v3 release of its Spark-compliant open-source SparkRDMA software plugin, which leverages RDMA communication technology to accelerate Spark’s shuffle operations. The plugin neither changes the mainstream Spark code nor impacts its functionality, making it a perfect fit for existing deployments.
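
Enabling the plugin is a configuration-only change. A sketch of what this looks like from PySpark, following the plugin’s documented shuffle-manager class; the jar path is a placeholder and must exist on every node:

```python
from pyspark.sql import SparkSession

RDMA_JAR = "/opt/sparkrdma/spark-rdma.jar"  # placeholder path to the plugin jar

spark = (
    SparkSession.builder
    .appName("rdma-shuffle")
    # Make the plugin visible to the driver and every executor.
    .config("spark.driver.extraClassPath", RDMA_JAR)
    .config("spark.executor.extraClassPath", RDMA_JAR)
    # Swap Spark's default shuffle manager for the RDMA-accelerated one.
    .config("spark.shuffle.manager",
            "org.apache.spark.shuffle.rdma.RdmaShuffleManager")
    .getOrCreate()
)
```

Because the setting is applied per job, RDMA shuffles can be rolled out incrementally, starting with the most shuffle-intensive workloads.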

Figure 3: Mellanox ConnectX and BlueField adapter cards


Figure 4 below illustrates how SparkRDMA reuses the Unsafe and Sort Shuffle Writer implementations of the mainstream Spark (appears in light green). While Shuffle data is written and stored identically to the original implementation, the all-new ShuffleReader and ShuffleBlockResolver provide an optimized RDMA transport when blocks are read over the network (appears in light blue).

Figure 4: Illustration of SparkRDMA software architecture


The following diagrams describe the shuffle read protocol in the original implementation, and when using RDMA (lower diagram). As indicated, using RDMA for Spark’s shuffle operations both greatly shortens and speeds up the process.

Figure 5: Illustrations of the Shuffle Read protocol


Spark over RDMA has shown substantial improvements in block transfer times (both latency and total transfer time), memory consumption and CPU utilization, compared to the standard Spark implementation, which runs over TCP. Moreover, the SparkRDMA plugin is designed with ease-of-use in mind and supports per-job operation, allowing for incremental deployments and limited use for shuffle-intensive jobs.

Finally, the performance benefits of running Spark over RDMA are tremendous! Here are a few data points showing SparkRDMA in-action:

  • 2.6x performance improvement with Terasort compared to non-accelerated Spark

Figure 6: Spark performance improvements using Spark over RDMA

  • 4.4x faster shuffles compared to non-accelerated Spark (9.3 min compared to 2.1 min with RDMA accelerated Spark)

Figure 7: Faster Shuffles


  • 1,000x faster transfers compared to non-accelerated Spark (2 seconds compared to 2 milliseconds with RDMA accelerated Spark)

Figure 8: Faster transfers

  • Zero shuffle read time in RDMA accelerated Spark

Figure 9: Shuffle Read times

Mellanox RDMA/RoCE NICs – the way to go for Spark

Apache Spark is today’s fastest-growing Big Data analysis platform. The Mellanox team is excited to partner with large-scale enterprises, cloud and AI solution providers to unlock scalable, faster and highly efficient big-data analytics and machine learning for a wide range of commercial and research use-cases.

Learn more about RDMA and RoCE.

To learn more about Mellanox’s fully-featured end-to-end InfiniBand and Ethernet product lines visit our website.

Mellanox Introduces NEO-Host, the Industry’s New Software Solution for Network Adapter Management

You asked and we listened! Mellanox customers can now take advantage of NEO-Host. This brand-new unified tool is now available for managing, configuring, and diagnosing Mellanox network adapters across various operating-systems, platforms, and adapter card generations.

Mellanox NEO-Host is a powerful solution for orchestrating and managing host networking. NEO-Host allows data-center operators to configure, monitor, and operate high-speed server Ethernet and InfiniBand network adapters. It simplifies deployment and operations of data-center networking, provides deep visibility into host configuration, and optimizes performance. In addition, Mellanox NEO-Host offers a comprehensive set of JSON-based APIs; on top of the JSON commands, it provides an SDK for sending commands from scripting languages, allowing easy integration with ad-hoc management systems. Best of all, NEO-Host can be integrated with the flagship Mellanox NEO™ platform by deploying NEO-Host on Linux hosts managed by NEO.
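
The API schema isn’t spelled out in this post, so the following is a purely hypothetical sketch of the kind of JSON-based call the NEO-Host APIs enable; the endpoint, port and field names are all invented for illustration:

```python
import json
import urllib.request

# Hypothetical endpoint: not NEO-Host's real schema, shown only to
# illustrate JSON-over-HTTP adapter management.
URL = "http://localhost:8080/neohost/api/adapters"

with urllib.request.urlopen(URL) as resp:
    adapters = json.load(resp)

for adapter in adapters:
    print(adapter.get("name"), adapter.get("firmware_version"))  # invented fields
```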

Key features include:

  • In-depth Visibility into Host Networking – NEO-Host provides comprehensive information on Mellanox adapter cards installed in a host. This information is presented in a top-down hierarchical model to streamline adapter management and monitoring tasks.
  • Adapter Configuration Management – NEO-Host supports configuration of various adapter software and hardware features, including SR-IOV, PXE, RoCE, QoS, LLDP, and others. The GUI application is designed to simplify configuration and tuning of advanced adapter settings.
  • Adapter Software Management – NEO-Host enables management and upgrade of firmware images on Mellanox adapters, to enable additional features and address issues as they arise.
  • Advanced Diagnostics – NEO-Host allows for collecting advanced system and adapter diagnostics, helping to enhance system and application uptime and to recover quickly from network issues.
  • Performance Monitoring – NEO-Host enables monitoring of adapter performance to achieve high-performance data-center networking, and address issues as they arise.
  • Enhanced Network Automation – NEO-Host’s JSON-based APIs and SDK offer a rich set of features through application programming and enhanced network automation.
  • NEO Platform Integration – NEO-Host is seamlessly integrated with the NEO platform, which offers a centralized deployment of NEO-Host for servers operating with Mellanox ConnectX® family adapters. NEO utilizes the NEO-Host APIs to offer deep host networking visibility and a rich set of control functions.

Best of all, NEO-Host is offered free-of-charge to Mellanox customers and partners, and is available to download at: MyMellanox.