At Red Hat Summit 2018, Mellanox announced an open, high-performance, and easy-to-deploy Network Function Virtualization Infrastructure (NFVI) and cloud data center solution that combines Red Hat Enterprise Linux cloud software with in-box support for Mellanox NIC hardware. Our close collaboration and joint validation with Red Hat have yielded a fully integrated NFV and cloud data center solution that delivers high performance and efficiency and is easy to deploy. The solution includes open source data path acceleration technologies such as the Data Plane Development Kit (DPDK) and Open vSwitch (OVS) acceleration.
Private cloud and communications service providers are transforming their infrastructure to achieve the agility and efficiency of hyperscale public cloud providers. This transformation rests on two fundamental tenets: disaggregation and virtualization. Disaggregation decouples network software from the underlying hardware. Server and network virtualization drive higher efficiency by sharing industry-standard servers and networking gear through a hypervisor and overlay networks. While these disruptive capabilities offer flexibility, agility, and software programmability, they impose significant network performance penalties, because kernel-based hypervisors and virtual switching consume host CPU cycles inefficiently for network packet processing. Over-provisioning CPU cores to compensate for degraded network performance drives up CapEx, defeating the very hardware efficiency that server virtualization is meant to deliver.
To address these challenges, Red Hat and Mellanox are bringing to market a highly efficient, hardware-accelerated, and tightly integrated NFVI and cloud data center solution combining the Red Hat Enterprise Linux OS with Mellanox ConnectX-5 network adapters running DPDK and Accelerated Switching and Packet Processing (ASAP2) OVS offload technologies.
ASAP2 OVS Offload Acceleration:
An OVS hardware offload solution accelerates slow, software-based virtual switch packet processing by an order of magnitude. Essentially, OVS hardware offload offers the best of both worlds: hardware acceleration of the data path along with an unmodified OVS control path for flexible programming of match-action rules. Mellanox is a pioneer of this groundbreaking technology and has led the open architecture needed to support this innovation within the OVS, Linux kernel, DPDK, and OpenStack open source communities.
As shown in Figure 1, Mellanox's open ASAP2 OVS offload technology fully and transparently offloads virtual switch and router data path processing to the NIC's embedded switch (e-switch). Mellanox has contributed heavily to the upstream development of the core framework and APIs, such as TC Flower, which are now available in recent Linux kernel and OVS releases. These APIs dramatically accelerate networking functions such as overlays, switching, routing, security, and load balancing. In performance tests conducted in Red Hat labs, Mellanox ASAP2 technology delivered near 100G line-rate throughput for large VXLAN packets without consuming any CPU cycles. For small packets, ASAP2 boosted the OVS VXLAN packet rate by 10X, from 5 million packets per second using 12 CPU cores to 55 million packets per second consuming zero CPU cores. Cloud providers, communications service providers, and enterprises can thus achieve total infrastructure efficiency from an ASAP2-based high-performance solution while freeing up CPU cores to pack more VNFs and cloud-native applications onto the same server, reducing the server footprint and delivering substantial CapEx savings. Mellanox ASAP2 is fully integrated with RHEL 7.5 and is available out of the box as a tech preview for trials.
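For readers who want to try the tech preview, a typical configuration flow looks roughly like the following sketch. The PCI address, interface names, and bridge name are placeholders for illustration; exact steps may vary by kernel and OVS version, so consult the official documentation for your release.

```shell
# Placeholders: enp3s0f0 (PF netdev), 0000:03:00.0 (PCI address), br0 (bridge).
# 1. Create SR-IOV VFs and move the NIC e-switch into switchdev mode
echo 2 > /sys/class/net/enp3s0f0/device/sriov_numvfs
devlink dev eswitch set pci/0000:03:00.0 mode switchdev

# 2. Enable hardware offload in OVS and restart the daemon
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch

# 3. Attach the VF representor port to the OVS bridge
ovs-vsctl add-port br0 enp3s0f0_0

# 4. Verify that datapath flows are actually offloaded to the NIC
ovs-appctl dpctl/dump-flows type=offloaded
```

If the final command lists your active flows, the match-action processing for those flows is running in the e-switch rather than in the kernel datapath.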
OVS DPDK Acceleration:
Customers who want to keep the existing, slower OVS virtio data path but still need some acceleration can use Mellanox's DPDK solution to boost OVS performance. As shown in Figure 2 below, the OVS over DPDK solution uses DPDK software libraries and a poll mode driver (PMD) to substantially improve the packet rate, at the expense of consuming CPU cores.
Using open source DPDK technology, Mellanox ConnectX-5 NICs deliver the industry's best bare-metal packet rate of 139 million packets per second for running OVS, VNFs, or cloud applications over DPDK, and are fully supported by Red Hat on RHEL 7.5.
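As a rough sketch of what an OVS over DPDK setup involves, the commands below initialize the DPDK datapath and pin PMD threads to dedicated cores. The core mask, socket memory, PCI address, and port names are illustrative assumptions, not a validated configuration; tune them for your host.

```shell
# Placeholders: 0x6 (PMD core mask), 0000:03:00.0 (PCI address), br0 (bridge).
# 1. Initialize DPDK inside OVS and dedicate CPU cores to PMD polling
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=1024

# 2. Create a userspace (netdev) bridge and add a DPDK physical port
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 \
    type=dpdk options:dpdk-devargs=0000:03:00.0

# 3. Add a vhost-user port to carry the VM's virtio data path
ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 \
    type=dpdkvhostuserclient \
    options:vhost-server-path=/var/run/openvswitch/vhost-user-1
```

Note that the cores named in pmd-cpu-mask poll continuously at 100% utilization, which is the CPU cost trade-off described above.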
Network architects often face many options when choosing the technology that best fits their IT infrastructure needs. When deciding between ASAP2 and DPDK, the choice is fortunately straightforward, given the substantial benefits of ASAP2 over DPDK. Because it uses an SR-IOV data path, ASAP2 OVS offload achieves dramatically higher performance than OVS over DPDK, which relies on the traditional, slower virtio data path. Further, ASAP2 saves CPU cores by offloading flows to the NIC, whereas DPDK consumes CPU cores to process packets suboptimally. Note that, like DPDK, ASAP2 OVS offload is an open source technology that is fully supported in the open source communities and is gaining wider adoption in the industry.
Mellanox is an open networking company and is among the top ten contributors to the Linux kernel community. Through our cutting-edge NIC technologies and joint innovation with open software leaders such as Red Hat, we have eliminated the performance barriers associated with deploying modern cloud data center and NFV solutions. Moreover, these groundbreaking performance numbers are achieved without sacrificing valuable server resources or ease of deployment. The intelligence and parallel flow processing capabilities of the Mellanox ConnectX family of Ethernet adapters impose minimal burden on precious CPU and memory resources, empowering NFV platforms to do what they are meant to do: deliver network services and application processing, rather than handle packet I/O.