XDP Acceleration over Mellanox’s ConnectX NICs

 

Visit Mellanox at OCP’s Virtual Global Summit to see how to achieve XDP Acceleration over Mellanox ConnectX-5® NICs.

XDP (eXpress Data Path) is a programmable data path in the Linux kernel network stack. It provides a framework for BPF and enables high-performance packet processing at run time. XDP works in concert with the Linux network stack; it is not a kernel bypass. Because XDP runs in the kernel network driver, it can read Ethernet frames directly from the NIC’s RX ring and act on them immediately. XDP plugs into the eBPF infrastructure through an RX hook implemented in the driver. As an application of eBPF, an XDP program can trigger actions through return codes, modify packet contents, and push or pull headers. XDP has many use cases, including packet filtering, packet forwarding, load balancing, DDoS mitigation, and more. A common example is XDP_DROP, which instructs the driver to drop a packet: a custom BPF program parses the incoming packets in the driver and returns a verdict (the XDP_DROP return code), so the packet is dropped right at the driver level without consuming any further resources. Ethtool counters can be used to verify the XDP program’s actions.
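Below is a minimal sketch of an XDP program that returns XDP_DROP for every frame, in the spirit of the xdp1 sample used later in this post (the file and function names here are illustrative, not taken from the kernel tree):

    /* xdp_drop_all.c - return XDP_DROP for every received frame */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int xdp_drop_prog(struct xdp_md *ctx)
    {
        /* The verdict is handed straight back to the driver, which recycles
         * the frame into the RX ring; no SKB is ever allocated. */
        return XDP_DROP;
    }

    char _license[] SEC("license") = "GPL";

A standalone object like this can typically be built with clang -O2 -g -target bpf -c xdp_drop_all.c -o xdp_drop_all.o; the kernel samples used below are instead built through the kernel’s own build system.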

XDP Acceleration over Mellanox’s ConnectX® NICs – Example

The XDP program runs as soon as a packet enters the network driver, resulting in higher network performance and better CPU utilization. The Mellanox ConnectX® NIC family allows metadata to be prepared by the NIC hardware. This metadata can be used to perform hardware acceleration for applications that use XDP.
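As a rough illustration of how such metadata can be consumed, an XDP program may inspect the area between xdp_md->data_meta and xdp_md->data, where the driver (or an earlier BPF program) can place per-packet hints. The sketch below assumes a hypothetical metadata layout (struct rx_meta); the actual format of any hardware-provided metadata is device and driver specific:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct rx_meta {                  /* hypothetical layout, for illustration only */
        __u32 flow_mark;
    };

    SEC("xdp")
    int xdp_use_meta(struct xdp_md *ctx)
    {
        void *data      = (void *)(long)ctx->data;
        void *data_meta = (void *)(long)ctx->data_meta;
        struct rx_meta *meta = data_meta;

        /* Bounds check required by the verifier: the metadata must end
         * before the packet data begins. */
        if ((void *)(meta + 1) > data)
            return XDP_PASS;

        /* Act on the pre-computed hint instead of re-parsing the headers. */
        if (meta->flow_mark == 0xdead)
            return XDP_DROP;

        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";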

Let’s go over an example of how to run XDP_DROP using Mellanox ConnectX®-5.

  1. Check if the current kernel supports BPF and XDP:
    • sysctl net/core/bpf_jit_enable
    • If it is not found: compile and run a kernel with BPF enabled. You can use any upstream kernel newer than 5.0.

    Enable the following kconfig flags:

    BPF
    BPF_SYSCALL
    BPF_JIT
    HAVE_BPF_JIT
    BPF_EVENTS

    Then reboot into the new kernel.

  2. Install clang and llvm: yum install -y llvm clang libcap-devel
  3. Compile samples with the following steps:
    cd <linux src code>
    make samples/bpf/

    This will compile all available XDP applications.

  4. After compilation finishes, you’ll see all XDP applications under samples/bpf.

Figure 1: XDP Applications under samples/bpf

 

With the above installed, you are now ready to run XDP applications.

XDP applications can run in 2 modes:

  1. Driver path – Requires an implementation in the NIC driver. Works at page granularity; no SKBs are created, so performance is significantly better. Mellanox NICs support this mode.
  2. Generic path – Works with any network device. Operates on SKBs, so performance is considerably lower. (A minimal loader sketch showing how either mode is selected follows this list.)
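Here is that loader sketch. It assumes a recent libbpf (older versions expose bpf_set_link_xdp_fd() instead of bpf_xdp_attach()) and loads the illustrative xdp_drop_all.o object from the earlier sketch; the programs under samples/bpf ship with their own loaders:

    #include <bpf/libbpf.h>
    #include <linux/if_link.h>      /* XDP_FLAGS_DRV_MODE, XDP_FLAGS_SKB_MODE */
    #include <net/if.h>             /* if_nametoindex() */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2)
            return 1;

        int ifindex = if_nametoindex(argv[1]);      /* e.g. "ens1f0" */
        struct bpf_object *obj = bpf_object__open_file("xdp_drop_all.o", NULL);

        if (!obj || bpf_object__load(obj))
            return 1;

        int prog_fd = bpf_program__fd(bpf_object__next_program(obj, NULL));

        /* XDP_FLAGS_DRV_MODE selects the driver path (requires driver support,
         * e.g. mlx5); use XDP_FLAGS_SKB_MODE to fall back to the generic path. */
        if (bpf_xdp_attach(ifindex, prog_fd, XDP_FLAGS_DRV_MODE, NULL)) {
            fprintf(stderr, "failed to attach the XDP program\n");
            return 1;
        }
        return 0;
    }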

Let’s run XDP_DROP in the driver path. XDP_DROP is one of the simplest and fastest ways to drop a packet in Linux. It instructs the driver to drop the packet at the earliest RX stage in the driver, which simply means the packet is recycled back into the RX ring queue it just “arrived” on.

The xdp1 application located at <linux_source>/samples/bpf/ implements XDP Drop.

  1. Choose a traffic generator. We use TRex, available at: https://trex-tgn.cisco.com/trex/release/latest
  2. On the RX side, launch xdp1 in the driver path using the following command:
    1. <PATH_TO_LINUX_SOURCE>/samples/bpf/xdp1 -N <INTERFACE>  # -N can be omitted
  3. The XDP drop rate can be seen in the application output as well as in the ethtool counters:

Using ethtool: ethtool -S <intf> | grep -iE rx[0-9]*_xdp_drop

Figure 2: Verify XDP drop counter using ethtool counters

Please visit our booth at the OCP Virtual Global Summit on May 12-15, 2020 to see a live demo running over our ConnectX-5® OCP 3.0 NIC and discuss the solution with our team.

About Nandini Shankarappa

Nandini Shankarappa is a Senior Solution Engineer at Mellanox Technologies and works with Web 2.0 and HPC customers. Prior to Mellanox, she worked at several networking companies spanning wireless, storage networking, and software-defined networking. Ms. Shankarappa holds an MS in Telecommunications from the University of Colorado, Boulder, and completed her undergraduate studies in India.
