Networking Your Nutanix Enterprise Cloud To Scale

 
Data Center, Switches, VXLAN

Leaf-Spine Architecture with Mellanox Networking Builds Scalable and Efficient Infrastructure

Your enterprise cloud on the hyper-converged platform is built to scale. As you grow your business with more customers and new services, your enterprise cloud has to meet your business needs both today and in the future. Can your current network infrastructure also scale efficiently to accommodate those future needs? Keep in mind that it is always more expensive to make changes once a network is fully operational.

There is a good chance that your current network is built on a three-tier architecture, which is fairly simple to expand physically when applications run on dedicated physical servers.

The three-tier architecture consists of the access layer where servers are connected, the aggregation layer where the access switches are connected upstream, and the core layer that connects everything. When more servers are connected to the access layer, you add access switches to expand the switch ports at L2 as needed. This is quite straightforward: calculate the switch ports required and check the rate of oversubscription to the upstream network to confirm there is sufficient bandwidth.
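As a rough illustration of that check, here is a minimal Python sketch (with hypothetical port counts and speeds) that computes the oversubscription ratio as server-facing bandwidth divided by uplink bandwidth:

```python
def oversubscription_ratio(server_ports, server_speed_gbps,
                           uplink_ports, uplink_speed_gbps):
    """Ratio of downstream (server-facing) bandwidth to upstream (uplink) bandwidth."""
    downstream = server_ports * server_speed_gbps
    upstream = uplink_ports * uplink_speed_gbps
    return downstream / upstream

# Hypothetical access switch: 48 x 10Gb/s server ports, 4 x 40Gb/s uplinks
print(oversubscription_ratio(48, 10, 4, 40))  # 3.0 -> 3:1 oversubscription
```

A ratio closer to 1:1 leaves more headroom; how much oversubscription is acceptable depends on how much of the traffic actually leaves the L2 segment.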

Much of the data in this framework is processed and remains within its dedicated domain (L2 segment). When a service in one physical domain needs to reach another domain, the traffic often flows north-south. For example, a request from the web server goes upstream to the aggregation and core layers and then travels down to the database server in another physical L2 segment; the response traverses the three layers in the same fashion. But this network topology cannot cope with the scalability and performance demands of hyper-converged infrastructure in modern data centers.

With hyper-converged infrastructure, a cluster of x86 servers is “glued” together by a software control plane to form unified compute and storage pools. All applications are virtualized to run in a virtual machine (VM) or a container, and are distributed (and migrated) across the cluster through policy-based automation. Application I/Os are managed at the VM level, but the physical data is distributed across the cluster in a single storage pool.

Access to the shared storage, the data protection mechanisms (replication, backup, and recovery), and VM migration for load balancing now generate a deluge of network traffic between the nodes in the cluster, the so-called east-west traffic.

Now, the three-tier architecture reaches its limit and breaks down.

For traffic switched within an L2 segment, the commonly used Spanning Tree Protocol (STP) takes its toll: disabling redundant links to break loops leaves much of the link capacity unused. Adding link capacity to accommodate the east-west traffic is quite expensive and inefficient.

For a large cluster that spans multiple racks and L2 segments, the traffic has to go through the aggregation and core layers, which increases latency. This large volume of upstream traffic drives a higher rate of oversubscription from the access layer to the aggregation and core layers, which inevitably causes congestion and degraded, unpredictable performance.

For storage I/O, degraded and unpredictable performance is the worst possible scenario.

Because of these architectural shortcomings, modern data centers are adopting the leaf-spine architecture instead. Built from just two layers, leaf (access) and spine, the leaf-spine architecture has a simple topology in which every leaf switch is directly connected to every spine switch.

In this topology, any two endpoints communicate across at most one spine hop (leaf-spine-leaf), which ensures consistent and predictable latency. By using OSPF or BGP with equal-cost multi-path routing (ECMP), the network utilizes all available links and achieves maximal link capacity utilization. Furthermore, adding more links between each leaf and the spines provides additional bandwidth between leaf switches.
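To make the ECMP behavior concrete, here is a minimal Python sketch, illustrative only, of how a flow's 5-tuple can be hashed to select one of several equal-cost spine uplinks (real switches do this hashing in hardware; the spine names and addresses below are hypothetical):

```python
import hashlib

def ecmp_uplink(src_ip, dst_ip, src_port, dst_port, proto, uplinks):
    """Pick a spine uplink for a flow by hashing its 5-tuple.
    Packets of one flow always take the same path; many flows spread out."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return uplinks[digest % len(uplinks)]

spines = ["spine1", "spine2", "spine3", "spine4"]   # hypothetical uplink names
print(ecmp_uplink("10.0.1.10", "10.0.2.20", 49512, 3260, "tcp", spines))
```

Because the hash is deterministic per flow, each flow stays on one path and its packets arrive in order, while the aggregate of many east-west flows spreads across all spines.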

In addition, overlay technologies such as VXLAN can further increase efficiency by extending L2 segments across the routed leaf-spine fabric. As a result, the leaf-spine architecture delivers optimal and predictable network performance for hyper-converged infrastructure.
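For reference, the VXLAN encapsulation itself is lightweight: an 8-byte header (RFC 7348) carried over UDP port 4789, with a 24-bit VNI identifying the overlay segment. The following Python sketch builds just that header, as an illustration rather than a full encapsulation stack:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination UDP port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348:
    a flags byte with the I bit (0x08) set to mark the VNI as valid,
    reserved bits, the 24-bit VNI, and a final reserved byte."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit value"
    flags_word = 0x08 << 24          # I flag in the top byte, reserved bits zero
    return struct.pack("!II", flags_word, vni << 8)

print(vxlan_header(5001).hex())      # '0800000000138900' -> VNI 0x001389 == 5001
```

The 24-bit VNI allows roughly 16 million overlay segments, compared with 4094 usable VLAN IDs, which is one reason overlays scale well across a routed leaf-spine fabric.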

In a nutshell, the leaf-spine architecture provides maximal link capacity utilization, optimal and predictable performance, and the best possible scalability to accommodate dynamic, agile data movement between nodes on hyper-converged infrastructure. For this reason, it is only fitting that the leaf-spine network is constructed with Mellanox Spectrum™ switches, which provide line-rate, resilient network performance and enable a high-density, scalable rack design.

Mellanox Spectrum switches deliver non-blocking, line-rate performance at link speeds from 10Gb/s to 100Gb/s at any frame size. In particular, the 16-port SN2100 Spectrum switch is a highly versatile top-of-rack (ToR) switch in a half-width, 1RU form factor.

The 16 ports on the SN2100 can run at 10, 25, 40, 50, and 100Gb/s. When more switch ports are needed, a single physical port can be split into four 10 or 25Gb/s ports using breakout cables. The SN2100 can therefore be configured as a 16-port 10/25Gb/s switch, or as a 48-port 10/25Gb/s switch with four 40/100Gb/s ports reserved for uplinks.
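The port arithmetic behind those configurations is straightforward; the short Python sketch below (using the breakout rules described above) counts access and uplink ports when some of the 16 physical ports are broken out 4-way:

```python
def breakout_config(split_ports, total_ports=16):
    """Count access and uplink ports when `split_ports` of the physical ports
    are broken out 4-way into 10/25Gb/s lanes and the rest stay at 40/100Gb/s."""
    access = split_ports * 4
    uplinks = total_ports - split_ports
    return access, uplinks

print(breakout_config(0))    # (0, 16)  -> 16 x 10/25/40/100Gb/s ports
print(breakout_config(12))   # (48, 4)  -> 48 x 10/25Gb/s + 4 x 40/100Gb/s uplinks
```

Splitting 12 of the 16 ports yields the 48-port configuration, with four ports left at full speed for uplinks.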

The half-width form factor of the SN2100 allows you to install two of them side by side in a 1RU rack space and run MLAG (Multi-chassis Link Aggregation Group) between them to create a highly available L2 fabric. Configuring link aggregation between the physical switch ports and the hyper-converged appliances utilizes all physical network connections to actively load balance VMs, a key advantage particularly in all-flash clusters.

It is also worth pointing out that the 100Gb/s uplinks available on Spectrum switches provide additional link capacity between leaf and spine switches, which is especially useful with all-flash platforms.

More details are illustrated in the recently published Nutanix solution note. As the leading enterprise cloud solution provider, Nutanix sees more and more customers migrating their data centers to Nutanix hyper-converged platforms, from SMBs with a half-rack deployment to large enterprise customers whose clouds span multiple racks. Customers are consolidating more intensive workloads onto their clouds and starting to use faster flash storage. For these Nutanix-based enterprise cloud deployments, “Designing and implementing a resilient and scalable network architecture ensures consistent performance and availability when scaling.”

“Mellanox switches allow you to create a network fabric that offers predictable, low-latency switching while achieving maximum throughput and linear scalability,” noted Krishna Kattumadam, Sr. Director Solutions and Performance Engineering at Nutanix.

“Investing in, and deploying a Mellanox solution, future-proofs your network, ensures that it can support advances in network interface cards beyond the scope of 10 GbE NICs (to 25, 40, 50, or 100 GbE and beyond),” continued Krishna Kattumadam. “Coupled with a software-defined networking solution, Mellanox network switches offer such benefits as manageability, scalability, performance, and security, while delivering a unified network architecture with lower OpEx.”

If you are architecting your network for a Nutanix enterprise cloud, the Nutanix solution note presents solutions that can help you achieve scale and density with Mellanox networking. I will leave much for you to read, and would like to conclude this blog with the following network diagrams. As shown, the SN2100 fits in nicely with the right port count for a half-rack deployment of 4-12 nodes, typical of SMBs. When the data center grows and more server nodes are added, the same SN2100 switches can also support a full-rack deployment of up to 24 nodes. For large enterprise cloud deployments consisting of multiple racks, Mellanox Spectrum switches scale easily in a leaf-spine topology with great efficiency.

[Network diagrams: half-rack (4-12 nodes), full-rack (up to 24 nodes), and multi-rack leaf-spine deployments with Mellanox Spectrum switches]

You can find more technical details about rack solution design using Mellanox Spectrum Switches in the Mellanox Community and on www.mellanox.com/Ethernet.

Follow us on Twitter: @MellanoxTech

About Jeff Shao

Jeff Shao is Director, Ethernet Alliances at Mellanox Technologies. Prior to Mellanox, he held senior product management and marketing roles at LSI (Avago), as well as Micrel, Vitesse Semiconductor, and Promise Technology. He holds an MBA from the University of California, Berkeley and a Bachelor of Science in Physics from the University of Science & Technology of China.
