As cloud use cases and the public cloud mature, hybrid cloud and multi-cloud adoption is growing significantly. Hybrid cloud is the preferred enterprise strategy, according to RightScale’s 2017 State of the Cloud Report. The trend clearly shows that more and more enterprises are looking to deploy less critical workloads in the cloud while running critical databases (or even applications) in on-premises data centers. The concept behind this trend is known as edge computing (sometimes also called fog computing), where most of the local and critical processing is done at the edge instead of sending all the data to the cloud. Public cloud providers have clearly identified the edge computing and hybrid cloud trend as well: Azure and the on-premises Azure Stack are both evidence of it.
Almost all enterprises using the cloud believe in a multi-cloud approach, making sure they are not locked in with just one cloud vendor by keeping workloads on premises or spreading them across multiple cloud vendors. Hybrid cloud therefore comes with two options:
- On-premises + public cloud combination
- Public cloud 1 + public cloud 2 combination
In both cases, networking for the hybrid cloud is key.
Cloud-Ready Networks
In the past few years, networking has also evolved to support cloud use cases. BYOIP, multitenancy, agile workloads, DevOps, massive data growth, machine learning, and advanced visibility requirements have all driven networks to evolve.
- New technologies such as VXLAN, BGP Unnumbered, EVPN, Segment Routing, and advanced visibility have evolved networking to fit the cloud's needs for scale, agility, programmability, and flexibility.
- Open Networking has helped tier-1 and tier-2 cloud vendors and cost-savvy "as a service" cloud providers grow their data centers exponentially while still keeping costs down.
- Open-source ecosystems such as OpenStack have helped accelerate innovation while bringing all the components of the cloud together without vendor lock-in.
What about Hybrid Cloud Networks?
Hybrid cloud networks require special networking because they must connect workloads sitting in different environments, which in turn belong to different domains and likely run different protocols. Data Center Interconnect (DCI) is another term used for hybrid cloud networking. Many technologies have been available for DCI in the past. QinQ is a well-known technology in which one VLAN is encapsulated inside another VLAN, essentially preserving the service tag with respect to the customer. Beyond QinQ, there have been technologies such as EoMPLS, VPLS, and OTV. All of these technologies were good fits for the challenges of older data centers.
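To make the QinQ double-tagging idea concrete, here is a minimal Linux sketch using standard iproute2 commands. The interface name `eth0` and the VLAN IDs are assumptions for illustration; on real provider equipment this would be switch configuration rather than host commands.

```shell
# Outer (provider/service) tag: 802.1ad S-VLAN 100
ip link add link eth0 name eth0.100 type vlan protocol 802.1ad id 100

# Inner (customer) tag: 802.1Q C-VLAN 200, stacked on top of the S-VLAN
ip link add link eth0.100 name eth0.100.200 type vlan protocol 802.1Q id 200

ip link set eth0.100 up
ip link set eth0.100.200 up
```

Frames leaving `eth0.100.200` carry both tags: the customer tag (200) is preserved inside the provider tag (100), which is exactly the service-preservation property the text describes.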
Newer data centers designed for the latest cloud properties (multi-tenancy, high speed, application-level segregation, etc.) require a more seasoned DCI technology: a protocol that can identify not only a customer network (in a multi-tenant environment) but also the service running inside that customer network, which is another layer of segmentation within the customer network.
QinVNI for Hybrid Cloud Networks
In the past, QinQ was used to stretch customer VLANs between data centers. With VXLAN becoming the prominent way of connecting clouds over L3 fabrics, QinQ has evolved into QinVNI. The concept remains the same: preserve the service and customer tags and map them to the right customer and service inside a multi-tenant environment. The following figure explains how this feature works.
In the above example, a single translation happens at the edges while the internal service tag is preserved and delivered intact to the cloud. The technology scales to the number of VXLAN tunnels supported by the edge switches.
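A rough Linux-bridge approximation of the edge translation can illustrate the idea. This is not the switch configuration from the figure; all names (`eth1`, VNI 10100, the local IP) are assumptions. The outer customer tag is popped at the edge, while inner service tags ride inside the VXLAN payload untouched:

```shell
# Customer A's traffic maps to VNI 10100; a static (non-learning) VTEP
ip link add vxlanA type vxlan id 10100 local 10.0.0.1 dstport 4789 nolearning

# Customer-facing subinterface: pops the outer 802.1ad customer tag 100,
# leaving the inner 802.1Q service tags intact in the frame
ip link add link eth1 name eth1.100 type vlan protocol 802.1ad id 100

# Bridge the two: inner service tags pass transparently into the VNI
ip link add brA type bridge
ip link set vxlanA master brA
ip link set eth1.100 master brA
ip link set brA up && ip link set vxlanA up && ip link set eth1.100 up
```

The VNI identifies the customer; the preserved inner tag identifies the service inside that customer's network, giving the two-layer segmentation described above.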
With the rise of hybrid cloud, it is only a matter of time before you will need to connect to the cloud. Technologies such as VPN gateways and direct connections are available from cloud providers. However, how flexibly you connect to the cloud using those technologies is up to your design. With granularity reaching the workload level, it is high time that networks are defined at the service level using technologies like QinVNI.
Why QinVNI for Hybrid Cloud Networks
QinVNI is a new hybrid cloud network / data center interconnect technology that offers the best of VXLAN and QinQ in a single protocol. Hybrid cloud use cases are expanding, be it storage (storage on premises with DR/backup in the cloud), enterprise (database on premises with compute in the cloud), or multi-cloud scenarios. With the growing number of hybrid cloud use cases, the networking for such scenarios becomes crucial. QinVNI provides multi-tenant hybrid cloud networking by preserving VLANs inside VXLANs.
QinVNI with Mellanox
For QinVNI to work properly at scale, the switch at the edge must have a scalable VXLAN implementation. Below is an example of a proof of concept (POC) that was set up for a well-known cloud provider. The POC demonstrates a multi-tenant environment in which VLANs are preserved inside VXLAN headers and delivered to the on-premises data center. The same VLANs are used for different tenants.
The following section gives details on how one can configure hybrid cloud networking on Spectrum-based Mellanox platforms.
The topology above shows the POC, which has the following components:
- Two data centers (Data Center 1 and Data Center 2). Data Center 1 is the customers' public cloud (for customers A and B), and Data Center 2 is on premises for the same customers A and B.
- Each data center has two servers (each for a tenant), and each server has three VMs (each in one VLAN).
- Each tenant has the same VLANs.
- Each customer is assigned a different VNI.
- In this example, the VTEPs sit on the Data Center 2 servers on one side and on the edge ToR of Data Center 1 on the other side.
- For simplicity, the configurations use static VXLAN tunnels between the VTEP on the compute nodes and the VTEP on the ToR. EVPN can be used instead to advertise VTEPs and MAC/IP addresses, rather than statically configuring VTEPs and learning MAC/IP in the data plane.
This blog does not cover the underlay configuration and assumes there is L3 connectivity in the underlay.
The following is the configuration on the VTEP on the ToR (connected to Data Center 1).
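The original configuration listing is not reproduced here. As an illustrative sketch only, a static VTEP on a Linux-based switch could look like the following; the VNI, addresses, and port name (`swp1`) are all assumptions, not the POC's actual values:

```shell
# VNI 10100 carries customer A; this is a static, non-learning VTEP
ip link add vni10100 type vxlan id 10100 local 172.16.0.1 dstport 4789 nolearning

# Static flood entry (all-zero MAC) pointing at the remote VTEP on Server 2
bridge fdb append 00:00:00:00:00:00 dev vni10100 dst 172.16.0.2

# Bridge the VNI with the customer-facing port; inner service VLANs
# pass through the bridge preserved inside the VXLAN payload
ip link add br10100 type bridge
ip link set vni10100 master br10100
ip link set swp1 master br10100
ip link set br10100 up && ip link set vni10100 up
```

A second VNI would be created the same way for customer B, keeping the two tenants fully separated even though they use the same internal VLAN IDs.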
The following is the on-premises server configuration for the static VTEP (on Server 2):
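Again, the original listing is not reproduced here; this is a hedged sketch of what a static server-side VTEP looks like with standard iproute2 tooling. The VM-facing interface name (`tap0`) and all addresses are illustrative assumptions:

```shell
# Server-side VTEP for customer A (VNI 10100); the peer is the ToR VTEP
ip link add vxlan100 type vxlan id 10100 local 172.16.0.2 dstport 4789 nolearning

# Static flood entry pointing back at the ToR VTEP in Data Center 1
bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 172.16.0.1

# Attach the VM-facing tap interface and the VTEP to one bridge, so VM
# traffic (with its service VLAN tags intact) is tunneled over the VNI
ip link add br100 type bridge
ip link set vxlan100 master br100
ip link set tap0 master br100
ip link set br100 up && ip link set vxlan100 up
```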
QinVNI is the latest and most mature technology for DCI and hybrid networks designed for new-generation leaf/spine L3 fabrics. Mellanox customers have designed, tested, and deployed multiple hybrid cloud networks running at 100GbE with a best-in-class ASIC, without compromising on scale or performance. Contact us today to discuss how we can help with your data center interconnect or hybrid cloud networking challenges.
- Learn more: EVPN