Hybrid and Multi-Cloud Overlay — Part 2 — Use case

Ramesh Rajendran
3 min read · Sep 24, 2020


Some applications are quick wins in a cloud migration strategy; Exchange, for example, can be quickly replaced with a SaaS offering such as Office 365. But custom-built legacy applications cannot be easily transformed into cloud-native applications, so the major cloud service providers recommend moving them unchanged ("lift and shift") as part of an enterprise cloud migration strategy. Overlays are really helpful when you want to extend your infrastructure over multiple clouds without affecting the underlying infrastructure.

To demonstrate a use case for overlays, I developed an overlay solution spanning a hybrid, multi-cloud environment.

Virtual machines and containers in the same LAN

Think of it as VMs and containers connected to the same LAN. Inside the overlay, IPv4 and IPv6 run independently of the underlying infrastructure. I used DevOps tools to deploy, test, and destroy the entire solution dynamically.

The cloud environments I picked for this solution are:

1. Public clouds: AWS, Azure, GCP, Oracle Cloud, and Alibaba Cloud

2. A private data centre based on vSphere

Even if you don't want to deploy an overlay or go multi-cloud, you should be able to use just a subset of the scripts to create parallel Jenkins pipelines, configure security policies, build virtual machines, and attach multiple NICs in the major cloud environments.
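The pipelines themselves are Jenkins jobs, so I won't reproduce them inline; but as a rough, hypothetical analogue, the parallel per-cloud stages boil down to fanning out one deploy step per provider. A minimal Python sketch, assuming one Terraform working directory per cloud (the real repository layout may differ):

```python
# Hypothetical analogue of the Jenkins parallel stages: fan out one
# "terraform apply" per cloud provider and collect the results.
import subprocess
from concurrent.futures import ThreadPoolExecutor

CLOUDS = ["aws", "azure", "gcp", "oracle", "alibaba", "vsphere"]

def deploy(cloud):
    # Assumes one Terraform working directory per cloud; the real
    # repository layout may differ.
    result = subprocess.run(
        ["terraform", "apply", "-auto-approve"],
        cwd=f"./terraform/{cloud}",
        capture_output=True,
        text=True,
    )
    return cloud, result.returncode

with ThreadPoolExecutor(max_workers=len(CLOUDS)) as pool:
    for cloud, rc in pool.map(deploy, CLOUDS):
        print(f"{cloud}: {'ok' if rc == 0 else 'failed'}")
```

A Jenkins pipeline achieves the same fan-out with a `parallel` block, one branch per cloud.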

Initially, I started with a point-to-point overlay over just two clouds; a sketch of one tunnel endpoint follows the figure below.

Point-to-point overlay
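The post doesn't reproduce the tunnel configuration itself, so here is a minimal sketch of what one point-to-point VXLAN endpoint looks like with iproute2 on Ubuntu. The interface names, VNI, and underlay/overlay addresses are all hypothetical:

```python
# One side of a hypothetical point-to-point VXLAN tunnel (run as root);
# the peer runs the mirror image with the addresses swapped.
import subprocess

def sh(cmd):
    subprocess.run(cmd.split(), check=True)

REMOTE_UNDERLAY = "203.0.113.20"  # peer's public/underlay IP (example)
VNI = 42                          # VXLAN network identifier (example)

# Tunnel interface over the underlay; 4789 is the IANA-assigned VXLAN port.
sh(f"ip link add vxlan0 type vxlan id {VNI} remote {REMOTE_UNDERLAY} dstport 4789 dev eth0")

# Dual-stack overlay addresses, deliberately unrelated to the underlay.
sh("ip addr add 10.200.0.1/24 dev vxlan0")
sh("ip -6 addr add fd00:200::1/64 dev vxlan0")
sh("ip link set vxlan0 up")
```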

But where is the fun in an overlay across just two clouds? I wanted to put all the well-known cloud providers in the mix, so I picked a hub-and-spoke scenario.

Hub and Spoke - vSphere as the hub

The design rationale: I needed a topology that could incorporate most of the public clouds to demonstrate the overlay, and hub and spoke can accommodate multiple clouds. Moreover, with the DevOps scripts, I should be able to swap the hub and a spoke dynamically.

Hub and Spoke - AWS as the hub

The overlay protocol can be switched between VXLAN and GENEVE, as in the sketch below. The solution can also be implemented in other topologies, such as multi-hub or cloud-within-cloud. In the hub-and-spoke scenario, all traffic between spokes is relayed via the hub. All VMs and containers sit in the same IPv4 and IPv6 subnets, and they were able to communicate over both IPv6 and IPv4 with no dependency on the underlying infrastructure.
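Switching protocols is straightforward to script because iproute2 exposes VXLAN and GENEVE as link types with near-identical parameters. A hypothetical sketch of such a switch (the interface names and addresses are assumptions, not the actual scripts):

```python
# Sketch: the same tunnel rendered as either VXLAN or GENEVE.
import subprocess

def create_tunnel(proto, remote, vni=42):
    if proto == "vxlan":
        # VXLAN takes a local egress device; 4789 is its default UDP port.
        cmd = f"ip link add ovl0 type vxlan id {vni} remote {remote} dstport 4789 dev eth0"
    elif proto == "geneve":
        # GENEVE defaults to UDP port 6081 and needs no parent device.
        cmd = f"ip link add ovl0 type geneve id {vni} remote {remote}"
    else:
        raise ValueError(f"unsupported overlay protocol: {proto}")
    subprocess.run(cmd.split(), check=True)
    subprocess.run("ip link set ovl0 up".split(), check=True)

create_tunnel("geneve", "203.0.113.20")  # or create_tunnel("vxlan", ...)
```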

Let's look at the components in this hub-and-spoke topology.

I call the virtual machine that establishes tunnels with the other public clouds a router, and the virtual machine that establishes a tunnel with the router a client. Traffic between two spoke sites is always sent via the router at the hub site. The router VM hosts at least three Docker containers in each environment. All VMs and containers have dual-stack configurations, and the VMs are built on Ubuntu. Every router virtual machine has two interfaces; a client virtual machine in a public cloud has only one.
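The post doesn't show the router's internal plumbing, but one plausible way to land every spoke in the same L2 segment is to bridge the per-spoke tunnel interfaces on the hub router and attach its containers to the same bridge. A sketch under those assumptions (all interface, subnet, and container names are hypothetical):

```python
# Plausible hub-router plumbing: bridge the per-spoke tunnel interfaces
# so every spoke lands in one L2 segment, then attach local containers.
import subprocess

def sh(cmd):
    subprocess.run(cmd.split(), check=True)

SPOKE_TUNNELS = ["vxlan_aws", "vxlan_azure", "vxlan_gcp"]  # example names

sh("ip link add br-overlay type bridge")
for ifname in SPOKE_TUNNELS:
    sh(f"ip link set {ifname} master br-overlay")  # enslave tunnel to bridge
sh("ip link set br-overlay up")

# Dual-stack presence for the router itself on the overlay LAN.
sh("ip addr add 10.200.0.254/24 dev br-overlay")
sh("ip -6 addr add fd00:200::fe/64 dev br-overlay")

# One way to put Docker containers on the same segment: a macvlan
# network whose parent is the overlay bridge.
sh("docker network create -d macvlan --subnet 10.200.0.0/24 -o parent=br-overlay overlay-lan")
sh("docker run -d --network overlay-lan --ip 10.200.0.11 --name c1 ubuntu sleep infinity")
```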

Components — Cloud
Components — vSphere

One difference is that the client virtual machine in vSphere has two interfaces, because I preferred to have a dedicated management interface in the private cloud. All public cloud virtual machines are managed over the internet, while private cloud virtual machines are managed within the environment over the dedicated management interface.

Part 1 — video blog (also covers some of Part 2)
Part 2 — video blog

Previous Page (Part 1) · Next Page (Part 3)


Written by Ramesh Rajendran

Freelancer with 16 years of experience in Hybrid & multi-cloud, security, networking & Infrastructure. Working with C-level execs. Founder zerolatency.solutions
