Hybrid and Multi-Cloud Overlay — Part 1 — Introduction
Do you like being in lockdown?
Locked down but not locked out. An experience of a lifetime.
Despite the challenges posed by lockdown, we can all agree that the one thing in abundance was time. There weren't many toilet rolls, but there was plenty of time.
I took up a personal project inspired by one of the solutions that my team and I had brainstormed and implemented on the fly before. A few years back, I was leading a hybrid cloud migration project. My team had a strict deadline to meet a milestone.
This particular client had 100+ applications and multiple circuits to various third parties, partners and suppliers. Our migration strategy was to move all applications and infrastructure as they were, without much change. My team built the infrastructure to move the applications. One of the third-party circuit providers didn't play ball: just weeks before the migration, they told us they were slightly grey on the circuit delivery date and that it would probably arrive after the planned migration date. Should we move the migration date, or proceed as planned? We went ahead with the plan.
How did we resolve the issue?
We deployed a new layer 2 connection between the legacy datacentre and the new hybrid datacentre. The layer 2 overlay let us retain the design and follow it as planned, without much change.
I worked on application layer gateways and the tunnelling protocols IPsec and GRE when I started my career as a trainee engineer. One of the interesting projects I worked on was carrying VLANs over GRE with IPsec protection on: we managed to extend the VLANs over the Internet. This was back in 2004–2005. Overlay has evolved over time.
There are some disadvantages to overlay. Tunnelling is subject to routing and switching loops, and it is difficult to get deterministic performance in an overlay because of the encapsulation overhead and bandwidth-hungry traffic such as broadcast and multicast. Routing loops can be avoided by isolating the overlay routing from the underlay routing, and switching loops can be handled with a simple spanning-tree solution. In an overlay, your throughput and performance depend heavily on the type of traffic you send; however, these issues can be mitigated by changing the underlying infrastructure configuration. You cannot avoid the overhead itself, but you can reduce its impact, and the risk of fragmentation, by increasing the MTU and reducing latency on the links. Similarly, broadcasts and multicasts can be contained at the source to limit their impact.
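To make the overhead point concrete, here is a minimal back-of-the-envelope sketch of how much payload MTU each encapsulation costs. The header sizes below are typical approximations, not exact figures: IPsec overhead in particular varies with cipher, padding and mode, so treat these constants as illustrative assumptions.

```python
# Rough illustration of tunnel overhead: encapsulation headers shrink the
# effective payload MTU available to the inner traffic.

GRE_OVERHEAD = 24        # outer IPv4 header (20) + basic GRE header (4)
VXLAN_OVERHEAD = 50      # outer IPv4 (20) + UDP (8) + VXLAN (8) + inner Ethernet (14)
IPSEC_ESP_OVERHEAD = 73  # approximate worst case for ESP tunnel mode (cipher-dependent)

def effective_mtu(link_mtu: int, overhead: int) -> int:
    """Payload bytes left per packet once the tunnel headers are added."""
    return link_mtu - overhead

if __name__ == "__main__":
    for name, overhead in [("GRE", GRE_OVERHEAD),
                           ("VXLAN", VXLAN_OVERHEAD),
                           ("GRE over IPsec", GRE_OVERHEAD + IPSEC_ESP_OVERHEAD)]:
        print(f"{name:16s} over a 1500-byte link -> {effective_mtu(1500, overhead)} bytes")
```

This is also why raising the underlay MTU (e.g. jumbo frames at 9000 bytes) helps so much: the tunnel headers then eat into headroom instead of forcing fragmentation of standard 1500-byte inner packets.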
Overlay is really helpful during migrations when you don't want many changes to the underlay infrastructure: for example, if you want to host nested infrastructure such as a hypervisor on top of a hypervisor, run a container overlay, or carry IPv6 over IPv4. I have seen one of my customers use an overlay to segregate traffic between different departments over shared infrastructure to meet security requirements.
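As one concrete instance of the "IPv6 overlay in IPv4" case, here is a sketch of 6in4 encapsulation: the IPv6 packet is simply carried as the payload of an IPv4 packet with protocol number 41. The addresses below are documentation-range placeholders, and this builds the bytes only; a real tunnel endpoint would, of course, send them on a raw socket or use the kernel's `sit` device instead.

```python
import socket
import struct

def ipv4_checksum(header: bytes) -> int:
    """Standard Internet checksum (ones' complement sum) over an IPv4 header."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def encapsulate_6in4(ipv6_packet: bytes, src: str, dst: str) -> bytes:
    """Wrap an IPv6 packet in an IPv4 header with protocol 41 (6in4)."""
    total_len = 20 + len(ipv6_packet)
    header = struct.pack("!BBHHHBBH4s4s",
                         0x45, 0,              # version 4, IHL 5; DSCP/ECN
                         total_len,            # total length incl. payload
                         0, 0,                 # identification, flags/fragment
                         64, 41,               # TTL; protocol 41 = IPv6-in-IPv4
                         0,                    # checksum placeholder
                         socket.inet_aton(src),
                         socket.inet_aton(dst))
    checksum = ipv4_checksum(header)
    return header[:10] + struct.pack("!H", checksum) + header[12:] + ipv6_packet
```

The same pattern, an outer header glued onto an untouched inner packet, is all an overlay fundamentally is; GRE and VXLAN just use different outer headers.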
In the above diagram, the orange network and the blue network don't mix with each other even though they sit on the same hypervisor.