Hybrid and Multi-Cloud Overlay — Part 3 — DevOps, Tools and Scripting

Ramesh Rajendran
Sep 24, 2020

I preferred to bring up the environment dynamically and tear it down when it is not required. To do this, I picked:

· OVS, an open-source implementation of a virtual switch

· the CI/CD tool Jenkins

· shell scripts

· Ansible

· Terraform

· Packer

· Groovy, which comes with Jenkins

· Docker

· Ubuntu virtual machines

The scripts dynamically brought up Ubuntu virtual machines from ISO and Docker containers, and configured the overlay across all clouds.

I used multiple parallel pipelines within Jenkins, which helped me build the infrastructure faster across multiple clouds at once. Building a virtual machine from ISO takes most of the time, as I didn’t use a pre-built virtual machine template. vSphere takes less than 5 minutes with a pre-built virtual machine template. In the public clouds, the script takes about 5 minutes to build the infrastructure.

Jenkins — multi parallel pipeline

I have uploaded the scripts to the repository below.


The script has a number of environment variables that define the backbone configuration required in each cloud environment. I added comments throughout the scripts where I could, and you can also find information in the Jenkinsfile and the README file.

With the environment variables, you can choose the overlay protocol (Geneve or VXLAN), choose whether to destroy the resources once you have tested them, choose subnets, and define the cloud API key files required to connect to each cloud.

Jenkins — Environment variables — Important ones

  • CLOUD_LIST — “aws_azure_gcp_ali_oracle_vsphere”

You can either define them in the main Jenkins script or set them in the Jenkins GUI. You can find more environment variables in the code repository.
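As a sketch, such variables can live in the Jenkinsfile’s environment block. Apart from CLOUD_LIST, the variable names below are illustrative, not the exact names used in the repository:

```groovy
pipeline {
    agent any
    environment {
        // Clouds to build, separated by underscores
        CLOUD_LIST         = 'aws_azure_gcp_ali_oracle_vsphere'
        // Overlay protocol: 'geneve' or 'vxlan' (name is illustrative)
        OVERLAY_PROTOCOL   = 'geneve'
        // Destroy all resources after testing (name is illustrative)
        DESTROY_AFTER_TEST = 'true'
        // Path to a cloud API key file (name and path are illustrative)
        AWS_KEY_FILE       = '/var/lib/jenkins/keys/aws.json'
    }
    stages { /* ... */ }
}
```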

When it comes to keys, I recommend using a vault or some form of encryption to store the API keys. In this project, I used environment variables to store the keys, as I didn’t share the script with others during the development phase. Keys won’t be visible in the logs; however, if you switch on debug, they will be. Pay extra attention when you publish scripts to GitHub and when you share log files.
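If you do move the keys into the Jenkins credentials store, a minimal sketch using the Credentials Binding plugin looks like this; the credential ID and the Terraform variable name are hypothetical:

```groovy
// 'aws-api-key' is a hypothetical credential ID in the Jenkins store
withCredentials([string(credentialsId: 'aws-api-key', variable: 'AWS_API_KEY')]) {
    // AWS_API_KEY is masked as **** in the console log
    sh 'terraform apply -auto-approve -var "api_key=$AWS_API_KEY"'
}
```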

In this pipeline, you can see six stages.

Let’s look at the stages one by one.

1. In the first stage, “Initializing variables”, as the name states, I assign values to the variables. If you want to assign values dynamically prior to creating resources, you can do it in this stage.

2. In the second stage, “Infrastructure changes”, a Terraform script brings up virtual machines in every environment. This stage also provisions the virtual machines with the necessary packages.

3. In the third stage, “Containers & VMs config”, Ansible scripts bring up containers in all the router virtual machines. We also gather the IP addresses needed to terminate tunnels in this stage.

4. The fourth stage, “tunnel config”, is where all the magic happens: connectivity is established based on the information prepared in the previous stage. Ansible scripts bring up tunnels between the hub and spoke sites over the router virtual machines. Initially, I thought of writing the tunnel configuration in Python; however, I managed to accommodate it within Groovy.

5. In the final stage, “testing”, connectivity is tested from the client virtual machine located in the hub site to all the containers and client virtual machines located in the spoke sites.

Once testing is completed, the scripted stage “unconfig” is called from the main script to destroy all resources created in the previous stages.
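On each router VM, the tunnel step in stage 4 essentially boils down to a couple of OVS commands. A minimal sketch as a pipeline step, assuming a bridge named br0; the port name and remote IP are illustrative, and the real scripts drive this through Ansible:

```groovy
// Hedged sketch: bridge/port names and the remote IP are illustrative
def remoteIp = '203.0.113.10'   // public IP of the spoke router VM
def proto    = 'geneve'         // or 'vxlan', per the chosen overlay protocol
sh """
    ovs-vsctl --may-exist add-br br0
    ovs-vsctl --may-exist add-port br0 tun0 -- \\
        set interface tun0 type=${proto} options:remote_ip=${remoteIp}
"""
```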

On some occasions, parallel pipelines try to access a variable at the same time. To avoid overwriting, I defined lock blocks within the parallel pipeline. Similar to a semaphore, a lock prevents the same variable from being accessed multiple times simultaneously.

Sample code — Jenkins pipeline — Multiple Stages — lock():

lock("ipv6 allocations lock") {
    echo "$ipv6_index"
    starting_index = ipv6_index
}

If a stage fails, you don’t want to stop the whole build process. You can dynamically mark a stage as unstable, instead of failing the whole build, using the “unstable” function.

Jenkins — multi parallel pipeline — skipping stages when hub failed
Jenkins — multi parallel pipeline — dynamically removing the failed stage

You can dynamically add or remove stages.

Jenkins — multi parallel pipeline — single cloud
Jenkins — multi parallel pipeline — three clouds

As you see in this screenshot, I used only three clouds in this deployment. I greyed out unused clouds during build and removed all other clouds in the remaining stages.

Sample code — Declarative pipeline — Parallel build:

stage('Infrastructure changes') {
    when {
        branch 'master'
    }
    failFast true
    parallel {
        stage('VSphere') {
            when {
                // illustrative condition: only run when vSphere is selected
                expression { active_cloud_list.contains('vsphere') }
            }
            steps {
                script {
                    try {
                        // Terraform apply and provisioning steps go here
                    } catch (exc) {
                        echo "Caught: " + exc.toString()
                        unstable("${STAGE_NAME} failed!")
                    }
                }
            }
        }
    }
}
Sample code — Parallel build — Dynamic stages:

stage('Containers & VMs config') {
    steps {
        script {
            def containers_vm_config_list = [:]
            active_cloud_list.split('_').each {
                def cloud = it  // capture the loop variable for the closure below
                containers_vm_config_list[cloud] = {
                    node('master') {
                        stage(cloud) {
                            try {
                                // Ansible container/VM configuration goes here
                            } catch (exc) {
                                echo "Caught: " + exc.toString()
                                unstable("${STAGE_NAME} failed!")
                            }
                        }
                    }
                }
            }
            parallel containers_vm_config_list
        }
    }
}
Part 3 — Video blog

Previous Page (Part 2) …………………………………………Next Page (Part 4)



Ramesh Rajendran

Freelancer with 16 years of experience in hybrid & multi-cloud, security, networking & infrastructure. Working with C-level execs. Founder of zerolatency.solutions