Hybrid and Multi-Cloud Overlay — Part 4 — Challenges — Terraform, Ansible, Packer and vSphere

Ramesh Rajendran
4 min read · Sep 24, 2020


I came across an old-school looping issue while testing the tunnel. I tackled it by enabling spanning tree on the OVS bridge.

sudo ovs-vsctl set bridge l2-br stp_enable=true
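
To confirm the bridge is actually running STP, you can query it back; a quick check, assuming the bridge is named l2-br as above:

# verify STP is enabled on the bridge
sudo ovs-vsctl get bridge l2-br stp_enable

# per-port STP state and role are reported in each port's status column
sudo ovs-vsctl list port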

I came across some challenges while writing this script, especially when I tried to bring up two interfaces on the router virtual machines. I will cover the challenges I had with each environment in the next video.

Let's look at the vSphere on-premises environment.

VM templates require a vCenter license before you can clone virtual machines from them, and my setup didn't have vCenter.

So how did I bring up the VMs? I used Packer to build each VM directly from an ISO image.
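
For context, a minimal vmware-iso builder pointed at a standalone ESXi host looks roughly like this; the ISO, credentials and user variables below are placeholders, not the exact values from my template.

"builders": [
  {
    "type": "vmware-iso",
    "remote_type": "esx5",
    "remote_host": "{{user `ESXI_HOST`}}",
    "remote_username": "{{user `ESXI_USER`}}",
    "remote_password": "{{user `ESXI_PASSWORD`}}",
    "remote_datastore": "datastore1",
    "iso_url": "ubuntu-20.04-live-server-amd64.iso",
    "iso_checksum": "sha256:<checksum>",
    "guest_os_type": "ubuntu-64",
    "ssh_username": "ubuntu",
    "ssh_password": "ubuntu",
    "shutdown_command": "sudo shutdown -P now"
  }
]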

Virtual machines pick up IP addresses dynamically, and there was no way for me to fetch those addresses while building them. In particular, I needed to know the router's public IP address to establish the tunnel.

How did I fix the IP address issue?

Well, I used an old-school trick. Using Packer, I hard-coded a predefined MAC address on the router virtual machine. My DHCP server has a private IP reservation for that MAC address (a sample reservation is shown after the Packer snippet below), and my ISP router has a static NAT configuration for that private IP address. The router virtual machine therefore always uses the same IP address to talk to the internet.

"vmx_data": {
"ethernet0.networkName": "backend",
"ethernet1.networkName": "{{user `ESXI_MGMT_PG`}}",
"ethernet0.present": "TRUE",
"ethernet0.startConnected": "TRUE",
"ethernet0.virtualDev": "e1000",
"ethernet0.addressType": "generated",
"ethernet0.generatedAddressOffset": "0",
"ethernet0.wakeOnPcktRcv": "FALSE",
"ethernet1.present": "TRUE",
"ethernet1.startConnected": "TRUE",
"ethernet1.virtualDev": "e1000",
"ethernet1.addressType": "static",
"ethernet1.address": "{{user `VSPHERE_ROUTER_FRONT_INTF_MAC`}}",
"ethernet1.generatedAddressOffset": "1",
"ethernet1.wakeOnPcktRcv": "FALSE"
}
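
The DHCP side of the trick is just a static reservation for that MAC address. If your DHCP server were dnsmasq-based, for example, the reservation could look like this (the MAC and IP below are made-up placeholders):

# always hand this MAC the same private address
dhcp-host=00:50:56:01:02:03,192.168.1.50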

To connect back to the virtual machines, I needed to know their management IP addresses. It is not easy to find the IP of the client machine while provisioning, but you can work around this with Packer's "file download" feature: grep the IP address from the interface, store it in a file, and download that file. You can then read the management IP from the file.

"provisioners": [  
{
"type": "shell",
"inline": [
"ifconfig | sed -e '0,/RUNNING/d' | sed -ne 's/.*inet \\(.*\\) netmask.*/\\1/gp' | sed -n '1p' >> {{user `ESXI_CLIENT1_NAME`}}.txt"
],
"execute_command": "sudo -E -S bash '{{ .Path }}'",
"only": ["{{user `ESXI_CLIENT1_NAME`}}"]
},
{
"type": "file",
"source": "{{user `ESXI_CLIENT1_NAME`}}.txt",
"destination": "{{user `ESXI_CLIENT1_NAME`}}.txt",
"direction" : "download",
"only": ["{{user `ESXI_CLIENT1_NAME`}}"]
}
]

In the public cloud, only my router virtual machines needed two network interfaces. In a typical VMware environment, however, you usually have a dedicated VM management interface in addition to the payload interfaces, so I deployed two network interfaces for each virtual machine. Assigning multiple Ethernet interfaces to virtual machines with Packer is easy; what is not easy is controlling the order of those interfaces inside the guest.
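
One way to make the ordering deterministic inside the guest, assuming an Ubuntu guest with netplan, is to match each interface by its MAC address and pin its name. This is a sketch of the idea rather than the exact configuration I used; the MACs are placeholders.

# /etc/netplan/01-ifnames.yaml
network:
  version: 2
  ethernets:
    payload0:
      match:
        macaddress: "00:50:56:01:02:03"   # ethernet0 / backend NIC
      set-name: payload0
      dhcp4: true
    mgmt0:
      match:
        macaddress: "00:50:56:01:02:04"   # ethernet1 / management NIC
      set-name: mgmt0
      dhcp4: true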

Where did I use Terraform and Ansible?

Packer is for building a golden image, and it usually shuts down and deletes the virtual machine once the build is completed. Since I am not using vCenter, it is better for me to keep the built VM registered rather than rebuild it every time.

"keep_registered": "true"

I used a Terraform null_resource to power the virtual machines back on after Packer shuts them down.

resource "null_resource" "poweron" {
depends_on = [vsphere_host_port_group.backend]
connection {
type = "ssh"
user = var.VSPHERE_USER
password = var.VSPHERE_PASSWORD
host = var.ESXI_HOST
}
provisioner "remote-exec" {
inline = [
"vim-cmd vmsvc/power.on $(vim-cmd vmsvc/getallvms | grep ${var.ESXI_ROUTER_NAME} | awk '{print $1}')",
"vim-cmd vmsvc/power.on $(vim-cmd vmsvc/getallvms | grep ${var.ESXI_CLIENT1_NAME} | awk '{print $1}')",
"echo ${var.VM_SSH_KEY_FILE} >> test",
]
}
}

In the last stage, how did I delete the virtual machines after testing? The Terraform state file doesn't contain the virtual machines, and Packer is not an appropriate tool for destroying resources. So I used Ansible to shut down and delete the virtual machines.

- name: PowerOff Router
  ignore_errors: yes
  shell: vim-cmd vmsvc/power.off $(vim-cmd vmsvc/getallvms | grep {{ESXI_ROUTER_NAME}} | awk '{print $1}')

- name: PowerOff Client1
  ignore_errors: yes
  shell: vim-cmd vmsvc/power.off $(vim-cmd vmsvc/getallvms | grep {{ESXI_CLIENT1_NAME}} | awk '{print $1}')

- name: Destroy Router
  ignore_errors: yes
  shell: vim-cmd vmsvc/destroy $(vim-cmd vmsvc/getallvms | grep {{ESXI_ROUTER_NAME}} | awk '{print $1}')

- name: Destroy Client1
  ignore_errors: yes
  shell: vim-cmd vmsvc/destroy $(vim-cmd vmsvc/getallvms | grep {{ESXI_CLIENT1_NAME}} | awk '{print $1}')

On the ESXi firewall, you need to allow SSH and VNC so that Packer can build and provision the virtual machines. Custom ESXi firewall rules are not retained between reboots; follow the knowledge base article below to make them persistent.

https://kb.vmware.com/s/article/2011818
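
For reference, the built-in SSH ruleset can be opened with esxcli; the VNC ports typically need a custom rule, which is what the KB article above walks through.

# allow inbound SSH to the ESXi host
esxcli network firewall ruleset set --ruleset-id sshServer --enabled true

# list rulesets to verify what is currently open
esxcli network firewall ruleset list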

vSphere usually comes with a self-signed certificate for the management interface. Make sure the certificate contains the vSphere hostname or IP address, and make sure the automation server trusts the certificate you configured on vSphere; I had some issues with untrusted certificates.
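
If your automation server is Debian or Ubuntu based, one way to trust the certificate is to pull it with openssl and add it to the system CA store; the hostname below is a placeholder for your ESXi/vCenter address.

# fetch the certificate presented by the vSphere management interface
echo | openssl s_client -connect esxi.example.local:443 2>/dev/null | openssl x509 | sudo tee /usr/local/share/ca-certificates/esxi.crt > /dev/null

# refresh the system trust store
sudo update-ca-certificates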

Part 4 — Video blog

Previous Page (Part 3) | Next Page (Part 5)
