I’m currently working on a project which will require me to deploy and destroy VMware Cloud Foundation multiple times (I know, super fun, right!?!). As always, rather than having to deploy and configure VCF manually every time I need to rebuild, I chose to implement some automation instead. My weapon of choice today – Terraform. My challenge – along with many other configurations, I needed to ensure that the hosts were able to communicate with one of our Rubrik clusters. To make this happen, we need to add a distributed switch port group and configure the respective vmkernel adapters on each host. My flurry of “googles” didn’t bring up much on how to use Terraform to configure vmkernel adapters – so, as I figured this out, I thought I’d throw it out there for anyone else looking for this information.
The Terraform Setup
This part is relatively standard and matches many of the examples available today – we have a provider.tf file stating we want to use the vSphere provider, a terraform.tfvars file assigning values to our variables, etc. Below, though, let’s walk through the more unique parts of the configuration.
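I won’t paste every file, but here’s a minimal sketch of what a provider.tf for this setup might look like – note that allow_unverified_ssl is my assumption for a lab running self-signed certificates, so adjust to your environment:

# provider.tf – a minimal sketch wiring our variables into the vSphere provider.
# allow_unverified_ssl = true is assumed here for a lab with self-signed certs.
provider "vsphere" {
  vsphere_server       = "${var.vsphere_server}"
  user                 = "${var.vsphere_user}"
  password             = "${var.vsphere_password}"
  allow_unverified_ssl = true
}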
vars.tf
variable "vsphere_server" { type = string description = "vCenter or ESXi host" } variable "vsphere_user" { type = string description = "User with permissions to create VM" } variable "vsphere_password" { type = string description = "Defined user password" } variable "esxi_hosts" { default = [ "esxi41.rubrik.us", "esxi42.rubrik.us", "esxi43.rubrik.us", "esxi44.rubrik.us" ] } |
Pretty simple stuff here – basically defining variables for our vCenter Server, credentials, and an array of ESXi hosts to which we would like to add the vmkernel adapters.
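As for terraform.tfvars, it simply assigns values to the first three variables – the values below are hypothetical placeholders, not my actual environment:

# terraform.tfvars – hypothetical values; substitute your own vCenter and credentials.
vsphere_server   = "vcenter.rubrik.us"
vsphere_user     = "administrator@vsphere.local"
vsphere_password = "SuperSecretPassword!"

Since esxi_hosts already has a default in vars.tf, there’s no need to set it here unless you want to override the list.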
data-sources.tf
data "vsphere_datacenter" "dc" { name = "DR" } data "vsphere_host" "host" { count = "${length(var.esxi_hosts)}" name = "${var.esxi_hosts[count.index]}" datacenter_id = "${data.vsphere_datacenter.dc.id}" } data "vsphere_distributed_virtual_switch" "dvs" { name = "DR-DSwitch" datacenter_id = "${data.vsphere_datacenter.dc.id}" } |
Again, no real rocket science here – we are simply stating that within this vCenter Terraform should find an existing datacenter named DR, all of the hosts we defined within our vars.tf file, as well as a distributed switch named DR-DSwitch. We will store all of these as data sources to reference later.
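If you want to sanity-check what those data sources resolve to, an optional output block (not part of my original files, just a debugging aid) will echo the resolved host ids after an apply:

# outputs.tf – optional sanity check; prints the id of every host data source.
output "esxi_host_ids" {
  value = "${data.vsphere_host.host.*.id}"
}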
And finally, main.tf
And now let’s get into the part everyone has been waiting for – the resources. Shown below is the complete code, which I’ll explain afterwards:
resource "vsphere_distributed_port_group" "pg-rubrik-data" { name = "Rubrik_Data" vlan_id = 150 distributed_virtual_switch_uuid = "${data.vsphere_distributed_virtual_switch.dvs.id}" active_uplinks = ["uplink1", "uplink2"] standby_uplinks = [] } resource "vsphere_vnic" "v1" { count = "${length(var.esxi_hosts)}" host = "${element(data.vsphere_host.host.*.id, count.index)}" distributed_switch_port = data.vsphere_distributed_virtual_switch.dvs.id distributed_port_group = vsphere_distributed_port_group.pg-rubrik-data.id ipv4 { ip = "192.168.150.${count.index + 41}" netmask = "255.255.255.0" gw = "192.168.150.1" } netstack = "defaultTcpipStack" } |
Alright, so we can see we have a couple of resources defined. Let’s take a look at each:
Creating the portgroup with vsphere_distributed_port_group
This one is pretty straightforward – looking at the code you can see that I’m simply creating a new portgroup on the dvSwitch defined within our data sources. The portgroup will be named Rubrik_Data, sit on VLAN 150, and use both uplink1 and uplink2 as its active uplinks.
Configuring vmkernel adapters with vsphere_vnic
The vsphere_vnic resource does the actual creation of the vmkernel adapters on the ESXi hosts. As far as this code goes, a count is defined at the beginning to match the length of our esxi_hosts variable – in our case, 4. We then assign the host attribute to the id of the host corresponding to our position within the count loop – I found using element was the only way I could actually get this value to evaluate. Then, it’s just a matter of passing in the distributed switch id from our data sources and the newly created portgroup id. If you have some sort of method to your madness around IP address management, you can do some quick math with the count index in order to define static IPs for your hosts (in this case, I wanted each host’s last octet to be 41-44 respectively). Also, even though the netstack attribute states it will use defaultTcpipStack by default, I found that without it actually being defined nothing would work – so I bit the bullet and added one extra line of code 🙂
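As a quick aside on that element call – depending on your Terraform version, directly indexing the splat list may work as well; the sketch below shows both forms, the second being my untested assumption for newer HCL syntax:

# Both of these should resolve to the same host id for a given count.index;
# element() wraps around the end of the list, direct indexing does not.
host = "${element(data.vsphere_host.host.*.id, count.index)}"
# host = data.vsphere_host.host[count.index].id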
A quick terraform init, plan, and apply and I have myself some fully configured vmkernel adapters on my hosts. Hopefully this helps if you are looking for how to use Terraform to configure vmkernel adapters. I’ll try to post any other little tidbits I come across, as this current project will require quite a bit of scripting with Terraform and different aspects of SDDC Manager automation – so stay tuned! Thanks for reading!