Recently we’ve undergone some massive networking changes within a lab I frequently use – the kind which involves re-iping nearly everything within the entire lab! For the most part, we’ve been pretty successful and the process has been fairly straightforward. That said, Kubernetes is still somewhat new to me, and re-iping both the controller and worker nodes is a little bit more involved than simply changing the netplan config on Ubuntu! So with that, let’s take a look at how to re-ip a Kubernetes Cluster.
Speaking of netplan – that’s step 1
This part is pretty straightforward and simply involves changing the IP of the underlying operating system itself – in my case, Ubuntu. Navigate to /etc/netplan and modify the file within that directory that contains your IP configuration – for me, that meant changing it to something similar to the below
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  ethernets:
    ens160:
      addresses: [10.8.112.40/22]
      nameservers:
        addresses: [10.8.112.30, 10.8.96.30]
      gateway4: 10.8.112.1
      optional: true
  version: 2
Once it is saved, apply it with
sudo netplan apply
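A quick way to confirm the new address and gateway actually took effect (assuming the same ens160 interface name as in the example above) is a couple of ip commands

# Show the IPv4 address now assigned to the interface
ip -4 addr show ens160
# Show the default route to confirm the new gateway is in place
ip route show default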
With that out of the way, we can now get into the nitty-gritty of reconfiguring Kubernetes to see this new IP.
And now, let’s start to re-ip the Kubernetes Cluster
If you go ahead and run some kubectl commands you’ll quickly realize that things are still looking at your old IP address – and simply changing your .kube/config file to point to the new IP address won’t help, as the internal workings of Kubernetes – services like the API server – are still pointing to that old address as well. There is a lot of information out there around how to reconfigure all of this, and I followed quite a few guides and found the approach below to be the easiest and, well, let’s face it, even better, the least amount of work…
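If you’re curious, you can actually see the old address baked into the API server certificate on the controller – assuming the default kubeadm pki path, something like this will show it in the cert’s Subject Alternative Names

# Print the SANs of the current API server certificate (old IP will still be listed)
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"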
First, head to your controller (master) node and run the following
 1  # Set IP Var
 2  IP=10.8.112.40
 3
 4  # Stop Services
 5  systemctl stop kubelet docker
 6
 7  # Backup Kubernetes and kubelet
 8  mv -f /etc/kubernetes /etc/kubernetes-backup
 9  mv -f /var/lib/kubelet /var/lib/kubelet-backup
10
11  # Keep the certs we need
12  mkdir -p /etc/kubernetes
13  cp -r /etc/kubernetes-backup/pki /etc/kubernetes
14  rm -rf /etc/kubernetes/pki/{apiserver.*,etcd/peer.*}
15
16  # Start docker
17  systemctl start docker
18
19  # Init cluster with new ip address
20  kubeadm init --control-plane-endpoint $IP --ignore-preflight-errors=DirAvailable--var-lib-etcd
Let’s break down what we are actually doing here
Line 2 simply populates the IP variable for use later – obviously use your preferred IP in this case
Line 5 stops the kubelet and docker services for us
Lines 8 & 9 create backups of both our /etc/kubernetes and our /var/lib/kubelet directories for us
Line 12 creates a fresh /etc/kubernetes directory
Line 13 copies all of our precious certificates and keys back into our newly created directory
Line 14 goes ahead and removes both the apiserver certs and the etcd peer certs as these will be recreated
Line 17 fires docker back up for us
And finally, Line 20 re-inits our Kubernetes configuration with our newly assigned IP address
Now, after you run the kubeadm init command, pay attention to the output, especially the join command for the worker nodes. You should see something similar to the below – copy it and keep it for later…
kubeadm join 10.8.112.40:6443 --token 3d6ftr.rjgho01xsddu4eyb --discovery-token-ca-cert-hash sha256:12399saf93902f209c09204924jfk029490249002kkf0kf209424902kf2e08218387edjo26c
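If you do happen to lose that output (or the token expires – by default they are only valid for 24 hours), you can generate a fresh join command from the controller at any time

# Create a new bootstrap token and print the full join command for workers
kubeadm token create --print-join-command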
Go ahead and copy the newly created admin.conf file into your .kube config as follows
cp /etc/kubernetes/admin.conf ~/.kube/config
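Before going any further, it doesn’t hurt to confirm kubectl is actually talking to the new address – the control plane endpoint printed here should now show the new IP

# Confirm the cluster endpoint kubectl is pointed at
kubectl cluster-info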
You should now be able to execute a kubectl get nodes command to see the status of your nodes. Ideally, you will see this node (your control-plane/master node) with a status of Ready – and most likely all of your worker nodes will be broken. Go ahead and remove all of the broken worker nodes with the following command
kubectl delete node node-name
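If you have more than a couple of workers, you can sweep out everything that isn’t Ready in one go – a rough one-liner along these lines (assuming GNU xargs, and do double check the output of kubectl get nodes first!)

# Delete every node whose STATUS column is NotReady
kubectl get nodes --no-headers | awk '$2=="NotReady" {print $1}' | xargs -r kubectl delete node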
We should now be left with just one node, our master node – at this point we are done with our master node.
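Before moving on, a quick sanity check that the control-plane containers all came back up never hurts – since this particular cluster is running on docker (per the script above), something along these lines should show the apiserver and etcd happily running again (adjust for your container runtime)

# List running containers and filter for the core control-plane components
docker ps --format '{{.Names}}' | grep -E 'kube-apiserver|etcd'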
Head on over to your worker node
Now it’s time to head on over to your worker node(s) – again, go ahead and modify the netplan config in /etc/netplan and re-ip the underlying operating system just as you did with the master node…
Once that’s done, we need to clear out all of the configurations on the worker node in order to rejoin it to our cluster – this is done with the following command
kubeadm reset
You’ll get some nasty warnings – just continue, don’t worry, your pods will still live! Once reset, grab that kubeadm join command we copied earlier and execute it
kubeadm join 10.8.112.40:6443 --token 3d6ftr.rjgho01xsddu4eyb --discovery-token-ca-cert-hash sha256:12399saf93902f209c09204924jfk029490249002kkf0kf209424902kf2e08218387edjo26c
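While you’re still on the worker, you can double check that the kubelet actually came up after the join before heading back

# Should print "active" once the worker has joined and the kubelet is running
systemctl is-active kubelet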
From there, head on over to your control plane, run “kubectl get nodes” and hopefully you are as lucky as me 🙂
And that’s it! At least, that’s the process that worked for me! Hopefully this post will find someone in need who is looking to completely re-ip a Kubernetes Cluster!