2. Evict all pods from the node before upgrade
kubectl drain k8s-worker-01 --ignore-daemonsets --delete-emptydir-data
This command evicts all pods from the node so nothing is running on it during the upgrade (DaemonSet pods are ignored)
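To double-check that the drain worked, you can list the pods still scheduled on the node. A runnable sketch (the pod names are made up, and the sample output stands in for a live cluster):

```shell
# On the cluster you would run:
#   kubectl get pods --all-namespaces --field-selector spec.nodeName=k8s-worker-01
# Only DaemonSet-managed pods (flannel, kube-proxy) should still be listed.
# Sketch: count the remaining pods from sample output instead of a live cluster.
sample='kube-system   kube-flannel-ds-x7k2p   1/1   Running
kube-system   kube-proxy-9qjws        1/1   Running'
remaining=$(echo "$sample" | wc -l)
echo "$remaining"   # only the two DaemonSet pods remain, which is expected after a drain
```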
3. SSH into the node you want to upgrade
4. Check the Ubuntu version before the upgrade
The node should be running Ubuntu 20.04 LTS
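On the node, `lsb_release -a` (or `cat /etc/os-release`) shows the release. As a runnable sketch, here is how the VERSION_ID field can be checked; the sample line is hard-coded so the snippet works without the live file:

```shell
# On the node: grep VERSION_ID /etc/os-release
# Sketch with a hard-coded sample line so it runs anywhere:
sample='VERSION_ID="20.04"'
version=${sample#VERSION_ID=\"}   # strip the leading  VERSION_ID="
version=${version%\"}             # strip the trailing quote
echo "$version"                   # prints 20.04
```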
5. Execute the prerequisites before the upgrade
# Update the GPG key for the Kubernetes repository
Adding the signing key for the Kubernetes package repository has changed compared to the original Ubuntu 20.04 installation, so you need to enter the commands below.
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
# Stop the kubelet service
Stopping the kubelet service is not strictly required, but it is safer and frees up resources during the upgrade
sudo systemctl stop kubelet
6. Upgrade to Ubuntu 22.04 LTS
You can start the upgrade by entering the commands below. This is the standard command-line upgrade procedure for Ubuntu. Two remarks: update-manager-core may already be installed, and I use the -d parameter after the do-release-upgrade command because the upgrade path to Ubuntu 22.04 has not been released yet; that will happen with version 22.04.1. Once that point release is out, the -d parameter is no longer required.
sudo apt update && sudo apt dist-upgrade -y
sudo apt install update-manager-core
sudo apt autoremove
sudo do-release-upgrade -d
The upgrade starts
Press ENTER to continue. If you want, you can set up a second SSH session to the node via port 1022 as a fallback.
Press ENTER to continue. Third-party repositories (such as Docker and Kubernetes) will be disabled.
Now the upgrade to Ubuntu 22.04 LTS will start and will take some time.
The message "No candidate ver: docker-ce" can be ignored.
# /etc/sysctl.conf
At a certain point the following message will appear:
Select Y (install the package maintainer's version); we will correct this after the installation.
The installation will continue
In a later stage the following screen will appear.
# /etc/ssh/sshd_config
Select "keep local version currently installed", and TAB to select "OK" and press ENTER.
The upgrade will continue, and a moment later the following message will appear
# remove old packages
Press Y to continue, all old packages will be removed.
The installation will continue
After a moment the following message will appear:
# remove old kernel
Select No to abort the kernel removal here; the old kernel will still be removed after the reboot
# Reboot after upgrade is finished
The first part of the upgrade is finished, so press Y to reboot the node.
7. Post-installation adjustments (required!)
After a few minutes you should be able to ssh back into your node.
# Restore the third-party sources lists
The first thing to do is restore the apt sources for Docker and Kubernetes, which were disabled during the upgrade, by entering the commands below
sudo rm /etc/apt/sources.list.d/docker.list
sudo rm /etc/apt/sources.list.d/kubernetes.list
sudo mv /etc/apt/sources.list.d/docker.list.distUpgrade /etc/apt/sources.list.d/docker.list
sudo mv /etc/apt/sources.list.d/kubernetes.list.distUpgrade /etc/apt/sources.list.d/kubernetes.list
sudo apt update
The result should be like the screenshot below
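The four commands above can also be collapsed into one loop over every list file that do-release-upgrade renamed. A sketch, demonstrated in a temporary directory so it runs without root (on the node you would point it at /etc/apt/sources.list.d and use sudo):

```shell
# Demo in a temp dir; on the node use /etc/apt/sources.list.d with sudo.
dir=$(mktemp -d)
touch "$dir/docker.list.distUpgrade" "$dir/kubernetes.list.distUpgrade"
for f in "$dir"/*.distUpgrade; do
  mv "$f" "${f%.distUpgrade}"   # drop the .distUpgrade suffix to re-enable the repo
done
ls "$dir"                       # docker.list  kubernetes.list
```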
# Adjust sysctl.conf
The next thing to do is adjust the /etc/sysctl.conf file. Open it in nano via the following command:
sudo nano /etc/sysctl.conf
Look for the line "#net.ipv4.ip_forward=1" and remove the "#" to uncomment and activate it.
It should look like the screenshot below:
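If you prefer not to edit the file by hand, the same change can be made with sed. A sketch on a temporary copy; on the node you would run the sed against /etc/sysctl.conf with sudo and then apply it with sudo sysctl -p:

```shell
# Demo on a temp file; on the node:
#   sudo sed -i 's/^#\s*\(net\.ipv4\.ip_forward=1\)/\1/' /etc/sysctl.conf && sudo sysctl -p
conf=$(mktemp)
printf '#net.ipv4.ip_forward=1\n' > "$conf"
sed -i 's/^#\s*\(net\.ipv4\.ip_forward=1\)/\1/' "$conf"   # strip the leading comment marker
grep '^net.ipv4.ip_forward=1' "$conf"                     # prints the now-active line
```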
# Extra modules required for Kubernetes
Next we need to install the linux-modules-extra-raspi package. This is required for the Flannel containers to work properly; otherwise your pods will fail due to network issues.
Type in the following commands to install the additional kernel modules and reboot
sudo apt install linux-modules-extra-raspi
sudo reboot
8. Final checks
After a few minutes you should be able to ssh back into your node.
# Check the version after the upgrade
Check the version again with lsb_release -a; you should see that Ubuntu 22.04 LTS is now installed
# Check if the node is ready
Check whether your node is ready via the command:
kubectl get nodes
The result should look like the screenshot below
# uncordon kubernetes worker node
If the node is ready you can uncordon it, so it can host pods again. This is done by running the following commands:
kubectl uncordon k8s-worker-01
kubectl get nodes
The node should now be fully up and running again:
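If you plan to repeat this upgrade across several workers, the Ready check can be scripted by parsing the STATUS column of kubectl get nodes. A sketch with sample output in place of a live cluster (node name, age, and version are illustrative):

```shell
# Sample line in the shape produced by `kubectl get nodes`;
# on a live cluster you would pipe the real output instead.
sample='k8s-worker-01   Ready    <none>   210d   v1.23.5'
status=$(echo "$sample" | awk '{print $2}')   # STATUS is the second column
echo "$status"                                # prints Ready
```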