
Local macOS Kubernetes cluster with UTM

Let's get started setting up a local Kubernetes cluster based on virtual machines. We'll utilise UTM with the QEMU hypervisor while getting into all the technical details!

Today we're going to configure a Kubernetes cluster using virtual machines on our macOS host. We'll use UTM, which relies on QEMU, a type-2 hypervisor, to virtualise the new machines. I'll be using a 2024 MacBook Pro with the Apple Silicon M4 chip, but this guide can also be followed on older M-series chips or Intel CPUs.

Virtual Machines vs Dockerized Kubernetes

There is an abundance of easy-to-use tools out there to quickly spin up a local Kubernetes cluster; you may have heard of Minikube, KinD (Kubernetes-in-Docker) or K3s. Running Kubernetes inside actual VMs simulates a more realistic production environment: each VM has its own kernel and networking stack, which doesn't just offer a higher level of protection for your host machine, it also gives us tighter control over the OS and kernel.

We’re going to be installing the Kubernetes (v1.32*) cluster using kubeadm, meaning that all of our control plane components are going to be running as Pods on the cluster (in the kube-system namespace, more about that later).

*At the time of writing, v1.32 is the latest minor release. Other minor versions can be installed following this guide, you just need to swap out the version numbers where specified. Learn more about Kubernetes versioning.

Prerequisites

Don’t know your CPU architecture? Run uname -m in a terminal of your choice.

đź’ˇ Got an Apple Silicon M-chip (arm64)? Use aarch64 where there is no specific arm64 download available.

Installation

This section covers the installation of three types of virtual machine. The Router will provide internet and inter-node connectivity for our Kubernetes cluster; putting a router like this in front lets the cluster reach the internet from any network we join, without the inter-node network being disrupted (e.g. by changing IP addresses or firewall rules).

The control plane and worker node(s) are standard Kubernetes components; in this guide we'll install one worker and one dedicated control plane node. Repeat the installation steps to add more control plane or worker nodes to your cluster for high availability.

âť—The amount of RAM (memory) is expressed in Mebibytes. The amount of processor cores refer to virtual CPU cores assigned to the VM. On Apple Silicon, these are distributed across efficiency and performance cores, while on Intel Macs, they may correspond to physical or logical hyperthreaded cores.

Router

Resource           Value
Operating system   Alpine Standard 3.21.3+
Memory             512 Mebibyte
Processor cores    1
Disk size          10 Gigabytes

Let's get this party started by setting up our Router VM. Press the + icon in UTM and select Virtualize. On the next screen, select Linux.

Getting started

On the Linux virtualisation page we are going to leave both Apple Virtualization and Boot from kernel image turned off. Press Browse... and select the Alpine ISO we downloaded earlier.

Linux virtualization engine

Fill in the memory, CPU core requirements & storage as per the values above. Leave OpenGL acceleration turned off. We are going to skip the Shared Directory page.

Now you should be greeted with the Summary page, where you can review what we've just configured before spinning up the VM. Give it a name of your liking and press Save.

Summary Page

Before we can continue installing Alpine on the machine, we have to add a Serial (terminal) output to the VM. Right-click on the machine in the left-hand overview and press Edit.

Head over to the Devices tab. We are going to remove the primary Display that's configured for the machine and add a Serial device instead.

Router devices configuration

We're ready to install Alpine Linux! Hit Save and press the play button to spin up our virtual machine. After a few seconds you'll be greeted with a login screen; log in with the root username and no password.

Alpine Linux login screen

We can begin the installation process by running the setup-alpine command in the terminal. The setup wizard will prompt us to configure a hostname; use a name that you'll be able to identify the machine with. I'm going with router. For the Interface (networking) configuration, you can simply press enter: we want the default interface to be assigned an IP address from our internet modem using DHCP.

Alpine initial installation configuration

Next, enter a memorable password for the root user. This is the most privileged user on the machine and it can perform any action, so make it secure! After filling in your password you'll be asked which timezone you're in. From this point onwards, you can safely use the default values for the Proxy, Network Time Protocol, APK Mirror and User sections. That leaves us to configure the system disk.

Alpine Linux disk configuration

Wait for the installer to finish and then exit out of the VM. We're going to go back into the VM's settings by right-clicking on it in the overview and pressing Edit (if the VM isn't stopped yet, press Stop first to make sure it's turned off!). Now, under the Drives section, select the USB Drive and delete it; this makes sure the installation image is no longer mounted when starting the machine.

USB Drive configuration

Make sure to save your changes! You've now installed Alpine Linux on a virtual machine. Turn on the VM and log in as the root user with your previously configured password. We're going to update the virtual machine, but before we can do that we have to make sure the right repositories are set.

In the terminal, run the following command: cat /etc/apk/repositories

Repositories list

It's important to have the official Alpine main and community repositories set, as seen above. If you do not see these two repositories, you can add them with the command below.

echo -e 'http://dl-cdn.alpinelinux.org/alpine/v3.21/main\nhttp://dl-cdn.alpinelinux.org/alpine/v3.21/community' > /etc/apk/repositories

Now, we're ready to update our system. Run apk update && apk upgrade and wait for it to complete. We're going to have to completely turn off the VM one last time to edit the network configuration. Our goal is to have our own isolated inter-node Kubernetes network that also has a gateway to the internet.

To achieve this, we'll configure two network devices for our Router: a Host-Only device for the Kubernetes network and one set to Bridged mode. This will allow our Kubernetes nodes to direct their internet traffic via the router. Let's dig into the machine configuration first.

Under Devices you’ll see a Network device. Select it and simply change the Network Mode from Shared Network to Bridged (Advanced). Select en0 as the Bridged Interface.

Bridged network device configuration

Add a new Network device and set its Network Mode to Host-Only. Save your configuration and spin up the machine. After you've authenticated in the VM, run the ip a command to list the network devices.

Network device output

We can ignore the lo device; it's irrelevant to us. The eth0 network device already has the IP address 192.168.0.171 assigned, which is an address in my local home network. It's time to install some networking-related packages to turn this machine into a router.

Run the following command to start installing and configuring the necessary packages, and wait for the machine to reboot.

apk add iptables && \
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE && \
/etc/init.d/iptables save && \
rc-update add iptables && \
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/router.conf && \
reboot
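
If you'd like to double-check the router after it comes back up, a few optional read-only commands (my addition, not part of the original walkthrough) will confirm the NAT rule and forwarding survived the reboot:

iptables -t nat -L POSTROUTING -n -v   # should list a MASQUERADE rule going out via eth0
sysctl net.ipv4.ip_forward             # should print: net.ipv4.ip_forward = 1
rc-update show | grep iptables         # iptables should be registered in a runlevel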

We're almost done with the router now. All we need to do is configure the host-only device and give it a static IP in our network. I'm going to go with the 192.168.100.0/24 subnet, where 192.168.100.1 will be our router, incrementing by one for each Kubernetes node.
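
Concretely, this is the addressing plan we'll stick to for the rest of the guide:

192.168.100.1   router (gateway for the cluster nodes)
192.168.100.2   controlplane
192.168.100.3   worker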

First, let's check in on our current network interface configuration by running cat /etc/network/interfaces.

Network interfaces

As you can see, two of our network devices already have default configurations to request IP addresses. The loopback interface (lo) lets the system talk to itself over 127.0.0.1 without needing a physical network device.

In Bridged mode, the eth0 interface acts like a physical device directly connected to your home network. This means it gets an IP address from your home router’s DHCP server, just like any other device on your network (e.g., your laptop or phone).

As you can see, there is no entry for the eth1 interface yet. This is also why we didn't see an IP address assigned to eth1 previously. Let's set it up!

Run the following command in your terminal to add a static configuration for eth1:

printf "auto eth1\niface eth1 inet static\n\taddress 192.168.100.1\n\tnetmask 255.255.255.0\n" >> /etc/network/interfaces
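
For reference, this appends the following stanza to /etc/network/interfaces (the \t characters render as indentation):

auto eth1
iface eth1 inet static
    address 192.168.100.1
    netmask 255.255.255.0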

Verify that our eth1 entry has been added to /etc/network/interfaces.

Interfaces file with eth1 setup

Apply the configuration by running rc-service networking restart.

rc-service networking restart

VoilĂ ! You now have an Alpine Linux router virtual machine. To verify that our eth1 interface now has the static IP address 192.168.100.1/24, run the ip a command one last time.

Ip address overview

To quickly recap: we've just installed Alpine Linux on a virtual machine and configured the box to forward network packets from our Host-Only network to our Bridged network and ultimately the internet.

ℹ️ Make sure you always have the router running alongside your Kubernetes control plane nodes. The machines that make up your Kubernetes cluster live inside the Host-Only network on your Mac, allowing them to talk to each other but not directly to your home router for internet access. If you notice an issue with internet connectivity, restarting the router is a great troubleshooting starting point.

It’s time for us to move on to the first Kubernetes node.

🛜 Switching Wi-Fi networks (e.g. Office to Home)? Either run rc-service networking restart or reboot the router to make sure it's assigned a new IP and connected to the new router.

Control plane

Resource           Value
Operating system   Ubuntu Server 24.04+
Memory             2048 Mebibyte
Processor cores    2
Disk size          30 Gigabytes

In Kubernetes, a node refers to a machine that runs the Kubernetes node components and is part of the cluster. There are control plane and worker nodes; we're going to start with the control plane, as it's the brains of the operation and requires the most installation effort of the two.

Start by repeating the first three steps for setting up a new virtual machine: give it a name, select the operating system installation image (be sure to select Ubuntu!) and set the CPU, memory and disk specifications as outlined above.

Ubuntu control plane configuration overview

Toggle the Open VM Settings checkbox and hit Save. In the machine settings, swap out the Display for a Serial again and set the Network device to Host-Only mode. Save your settings and spin up the control plane.

At first you'll be greeted with some fairly standard installation options; continue the installation in rich mode before selecting your language and geographical location.

Ubuntu installer

When asked what base you’d like to use, keep the default Ubuntu Server base.

Ubuntu base installation selection

The next screen is important: it's the network configuration for our control plane. Select the network device; there should only be one, as we configured the default device to sit in the Host-Only network. In the dropdown that appears, select Edit IPv4.

Ubuntu network configuration

Set the IPv4 Method to Manual and copy the values you see below. This will assign the control plane the IP address 192.168.100.2 and use our router at 192.168.100.1 as its gateway address.

Ubuntu ipv4 configuration

Before hitting Save, fill in 1.1.1.1 in the Nameservers tab. Now you can save the network configuration! For the next three steps, up until the Profile configuration, you can safely use the default values.
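
For the curious: the Ubuntu Server installer persists this choice as a netplan file under /etc/netplan/ (the exact filename varies). Assuming the interface is named enp0s1 (yours may differ), the equivalent static configuration looks roughly like this sketch:

network:
  version: 2
  ethernets:
    enp0s1:
      addresses:
        - 192.168.100.2/24
      routes:
        - to: default
          via: 192.168.100.1
      nameservers:
        addresses:
          - 1.1.1.1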

In the profile configuration, set the machine’s name to controlplane. Fill in your own username for the user account along with a secure password.

Ubuntu profile configuration

The next pages can safely be skipped using the default configurations. Wait for the system installer to complete before fully turning off the machine.

Ubuntu system installer progress

Once the VM is fully turned off, edit it and remove the USB Drive from its attached Devices again. Afterwards, turn the control plane back on and log in using your previously chosen username and password.

We're going to need access to the entire system while installing kubeadm. To get privileged access to the machine, switch to the root user by running sudo su - and typing in your password.

Let's test our internet connectivity by saying hello to google.com with a ping.

Google ping test

Beautiful! Our control plane machine was able to reach google.com even though the machine itself doesn't have a direct internet connection; it uses the router to get out. To test this, fully shut down the router and run the same ping command again.

Ping command without internet

As you can see, the machine can't even resolve an IP address for google.com. That's because the nameserver we configured is Cloudflare's 1.1.1.1, so resolving a domain name requires a working internet connection.
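
The connectivity test from the screenshots boils down to a single command, run once with the router up and once with it shut down:

# with the router running this gets replies; with it shut down, name resolution fails
ping -c 3 google.com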

With the network sorted we can pivot our attention to setting up the node for Kubernetes. Run the following command to set up the required system settings & kernel modules.

sudo swapoff -a && \
sudo sed -i '/\/swap.img/ s/^/#/' /etc/fstab && \
sudo systemctl mask swap.img.swap && \
sudo sysctl -w net.ipv4.ip_forward=1 && \
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf && \
sudo sysctl --system && \
sudo modprobe overlay && \
sudo modprobe br_netfilter

We've enabled IPv4 forwarding, loaded the overlay and br_netfilter kernel modules and disabled memory swapping.
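
If you want to confirm everything took effect, these optional read-only checks (my addition) should all come back clean:

lsmod | grep -E 'overlay|br_netfilter'   # both modules should be listed
sysctl net.ipv4.ip_forward               # should print: net.ipv4.ip_forward = 1
swapon --show                            # should print nothing, i.e. swap is off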

That's not all though: we need a container runtime that implements the Container Runtime Interface (CRI) before we can bootstrap a Kubernetes cluster on the machine. We'll use containerd, which you can install from source or from a package repository. We're going to use the latter.

The containerd project does not publish its releases to apt repositories directly; that is done by the Docker team. Docker uses containerd internally and also publishes the apt packages, which means we're going to add the Docker apt repository as a source for our machine to find packages in.

sudo apt-get update && \
sudo apt-get install -y apt-transport-https ca-certificates curl gpg && \
sudo install -m 0755 -d /etc/apt/keyrings && \
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc && \
sudo chmod a+r /etc/apt/keyrings/docker.asc && \
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null && \
sudo apt-get update

That was a long command! It pulled Docker's GPG key (used to verify that the packages we download really come from Docker), registered the Docker repository in our apt sources and then refreshed the available package list with apt-get update.

Now the containerd.io package should be available for us to install. Run apt install containerd.io to install it!

Containerd apt installation

Verify that the installation was successful by running systemctl status containerd; you should see the service in an active (running) state.

Containerd system service status

Do you see our previously loaded kernel module being referenced? ExecStartPre=/sbin/modprobe

Run the following commands to generate and tweak the default containerd configuration.

containerd config default > /etc/containerd/config.toml && \
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml && \
sudo systemctl restart containerd
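
The sed line flips containerd to the systemd cgroup driver, which matches the cgroup driver kubeadm configures for the kubelet by default. A quick optional sanity check:

grep SystemdCgroup /etc/containerd/config.toml   # should now read: SystemdCgroup = true
systemctl is-active containerd                   # should print: active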

With our CRI installed we've finally completed the necessary prerequisites to transform this VM into a Kubernetes control plane, so let's do that! Just like with the Docker repository previously, we have to download the official Kubernetes apt repository GPG key to verify we are installing the legitimate packages. Run the following command to do so:

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/Kubernetes-apt-keyring.gpg && \
echo 'deb [signed-by=/etc/apt/keyrings/Kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/Kubernetes.list && \
sudo apt-get update

We've now updated apt to also look for packages in the Kubernetes repository. To check which packages and versions are available to us, run apt-cache madison <package>, where <package> can be kubeadm, kubelet or kubectl (we're going to install all three!).

All available kubeadm versions
All available kubelet versions

These packages follow the Semantic Versioning standard: major release 1, minor release 32 and varying patch versions 0, 1 and 2. We aren't too worried about the specific patch, so let's just go with the latest, 1.32.2-1.1.

Run the sudo apt install kubelet kubeadm kubectl -y command to install the required Kubernetes binaries. Afterwards, pin their versions to make sure you don’t accidentally upgrade any of them using sudo apt-mark hold kubelet kubeadm kubectl.

(Optional) Let's quickly verify the versions of the installed components directly.

Kubeadm version command
Kubelet version command
Kubectl version command

Hm… the kubectl command shows us two versions and a connection error. Thankfully the Client Version is set to our target version: it refers to the version of kubectl we have installed, which as per our apt command earlier is 1.32.2. Kustomize is a Kubernetes configuration manager that lets you customise manifest files without a templating language like Helm's chart templates. It isn't relevant to our installation, however, so we can safely ignore the Kustomize version.

That leaves us with a connection refused. Kubectl automatically tries to connect to the kube-apiserver on localhost:8080 if no alternative destination is configured. In our case this fails because we haven’t got the control plane components running yet.

It’s finally time for us to bootstrap a Kubernetes cluster using kubeadm. Let me introduce you to two very important initialization flags. The --service-cidr flag lets you set a subnet that Kubernetes will use for all of its services. The --pod-network-cidr lets you specify a subnet that will be used to assign IP addresses to pods.

It's important that neither of these subnets is already in use on your network! Thankfully, our Kubernetes cluster lives inside our Mac's Host-Only network and doesn't automatically have access to a wider network, which limits the chance of a collision; it's still a good idea to decide on a dedicated subnet, though. I'm going to use the default service CIDR and specify 192.168.0.0/16 as the pod network CIDR.

🛜 Be careful and don't randomly assign a subnet to your cluster! It's best to use a private subnet as your pod CIDR; per RFC 1918, 192.168.0.0/16 is a private range. 10.96.0.0/12 (the default service CIDR) also falls within the private 10.0.0.0/8 range, so we're fine with keeping the default.

That leaves us to run the kubeadm init --pod-network-cidr=192.168.0.0/16 command to get our cluster started.
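
If you prefer to spell out the defaults we just discussed, a more explicit but equivalent invocation could look like this (all standard kubeadm flags; the advertise address matches the control plane IP used in this guide):

kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --apiserver-advertise-address=192.168.100.2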

⚠️ Having issues getting kubeadm init to work? Check whether cri is listed as a disabled plugin in containerd: run cat /etc/containerd/config.toml | grep disabled_plugins. If you see disabled_plugins = ["cri"], scroll down to the Troubleshooting > Containerd CRI Plugin Disabled section of this post before retrying the init command and continuing.

If all is well, your terminal output should look something like this:

Kubeadm init command stdout

That's a lot of information; let's laser in on what we actually need.

Kubeadm init thankyou output

Our main area of concern here is the long kubeadm join command at the very bottom; save it in a note. As you can see, it points to our control plane machine 192.168.100.2 on port 6443, even though we haven't explicitly opened a port or run an application that binds to that port ourselves… Run the export KUBECONFIG=/etc/kubernetes/admin.conf command followed by kubectl get pods -A.

Kubernetes system pods

We just ran the command to list all pods in every namespace. Since we bootstrapped this cluster with kubeadm, our control plane components run as pods inside the kube-system namespace. As you can see, the coredns pods are in a Pending state. Let's have a look at how our control plane node is doing.

Kubernetes node list

Our control plane node has a status of NotReady because CoreDNS is not running. The CoreDNS pods will stay in this Pending state until we have installed a Container Network Interface (CNI). There are a couple of options to choose from, such as Flannel, Calico and Cilium. Pick whichever one you prefer; I'm going to pick Cilium.
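
A note on the export KUBECONFIG step above: it only lasts for the current shell session. The kubeadm init output also prints the more permanent setup for a regular (non-root) user, which boils down to:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config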

ℹ️ The Flannel CNI is only a layer 3 networking solution for Kubernetes. Many Kubernetes features such as network policies are not supported and do not take effect when using Flannel.

Though we can install Cilium using Kubernetes manifest files, the Cilium team has made it easy to get everything set up with their own installer. Let's install the Cilium CLI.

đź’ˇ Preparing for the CKS exam? Cilium is the CNI used on the exam, by installing Cilium in your local cluster you can fully utilise it’s features such as the eBPF based network stack, Cilium Network Policies (layer 3, 4 & 7) and Mutual Authentication (mTLS). Keep your eyes on the exam curriculum to stay up to date.

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

We now have access to the cilium command and can install Cilium onto our cluster using cilium install. I'm going to install it with the SPIRE server enabled, which is required for mTLS.

cilium install \
--set authentication.mutual.spire.enabled=true \
--set authentication.mutual.spire.install.enabled=true

Cilium install command

So far so good; let's check on our Cilium installation with the cilium status command.

Cilium status

Everything works! We can verify this one more time by running kubectl get pods -A again.

Kubernetes pod list with cilium

Look at that! Cilium is installed and running its pods, and our CoreDNS pods are now also in a Running state. Let's check whether our node has changed its status using kubectl get nodes.

Control plane ready status

Perfect! Our cluster is up and running! Remember that kubectl version command that was giving us the connection error before?

Kubectl version command working as intended

No more connection issues: we now have a working kube-apiserver to handle the request. However, in its current state our cluster isn't going to run any of our applications just yet.

Kubectl create nginx pod
Kubectl nginx pod pending

Our pod is stuck in a Pending state. This is because our control plane machine has a taint on it that prevents us from accidentally scheduling workloads on the control plane. It's a best practice to keep your control plane isolated from your other applications, both for cluster security and to ensure the control plane has enough resources to function. Control planes come with the node-role.kubernetes.io/control-plane taint, which has a NoSchedule effect.

Node taint

If we really wanted to, we could remove this taint so that all of our applications can be scheduled freely on the control plane. You can also control which applications are allowed to run on the control plane by giving those specific workloads a toleration.
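
For reference, the experiment from the screenshots above comes down to commands along these lines (the exact pod-creation command in the screenshot may differ; controlplane is the hostname we chose earlier):

kubectl run nginx --image=nginx                      # create a test pod
kubectl get pods                                     # stuck in Pending: nothing can schedule it
kubectl describe node controlplane | grep -i taint   # shows node-role.kubernetes.io/control-plane:NoSchedule
# only if you really want workloads on the control plane (not recommended):
kubectl taint node controlplane node-role.kubernetes.io/control-plane:NoSchedule-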

We're going to follow the best practice and leave the control plane as is; it's time to set up a worker machine.

Worker(s)

Resource           Value
Operating system   Ubuntu Server 24.04+
Memory             1024 - 4096 Mebibyte
Processor cores    1 - 4
Disk size          30+ Gigabytes

Our worker nodes are going to be the ones actually running our applications. Usually your worker nodes are bigger and more powerful than your control plane node; since I'm not going to host any large applications on my cluster, I've equipped my worker node with the minimum required system resources. If you need a more powerful test environment, consider allocating more system resources to your workers.

In UTM, create a new virtual machine with the same Ubuntu base image as the control plane node. Remove the Display from the Devices section and don’t forget to change the Network to Host-Only.

Worker machine configuration overview

Spin up the machine and follow the same installation steps as the control plane. Don’t forget to change the assigned IP Address in the network tab to 192.168.100.3. Set the same Cloudflare 1.1.1.1 nameserver.

Network configuration

You can safely use the defaults for all the installation steps; just fill in the device Profile section with your username and password again.

Ubuntu profile configuration

After completing the installation wizard, fully turn off the VM and remove the USB Drive from its Devices (remember?). Start the machine up again and switch to the root user with sudo su -. Update the system packages with apt update && apt upgrade -y.

Apt update command

To get our worker to join the cluster, we need to perform the same tweaks and installations as on the control plane node. Copy and paste the following command to enable the kernel modules, turn off swap, allow IP forwarding and install containerd plus the Kubernetes binaries (I've chained all of our previous commands into one!).

sudo swapoff -a && \
sudo sed -i '/ swap / s/^/#/' /etc/fstab && \
sudo sysctl -w net.ipv4.ip_forward=1 && \
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf && \
sudo sysctl --system && \
sudo modprobe overlay && \
sudo modprobe br_netfilter && \
sudo apt-get update && \
sudo apt-get install -y apt-transport-https ca-certificates curl gpg && \
sudo install -m 0755 -d /etc/apt/keyrings && \
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc && \
sudo chmod a+r /etc/apt/keyrings/docker.asc && \
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null && \
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/Kubernetes-apt-keyring.gpg && \
echo 'deb [signed-by=/etc/apt/keyrings/Kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/Kubernetes.list && \
sudo apt-get update && \
sudo apt install containerd.io kubeadm kubelet kubectl -y && \
containerd config default > /etc/containerd/config.toml && \
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml && \
sudo systemctl restart containerd kubelet

Remember that kubeadm join command you saved earlier? We'll run it on our worker node to join the cluster (you can omit the --v=5 flag).

Kubeadm join output

Success! Our worker node has joined the cluster according to kubeadm. On the control plane we can run kubectl get nodes to have a look at the worker's status.

Kubectl node list

We can see our worker node has joined the cluster and is Ready! By default, the kubeadm join command does not assign a role label to nodes. If you would like to explicitly label your node with the worker role, run kubectl label node worker node-role.kubernetes.io/worker=. Functionally this doesn't make a difference for us, but it's nice to have.

Kubernetes node list with worker labelled

You can substitute the role of your node with anything you like. This allows you to dedicate nodes to a role in your landscape: instead of categorising your nodes as just control plane and worker nodes, you can separate workers based on their workloads (e.g. database, backend, etc.).

Kubernetes node list with a custom worker label

Running kubectl get pods -A -o custom-columns="NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase" will show us all of the pods running in the cluster and the node they have been scheduled on.

Kubernetes pod list

We can see that the necessary Cilium and kube-proxy pods are running on the new worker node. And look at that, our nginx pod is now out of its limbo state and running on the worker node.
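
Lost the join command from the kubeadm init output? You can generate a fresh one (with a new token) on the control plane at any time:

kubeadm token create --print-join-command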

🎉 Congratulations! You now have a fully functioning, host-isolated Kubernetes cluster based on virtual machines. Adding more workers or control planes for high availability is as easy as repeating the steps we've taken previously. Just make sure you add the --control-plane flag to the kubeadm join command for additional control planes.

Challenge: Connect Your Mac to the Isolated Kubernetes Cluster

Your cluster is now up and running inside its own host-only network, completely isolated from your Mac and the rest of your home network. But can you bridge the gap and manage your cluster directly from macOS?

Right now, kubectl only works inside the control plane VM. Your mission: configure secure connectivity so you can run kubectl from your Mac, just like you would with any remote cluster.

Hints to get started:

  • You’ll need to copy the admin.conf kubeconfig file from your control plane VM to your Mac.
  • The Kubernetes API server (kube-apiserver) is only accessible from inside the host-only network. Consider using SSH port forwarding, a VPN tunnel or setting up the required internal networking routes to expose it to your Mac.
  • Make sure your Mac can reach the control plane node’s IP (e.g., 192.168.100.2:6443), but don’t expose it to your entire home network for security reasons.

Can you set up seamless, secure access? Give it a try and unlock full control of your cluster from your Mac!
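
To avoid spoiling the challenge, here is only a rough sketch of the SSH port-forwarding route. It assumes the non-root kubeconfig from earlier exists on the control plane, that an SSH server is running there and reachable from macOS, and that youruser is the account you created:

# copy the kubeconfig we set up for the non-root user earlier
scp youruser@192.168.100.2:~/.kube/config ~/.kube/utm-cluster.conf
# forward local port 6443 to the API server inside the VM
ssh -N -L 6443:127.0.0.1:6443 youruser@192.168.100.2 &
# point kubectl at the tunnel; you may hit a certificate warning because 127.0.0.1
# is typically not among the API server certificate's SANs (kubectl's --tls-server-name
# flag or extra apiServer certSANs in kubeadm are ways around that)
kubectl --kubeconfig ~/.kube/utm-cluster.conf --server=https://127.0.0.1:6443 get nodes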

Troubleshooting

Node stuck on NotReady after reboot / Persistent swap issues

Restarted your worker node and its status stays stuck on NotReady? It could be that you still have swap enabled on your machine(s). If you followed the steps earlier, you probably have swap disabled in the usual locations, but it doesn't hurt to check!

Have a look at the kubelet logs with journalctl -n 10 -u kubelet --no-pager.

Kubelet logs

At the bottom we see the error running with swap on is not supported, please disable swap or set --fail-swap-on flag to false. This means we still have swapping enabled on the system. First and foremost, temporarily disable swap with sudo swapoff -a.

We need to make sure all of the swap entries in /etc/fstab are commented out.

fstab file

At the very bottom there is a line containing /swap.img; we need to make sure this line is commented out. Either manually prefix the line with # or run

sudo sed -i '/\/swap.img/ s/^/#/' /etc/fstab

Let's check if there are any systemd units left that enable swapping: systemctl list-units --type swap.

Systemctl unit list

This machine has a systemd unit that enables swapping, meaning swapping will be enabled again after a reboot even though we disabled it in /etc/fstab. Copy the name of the UNIT and run the following command, changing swap.img.swap if it differs from your unit name.

systemctl mask swap.img.swap

Restart your machine and repeat the journalctl command for the kubelet.

Kubelet logs after swap disable

The swap errors have disappeared and our kubelet is running healthy again!

Containerd CRI Plugin Disabled

The CRI plugin is necessary for the kubelet to interact with containerd. During the installation of containerd it's possible that the CRI plugin gets added to the disabled_plugins list in the containerd configuration, so let's check.

Containerd config file disabled plugin list

As we can see, cri is in my disabled_plugins list. Run the following command to remove CRI from the disabled plugin list and restart containerd.

sed -i '/disabled_plugins/s/\["cri"\]/[]/' /etc/containerd/config.toml && \
systemctl restart containerd