```
kubectl get nodes -o wide
NAME            STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
home-rpi-1      Ready    <none>   6h22m   v1.19.2   192.168.1.74    <none>        Ubuntu 20.04.1 LTS   5.4.0-1019-raspi   containerd://1.3.3-0ubuntu2
home-rpi-2      Ready    <none>   6h22m   v1.19.2   192.168.1.209   <none>        Ubuntu 20.04.1 LTS   5.4.0-1019-raspi   containerd://1.3.3-0ubuntu2
home-rpi-3      Ready    <none>   6h17m   v1.19.2   192.168.1.194   <none>        Ubuntu 20.04.1 LTS   5.4.0-1019-raspi   containerd://1.3.3-0ubuntu2
home-rpi-4      Ready    <none>   6h11m   v1.19.2   192.168.1.145   <none>        Ubuntu 20.04.1 LTS   5.4.0-1019-raspi   containerd://1.3.3-0ubuntu2
home-server-1   Ready    master   3d12h   v1.19.2   192.168.1.111   <none>        Ubuntu 20.04.1 LTS   5.4.0-48-generic   cri-o://1.19.0
home-server-2   Ready    <none>   9h      v1.19.2   192.168.1.140   <none>        Ubuntu 20.04.1 LTS   5.4.0-48-generic   cri-o://1.19.0
```
Homelabs are cool, and I'm happy I made one. It lets me test things against a non-prod cluster, like yolo Rook configurations and advanced networking with Multus. For now, though, this post is about building one.
A few things are used to make this all work.
| What | Choice | Notes |
|---|---|---|
| Bare-metal servers | You can get these many places | Put an OS on a USB and install it from boot (or go PXE if you're feeling fancy) |
| Deployer | kubeadm | Follow the instructions for your distro |
| Container Runtime Interface (CRI) | containerd or CRI-O recommended | Same as above |
| Container Network Interface (CNI) | Kube Router | All devices on the same network |
| Storage | Rook Ceph | Raw devices with no partitions (more info) |
| Load balancing | MetalLB | Layer 2 mode with a range of IPs reserved for the LoadBalancer service type |
| Monitoring | Prometheus Operator | Storage above installed |
| GitOps | FluxCD | A git repo accessible by the cluster |
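To give a feel for how the pieces above fit together, here is a rough bootstrap sketch for the first control-plane node, written out as a script. This is an assumption-laden example, not the exact commands from my setup: the pod CIDR and the Kube Router manifest filename are placeholders you'd take from your own config and the Kube Router install docs.

```shell
# Write out a hypothetical bootstrap script (assumes kubeadm, kubelet and a CRI
# are already installed per your distro's instructions).
cat > bootstrap-control-plane.sh <<'EOF'
#!/bin/sh
set -e

# Initialise the first control-plane node; the pod CIDR here is an example
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl work for the current user
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# Install the CNI; use the actual manifest from the Kube Router install docs
kubectl apply -f kubeadm-kuberouter.yaml
EOF
chmod +x bootstrap-control-plane.sh
```

Worker nodes then join with the `kubeadm join` command that `kubeadm init` prints at the end.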
Following the docs for each of the pieces above makes for a great home cluster. You can even go as far as running virtual machines with KubeVirt once you have it all running.
Now, this isn’t really a step-by-step how-to guide; it’s more of a reference. That said, you’re free to get started with a quick gist for Ubuntu 20 (likely applicable to all DEB-based systems, though untested) to prep your nodes, and my GitOps repo that includes everything above. You’ll need to fork it, though, and apply your own configuration based on your needs.
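One configuration you will definitely have to adapt is the MetalLB address pool. A hypothetical layer 2 pool, in the ConfigMap format MetalLB used around this era (pre-CRD, roughly v0.9 alongside Kubernetes 1.19), might look like this; the `192.168.1.240-250` range is an example, so pick addresses your DHCP server won't hand out on your own LAN:

```shell
# Write an example MetalLB layer 2 address-pool ConfigMap to a local file.
# The namespace/name pair matches MetalLB's default ConfigMap location;
# the address range is a made-up example for a 192.168.1.0/24 home network.
cat > metallb-config.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
EOF
# kubectl apply -f metallb-config.yaml   # run this against your cluster
```

With a pool like that in place, any Service of type `LoadBalancer` gets an IP from the reserved range.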
Some Extra Gotchas for ARM
If you’re going to run a cluster of Raspberry Pis, or a mixed cluster of Raspberry Pis and normal servers (an ARM/AMD64 mix), you’ll want to do a few things.
- Ensure you have `cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1` added to `/boot/firmware/cmdline.txt`, or the equivalent for your distro. This enables the cgroup features containers need on those nodes.
- Running a mixed cluster with Rook Ceph isn’t possible at this time unless you use a different set of images. Check out the raspbernetes multi-arch repo for more info on that here
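The cmdline edit from the first bullet can be scripted so it only applies once. This sketch assumes the Ubuntu-on-Pi layout, where the kernel args live as a single line in `/boot/firmware/cmdline.txt`; it runs against a temp copy here so it's safe to try anywhere, and on a real node you'd point `CMDLINE` at the actual file (and reboot afterwards):

```shell
# Flags the cgroup bullet calls for
FLAGS="cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1"

# Demo against a temp copy with example contents; on a node, use
# CMDLINE=/boot/firmware/cmdline.txt instead
CMDLINE=$(mktemp)
echo "console=serial0,115200 root=LABEL=writable rootwait fixrtc" > "$CMDLINE"

# cmdline.txt must stay a single line, so append to the end of line 1,
# and only if the flags aren't already present
grep -q 'cgroup_memory=1' "$CMDLINE" || sed -i "1 s|\$| $FLAGS|" "$CMDLINE"
cat "$CMDLINE"
```

Running it a second time is a no-op thanks to the `grep` guard.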
I hope this serves as a starting point for your adventures!