Table of Contents

  1. What will we build?
  2. Motivation
  3. Kubernetes Crashcourse
    3.1. đŸ–„ïž Nodes
    3.2. 🧭 Namespaces
    3.3. đŸ§± Pods
      3.3.1. Example YAML
    3.4. 📩 Deployments
      3.4.1. Example YAML
    3.5. 🌐 Services
      3.5.1. Example YAML
    3.6. đŸ’Ÿ Persistent Volume Claims
      3.6.1. Example YAML
    3.7. Kubernetes Distributions
  4. VPS Choice
    4.1. CPU Architecture
    4.2. Number of CPU cores
    4.3. RAM
    4.4. Storage / Disk
  5. Node Operating System
  6. Security Considerations for Kubernetes Nodes
    6.1. Keep the Kubernetes Nodes Updated Automatically
    6.2. Node-to-Node Communication Must Be Private
    6.3. Public-Facing Nodes Need Tight Firewalls
    6.4. SELinux: Don’t Disable It
    6.5. Backups
  7. Node Provisioning
    7.1. Setup Base-System
    7.2. Enable dnf-automatic
    7.3. Configure Network connection
      7.3.1. Network check
    7.4. Install microk8s
    7.5. Configure Firewall
    7.6. Combine nodes into one HA K8s Cluster

What will we build?

We set up a Kubernetes cluster with three nodes. Each node runs AlmaLinux as its base OS, configured securely with fundamental hardening (SELinux enabled, firewall active, SSH, automatic updates). We then install MicroK8s on each node and combine the nodes into one HA cluster. Once the cluster is set up, we configure MetalLB as load balancer, install NGINX as ingress controller, and then use cert-manager to issue TLS certificates to our workloads.

Motivation

My grandfather often said, “To truly appreciate the easy way, you must first tackle a problem the hard way; only then will its value become clear.” A strange way to kick off a blog post about Kubernetes, right? Stick with me, I promise it’ll make sense in a few minutes. But first, let me take a selfie: I’d love to walk you through my experiences with hosting things, to show why I never knew, but always needed, Kubernetes.

It all started with my brother and me hacking away on local PHP websites using XAMPP. We’d build small projects, test them locally, and when we were happy, we’d upload everything via FTP to some low-cost web host. Simple times—but frustrating too. We’d often run into issues caused by differing PHP versions locally and on the server, or find ourselves debugging quirks introduced during upload. Still, it was fun. Until it wasn’t.

Looking for better tools, we tried out CMSs like WordPress and Drupal. They promised faster setups and more functionality out of the box. But with that came a new beast: vulnerabilities. We found ourselves spending more time patching plugins and closing security holes than actually building cool things.

Eventually, I decided to go deeper: I rented a VPS and began self-hosting services like GitLab Community Edition, Mattermost, and Grafana. Bare-metal Linux gave me control, but it came at a cost—systems slowly bloated over time, cluttered with leftover configs and packages from manual installs. Maintaining order became a full-time job.

That’s when containers entered the picture. Docker—and later Podman—let me isolate apps, remove installation traces, and wipe the slate clean with a single command. I began containerizing all my self-hosted apps, writing custom systemd units for autostart, implementing health checks, and ensuring everything came back up on reboot.

Then came docker-compose, a revelation for multi-container setups—finally, I could define my app stack declaratively. But as I added more services, a single VPS wasn’t enough. I introduced a load balancer and deployed apps to multiple VPSes using round-robin balancing. It worked
 kind of. But resource utilization was suboptimal, and managing updates across multiple servers became another rabbit hole.

I gave Docker Swarm a try—it looked promising, but the deeper I went, the more I realized it wasn’t production-ready. That’s when I finally landed on Kubernetes. It seemed daunting at first, but eventually I set up a self-hosted MicroK8s cluster in high-availability mode with 6 nodes. And for the first time, things felt right. Resources were pooled, apps were scheduled where they fit best, and the cluster just worked.

Today, I run a variety of workloads on that cluster:

  • Websites for open source projects, including public download mirrors
  • Privacy-focused frontends like Nitter, Redlib, and Searx
  • Sites for small businesses and local communities
  • My own blog, of course
  • And a suite of internal monitoring and analytics tools—Grafana, Prometheus, the ELK stack, and more

This journey—from clunky FTP deployments to full-blown container orchestration—was anything but smooth. But in hindsight, my grandfather’s words ring true. Kubernetes isn’t something you start with—it’s something you grow into. It rewards bottom-up knowledge: understanding the layers beneath, the pain points, and the limitations of simpler tools. Only after wrestling with the chaos do you truly appreciate the simplicity and power Kubernetes brings.

Kubernetes Crashcourse

Kubernetes is incredibly powerful—but at first glance, it can feel like you’re drowning in terminology. When I first approached it, I seriously considered retreating back to the comfort of docker-compose. But here’s the truth: once you get past the jargon and understand the core concepts, Kubernetes is actually quite logical—and even elegant.

In this section, we’ll break down Kubernetes into its essential building blocks. We’ll do this by writing small YAML files and feeding them to Kubernetes using its command-line interface, kubectl. Each YAML file defines a specific component—like a Pod, Deployment, or Service—and together, they describe what your application should look like, how it should run, and how it should behave.

To keep things approachable, we’ll walk through a simple example: deploying a basic web server using Nginx, listening on port 80. As we go, we’ll explore each Kubernetes concept one step at a time. By the end, you’ll have a complete, containerized, and “Kubernetized” web server setup—including persistent storage and multiple replicas for high availability.

đŸ–„ïž Nodes

A Kubernetes node is a single machine—either a physical server or a virtual private server (VPS)—that runs part of your Kubernetes cluster. Each node provides the runtime environment for your workloads: it’s where your containers actually run.

There are two main types of nodes:

  ‱ Control plane nodes (also called master nodes): These are responsible for the brainwork of the cluster. They run the Kubernetes control plane components like the API server, scheduler, controller manager, and etcd (which stores cluster state). They don’t typically run your applications.
  • Worker nodes: These are the muscle. They run your application containers inside Pods, and they report to the control plane.

In a highly available (HA) Kubernetes setup, it’s common to run multiple nodes to ensure redundancy and resilience. Kubernetes takes care of intelligently scheduling your workloads across available worker nodes, automatically restarting Pods if they fail, and balancing traffic between them. Interestingly, if no separate worker nodes are present, Kubernetes can also schedule workloads directly onto the control plane nodes. This means you don’t strictly need dedicated worker nodes—an HA cluster can consist of just three nodes, each hosting both the control plane components and your application workloads. While this setup requires careful resource planning, it’s perfectly viable for home labs or small-scale self-hosted clusters.

💡 Tip: To run a highly available Kubernetes cluster, you need a minimum of three nodes. That means you’ll need to provision three VPS instances or physical machines to get started with HA.
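
Once your cluster is running, you can list its nodes and see their status and roles at any time; for example (using MicroK8s’ bundled kubectl, which we install later in this post):

# list all nodes with their status, roles, internal IPs, and OS details
microk8s kubectl get nodes -o wide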

🧭 Namespaces

Namespaces are a way to organize resources within a Kubernetes cluster. They let you logically divide your cluster into isolated environments, even though everything runs on the same underlying infrastructure.

A Namespace tells Kubernetes:

  • Where a resource “lives” (like a folder in a file system)
  • How to scope access with RBAC (role-based access control)
  • How to separate environments (e.g., dev, staging, prod)
  • How to avoid name conflicts (you can reuse the same resource name in different namespaces)

Namespaces make it easier to manage multi-team, multi-app, or multi-tenant clusters.

✅ You can apply quotas, limits, and policies at the namespace level—giving you fine-grained control over resource usage and access.

Example YAML

apiVersion: v1
kind: Namespace
metadata:
  name: io-rtrace-blog-k8s-crashcourse

This YAML file defines a Kubernetes Namespace named io-rtrace-blog-k8s-crashcourse. The apiVersion is v1, and the kind is Namespace, which tells Kubernetes to create a new logical space in the cluster.

Once the namespace exists, you can create resources like Deployments, Services, or ConfigMaps within that namespace using the -n flag (e.g., kubectl apply -f deployment.yaml -n io-rtrace-blog-k8s-crashcourse). Resources inside this namespace are isolated from those in other namespaces by default.

Namespaces are not full security boundaries, but they are essential for organizing workloads, applying policies, and avoiding conflicts. For example, two teams can each deploy a service named web in their own namespaces without colliding.

If your cluster has multiple users or applications, Namespaces are a best practice for clean separation and control.
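
As a quick, hedged example of how this looks in practice (assuming the manifest above is saved as namespace.yaml):

# create the namespace defined above
kubectl apply -f namespace.yaml

# confirm it exists
kubectl get namespaces

# optionally make it the default namespace for the current kubectl context,
# so you can drop the -n flag in later commands
kubectl config set-context --current --namespace=io-rtrace-blog-k8s-crashcourse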

đŸ§± Pods

A Pod is the smallest deployable unit in Kubernetes. Think of it as a wrapper around one or more containers that should always run together.

  • Usually, a pod runs a single container (like your web app).
  • Sometimes, it runs multiple tightly coupled containers (e.g., a sidecar for logging or a reverse proxy).
  • All containers in a pod share the same network IP and volume mounts.

💡 Tip: If you’re coming from Docker, think of a Pod as a mini-VM that runs your container(s) with some extra glue.

Example YAML

apiVersion: v1
kind: Pod
metadata:
  name: io-rtrace-blog-k8s-crashcourse-webserver-pod
spec:
  containers:
    - name: webserver
      image: docker.io/nginx:latest
      ports:
        - containerPort: 80

This YAML file defines a basic Kubernetes Pod, which is the smallest deployable unit in the Kubernetes ecosystem. It starts by specifying the API version (v1), which tells Kubernetes what version of its internal API to use for interpreting this object. The kind is set to Pod, meaning this configuration is describing a Pod resource. Under metadata, the Pod is given the name io-rtrace-blog-k8s-crashcourse-webserver-pod (it can obviously be any name you wish). The spec section defines what the Pod should actually run. In this case, it lists a single container with the name webserver, using the nginx:latest image pulled from Docker Hub. The container is set to expose port 80, which is the default HTTP port used by nginx. However, this configuration only exposes the port inside the Pod itself; it does not make it accessible from outside the cluster. That step would require additional resources like a Service which we’ll get to in a moment.
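
As a small, hedged example of how you could try this Pod out (assuming the manifest is saved as pod.yaml and the namespace from above exists):

# create the Pod in our namespace
kubectl apply -f pod.yaml -n io-rtrace-blog-k8s-crashcourse

# wait until the Pod reaches the Running state
kubectl get pods -n io-rtrace-blog-k8s-crashcourse

# forward local port 8080 to port 80 inside the Pod, then test it with curl
kubectl port-forward pod/io-rtrace-blog-k8s-crashcourse-webserver-pod 8080:80 -n io-rtrace-blog-k8s-crashcourse &
curl http://localhost:8080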

📩 Deployments

Deployments are a means to declaratively run and manage Pods. You don’t create Pods directly (unless you’re debugging). Instead, you use a Deployment.

A Deployment tells Kubernetes:

  • What container to run (image, version, env vars, ports, etc.)
  • How many replicas to run (e.g., 3 pods for load balancing)
  • How to update the app (rolling updates, no downtime)
  • How to keep it alive (auto-restart, auto-replace crashed pods)

Deployments make your app self-healing and scalable.

✅ You write a Deployment YAML file once, and Kubernetes handles the rest—rolling updates, restarts, crash recovery.

Example YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: io-rtrace-blog-k8s-crashcourse-webserver-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80

This YAML file defines a Kubernetes Deployment, which is a higher-level abstraction used to manage and maintain a set of identical Pods. The apiVersion is set to apps/v1, which is the stable API version for Deployments. The kind is Deployment, indicating that this resource will ensure a specified number of Pods are running and automatically replace any that fail or become unresponsive. Under metadata, the deployment is named io-rtrace-blog-k8s-crashcourse-webserver-dep, serving as its unique identifier within the cluster. The spec section begins by declaring replicas: 3, which tells Kubernetes to always keep three Pods running based on the template provided. The selector field uses matchLabels to tell the Deployment how to identify its managed Pods—in this case, by looking for the label app: nginx. The template section defines the structure of each Pod to be created. Inside template.metadata, the Pods are given the label app: nginx, which must match the selector. Then, under template.spec, a single container is described with the name nginx, running the nginx:latest image from Docker Hub, and exposing port 80 internally within the container. This Deployment ensures that Kubernetes continuously maintains three healthy Nginx Pods and can update or restart them automatically if needed. Unlike a standalone Pod, a Deployment gives you lifecycle management features like rolling updates, rollbacks, and self-healing—making it a foundational building block for production-grade workloads.
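
A few illustrative commands for working with this Deployment (assuming it is saved as deployment.yaml):

# create or update the Deployment
kubectl apply -f deployment.yaml -n io-rtrace-blog-k8s-crashcourse

# check that all three replicas come up
kubectl get deployments,pods -n io-rtrace-blog-k8s-crashcourse

# scale to five replicas without editing the YAML
kubectl scale deployment io-rtrace-blog-k8s-crashcourse-webserver-dep --replicas=5 -n io-rtrace-blog-k8s-crashcourse

# watch the progress of a rolling update after you change the image
kubectl rollout status deployment/io-rtrace-blog-k8s-crashcourse-webserver-dep -n io-rtrace-blog-k8s-crashcourse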

🌐 Services

Pods are ephemeral—they come and go. Their IP addresses constantly change. That’s where Services come in.

A Service gives your pods:

  • A stable IP or DNS name
  • Load balancing between pods (round-robin)
  • A way to expose your app (internally or to the internet)

There are 3 common types of Services:

  • ClusterIP: Internal-only access (default)
  • NodePort: Expose via a static port on every node (simple, not ideal)
  • LoadBalancer: Exposes your app via a cloud/network load balancer (or MetalLB in self-hosted setups)

🔗 Services connect clients to your pods, even as pods are replaced or scaled up/down.

Example YAML

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

This YAML file defines a Kubernetes Service of type LoadBalancer, which is used to expose the NGINX Deployment to external traffic outside the Kubernetes cluster. The apiVersion is set to v1, and the kind is Service, indicating that this configuration creates a network endpoint for accessing a group of Pods. Under metadata, the service is given the name nginx-service, which is how you’ll refer to it within the cluster. In the spec section, the selector matches Pods that have the label app: nginx—these are the Pods created by the NGINX Deployment. This tells Kubernetes which Pods should receive traffic routed through this Service. The ports section specifies that the Service listens on port 80 and forwards that traffic to port 80 on the target Pods, using TCP as the protocol. The key field here is type: LoadBalancer, which instructs Kubernetes to provision an external IP address (if supported by the underlying infrastructure or configured via a load balancer like MetalLB in bare-metal environments). This allows users outside the cluster to access the service using a stable, routable IP address. Once applied, traffic to that external IP on port 80 will be distributed across the three NGINX Pod replicas, enabling simple round-robin load balancing and redundancy. This is an essential pattern for exposing applications publicly and is often used in conjunction with ingress controllers or external DNS configurations.
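
To apply the Service and see what address it gets, a hedged example (without MetalLB or a cloud load balancer, the external IP will simply stay pending):

# create the Service (saved as service.yaml)
kubectl apply -f service.yaml -n io-rtrace-blog-k8s-crashcourse

# show the Service, its cluster IP, and (once a load balancer is available) its external IP
kubectl get service nginx-service -n io-rtrace-blog-k8s-crashcourse

# list the Pod IPs the Service currently routes traffic to
kubectl get endpoints nginx-service -n io-rtrace-blog-k8s-crashcourse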

đŸ’Ÿ Persistent Volume Claims

Containers are stateless by default—they lose data if restarted. But some apps (like databases) need to store files reliably.

Kubernetes uses:

  • PersistentVolume (PV): Actual disk storage (local disk, NFS, Ceph, etc.)
  • PersistentVolumeClaim (PVC): A request for storage by your app

Your app declares a PVC, and Kubernetes connects it to a PV that meets the request. The PVC is then mounted into your Pod.

Example YAML

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-html
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Then we also need to update our Deployment definition to reference the newly created PVC.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: io-rtrace-blog-k8s-crashcourse-webserver-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: html-volume
              mountPath: /usr/share/nginx/html
          ports:
            - containerPort: 80
      volumes:
        - name: html-volume
          persistentVolumeClaim:
            claimName: nginx-html

This configuration introduces persistent storage to the NGINX Deployment by using a Kubernetes PersistentVolumeClaim, or PVC. The first YAML defines a PVC named nginx-html, which requests 1Gi of storage with the ReadWriteOnce access mode, meaning the volume can be mounted for read/write by a single node at a time. This claim will dynamically provision storage, assuming the cluster has a default StorageClass configured (which MicroK8s does, if enabled).

In the updated Deployment manifest, the NGINX container mounts the volume at /usr/share/nginx/html, which is the directory from which NGINX serves static content by default. The volumeMounts section attaches the volume inside the container, while the volumes section in the Pod spec connects the volume to the previously defined PVC. This means that any static HTML file placed into the PVC’s storage will be served by NGINX when you access the service. It also ensures that the content persists across pod restarts or re-creations, which is critical in Kubernetes where pods are ephemeral by nature.

To make this work fully, you’d typically pre-load the static HTML file into the PVC either by manually writing to the volume from another pod, using an init container, or binding it to a hostPath in development setups.
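
One simple way to do that (a sketch using the labels and mount path defined above) is to write a file into the volume through one of the running Pods:

# grab the name of one nginx Pod that has the PVC mounted
POD=$(kubectl get pods -l app=nginx -o jsonpath='{.items[0].metadata.name}')

# write a simple index.html into the mounted volume
kubectl exec "$POD" -- sh -c 'echo "<h1>Hello from a PVC</h1>" > /usr/share/nginx/html/index.html'

# verify nginx now serves the file
kubectl exec "$POD" -- cat /usr/share/nginx/html/index.html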

Kubernetes Distributions

Before diving into MicroK8s, it’s worth stepping back and asking: What actually is Kubernetes? And why are there so many different ways to run it?
At its core, Kubernetes is a container orchestration platform. That means it helps you run, scale, and manage containers—like Docker or Podman containers—across a group of machines, as if they were a single system. It handles the hard stuff for you: rolling updates, automatic restarts, scaling up and down, load balancing, and ensuring your apps stay online even when things go wrong.

But here’s the twist: Kubernetes isn’t a single binary you just install and run. It’s a complex system made up of many components—an API server, scheduler, controller manager, etc.—and they all need to be installed, configured, and wired together correctly. Out of the box, Kubernetes is more of a specification than a product. That’s where distributions come in.

A Kubernetes distribution packages all the necessary components together—often adding installation tools, configuration defaults, extra security layers, and support for different environments (cloud, on-prem, edge, etc.). It’s similar to how Linux itself is just a kernel, but you install Ubuntu, Fedora, or Arch because you want a complete system with sane defaults.

Over time, different distributions have emerged to serve different needs.

For example, K3s is a lightweight Kubernetes distribution designed for edge computing, IoT, or small VPS setups. It strips out non-essential components and reduces memory usage, making it ideal for resource-constrained environments. It’s popular for its simplicity and ease of setup—great for labs, Raspberry Pi clusters, or quick demos.

MicroK8s, the one I use and recommend in this post, is Canonical’s take on a streamlined, zero-ops Kubernetes. It’s minimal by default but modular—so you can enable features like ingress, DNS, storage, and monitoring with a single command. It runs natively on Ubuntu (but also supports other OSes), and it’s designed to make it easy to bootstrap both single-node setups and full high-availability clusters. It strikes a nice balance between being lightweight and still supporting “real” production workloads.

Then there’s OpenShift, Red Hat’s enterprise-grade Kubernetes platform. It comes with a lot more than just Kubernetes—it includes a full developer workflow, built-in CI/CD tools, advanced security policies, and extensive GUI dashboards. It’s designed for large organizations that need support, governance, and a polished user experience across teams. However, it’s also much heavier and more opinionated than vanilla Kubernetes.

On the cloud side, Azure Kubernetes Service (AKS) is Microsoft’s managed Kubernetes offering. Similar to AWS’s EKS or Google’s GKE, it abstracts away much of the operational complexity. You don’t worry about installing or upgrading the control plane—Microsoft handles it. This is ideal if you’re already invested in the Azure ecosystem and want to spin up clusters quickly without diving into the nuts and bolts. But like with most managed services, you give up some low-level control and pay for convenience.

Finally, there’s kind—short for “Kubernetes IN Docker.” It’s not meant for production at all. Instead, it’s designed for local testing and CI pipelines. Kind runs Kubernetes clusters entirely in Docker containers, making it incredibly easy to spin up disposable clusters for testing out configurations or developing against real Kubernetes APIs without needing a VM or cloud account.

Each distribution exists to solve a different problem. There’s no single “best” Kubernetes—it’s about finding the right balance of control, simplicity, scalability, and overhead for your use case. For self-hosters and homelabbers who want a real Kubernetes experience that “just worksℱ”, MicroK8s hits a particularly sweet spot. And that’s where this post really begins.

VPS Choice

CPU Architecture

ARM-based machines are generally more cost-effective, often offering similar performance to x86_64 systems at a lower price point. This makes ARM a compelling choice for home labs and budget-conscious self-hosters. However, even in 2025, not all container images are built with ARM compatibility in mind—some lack multi-architecture support, which can lead to frustrating compatibility issues. If you’re certain that the workloads you plan to run support ARM (such as NGINX, Grafana, Prometheus, or most privacy frontends), then there’s no downside to choosing ARM. But if you’re unsure about image compatibility—or simply want a more plug-and-play experience without having to troubleshoot architecture mismatches—it’s safer to stick with x86_64 (AMD/Intel). It may cost a bit more, but it maximizes compatibility and saves time in the long run.
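
If you want to verify ARM support for a specific image before deciding, one way (assuming skopeo and jq are installed) is to look at the architectures listed in its manifest:

# print the CPU architectures this image is published for (e.g. amd64, arm64)
# note: single-architecture images return a plain manifest without a .manifests list
skopeo inspect --raw docker://docker.io/library/nginx:latest | jq -r '.manifests[].platform.architecture'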

Number of CPU cores

  • đŸȘ« Minimum: 2 cores
  • 🔋 Recommended: >= 4 cores per node

Core Kubernetes components like kubelet, containerd, and etcd consume a baseline amount of CPU resources just to keep the cluster running smoothly. If you plan to run multiple replicas, or host compute-heavy workloads—such as applications that perform intensive processing, data analysis, or simulations—you’ll want at least 4 CPU cores per node. While this isn’t a hard technical requirement, 4 cores is a practical baseline that provides enough headroom for most workloads to perform reliably without bottlenecking. It’s not a precise rule, but based on real-world experience, most self-hosted applications will run comfortably within that range.

RAM

  • đŸȘ« Minimum: 2 GB (only for test clusters or very lightweight apps)
  • 🔋 Recommended: >= 8 GB per node

For production-ish HA setups, 8 GB RAM per node is a great sweet spot. If you run memory-intensive applications, huge databases, or applications that cache lots of data in memory, you’ll likely need more than 8 GB.

Storage / Disk

Use SSD-based storage (not HDD), ideally with decent IOPS.
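
The control plane’s datastore is particularly sensitive to disk latency, so it’s worth sanity-checking a candidate VPS before building a cluster on it. A rough, hedged check with fio (assuming you install it first) could look like this:

sudo dnf install fio -y

# quick 4k random-write test to get a feel for the disk's IOPS and latency
fio --name=disk-check --directory=/var/tmp --rw=randwrite --bs=4k --size=512m --ioengine=libaio --iodepth=16 --runtime=30 --time_based --end_fsync=1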

Node Operating System

One of the most underrated but critical decisions when setting up a high-availability Kubernetes cluster—especially in a home-lab or low-cost VPS environment—is picking the right underlying Linux distribution. It’s not just about “what works”; it’s about long-term maintainability, update cycles, and system stability. After weighing my options and doing some testing, I chose AlmaLinux, and I haven’t looked back.

AlmaLinux is a stable, enterprise-grade, community-driven clone of Red Hat Enterprise Linux (RHEL). It combines the long-term support guarantees of RHEL with the flexibility of a free and open-source system. One of its biggest advantages for home-labbers is its 10+ year lifecycle. This means you’re not forced to reinstall or reconfigure your nodes every couple of years just to stay supported—something that quickly becomes painful in Kubernetes environments where every node is part of a delicate ecosystem.

Unlike Fedora, which moves fast and breaks things (great for testing, not for infrastructure), AlmaLinux is conservative and focused on stability. It supports Snap packages out of the box (or with a simple install), which is essential for running MicroK8s. And since it’s RHEL-compatible, you also benefit from powerful tooling like dnf, dnf-automatic for unattended upgrades, and robust SELinux security, which is tightly integrated and production-proven. All of this contributes to a hardened base system that can confidently run critical services long-term without constant babysitting.

You might ask, why not just go with Ubuntu, the default recommendation for MicroK8s? The answer is simple: Ubuntu is too bloated out of the box. For minimal VPS setups where every megabyte counts, AlmaLinux is leaner, more focused, and just easier to strip down and maintain. Debian, while popular for its minimalism, unfortunately lags behind when it comes to security patch velocity, which is a serious concern for anyone exposing services to the internet.

In short, AlmaLinux hits the sweet spot: stable, secure, lightweight enough, supports Snap (and therefore MicroK8s), and doesn’t force you into an upgrade treadmill. If you’re building a self-hosted HA Kubernetes cluster and want to “set it and forget it” without compromising on security or performance, AlmaLinux is a rock-solid choice.

Security Considerations for Kubernetes Nodes

Running a Kubernetes cluster—even in a home-lab or on cheap VPS instances—still means you’re operating a distributed system exposed to the internet. That comes with real security responsibilities. Here’s how I approach securing my MicroK8s nodes without going overboard, while still maintaining reliability and automation.

Keep the Kubernetes Nodes Updated Automatically

The foundation of any secure system is staying up-to-date. On AlmaLinux, this is effortless thanks to dnf-automatic, which enables unattended security updates. I configure it to run regularly and also allow automatic reboots when necessary. To avoid downtime in a highly available cluster, reboots are staggered across nodes using a simple randomized delay mechanism—this way, only one node goes down at a time and a majority of nodes stays online, keeping the cluster functional during maintenance windows. Kubernetes is designed to tolerate node failures, and by making reboots predictable and isolated, you’re simply making use of its strengths.

Node-to-Node Communication Must Be Private

MicroK8s nodes rely on constant communication between each other to share cluster state, control plane data (via etcd), and workload traffic. This inter-node traffic should never be exposed to the public internet. If you’re using cloud VPS providers, always use private networking/VLANs for internal Kubernetes traffic. If that’s not an option, WireGuard is an excellent fallback. A full mesh WireGuard setup, where every node can talk securely to every other node, ensures encryption, privacy, and proper cluster behavior—even across different datacenters or providers. It’s more secure than relying on cloud firewalls alone and is bandwidth-efficient.
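
For illustration only, a minimal WireGuard config for node 1 of such a full mesh might look like the following (the interface name, keys, and endpoints are placeholders; generate real keys with wg genkey and mirror the config on the other nodes):

# /etc/wireguard/wg0.conf on node 1 (10.0.1.11)
[Interface]
Address = 10.0.1.11/24
PrivateKey = <node-1-private-key>
ListenPort = 51820

[Peer]
# node 2
PublicKey = <node-2-public-key>
Endpoint = <node-2-public-ip>:51820
AllowedIPs = 10.0.1.12/32

[Peer]
# node 3
PublicKey = <node-3-public-key>
Endpoint = <node-3-public-ip>:51820
AllowedIPs = 10.0.1.13/32

You would then bring the tunnel up with sudo systemctl enable --now wg-quick@wg0 (wireguard-tools can be installed from EPEL on AlmaLinux).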

Public-Facing Nodes Need Tight Firewalls

If your VPS nodes are directly exposed to the internet, hardening the network layer is critical. AlmaLinux ships with firewalld, which makes setting up and managing firewall rules straightforward. I limit ingress to the bare essentials: just ports 80/tcp and 443/tcp for web traffic, and 22/tcp for SSH access (mostly for Git operations and remote admin tasks). Everything else is blocked by default. This not only reduces the attack surface but also helps contain any potential misconfiguration or container compromise.

SELinux: Don’t Disable It

Finally, and this is important: leave SELinux enabled. It can be tempting to turn it off when you’re debugging something obscure, but that’s usually a sign of a misconfigured container or volume—not a fault of SELinux itself. AlmaLinux, being RHEL-compatible, has great SELinux integration, and running in enforcing mode adds a critical layer of protection. It confines processes, limits what services can access which files or ports, and prevents privilege escalation inside compromised containers. In a Kubernetes environment, where multiple services and workloads share the same node, SELinux offers a level of defense that’s difficult to replicate otherwise.
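
In day-to-day practice, that mostly means checking the mode and investigating denials instead of switching SELinux off; for example (assuming auditd is running, which is the AlmaLinux default):

# confirm SELinux is running in enforcing mode
getenforce

# inspect recent SELinux denials (AVCs) when debugging a misbehaving workload
sudo ausearch -m avc -ts recent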

Backups

There are several ways to handle backups. Some rely on automated hypervisor-level snapshots provided by their VPS host, which is a convenient first layer of protection. Others—including myself—prefer to set up an additional, independent backup procedure. In my case, I use restic, a tool that securely and efficiently creates encrypted snapshots of your data. These backups can be stored on a variety of supported backends, such as SFTP servers, local directories, object stores like S3 or MinIO, or even mounted remote filesystems.
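
As a hedged sketch of what that can look like (the repository location and backup paths are placeholders; restic reads the repository password interactively or from the RESTIC_PASSWORD environment variable):

# initialize an encrypted restic repository on an SFTP server (run once)
restic -r sftp:backup@backup.example.org:/srv/restic/k8s-n01 init

# back up the data you care about; restic encrypts and deduplicates the snapshots
restic -r sftp:backup@backup.example.org:/srv/restic/k8s-n01 backup /var/snap/microk8s/common /etc

# list the snapshots stored in the repository
restic -r sftp:backup@backup.example.org:/srv/restic/k8s-n01 snapshots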

Node Provisioning

Every node you want to add to the cluster should follow the same setup. This section shows you rudimentary bash commands that you can use to get a basic setup for a single node.

Setup Base-System

sudo dnf upgrade -y
sudo dnf install epel-release -y

# we need kernel-modules for a proper functioning microk8s
sudo dnf install kernel-modules -y

# Set modern cryptography policy
sudo update-crypto-policies --set FUTURE

# disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Enable dnf-automatic

sudo dnf install git dnf-automatic -y
sudo systemctl enable --now dnf-automatic.timer

Configure the [commands] section in /etc/dnf/automatic.conf to apply all updates.
On node 1, set random_sleep to 0; on node 2, set it to 1024; and on node 3, set it to 5012. This way, not all nodes go down at the same time.

[commands]
upgrade_type = default
random_sleep = 1024
network_online_timeout = 500
download_updates = yes
apply_updates = yes
reboot = when-needed
reboot_command = "shutdown -r +5 'Rebooting after applying package updates'"

Configure Network connection

Each node has a second network interface—separate from the public one—used for private communication between nodes. This can be achieved using a provider-managed VLAN or a WireGuard mesh (setting this up is beyond the scope of this article; consult your VPS host’s documentation to see how private networking is supported). For the purposes of this setup, we’ll place all nodes in the 10.0.1.0/24 subnet.

# on the first node use 10.0.1.11; on the second node use 10.0.1.12; on the third node use 10.0.1.13
# the connection for your VLAN interface is most likely not called "Wired connection 1";
# use 'nmcli connection show' to get the names of all connections
sudo nmcli connection modify "Wired connection 1" ipv4.addresses 10.0.1.11/24
sudo nmcli connection modify "Wired connection 1" ipv4.method manual
sudo nmcli connection modify "Wired connection 1" ipv4.gateway 10.0.1.1
sudo nmcli connection modify "Wired connection 1" connection.autoconnect yes
sudo nmcli connection up "Wired connection 1"
# on each VPS add a new firewall zone for node-to-node communication, and bind it to the interface (e.g. VLAN)
sudo firewall-cmd --permanent --new-zone=k8s
sudo firewall-cmd --permanent --zone=k8s --set-target=ACCEPT
sudo firewall-cmd --permanent --zone=k8s --change-interface=eth1
sudo firewall-cmd --reload

For convenience, I also recommend changing /etc/hosts to reflect this. This way you can refer to your machines by hostname without needing a dedicated DNS server. Unless you’re planning to add/change/remove entries frequently, this is really helpful.

127.0.0.1        localhost localhost.localdomain localhost4 localhost4.localdomain4
::1              localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.1.11        k8s-n01 
10.0.1.12        k8s-n02
10.0.1.13        k8s-n03

Network check

#!/usr/bin/env bash

nodes=(
    "10.0.1.11"
    "10.0.1.12"
    "10.0.1.13"
    "k8s-n01"
    "k8s-n02"
    "k8s-n03"
)

echo "Connectivity Test to Nodes                                     raffael@rtrace.io"
echo "================================================================================"
echo "Output Format: Node (IP or Hostname); ping status; traceroute status;           "
echo "Starting Connectivity Test ..."

for node in "${nodes[@]}"; do
    echo -n "  Node: $node"

    # execute ping against host
    if ping -c 1 -W 2 "$node" &> /dev/null; then
        echo -n "✅ "
    else
        echo -n "❌ "
    fi

    # execute traceroute against host
    if traceroute -w 2 "$node" &> /dev/null; then
        echo -n "✅ "
    else
        echo -n "❌ "
    fi

    echo ""
done

Install microk8s

sudo dnf install snapd -y
sudo systemctl enable --now snapd.socket
sudo ln -s /var/lib/snapd/snap /snap
sudo snap install microk8s --classic --channel=latest/stable

sudo usermod -a -G microk8s $USER
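# note: the new group membership only takes effect after you log out and back in (or run 'newgrp microk8s')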
mkdir -p ~/.kube
chmod 0700 ~/.kube

microk8s status --wait-ready

Configure Firewall

Calico is a CNI (= Container Network Interface) plugin used for pod networking in Kubernetes. It uses VXLAN tunnels and virtual interfaces like cali+ to move traffic between pods on different nodes.

sudo firewall-cmd --permanent --new-zone=calico
sudo firewall-cmd --permanent --zone=calico --add-masquerade 
sudo firewall-cmd --permanent --zone=calico --set-target=ACCEPT
sudo firewall-cmd --permanent --zone=calico --add-interface=vxlan.calico
sudo firewall-cmd --permanent --zone=calico --add-interface="cali+"
sudo firewall-cmd --reload

Now a little bit of cleanup.

# remove irrelevant services
sudo firewall-cmd --permanent --zone=public --remove-service=cockpit
sudo firewall-cmd --permanent --zone=public --remove-service=dhcpv6-client

# set default to reject all incoming traffic
sudo firewall-cmd --permanent --zone=public --set-target=REJECT

# ... except HTTP and HTTPS port (add more if you need it)
sudo firewall-cmd --permanent --zone=public --add-port=80/tcp
sudo firewall-cmd --permanent --zone=public --add-port=443/tcp

# apply the changes
sudo firewall-cmd --reload

Combine nodes into one HA K8s Cluster
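
With all three nodes provisioned identically, clustering them is straightforward. Here is a minimal sketch of the MicroK8s join workflow: run add-node on the first node, then paste the printed join command on the others, preferably using the private 10.0.1.x addresses.

# on node 1: print a join token and the matching join command
microk8s add-node

# on node 2 and node 3: run the join command printed above, e.g.
# microk8s join 10.0.1.11:25000/<token>

# on any node: verify that all three nodes are in the cluster and that HA is enabled
microk8s kubectl get nodes
microk8s status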


Thank you for reading

I hope you enjoyed reading this article. Maybe it was helpful to you, maybe it wasn't? Maybe you learned something new? Or did you dislike the article? I'd be glad to hear your opinions. Your feedback is much appreciated and very welcome. Either (anonymously) write a comment in the section below, or send a mail to blog@rtrace.io.