Reproducible k3s clusters and deployments in pure Nix.
This is an example repository that shows how to set up a lightweight k3s cluster together with Kubernetes resources in pure Nix. It provides configurations to run a Node exporter DaemonSet, a Prometheus Deployment, a Grafana Deployment and an nginx Helm chart in a cluster with two nodes (server & agent). It also deploys secrets via sops-nix.
$ kubectl get nodes -owide
NAME     STATUS   ROLES                  AGE    VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION   CONTAINER-RUNTIME
agent    Ready    <none>                 130m   v1.32.1+k3s1   192.168.1.1   <none>        NixOS 25.05 (Warbler)   6.12.16          containerd://1.7.23-k3s2
server   Ready    control-plane,master   129m   v1.32.1+k3s1   192.168.1.2   <none>        NixOS 25.05 (Warbler)   6.12.16          containerd://1.7.23-k3s2

K3s offers several useful features that enable the management of Kubernetes resources directly
through the filesystem. Additionally, the NixOS k3s module provides convenient options to integrate
these features into your NixOS configuration. For instance, the services.k3s.manifests option lets
you configure auto-deploying manifests (AddOns), while the services.k3s.images option lets you
specify container images that k3s imports at startup. Check out all k3s options for more information.
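As a sketch of how these two options fit together, the following NixOS module fragment declares one auto-deploying manifest and one pre-pulled image. The namespace name, image, and tag are illustrative, and the digest and hash are placeholders you would fill in for a real build:

```nix
{ pkgs, ... }:
{
  # Auto-deploying manifest (AddOn): rendered to YAML and placed in
  # k3s's manifests directory, so the resource is applied at startup.
  services.k3s.manifests.monitoring-namespace.content = {
    apiVersion = "v1";
    kind = "Namespace";
    metadata.name = "monitoring";
  };

  # Container images imported by k3s at startup. Pulling them at build
  # time means the node never has to fetch them at runtime.
  services.k3s.images = [
    (pkgs.dockerTools.pullImage {
      imageName = "docker.io/library/nginx";
      imageDigest = "sha256:..."; # replace with the real digest
      sha256 = "";                # replace with the real hash
      finalImageTag = "1.27";
    })
  ];
}
```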
This setup makes it easy to manage both infrastructure and deployments from a single repository with a single command, making it an excellent choice for home labs, air-gapped environments, CI/CD pipelines, and more.
Clusters built in this way come fully equipped with everything they need, eliminating the need to download anything at runtime. This makes it possible to run clusters in air-gapped environments or in NixOS VM tests. However, this approach is optional—you can also choose to omit container images from the configuration and let the nodes download them as needed during runtime.
The cluster and deployments are fully reproducible, particularly when the container images are included in the build. If it works in your tests, you can be confident it will run seamlessly in production as well.
K3s is lightweight Kubernetes: half the memory footprint of regular Kubernetes in a single binary of less than 100 MB.
The Kubernetes API server can be locked down, as there is no need for runtime modifications to the cluster.
Note
The container images for the test are pulled for x86_64-linux only. Thus the test currently
doesn't work on other systems, although it can be adapted for aarch64-linux by pulling the right
container images.
This flake provides an auto deploy test that starts the cluster and
checks that the deployments are healthy. You can also use an interactive test driver that lets you
explore the cluster. Build it with nix build .#checks.x86_64-linux.autoDeploy.driverInteractive
and run ./result/bin/nixos-test-driver. Start the nodes by running start_all() inside the Python
REPL. This spins up two virtual machines, a server node and an agent node, and forwards some ports
to your host (see ./tests/interactive.nix) so you can interact with the
test nodes. Run ssh root@localhost -p 20022 to access the server node (use port 10022 for the
agent) and run kubectl commands. Depending on your hardware, everything is up and running after
approximately 2 minutes.
$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
grafana-65bdd57cc7-nlkj2       1/1     Running   0          115s
hello-world-85544cf5fd-mj292   1/1     Running   0          46s
node-exporter-7x9m8            1/1     Running   0          78s
node-exporter-xsm2d            1/1     Running   0          112s
prometheus-67dcbf6f46-g47gf    1/1     Running   0          115s

The testing driver also forwards ports 80 and 443 to 20080 and 20443 respectively. Visit the
Grafana deployment at http://localhost:20080/grafana (username admin, password k3snix) and
nginx at http://localhost:20080/hello. Grafana is provisioned with two dashboards, Kubernetes API
server and Node Exporter Full.
Alternatively, build qcow2 images with nix build .#server and nix build .#agent and run them
with a tool of your choice. You can also install the configurations on real hardware. Set
services.k3s.serverAddr in agent.nix to the server IP when running outside the
NixOS test.
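For running on real hardware, the agent configuration could look roughly like the sketch below. The IP address and token path are placeholders for your environment; see agent.nix for the repository's actual configuration:

```nix
{
  services.k3s = {
    enable = true;
    role = "agent";
    # Point the agent at the server's API endpoint (replace with your
    # server's real IP when running outside the NixOS test).
    serverAddr = "https://192.168.1.2:6443";
    # Shared cluster token; path is illustrative.
    tokenFile = "/var/lib/rancher/k3s/agent-token";
  };
}
```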
Caution
Make sure not to overwrite an existing kubeconfig that you still need.
You can get a kubeconfig and use it to access the cluster externally. Copy the kubeconfig with
scp -P 20022 root@localhost:/etc/rancher/k3s/k3s.yaml ~/.kube/config and modify the server port
with sed -i 's/:6443/:26443/' ~/.kube/config.
The test may crash with really weird I/O errors. This usually means that the tmpfs, which the testing driver uses as backing storage for test VMs, has run out of space. You can increase the tmpfs size temporarily; adapt the command if necessary.
sudo mount -o remount,size=6G /run/user/1000

Important
The keys to decrypt secrets.yaml are placed in the keys directory. In a normal setup you should always keep the keys secret!
This uses sops-nix and its templates feature to deploy secrets. The idea is to create templates of Kubernetes secret resources and let sops-nix substitute placeholders with the actual secrets at activation time. For an example of how this is implemented, see ./modules/secrets.nix. This approach also works with other secret provisioning tools that support templating and custom paths.
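A minimal sketch of the templating idea, assuming a sops secret named grafana-password and hypothetical resource names; see ./modules/secrets.nix for the repository's actual implementation:

```nix
{ config, ... }:
{
  # A secret defined in secrets.yaml (name is illustrative).
  sops.secrets.grafana-password = { };

  # sops-nix replaces the placeholder with the decrypted value at
  # activation time and writes the rendered manifest to the given path,
  # where k3s picks it up as an auto-deploying manifest.
  sops.templates."grafana-secret.yaml" = {
    path = "/var/lib/rancher/k3s/server/manifests/grafana-secret.yaml";
    content = ''
      apiVersion: v1
      kind: Secret
      metadata:
        name: grafana-admin
        namespace: monitoring
      stringData:
        admin-password: ${config.sops.placeholder.grafana-password}
    '';
  };
}
```

The same pattern should carry over to any provisioning tool that can render a template to a custom path, since k3s only cares about files appearing in its manifests directory.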
To decrypt and change the secrets, run nix develop; this sets SOPS_AGE_KEY_FILE and
makes sops available. Then run sops edit secrets.yaml to
change the secrets.
Install Helm charts via the
k3s Helm controller with the
services.k3s.autoDeployCharts option. See
./modules/helm-hello-world.nix for an example.
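The shape of such a chart entry could look like the sketch below. The chart name, repository URL, version, and values are illustrative placeholders, not the repository's actual example:

```nix
{
  # Chart fetched by the k3s Helm controller and deployed at startup.
  services.k3s.autoDeployCharts.hello-world = {
    name = "hello-world";
    repo = "https://helm.github.io/examples"; # illustrative repo URL
    version = "0.1.0";
    hash = "";                  # fill in the real hash after first build
    values.replicaCount = 1;    # overrides for the chart's values.yaml
  };
}
```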
