
onedr0p/home-ops

πŸš€ My Home Operations Repository 🚧

... managed with Flux, Renovate, and GitHub Actions πŸ€–

DiscordΒ Β  TalosΒ Β  KubernetesΒ Β  FluxΒ Β  Renovate

Home-InternetΒ Β  Status-PageΒ Β  Alertmanager

Age-DaysΒ Β  Uptime-DaysΒ Β  Node-CountΒ Β  Pod-CountΒ Β  CPU-UsageΒ Β  Memory-UsageΒ Β  Power-UsageΒ Β  Alerts


πŸ’‘ Overview

This is a mono repository for my home infrastructure and Kubernetes cluster. I try to adhere to Infrastructure as Code (IaC) and GitOps practices using tools like Ansible, Terraform, Kubernetes, Flux, Renovate, and GitHub Actions.


🌱 Kubernetes

My Kubernetes cluster is deployed with Talos. It is a semi-hyper-converged cluster: workloads and block storage share the same available resources on my nodes, while a separate server with ZFS handles NFS/SMB shares, bulk file storage, and backups.

There is a template over at onedr0p/cluster-template if you want to try and follow along with some of the practices I use here.

Core Components

  • Networking & Service Mesh: cilium provides eBPF-based networking, while istio powers service-to-service communication with L7 proxying and traffic management. cloudflared secures ingress traffic via Cloudflare, and external-dns keeps DNS records in sync automatically.
  • Security & Secrets: cert-manager automates SSL/TLS certificate management. For secrets, I use external-secrets with 1Password Connect to inject secrets into Kubernetes.
  • Storage & Data Protection: rook provides distributed storage for persistent volumes, with volsync handling backups and restores. spegel improves reliability by running a stateless, cluster-local OCI image mirror.
  • Automation & CI/CD: actions-runner-controller runs self-hosted GitHub Actions runners directly in the cluster for continuous integration workflows.

GitOps

Flux watches the clusters in my kubernetes folder (see Directories below) and makes the changes to my clusters based on the state of my Git repository.

The way Flux works for me here is that it recursively searches the kubernetes/apps folder until it finds the top-most kustomization.yaml in each directory, then applies all the resources listed in it. That kustomization.yaml will generally contain only a namespace resource and one or more Flux kustomizations (ks.yaml). Under the control of those Flux kustomizations, a HelmRelease or other resources related to the application will be applied.
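A minimal sketch of one of those Flux kustomizations (the app name, path, and interval are illustrative, not copied from this repo):

```yaml
# kubernetes/apps/<namespace>/<app>/ks.yaml β€” sketch of a Flux Kustomization
# pointing at the app's resources; names and paths are hypothetical.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: atuin
  namespace: flux-system
spec:
  interval: 30m
  path: ./kubernetes/apps/default/atuin/app
  prune: true
  sourceRef:
    kind: GitRepository
    name: home-ops
```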

Renovate watches my entire repository for dependency updates; when one is found, a PR is automatically created. When PRs are merged, Flux applies the changes to my cluster.
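A minimal Renovate configuration that covers this kind of setup might look like the following. This is a sketch under assumptions, not this repo's actual config; the flux manager and file pattern are illustrative:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "flux": {
    "fileMatch": ["kubernetes/.+\\.ya?ml$"]
  }
}
```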

Directories

This Git repository contains the following directories under kubernetes.

πŸ“ kubernetes
β”œβ”€β”€ πŸ“ apps       # applications
β”œβ”€β”€ πŸ“ components # re-useable kustomize components
└── πŸ“ flux       # flux system configuration

Flux Workflow

This is a high-level look at how Flux deploys my applications with dependencies. In most cases a HelmRelease will depend on other HelmReleases, in other cases a Kustomization will depend on other Kustomizations, and in rare situations an app can depend on both a HelmRelease and a Kustomization. The example below shows that atuin won't be deployed or upgraded until the rook-ceph-cluster Helm release is installed and in a healthy state.

```mermaid
graph TD
    A>Kustomization: rook-ceph] -->|Creates| B[HelmRelease: rook-ceph]
    A>Kustomization: rook-ceph] -->|Creates| C[HelmRelease: rook-ceph-cluster]
    C>HelmRelease: rook-ceph-cluster] -->|Depends on| B>HelmRelease: rook-ceph]
    D>Kustomization: atuin] -->|Creates| E(HelmRelease: atuin)
    E>HelmRelease: atuin] -->|Depends on| C>HelmRelease: rook-ceph-cluster]
```
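In manifest form, that dependency is expressed with `dependsOn` on the HelmRelease. The sketch below trims most fields for brevity; the chart and repository names are illustrative, not taken from this repo:

```yaml
# Sketch: the atuin HelmRelease waits for rook-ceph-cluster to be healthy
# before installing or upgrading (chart details are hypothetical).
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: atuin
spec:
  interval: 30m
  dependsOn:
    - name: rook-ceph-cluster
      namespace: rook-ceph
  chart:
    spec:
      chart: app-template
      sourceRef:
        kind: HelmRepository
        name: bjw-s
```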

Networking

Click here to see my high-level network diagram.

😢 Cloud Dependencies

While most of my infrastructure and workloads are self-hosted, I do rely upon the cloud for certain key parts of my setup. This saves me from having to worry about three things: (1) dealing with chicken/egg scenarios, (2) services I critically need whether my cluster is online or not, and (3) the "hit by a bus" factor: what happens to critical apps (e.g. email, password manager, photos) that my family relies on when I'm no longer around.

An alternative solution to the first two of these problems would be to host a Kubernetes cluster in the cloud and deploy applications like HCVault, Vaultwarden, ntfy, and Gatus; however, maintaining another cluster and monitoring another group of workloads would be more work and would probably cost the same as or more than what is described below.

| Service    | Use                                                           | Cost           |
|------------|---------------------------------------------------------------|----------------|
| 1Password  | Secrets with External Secrets                                 | ~$65/yr        |
| Cloudflare | Domain and S3                                                 | ~$50/yr        |
| GCP        | Voice interactions with Home Assistant over Google Assistant  | Free           |
| GitHub     | Hosting this repository and continuous integration/deployments | Free           |
| Migadu     | Email hosting                                                 | ~$20/yr        |
| Pushover   | Kubernetes alerts and application notifications               | $5 (one-time)  |
|            | **Total**                                                     | **~$10/mo**    |

🌎 DNS

In my cluster there are two instances of ExternalDNS running. One syncs private DNS records to my UDM Pro Max using the ExternalDNS webhook provider for UniFi, while the other syncs public DNS records to Cloudflare. This setup is managed by creating ingresses with two specific classes: internal for private DNS and external for public DNS. The external-dns instances then sync the DNS records to their respective platforms accordingly.
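As a sketch of the ingress side of this setup, the class name is what routes a record to the right external-dns instance. The hostname and service below are hypothetical:

```yaml
# Sketch: an ingress whose class selects the private external-dns instance,
# so its record lands on the UDM Pro Max (names are illustrative).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: internal   # use "external" for public DNS via Cloudflare
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```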


βš™ Hardware

Compute

ASUS NUC 14 Pro (Core Ultra 5 125H) Γ— 3 Β· 96 GB RAM Β· Talos / Kubernetes

  • OS β€” 480 GB HPE/Samsung SM863a SATA SSD
  • Local storage β€” 1 TB Corsair MP600 Micro NVMe (2242)
  • Rook-Ceph β€” 800 GB Micron 7450 MAX NVMe (2280)
  • Out-of-band β€” JetKVM with DC extension

Storage

45Drives HL15 2.0 Β· 256 GB RAM Β· TrueNAS SCALE / ZFS

  • Boot β€” 2 Γ— 1 TB WD Blue SN550 NVMe (2280), mirrored
  • Bulk pool
    • 12 Γ— 22 TB Seagate Exos X22 HDD β€” 2Γ— 6-wide RAIDZ2
    • 2 Γ— 1.92 TB Samsung PM9A3 NVMe (22110) β€” metadata / SLOG
    • 375 GB Intel Optane DC P4800X β€” L2ARC
  • Fast pool
    • 2 Γ— 1 TB Samsung 870 EVO SATA SSD β€” mirrored

Networking β€” UniFi

  • UDM Pro Max β€” router & NVR Β· 2 Γ— 4 TB WD Red Plus HDD (mirror)
  • USW Enterprise 24 PoE β€” 2.5 G PoE+ switch
  • US XG 16 β€” 10 G SFP+ switch
  • USP PDU Pro β€” PDU

Power

APC SMT1500RM2U β€” 1500 VA rackmount UPS

πŸ“Έ Expand for eye candy

🌟 Stargazers


πŸ™ Gratitude and Thanks

Thanks to all the people who donate their time to the Home Operations Discord community. Be sure to check out kubesearch.dev for ideas on how to deploy applications or inspiration on what to deploy.


