
Local Homelab

This repository contains infrastructure as code used to deploy a Kubernetes cluster on Debian 13 virtual machines provisioned by Vagrant through VirtualBox.

It is neither a production-ready nor a high-availability setup; it is simply an environment I use to test my infrastructure and learn new things about K8s.

Generate Ansible SSH keypair

Ansible is used to set up all the packages and files required for the K8s cluster to run correctly. It is therefore recommended that you create an SSH keypair to let Ansible access the machines. The resulting public key will be mounted inside the virtual machines by Vagrant.

To generate a dedicated SSH keypair for Ansible, run the following commands in your terminal:

mkdir -p ./ansible-credentials
ssh-keygen -t ed25519 -f ./ansible-credentials/ansible -N ''

It will create two new files:

  • ./ansible-credentials/ansible: the SSH private key
  • ./ansible-credentials/ansible.pub: the SSH public key

Deploy the machines

To deploy the machines, you must have Vagrant and VirtualBox installed on the host machine.

To make things easier, a Vagrant plugin is used to manage the hostnames of the different virtual machines, so that each IP address is bound to a hostname in your local /etc/hosts file. To install this plugin, use the following command:

vagrant plugin install vagrant-hostmanager

Once the plugin is installed, you can deploy the virtual machines with the following command:

vagrant up

Each virtual machine gets 2 CPU cores and 2048 MB of memory. You can tweak those settings in the Vagrantfile at the root of the project. By default, three VMs are deployed:

  • k8s-master.cluster.local : The K8s control plane node
  • k8s-slave-1.cluster.local : A worker node inside the K8s cluster
  • jenkins-agent-1.cluster.local : A VM used to execute Jenkins jobs

You can configure the number of worker nodes as well as the number of Jenkins agents at the top of the Vagrantfile.

The first time, you may be prompted to allow vagrant-hostmanager to modify the /etc/hosts file; if so, enter your password.
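
For reference, the entries that vagrant-hostmanager writes look roughly like this (the IP addresses below are illustrative; the real ones are assigned in the Vagrantfile):

```
192.168.56.10   k8s-master.cluster.local
192.168.56.11   k8s-slave-1.cluster.local
192.168.56.12   jenkins-agent-1.cluster.local
```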

Setting up Ansible

Now that the VMs are deployed, we have to set up the K8s cluster. We use the Kubespray project, driven by Ansible, for that.

First of all, create a Python virtual environment so you don't pollute your host machine:

python3 -m venv .venv
source .venv/bin/activate
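
To confirm the environment is active, you can check that `python` now resolves from inside `.venv` (a quick sanity check, assuming a POSIX shell):

```shell
# Create and activate the virtual environment, then check that the
# python interpreter now resolves from inside .venv.
python3 -m venv .venv
. .venv/bin/activate
command -v python   # should print a path ending in .venv/bin/python
```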

Then, install Poetry as well as the project's dependencies:

pip install poetry
poetry install
ansible-galaxy install -r requirements.yml --force

Setting up Kubespray

ansible-playbook --become playbook.kubespray.yml

Setting up Kubernetes Core

ansible-playbook playbook.kubernetes-core.yml

Setting up Jenkins Agent

ansible-playbook playbook.jenkins-agent.yml

Setting up Jenkins Controller

ansible-playbook playbook.jenkins-controller.yml

All in one go

ansible-playbook --become playbook.kubespray.yml
ansible-playbook playbook.jenkins-agent.yml
ansible-playbook playbook.kubernetes-core.yml
ansible-playbook playbook.jenkins-controller.yml

Using kubectl from host (after Kubespray deployment)

Once the Kubespray playbook has run, the cluster can be queried from the host using the admin.conf kubeconfig:

kubectl --kubeconfig admin.conf get nodes
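
Alternatively, you can export the kubeconfig path once per shell session instead of repeating the flag (a small convenience sketch; it assumes admin.conf sits in the project root, as in the command above):

```shell
# Point kubectl at the Kubespray-generated kubeconfig for the whole
# shell session instead of passing --kubeconfig on every call.
export KUBECONFIG="$PWD/admin.conf"
# Query the cluster (guarded so the line is a no-op on machines
# where kubectl is not installed).
if command -v kubectl >/dev/null 2>&1; then
    kubectl get nodes
fi
```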

Nginx ingress controller

This project uses the Nginx ingress controller. The domain is configured in the YAML file located at ./vars/nginx_ingress.yml.

It contains the ports (HTTP and HTTPS) the ingress will listen on, as well as the ingress_hostname, which is the domain you use to reach services.
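
As a sketch, the variables file might look like the following; `ingress_hostname` and the default ports come from this README, while the exact port key names are assumptions:

```yaml
# Illustrative sketch of ./vars/nginx_ingress.yml; the port key names
# are hypothetical, the values match the defaults mentioned below.
ingress_hostname: cluster.local
http_port: 30080    # hypothetical key name for the HTTP NodePort
https_port: 30443   # hypothetical key name for the HTTPS NodePort
```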

If you didn't change the default configuration, you should be able to reach an application at https://<app>.cluster.local:30443 for HTTPS and http://<app>.cluster.local:30080 for HTTP.

When deploying custom applications, make sure they are exposed by a Service that is itself referenced by an Nginx Ingress resource; otherwise the application will be unreachable through its subdomain.
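
As an illustration, a minimal Ingress resource wiring a hypothetical `myapp` Service into the controller could look like this (all names and the ingress class are assumptions, not part of this repository):

```yaml
# Sketch of the Service -> Ingress wiring; every name here is hypothetical.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.cluster.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp       # Service exposing the app's pods
                port:
                  number: 80
```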
