Commit c590cd6

add getting-started doc

1 parent 6f62fe3 commit c590cd6

5 files changed: 142 additions & 0 deletions

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
---
title: Setup Kahuna
description: Let's start with the orchestration core
weight: 3
---
Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
---
title: Getting Started
description: Deploy your first Kowabunga instance!
weight: 3
---
Lines changed: 94 additions & 0 deletions
@@ -0,0 +1,94 @@
---
title: Hardware Requirements
description: Prepare hardware for setup
weight: 1
---
Setting up a Kowabunga platform requires you to provide the following hardware:

- 1x Kahuna instance (more can be used if high availability is expected).
- 1x Kiwi instance per region (2x recommended for production-grade setups).
- 1x Kaktus instance per region (a minimum of 3x recommended for production-grade setups; can scale to N).
{{< alert color="warning" title="Important" >}}
While Kowabunga should work on any kind of Linux distribution, it has only been tested (understand: supported) with Ubuntu LTS. Kowabunga comes pre-packaged for Ubuntu.
{{< /alert >}}
## Kahuna Instance

**Kahuna** is the only instance that will be exposed to end users. It is recommended to expose it on the public Internet, making it easier for DevOps and users to access, but there's no strong requirement for that. It is perfectly possible to keep it local to your private corporate network, only accessible on-premises or through a VPN.

Hardware requirements are lightweight:

- 2 vCPU cores
- 4 to 8 GB RAM
- 64 GB of disk for the OS and MongoDB database

Disk and network performance is fairly insignificant here; anything modern will do just fine.

We personally use and recommend small VPS-like public cloud instances. They come with a public IPv4 address and everything one needs, for only $5 to $20 a month.
## Kiwi Instance

**Kiwi** acts as a software network router and gateway. Even more than for **Kahuna**, you don't need much horsepower here. If you plan on setting up your own home lab, a small 2 GB RAM Raspberry Pi would be sufficient (keep in mind that SoHo routers and gateways are even more lightweight than that).

If you intend to use it for enterprise-grade purposes, just pick the lowest-end server you can find. It will probably come bundled with a 4-core CPU, 8 GB of RAM and whatever SSD; in any case, that is more than enough, unless you really intend to handle 1000+ compute nodes pushing multi-Gbps traffic.
## Kaktus Instance

**Kaktus** instances are another story. If there's one place you need to put your money, this is it. Each instance will host as many virtual machines as it can, and will also be part of the distributed Ceph storage cluster.

Sizing depends on your expected workload; there's no accurate rule of thumb for that. You'll need to think about capacity planning ahead. How many vCPUs do you expect to run in total? How many GBs of RAM? How much disk? What overcommit ratio do you expect to set? How much data replication (and thus resilience) do you expect?

These are all good questions to ask. Note that you can easily start low, with only a few **Kaktus** instances, and scale up later on as you grow. The various **Kaktus** instances of your fleet may also be heterogeneous (to some extent).

As a rule of thumb, unless you're setting up a sandbox or home lab, a minimum of 3 **Kaktus** instances is recommended. This allows you to move workload from one to another, or simply put one in maintenance mode (i.e. shut down its workload) while preserving business continuity.
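To make the capacity-planning questions above concrete, here is a minimal sizing sketch in Python. The node sizes and the 4:1 CPU overcommit ratio are illustrative assumptions, not Kowabunga recommendations:

```python
# Hypothetical capacity-planning helper (illustrative numbers only).
import math

def kaktus_nodes_needed(total_vcpus, total_ram_gb,
                        cores_per_node, ram_gb_per_node,
                        cpu_overcommit=4.0):
    """Estimate how many Kaktus nodes are needed for a target workload.

    CPU is usually overcommitted (several vCPUs per physical core),
    while RAM generally is not.
    """
    effective_vcpus_per_node = cores_per_node * cpu_overcommit
    by_cpu = math.ceil(total_vcpus / effective_vcpus_per_node)
    by_ram = math.ceil(total_ram_gb / ram_gb_per_node)
    return max(by_cpu, by_ram)

# Example: 500 vCPUs and 1 TB of RAM on 32-core, 256 GB nodes
# with a 4:1 CPU overcommit ratio:
print(kaktus_nodes_needed(500, 1024, 32, 256))  # -> 4
```

Whatever numbers you plug in, remember to leave headroom for maintenance, as discussed below.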
Supposing you have X **Kaktus** instances and expect up to Y of them to be down at any given time, the following applies:

> **Instance Maximum Workload**: (X - Y) / X × 100 %

Said differently, with only 3 machines, don't go above 66% average load usage or you won't be able to put one in maintenance without tearing down applications.
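The rule of thumb above can be sanity-checked with a few lines of Python (a trivial illustration, not part of any Kowabunga tooling):

```python
# With X instances and up to Y down at once, average load must stay
# below (X - Y) / X of total capacity.
def max_workload_pct(x_instances, y_down):
    return 100.0 * (x_instances - y_down) / x_instances

# The 3-node example from the text: one node in maintenance.
print(int(max_workload_pct(3, 1)))  # -> 66
```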
Consequently, with availability in mind, it is better to have many lightweight instances than a few heavy ones.

The same applies (even more so) to the Ceph storage cluster. Each instance's local disks will be part of the Ceph cluster ([Ceph OSDs](https://docs.ceph.com/en/latest/man/8/ceph-osd/) to be accurate) and data will be spread across all of them within the same region.

Now, let's consider you want to achieve 128 TB of usable disk space. First, you need to define your replication ratio (i.e. how many times object storage fragments will be replicated across disks). We recommend a minimum of 2, and 3 for production-grade workloads. With a ratio of 3, that means you'll actually need a total of 384 TB of physical disks.
Here are different options to achieve it:

- 1 server with 24x 16TB SSDs
- 3 servers with 8x 16TB SSDs
- 3 servers with 16x 8TB SSDs
- 8 servers with 6x 8TB SSDs
- [...]
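Assuming a replication ratio of 3, the disk layouts above can be verified with a few lines of Python (an illustration only; the layouts are the ones listed in the text):

```python
# Raw capacity needed for a usable target at a given replication ratio,
# and a check of the disk layouts listed above (sizes in TB).
def raw_capacity_tb(usable_tb, replication=3):
    return usable_tb * replication

layouts = [
    (1, 24, 16),  # 1 server with 24x 16TB SSDs
    (3, 8, 16),   # 3 servers with 8x 16TB SSDs
    (3, 16, 8),   # 3 servers with 16x 8TB SSDs
    (8, 6, 8),    # 8 servers with 6x 8TB SSDs
]

target = raw_capacity_tb(128, replication=3)  # 384 TB of physical disks
for servers, disks, size_tb in layouts:
    total = servers * disks * size_tb
    print(servers, "server(s):", total, "TB raw, sufficient:", total >= target)
```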
From a pure resiliency perspective, the last option is the best. It provides the most machines with the most disks, meaning that if anything happens, the smallest fraction of the cluster's data will be lost. The loss may only be ephemeral (the time it takes to bring the server or disk back up), but while it is down, Ceph will try to re-copy data from the replicated fragments to other disks, inducing major private-network bandwidth usage. Whether only 8 TB or 128 TB of data has to be recovered makes a very different impact.

Also, as your virtual machines' performance will be heavily tied to the underlying network storage, it is vital (at least for production-grade workloads) to use NVMe SSDs, 10 to 25 Gbps network controllers, and sub-millisecond latency between your private region's servers.

So let's recap...
Typical **Kaktus** instances for home labs or sandbox environments would look like:

- 8-core (16-thread) CPU.
- 32 GB RAM.
- 2x 1TB SATA or NVMe SSDs (shared between the OS partition and Ceph ones).
- 1 Gbps NIC.
While **Kaktus** instances for production-grade workloads could easily look like:

- 32 to 128 CPU cores.
- 128 GB to 1.5 TB RAM.
- 2x 256 GB SATA RAID-1 SSDs for the OS.
- 6 to 12x 2-8 TB NVMe SSDs for Ceph.
- 10 to 25 Gbps NICs with link aggregation.
{{< alert color="warning" title="Important" >}}
Remember that you can start low and grow later on. Not all instances need to be alike (you can perfectly mix "small" 32-core servers with larger 128-core ones). But completely heterogeneous instances (especially regarding disk and network constraints) could have disastrous effects.

Keep in mind that all disks from all instances will be part of the same Ceph cluster, where any virtual machine instance can read and write data. Mixing 25 Gbps network servers with fast NVMe SSDs and low-end 1 Gbps ones with rotational HDDs would drag down your whole setup.
{{< /alert >}}
Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
---
title: Setup Kahuna
description: Let's start with the orchestration core
weight: 3
---
Lines changed: 32 additions & 0 deletions
@@ -0,0 +1,32 @@
---
title: Software Requirements
description: Get your toolchain ready
weight: 2
---
Kowabunga's deployment philosophy relies on IaC (Infrastructure-as-Code) and CasC (Configuration-as-Code). We heavily rely on:

- [Terraform](https://developer.hashicorp.com/terraform) or, better, [OpenTofu](https://opentofu.org/) for IaC.
- [Ansible](https://www.ansible.com/) for CasC.
## Kobra Toolchain

While natively compatible with the aforementioned tools, we recommend using [Kowabunga Kobra](https://github.com/kowabunga-cloud/kobra) as a toolchain overlay.

**Kobra** is a DevOps deployment Swiss Army knife. It provides a convenient wrapper over **OpenTofu**, **Ansible** and **Helmfile** with proper secrets management, removing the hassle of complex deployment strategies.

Anything can be done without **Kobra**, but it makes things simpler, sparing you the gory details.

**Kobra** supports various secrets management providers. Please choose the one that fits your expected collaborative work experience.

At runtime, it will also make sure your **OpenTofu** / **Ansible** toolchain is properly set up on your computer, and will set it up otherwise (i.e. brainless setup).
## Setup Git Repository

Kowabunga comes with a ready-to-consume platform template. One can clone it from Git through:

```sh
$ git clone https://
```

or, better, fork it into your own account as a bootstrapping template repository.
