content/en/docs/admin-guide/create-kaktus.md (+12 −12)
@@ -31,7 +31,7 @@ If you only have limited disks on your system (e.g. only 2), Ceph storage will b
- possibly do the same on another disk so you can use software RAID-1 for sanity.
- partition the rest of your disk for future Ceph usage.
-In that case, **parted** is your friend for the job. It also means you need to ensure, at OS installation stage, that you don't let distro partitionner use your full device.
+In that case, **parted** is your friend for the job. It also means you need to ensure, at OS installation stage, that you don't let the distro partitioner use your full device.
Wherever possible, we however recommend dedicated disks for the Ceph cluster. An enterprise-grade setup would use some small SATA SSDs in RAID-1 for the OS and as many dedicated NVMe SSDs as Ceph-reserved data disks.
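
As a minimal sketch of that partitioning step (the **/dev/sda** device name and the 100GB boundary are assumptions, adapt them to your actual hardware and layout):

```shell
# Illustrative only: device name and size boundaries are placeholders.
# Inspect the current layout and the remaining free space.
$ parted /dev/sda unit GB print free

# Dedicate the unallocated tail of the disk to future Ceph usage.
$ parted -s /dev/sda mkpart ceph 100GB 100%
```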
-Note that setting **kowabunga_netplan_disable_cloud_init** is an optional step. If you'd like to keep whatever configuration cloud-init has previously set, it's all fine (but it's always recommended not to have dual sourc eof truth).
+Note that setting **kowabunga_netplan_disable_cloud_init** is an optional step. If you'd like to keep whatever configuration cloud-init has previously set, that's fine (but it's always recommended not to have a dual source of truth).
{{< /alert >}}
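
For reference, disabling it is a single boolean in your host or group vars; a sketch (the `true` value is an assumption based on the variable name):

```yaml
# Assumption: boolean toggle making Kowabunga's netplan config the single source of truth.
kowabunga_netplan_disable_cloud_init: true
```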
{{< alert color="success" title="Information" >}}
@@ -168,7 +168,7 @@ Having more than 3 monitors in your cluster is not necessarily useful. If you ha
Also be sure that those nodes (and those nodes only!) are defined in **Kiwi**'s DNS regional configuration under the **ceph** record.
{{< /alert >}}
-Ceph cluster also comes with **managers**. As in real-life, they don't do much ;-) Or at least, they're not as vital as **monitors**. They however expose various metrics. Having one is nice, more than that will only help with failover. As for **monitors**, one can enale it for a **Kaktus** in **ansible/inventories/host_vars/kaktus-eu-west-a-{1,2,3}.yml** instance-specific file:
+Ceph cluster also comes with **managers**. As in real life, they don't do much ;-) Or at least, they're not as vital as **monitors**. They however expose various metrics. Having one is nice; more than that will only help with failover. As for **monitors**, one can enable one for a **Kaktus** in the **ansible/inventories/host_vars/kaktus-eu-west-a-{1,2,3}.yml** instance-specific file:
```yaml
kowabunga_ceph_manager_enabled: true
@@ -192,14 +192,14 @@ So let's define where to store these files in **ansible/inventories/group_vars/k
-Once provisionned, you'll end up with a regional sub-directory (e.g. **eu-west**), containing 3 files:
+Once provisioned, you'll end up with a regional sub-directory (e.g. **eu-west**), containing 3 files:
- ceph.client.admin.keyring
- ceph.keyring
- ceph.mon.keyring
{{< alert color="warning" title="Important" >}}
-These files are keyring and extermely sensitive. Anyone with access to these files and your private network gets a full administrative control over the Ceph cluster.
+These files are keyrings and extremely sensitive. Anyone with access to these files and your private network gets full administrative control over the Ceph cluster.
So keep track of them, but do it smartly. As they are plain-text, let's ensure you don't store them in Git that way.
before being pushed. Ansible will automatically decrypt them on the fly, should they end up with a *.sops* extension.
{{< /alert >}}
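
As an illustration of one way to do that with the standard **sops** CLI (paths and key configuration are assumptions; the *.sops* extension matches what Ansible looks for):

```shell
# Assumes sops is installed and a .sops.yaml creation rule (age/PGP key) is in place.
$ sops --encrypt --input-type binary --output-type binary ceph.keyring > ceph.keyring.sops
$ rm ceph.keyring   # never commit the plain-text keyring
```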
-### Disks provisionning
+### Disks provisioning
-Next step is about disks provisionning. Your cluster will contain several disks from several instances (the ones you've either partitionned or left untouched at pre-requisite stage). Each instance may have different toplogy, different disks, different sizes etc ... Disks (or partitions, whatever) are each managed by a Ceph **OSD** daemon.
+Next step is about disks provisioning. Your cluster will contain several disks from several instances (the ones you've either partitioned or left untouched at the prerequisite stage). Each instance may have a different topology, different disks, different sizes, etc. Disks (or partitions) are each managed by a Ceph **OSD** daemon.
So we need to reflect this topology in each instance-specific **ansible/inventories/host_vars/kaktus-eu-west-a-{1,2,3}.yml** file:
@@ -229,11 +229,11 @@ kowabunga_ceph_osds:
weight: 1.0
```
-For each instance, you'll need to declare disks that are going to be part of the cluster. The **dev** parameter simply maps to the device file itself (it is **more than recommended** to use **/dev/disk/by-id** mapping instead of boggus **/dev/nvme0nX** naming, which can change across reboots). The **weight** parameter will be used for Ceph scheduler for object placement and corresponds to each disk size in TB unit (e.g. 1.92 TB SSD would have a 1.92 weight). And finally the **id** identifier might be the most important of all. This is the **UNIQUE** identifier across your Ceph cluster. Whichever the disk ID you use, you need to ensure than no other disk in no other instance uses the same identifier.
+For each instance, you'll need to declare the disks that are going to be part of the cluster. The **dev** parameter simply maps to the device file itself (it is **more than recommended** to use the **/dev/disk/by-id** mapping instead of the bogus **/dev/nvme0nX** naming, which can change across reboots). The **weight** parameter is used by the Ceph scheduler for object placement and corresponds to each disk's size in TB (e.g. a 1.92 TB SSD would have a 1.92 weight). Finally, the **id** identifier might be the most important of all: it is the **UNIQUE** identifier across your Ceph cluster. Whichever disk IDs you use, you need to ensure that no other disk in any other instance uses the same identifier.
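
Putting the three parameters together, a hypothetical two-disk declaration could look like the sketch below (device paths are placeholders; the field names mirror the excerpt above):

```yaml
# Sketch only: by-id paths are placeholders; ids must be unique cluster-wide.
kowabunga_ceph_osds:
  - id: 1
    dev: /dev/disk/by-id/nvme-VENDOR_MODEL_SERIAL_AAA   # stable path, unlike /dev/nvme0n1
    weight: 1.92                                        # 1.92 TB SSD
  - id: 2
    dev: /dev/disk/by-id/nvme-VENDOR_MODEL_SERIAL_BBB
    weight: 1.92
```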
### Data Pools
-Once we have disks agregated, we must create data pools on top. Data pools are a logical way to segment your global Ceph cluster usgage. Definition can be made in **ansible/inventories/group_vars/kaktus_eu_west/main.yml** file, as:
+Once we have disks aggregated, we must create data pools on top. Data pools are a logical way to segment your global Ceph cluster usage. The definition goes in the **ansible/inventories/group_vars/kaktus_eu_west/main.yml** file, as:
```yaml
kowabunga_ceph_osd_pools:
@@ -268,9 +268,9 @@ In that example, we'll create 4 data pools:
- 2 of type **rbd** (RADOS block device), to be further used by KVM or a future Kubernetes cluster to provision virtual block device disks.
- 2 of type **fs** (filesystem), to be further used as the underlying NFS storage backend.
-Each pool relies on [Ceph Placement Groups](https://docs.ceph.com/en/latest/rados/operations/placement-groups/) for objects fragments distribution across disks in the cluster. There's no rule of thumb on how much one need. Itdepends on your cluster size, its number of disks, its replication factor and many more parameters. You can get some help thanks to [Ceph PG Calculator](https://linuxkidd.com/ceph/pgcalc.html) to set an appropriate value.
+Each pool relies on [Ceph Placement Groups](https://docs.ceph.com/en/latest/rados/operations/placement-groups/) to distribute object fragments across the disks in the cluster. There's no rule of thumb on how many you need. It depends on your cluster size, its number of disks, its replication factor and many more parameters. You can get some help from the [Ceph PG Calculator](https://linuxkidd.com/ceph/pgcalc.html) to set an appropriate value.
-The **replication** parameter controls the cluster's data redundancy. The bigger the value, the more replicated data will be (and the less prone to disaster you will be), but the fewer usuable space you'll get.
+The **replication** parameter controls the cluster's data redundancy. The bigger the value, the more replicated your data will be (and the less prone to disaster you will be), but the less usable space you'll get.
-Once again, we interate over **kowabunga_region_vlan_id_ranges** variable to create our global configuration for **eu-west** region. After all, both **Kiwi** instances from there will have the very same configuration.
+Once again, we iterate over the **kowabunga_region_vlan_id_ranges** variable to create our global configuration for the **eu-west** region. After all, both **Kiwi** instances from there will have the very same configuration.
This will ensure that VRRP packets flow between the 2 peers, so one always ends up being the active router for each virtual network interface.
@@ -183,8 +183,8 @@ Finally, edit the SOPS-encrypted **ansible/inventories/group_vars/kiwi.sops.yml*
As the names suggest, the first 2 variables will be used to expose the **PowerDNS** API (which will be consumed by the **Kiwi** agent) and the last 2 are MariaDB credentials, used by **PowerDNS** to connect to the database. None of these passwords really matter; they're for server-to-server internal use only, and no user is ever going to make use of them. But let's use something robust nonetheless.
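
For instance, one simple way (not mandated by the docs) to generate such a secret for each of the 4 variables:

```shell
# Generate a robust random secret; repeat for each variable.
$ openssl rand -base64 32
```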
@@ -210,4 +210,4 @@ We're finally done with **Kiwi**'s configuration. All we need to do now is final
$ kobra ansible deploy -p kowabunga.cloud.kiwi
```
-We’re now ready for [provisionning Kaktus HCI nodes](/docs/admin-guide/create-kaktus/) !
+We’re now ready for [provisioning Kaktus HCI nodes](/docs/admin-guide/create-kaktus/)!
content/en/docs/admin-guide/create-region.md (+8 −8)
@@ -4,7 +4,7 @@ description: Let's setup a new region and its Kiwi and Kaktus instances
weight: 4
---
-Orchestrator being ready, we can now boostrap our first region.
+Orchestrator being ready, we can now bootstrap our first region.
Let's make the following assumptions for the rest of this tutorial:
@@ -111,8 +111,8 @@ Despite that, each instance will have its own agent, to establish a WebSocket co
Let's continue with the declaration of the 3 **Kaktus** instances and their associated agents. Note that, this time, instances are associated with the zone itself, not the region.
{{< alert color="success" title="Information" >}}
-Note that **Kaktus** instance creaion/update takes 4 specific parameters into account:
--**cpu_price** and **memory_price** are purely arbitrary values that express how much actual money is worth your metal infrastructure. These are used to compute virtual cost calculation later, when you'll be spwaning**Kompute** instances with vCPUs and vGB of RAM. Each server being different, it's fully okay to have different values here for your fleet.
+Note that **Kaktus** instance creation/update takes 4 specific parameters into account:
+- **cpu_price** and **memory_price** are purely arbitrary values that express how much actual money your metal infrastructure is worth. These are used for virtual cost calculation later, when you'll be spawning **Kompute** instances with vCPUs and vGB of RAM. Each server being different, it's fully okay to have different values here across your fleet.
- **cpu_overcommit** and **memory_overcommit** define the [overcommit](https://en.wikipedia.org/wiki/Memory_overcommitment) ratio you accept your physical hosts to sustain. As for price, not every server is born equal. Some have hyper-threading, others don't. You may consider that a value of 3 or 4 is fine; others tend to be stricter and use 2 instead. The higher you set the bar, the more virtual resources you'll be able to create, but the fewer actual physical resources each will get. For instance, a **cpu_overcommit** of 4 on a 16-core host allows up to 64 vCPUs to be allocated overall.
{{< /alert >}}
@@ -219,7 +219,7 @@ By default, if you only intend to use plain old **Kompute** instances, virtual d
If you however expect to further use **KFS** or to run your own Kubernetes flavor, directly using the Ceph backend to instantiate PVCs, exposing VLAN 102 is mandatory.
-To be on the safe side, and furure-proof, keep it exposed.
+To be on the safe side, and future-proof, keep it exposed.
{{< /alert >}}
@@ -281,7 +281,7 @@ $ kobra tf apply
What have we done here? Simply iterated over VNETs to associate them with VLAN IDs and the names of the Linux bridge interfaces that will be created on each **Kaktus** instance in the zone (see [further](/docs/admin-guide/create-kaktus/)).
{{< alert color="success" title="Note" >}}
-Note that while services instances will have dedicated reserved networks, we'll (conventionnally) add the VLAN 0 here (which is not really a VLAN at all).
+Note that while services instances will have dedicated reserved networks, we'll (conventionally) add the VLAN 0 here (which is not really a VLAN at all).
**Kaktus** instances will be created with a **br0** bridge interface, mapped on the host's private network controller interface(s), where public IP addresses will be bound. This will allow further-created virtual machines to bind public IPs through the bridged interface.
-Subnet objects are associated with a given virtual network and usual network settings (such as CIDR, route/rgateway, DNS server) are associated.
+Subnet objects are associated with a given virtual network, to which the usual network settings (such as CIDR, routes/gateway, DNS server) are attached.
Note the use of 2 interesting parameters:
@@ -380,8 +380,8 @@ Note that we arbitrary took multiple decisions here:
- Reserve the first 69 IP addresses of the **10.50.102.0/24** subnet for our region's growth. Each project's **Kawaii** instance (one per zone) will bind an IP from that range. That's plenty of room for the 10 projects we intend to host, and it saves us some space should we need to extend our infrastructure by adding new **Kaktus** instances.
- Use of /24 subnets. This is really up to each network administrator. You could pick whichever range you need, as long as it doesn't collide with what's currently in place.
- Limit the virtual network to one single subnet. We could have added as many as needed.
-- Reserve the first 5 IPs of each subnet. Remember, our 2 **Kiwi** instances are already configured to bind **.2** and **.3** (and **.1** is the VIP). We'll save a few exra room for future use (one never knows ...).
-- Reserve the subnet's last 3 IP addresses for **Kawaii** gateways virtual IPs. We only have one zone for now, so 1 would have been anough, but again, we never know what the future holds ...
+- Reserve the first 5 IPs of each subnet. Remember, our 2 **Kiwi** instances are already configured to bind **.2** and **.3** (and **.1** is the VIP). We'll save a little extra room for future use (one never knows ...).
+- Reserve the subnet's last 3 IP addresses for **Kawaii** gateway virtual IPs. We only have one zone for now, so 1 would have been enough, but again, we never know what the future holds ...
content/en/docs/admin-guide/kahuna-setup.md (+4 −4)
@@ -102,7 +102,7 @@ collections:
version: 0.1.0
```
-By default, your platform is configured to pull a tagged official release from Ansible Galaxy. You may however prefer to pull it directly from Git, using latest commit for instance. This can be accomodated through:
+By default, your platform is configured to pull a tagged official release from Ansible Galaxy. You may however prefer to pull it directly from Git, using the latest commit for instance. This can be accommodated through:
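
The exact snippet is elided in this diff, but for illustration, a Git-sourced collection entry in **requirements.yml** generally follows the shape below (the repository URL and ref are placeholders, not the actual Kowabunga ones):

```yaml
# Placeholder URL and ref: substitute the real Kowabunga collection repository.
collections:
  - name: https://github.com/ORG/kowabunga-collection.git
    type: git
    version: main
```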
content/en/docs/admin-guide/provision-users.md (+1 −1)
@@ -4,7 +4,7 @@ description: Let's populate admin users and teams
weight: 3
---
-Your **Kahuna** instance is now up and running, let's get things and create a few admin users accounts. At first, we only have the super-admin API key that was previously set through Ansible deployment. We'll make use of it to provision further users and associated teams. After all, we want a nominative user acount for each contributor, right ?
+Your **Kahuna** instance is now up and running, so let's get going and create a few admin user accounts. At first, we only have the super-admin API key that was previously set through the Ansible deployment. We'll make use of it to provision further users and associated teams. After all, we want a nominative user account for each contributor, right?
Back to TF config, let's edit the **terraform/providers.tf** file: