
Commit cc590ba

spell check
1 parent 4de7dac commit cc590ba

22 files changed

Lines changed: 54 additions & 54 deletions


content/en/docs/admin-guide/create-kaktus.md

Lines changed: 12 additions & 12 deletions
@@ -31,7 +31,7 @@ If you only have limited disks on your system (e.g. only 2), Ceph storage will b
 - possibly do the same on another disk so you can use software RAID-1 for sanity.
 - partition the rest of your disk for future Ceph usage.

-In that case, **parted** is your friend for the job. It also means you need to ensure, at OS installation stage, that you don't let distro partitionner use your full device.
+In that case, **parted** is your friend for the job. It also means you need to ensure, at OS installation stage, that you don't let distro partitioner use your full device.

 {{< alert color="success" title="Recommendation" >}}
 As much as can be, we however recommend you to have dedicated disks for Ceph cluster. An enterprise-grade setup would use some small SATA SSDs in RAID-1 for OS and as many dedicated NVMe SSDs as Ceph-reserved data disks.
@@ -131,7 +131,7 @@ kowabunga_netplan_apply_enabled: true
 ```

 {{< alert color="success" title="Information" >}}
-Note that setting **kowabunga_netplan_disable_cloud_init** is an optional step. If you'd like to keep whatever configuration cloud-init has previously set, it's all fine (but it's always recommended not to have dual sourc eof truth).
+Note that setting **kowabunga_netplan_disable_cloud_init** is an optional step. If you'd like to keep whatever configuration cloud-init has previously set, it's all fine (but it's always recommended not to have dual source of truth).
 {{< /alert >}}

 {{< alert color="success" title="Information" >}}
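Editor's note (not part of the commit): as context for the sentence fixed above, the two netplan-related knobs typically sit side by side in the region's group_vars. Only the variable names come from the surrounding text; treating **kowabunga_netplan_disable_cloud_init** as a boolean is an assumption here.

```yaml
# Sketch only: kowabunga_netplan_apply_enabled appears verbatim in the hunk above;
# the boolean value for the cloud-init switch is assumed, not taken from the commit.
kowabunga_netplan_apply_enabled: true
kowabunga_netplan_disable_cloud_init: true   # optional: keep a single source of truth for network config
```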
@@ -168,7 +168,7 @@ Having more than 3 monitors in your cluster is not necessarily useful. If you ha
 Also be sure that those nodes (and those nodes only !) are defined in **Kiwi**'s DNS regional configuration under **ceph** record.
 {{< /alert >}}

-Ceph cluster also comes with **managers**. As in real-life, they don't do much ;-) Or at least, they're not as vital as **monitors**. They however expose various metrics. Having one is nice, more than that will only help with failover. As for **monitors**, one can enale it for a **Kaktus** in **ansible/inventories/host_vars/kaktus-eu-west-a-{1,2,3}.yml** instance-specific file:
+Ceph cluster also comes with **managers**. As in real-life, they don't do much ;-) Or at least, they're not as vital as **monitors**. They however expose various metrics. Having one is nice, more than that will only help with failover. As for **monitors**, one can enable it for a **Kaktus** in **ansible/inventories/host_vars/kaktus-eu-west-a-{1,2,3}.yml** instance-specific file:

 ```yaml
 kowabunga_ceph_manager_enabled: true
@@ -192,14 +192,14 @@ So let's define where to store these files in **ansible/inventories/group_vars/k
 kowabunga_ceph_local_keyrings_dir: "{{ playbook_dir }}/../../../../../files/ceph"
 ```

-Once provisionned, you'll end up with a regional sub-directory (e.g. **eu-west**), containing 3 files:
+Once provisioned, you'll end up with a regional sub-directory (e.g. **eu-west**), containing 3 files:

 - ceph.client.admin.keyring
 - ceph.keyring
 - ceph.mon.keyring

 {{< alert color="warning" title="Important" >}}
-These files are keyring and extermely sensitive. Anyone with access to these files and your private network gets a full administrative control over the Ceph cluster.
+These files are keyring and extremely sensitive. Anyone with access to these files and your private network gets a full administrative control over the Ceph cluster.

 So keep track of them, but do it smartly. As they are plain-text, let's ensure you don't store them on Git that way.
@@ -213,9 +213,9 @@ $ mv ceph.client.admin.keyring ceph.client.admin.keyring.sops
 before being pushed. Ansible will automatically decrypt them on the fly, should they end up with *.sops* extension.
 {{< /alert >}}

-### Disks provisionning
+### Disks provisioning

-Next step is about disks provisionning. Your cluster will contain several disks from several instances (the ones you've either partitionned or left untouched at pre-requisite stage). Each instance may have different toplogy, different disks, different sizes etc ... Disks (or partitions, whatever) are each managed by a Ceph **OSD** daemon.
+Next step is about disks provisioning. Your cluster will contain several disks from several instances (the ones you've either partitioned or left untouched at pre-requisite stage). Each instance may have different topology, different disks, different sizes etc ... Disks (or partitions, whatever) are each managed by a Ceph **OSD** daemon.

 So we need to reflect this topology into each instance-specific **ansible/inventories/host_vars/kaktus-eu-west-a-{1,2,3}.yml** file:
@@ -229,11 +229,11 @@ kowabunga_ceph_osds:
 weight: 1.0
 ```

-For each instance, you'll need to declare disks that are going to be part of the cluster. The **dev** parameter simply maps to the device file itself (it is **more than recommended** to use **/dev/disk/by-id** mapping instead of boggus **/dev/nvme0nX** naming, which can change across reboots). The **weight** parameter will be used for Ceph scheduler for object placement and corresponds to each disk size in TB unit (e.g. 1.92 TB SSD would have a 1.92 weight). And finally the **id** identifier might be the most important of all. This is the **UNIQUE** identifier across your Ceph cluster. Whichever the disk ID you use, you need to ensure than no other disk in no other instance uses the same identifier.
+For each instance, you'll need to declare disks that are going to be part of the cluster. The **dev** parameter simply maps to the device file itself (it is **more than recommended** to use **/dev/disk/by-id** mapping instead of bogus **/dev/nvme0nX** naming, which can change across reboots). The **weight** parameter will be used for Ceph scheduler for object placement and corresponds to each disk size in TB unit (e.g. 1.92 TB SSD would have a 1.92 weight). And finally the **id** identifier might be the most important of all. This is the **UNIQUE** identifier across your Ceph cluster. Whichever the disk ID you use, you need to ensure than no other disk in no other instance uses the same identifier.

 ### Data Pools

-Once we have disks agregated, we must create data pools on top. Data pools are a logical way to segment your global Ceph cluster usgage. Definition can be made in **ansible/inventories/group_vars/kaktus_eu_west/main.yml** file, as:
+Once we have disks aggregated, we must create data pools on top. Data pools are a logical way to segment your global Ceph cluster usage. Definition can be made in **ansible/inventories/group_vars/kaktus_eu_west/main.yml** file, as:

 ```yaml
 kowabunga_ceph_osd_pools:
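Editor's note (not part of the commit): the hunk above truncates the **kowabunga_ceph_osds** host_vars block it refers to. A minimal sketch of one per-disk entry, using only the **dev**, **id** and **weight** parameters named in the text, could look as follows; the device path and numeric values are purely illustrative, and the exact layout may differ from the role's actual documentation.

```yaml
# Illustrative host_vars sketch (e.g. kaktus-eu-west-a-1.yml); values are examples only.
kowabunga_ceph_osds:
  - id: 0                                        # must be UNIQUE across the whole Ceph cluster
    dev: /dev/disk/by-id/nvme-EXAMPLE_SERIAL_1   # stable by-id path, not /dev/nvme0nX
    weight: 1.92                                 # disk size, in TB (e.g. a 1.92 TB SSD)
```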
@@ -268,9 +268,9 @@ In that example, we'll create 4 data pools:
 - 2 of type **rbd** (RADOS block device), for further be used by KVM or a future Kubernetes cluster to provision virtual block device disks.
 - 2 of type **fs** (filesystem), for further be used as underlying NFS storage backend.

-Each pool relies on [Ceph Placement Groups](https://docs.ceph.com/en/latest/rados/operations/placement-groups/) for objects fragments distribution across disks in the cluster. There's no rule of thumb on how much one need. Itdepends on your cluster size, its number of disks, its replication factor and many more parameters. You can get some help thanks to [Ceph PG Calculator](https://linuxkidd.com/ceph/pgcalc.html) to set an appropriate value.
+Each pool relies on [Ceph Placement Groups](https://docs.ceph.com/en/latest/rados/operations/placement-groups/) for objects fragments distribution across disks in the cluster. There's no rule of thumb on how much one need. It depends on your cluster size, its number of disks, its replication factor and many more parameters. You can get some help thanks to [Ceph PG Calculator](https://linuxkidd.com/ceph/pgcalc.html) to set an appropriate value.

-The **replication** parameter controls the cluster's data redundancy. The bigger the value, the more replicated data will be (and the less prone to disaster you will be), but the fewer usuable space you'll get.
+The **replication** parameter controls the cluster's data redundancy. The bigger the value, the more replicated data will be (and the less prone to disaster you will be), but the fewer usable space you'll get.

 ### File Systems
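Editor's note (not part of the commit): the **kowabunga_ceph_osd_pools** definition these hunks refer to is shown only as its opening line. A rough sketch of what such entries can look like follows; the **rbd**/**fs** pool types and the **replication** knob come from the surrounding text, while the per-pool key names (**name**, **type**, **pg_num**) are illustrative assumptions.

```yaml
# Illustrative sketch only -- key names are assumed, not taken from the commit.
kowabunga_ceph_osd_pools:
  - name: rbd-default   # rbd-type pool, e.g. for Kompute virtual block devices
    type: rbd
    pg_num: 128         # pick a value with the Ceph PG Calculator mentioned above
    replication: 3      # number of data copies kept across OSDs
  - name: fs-default    # fs-type pool, e.g. backing the NFS storage layer
    type: fs
    pg_num: 64
    replication: 3
```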
@@ -335,4 +335,4 @@ $ kobra ansible deploy -p kowabunga.cloud.kaktus

 We’re all set with infrastructure setup.

-One last step of [services provisionning](/docs/admin-guide/provision-services/) and we're done !
+One last step of [services provisioning](/docs/admin-guide/provision-services/) and we're done !

content/en/docs/admin-guide/create-kiwi.md

Lines changed: 4 additions & 4 deletions
@@ -108,7 +108,7 @@ kowabunga_network_failover_settings:
 {{- res -}}
 ```

-Once again, we interate over **kowabunga_region_vlan_id_ranges** variable to create our global configuration for **eu-west** region. After all, both **Kiwi** instances from there will have the very same configuration.
+Once again, we iterate over **kowabunga_region_vlan_id_ranges** variable to create our global configuration for **eu-west** region. After all, both **Kiwi** instances from there will have the very same configuration.

 This will ensure that VRRP packets flows between the 2 peers so one always ends up being the active router for each virtual network interface.
@@ -183,8 +183,8 @@ Finally, edit the SOPS-encrypted **ansible/inventories/group_vars/kiwi.sops.yml*
 ```yaml
 secret_kowabunga_powerdns_webserver_password: ONE_STRONG_PASSWORD
 secret_kowabunga_powerdns_api_key: ONE_MORE
-secret_kowabunaga_powerdns_db_admin_password: YET_ANOTHER
-secret_kowabunaga_powerdns_db_user_password: HERE_WE_GO
+secret_kowabunga_powerdns_db_admin_password: YET_ANOTHER
+secret_kowabunga_powerdns_db_user_password: HERE_WE_GO
 ```

 As names stand, first 2 variables will be used to expose **PowerDNS** API (which will be consumed by **Kiwi** agent) and last twos are MariaDB credentials, used by **PowerDNS** to connect to. None of these passwords really matter, they're server-to-server internal use only, no use is ever going to make use of them. But let's use something robust nonetheless.
@@ -210,4 +210,4 @@ We're finally done with **Kiwi**'s configuration. All we need to do now is final
 $ kobra ansible deploy -p kowabunga.cloud.kiwi
 ```

-We’re now ready for [provisionning Kaktus HCI nodes](/docs/admin-guide/create-kaktus/) !
+We’re now ready for [provisioning Kaktus HCI nodes](/docs/admin-guide/create-kaktus/) !

content/en/docs/admin-guide/create-region.md

Lines changed: 8 additions & 8 deletions
@@ -4,7 +4,7 @@ description: Let's setup a new region and its Kiwi and Kaktus instances
 weight: 4
 ---

-Orchestrator being ready, we can now boostrap our first region.
+Orchestrator being ready, we can now bootstrap our first region.

 Let's take the following assumptions for the rest of this tutorial:
@@ -111,8 +111,8 @@ Despite that, each instance will have its own agent, to establish a WebSocket co
 Let's continue with the 3 **Kaktus** instances declaration and their associated agents. Note that, this time, instances are associated to the zone itself, not the region.

 {{< alert color="success" title="Information" >}}
-Note that **Kaktus** instance creaion/update takes 4 specific parameters into account:
-- **cpu_price** and **memory_price** are purely arbitrary values that express how much actual money is worth your metal infrastructure. These are used to compute virtual cost calculation later, when you'll be spwaning **Kompute** instances with vCPUs and vGB of RAM. Each server being different, it's fully okay to have different values here for your fleet.
+Note that **Kaktus** instance creation/update takes 4 specific parameters into account:
+- **cpu_price** and **memory_price** are purely arbitrary values that express how much actual money is worth your metal infrastructure. These are used to compute virtual cost calculation later, when you'll be spawning **Kompute** instances with vCPUs and vGB of RAM. Each server being different, it's fully okay to have different values here for your fleet.
 - **cpu_overcommit** and **memory_overcommit** define the [overcommit](https://en.wikipedia.org/wiki/Memory_overcommitment) ratio you accept your physical hosts to address. As for price, not every server is born equal. Some have hyper-threading, other don't. You may consider that a value of 3 or 4 is fine, other tend to be stricter and use 2 instead. The more you set the bar, the more virtual resources you'll be able to create but the less actual physical resources they'll be able to get.
 {{< /alert >}}
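Editor's note (not part of the commit): a back-of-the-envelope example may help make the overcommit bullet above concrete. Only the **cpu_overcommit** and **memory_overcommit** parameter names come from the text; the host sizing below is invented.

```yaml
# Illustrative values for a hypothetical Kaktus host with 32 hardware threads and 256 GB of RAM:
cpu_overcommit: 3      # lets up to 3 x 32 = 96 vCPUs be allocated on that host
memory_overcommit: 2   # lets up to 2 x 256 = 512 GB of virtual RAM be allocated
```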

@@ -219,7 +219,7 @@ By default, if you only intend to use plain old **Kompute** instances, virtual d

 If you however expect to further use **KFS** or running your own Kubernetes flavor, with an attempt to directly use Ceph backend to instantiate PVCs, exposing the VLAN 102 is mandatory.

-To be on the safe side, and furure-proof, keep it exposed.
+To be on the safe side, and future-proof, keep it exposed.
 {{< /alert >}}
@@ -281,7 +281,7 @@ $ kobra tf apply
 What have we done here ? Simply iterating over VNETs to associate those with VLAN IDs and the name of Linux bridge interfaces which will be created on each **Kaktus** instance from the zone (see [further](/docs/admin-guide/create-kaktus/)).

 {{< alert color="success" title="Note" >}}
-Note that while services instances will have dedicated reserved networks, we'll (conventionnally) add the VLAN 0 here (which is not really a VLAN at all).
+Note that while services instances will have dedicated reserved networks, we'll (conventionally) add the VLAN 0 here (which is not really a VLAN at all).

 **Kaktus** instances will be created with a **br0** bridge interface, mapped on host private network controller interface(s), where public IP addresses will be bound. This will allow further create virtual machines to be able to bind public IPs through the bridged interface.
 {{< /alert >}}
@@ -307,7 +307,7 @@ resource "kowabunga_subnet" "eu-west" {
 }
 ```

-Subnet objects are associated with a given virtual network and usual network settings (such as CIDR, route/rgateway, DNS server) are associated.
+Subnet objects are associated with a given virtual network and usual network settings (such as CIDR, route/gateway, DNS server) are associated.

 Note the use of 2 interesting parameters:
@@ -380,8 +380,8 @@ Note that we arbitrary took multiple decisions here:
 - Reserve the first 69 IP addresses of the **10.50.102.0/24** subnet for our region growth. Each project's **Kawaii** instance (one per zone) will bind an IP from the range. That's plain enough room for the 10 projects we intend to host. But this saves us some space, shall we need to extend our infrastructure, by adding new **Kaktus** instances.
 - Use of /24 subnets. This is really up to each network administrator. You could pick whichever range you need which wouldn't collapse with what's currently in place.
 - Limit virtual network to one single subnet. We could have added as much as needed.
-- Reserve the first 5 IPs of each subnet. Remember, our 2 **Kiwi** instances are already configured to bind **.2** and **.3** (and **.1** is the VIP). We'll save a few exra room for future use (one never knows ...).
-- Reserve the subnet's last 3 IP addresses for **Kawaii** gateways virtual IPs. We only have one zone for now, so 1 would have been anough, but again, we never know what the future holds ...
+- Reserve the first 5 IPs of each subnet. Remember, our 2 **Kiwi** instances are already configured to bind **.2** and **.3** (and **.1** is the VIP). We'll save a few extra room for future use (one never knows ...).
+- Reserve the subnet's last 3 IP addresses for **Kawaii** gateways virtual IPs. We only have one zone for now, so 1 would have been enough, but again, we never know what the future holds ...
 {{< /alert >}}

 {{< alert color="warning" title="Warning" >}}

content/en/docs/admin-guide/kahuna-setup.md

Lines changed: 4 additions & 4 deletions
@@ -102,7 +102,7 @@ collections:
 version: 0.1.0
 ```

-By default, your platform is configured to pull a tagged official release from Ansible Galaxy. You may however prefer to pull it directly from Git, using latest commit for instance. This can be accomodated through:
+By default, your platform is configured to pull a tagged official release from Ansible Galaxy. You may however prefer to pull it directly from Git, using latest commit for instance. This can be accommodated through:

 ```yaml
 ---
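Editor's note (not part of the commit): the alternate requirements file is cut off by the hunk above. As a sketch of what a Git-sourced pull can look like in an ansible-galaxy requirements file, the entry form below is standard; the repository URL and branch are placeholders, not taken from the commit.

```yaml
---
# Sketch of a Git-sourced collection pull; URL and version are placeholders.
collections:
  - name: https://github.com/EXAMPLE-ORG/EXAMPLE-collection.git
    type: git
    version: main
```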
@@ -161,7 +161,7 @@ $ kobra secrets edit ansible/inventories/group_vars/all.sops.yml
 and set the requested password:

 ```yaml
-secret_kowabunga_os_user_root_password: MY_SUPER_SETRONG_PASSWORD
+secret_kowabunga_os_user_root_password: MY_SUPER_STRONG_PASSWORD
 ```

 ### Firewall
@@ -248,7 +248,7 @@ kowabunga_kahuna_smtp_password: "{{ secret_kowabunga_kahuna_smtp_password }}"
 and add the respective secrets into **ansible/inventories/group_vars/kahuna.sops.yml**:

 ```yaml
-secret_kowabunga_kahuna_jwt_signature: A_STRONG_JWT_SGINATURE
+secret_kowabunga_kahuna_jwt_signature: A_STRONG_JWT_SIGNATURE
 secret_kowabunga_kahuna_api_key: A_STRONG_API_KEY
 secret_kowabunga_kahuna_smtp_password: A_STRONG_PASSWORD
 ```
@@ -269,4 +269,4 @@ After a few minutes, if everything's went okay, you should have a working **Kahu
 - The [Kahuna](/docs/concepts/kahuna/) backend server itself, our core orchestrator.
 - Optionally, [MongoDB](https://www.mongodb.com/) database.

-We're now ready for [provisionning users and teams](/docs/admin-guide/provision-users/) !
+We're now ready for [provisioning users and teams](/docs/admin-guide/provision-users/) !

content/en/docs/admin-guide/provision-users.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ description: Let's populate admin users and teams
 weight: 3
 ---

-Your **Kahuna** instance is now up and running, let's get things and create a few admin users accounts. At first, we only have the super-admin API key that was previously set through Ansible deployment. We'll make use of it to provision further users and associated teams. After all, we want a nominative user acount for each contributor, right ?
+Your **Kahuna** instance is now up and running, let's get things and create a few admin users accounts. At first, we only have the super-admin API key that was previously set through Ansible deployment. We'll make use of it to provision further users and associated teams. After all, we want a nominative user account for each contributor, right ?

 Back to TF config, let's edit the **terraform/providers.tf** file: