  agents = [for agent in try(each.value.agents, []) : kowabunga_agent.eu-west[agent].id]
}
```

What we're doing here is instructing **Kahuna** that there's a Ceph storage pool that can be used to provision RBD images. It will connect to the **ceph** DNS record on port **3300** and use one of the 3 defined **agents** to connect to the **rbd** pool. It will also arbitrarily set (as we did for **Kaktus** instances) the global storage pool price to **200 EUR / month**, so virtual resource cost computation can happen.

{{< alert color="warning" title="Warning" >}}
Make sure to update the **YOUR_CEPH_FSID** secret value with the one you set in the Ansible **kowabunga_ceph_fsid** variable. Libvirt won't be able to reach the cluster without this information.
{{< /alert >}}

And apply:

```sh
$ kobra tf apply
```

## NFS Storage

Now, if you previously created an NFS endpoint and want to expose it through **Kylo** services, you'll also need to set up the following TF resources:
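
As a hedged sketch only: the actual resource type and attribute names must be taken from the Kowabunga Terraform provider documentation; only the **endpoint** and **backends** values are drawn from the surrounding text, everything else below is an illustrative assumption.

```hcl
# Illustrative sketch only: resource type and attribute names are
# assumptions, not verified Kowabunga provider API.
resource "kowabunga_storage_nfs" "eu-west" {
  name     = "eu-west-nfs"
  endpoint = "nfs.acme.local"                  # assumed: your local storage domain
  backends = [for k in local.kaktus_hosts : k] # assumed: associated Kaktus instances
}
```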

In much the same way, this simply instructs **Kahuna** how to access NFS resources and provide **Kylo** services. You must ensure that the **endpoint** and **backends** values map to your local storage domain and associated **Kaktus** instances. They'll later be used by **Kylo** instances to create NFS shares over Ceph.

And again, apply:

```sh
$ kobra tf apply
```

## OS Image Templates

And finally, let's declare OS image templates. Without those, you won't be able to spin up any **Kompute** virtual machine instances. Image templates must be ready-to-boot, cloud-init compatible, and in either QCOW2 (smaller to download, preferred) or RAW format.
It's up to you to use pre-built community images or to host your own custom ones on a public HTTP server.
{{< alert color="warning" title="Warning" >}}
Note that the URL must be reachable from **Kaktus** nodes, not the **Kahuna** one (so it can be on a private network).
The module, however, does not support authentication at the moment, so images must be "publicly" available.
{{< /alert >}}
```hcl
locals {
  # WARNING: these can be in either QCOW2 (recommended) or RAW format
  # Example usage for conversion, if needed:
  # $ qemu-img convert -f qcow2 -O raw ubuntu-22.04-server-cloudimg-amd64.img ubuntu-22.04-server-cloudimg-amd64.raw
  pool    = kowabunga_storage_pool.eu-brezel["eu-west-ssd"].id
  name    = each.key
  desc    = each.value.desc
  os      = try(each.value.os, "linux")
  source  = each.value.source
  default = try(each.value.default, false)
}
```
At creation time, declared images will be downloaded by one of the **Kaktus** agents and stored into the Ceph cluster. After that, one can simply reference them by name when creating **Kompute** instances.
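
As an illustration of that by-name reference, a later **Kompute** declaration might look like the sketch below. This is a hedged assumption: the resource type and attribute names are not verified provider API and must be checked against the Kowabunga Terraform provider documentation.

```hcl
# Hypothetical sketch: only the idea of referencing a template by its
# declared name comes from the text; all names here are assumptions.
resource "kowabunga_kompute" "demo" {
  name     = "demo-vm"
  template = "ubuntu-22.04" # matches a declared OS image template name
}
```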
{{< alert color="warning" title="Warning" >}}
Depending on the remote source, the image size, and your network performance, retrieving images can take a significant amount of time (several minutes). The TF provider uses a 30-minute timeout by default. Update it accordingly if you believe this won't be enough.
{{< /alert >}}
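
Terraform commonly lets you override per-resource operation timeouts with a `timeouts` block. Assuming the Kowabunga template resource supports it (an assumption to verify against the provider documentation, as are the resource type and attribute names below), the override could look like:

```hcl
# Hedged sketch: resource type and timeouts support are assumptions.
resource "kowabunga_template" "example" {
  # ...regular template attributes...
  timeouts {
    create = "1h" # raise the default 30-minute creation timeout
  }
}
```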
Congratulations, you're now done with administration tasks and infrastructure provisioning. You now have a fully working Kowabunga setup, ready to be consumed by end users.
Now let's [provision our first project](/docs/user-guide/create-project/)!