Commit cb08d1d

committed: updated hw specs
1 parent c590cd6 commit cb08d1d

1 file changed: content/en/docs/getting-started/hardware.md (6 additions, 6 deletions)

@@ -60,10 +60,10 @@ Now, let's consider you want to achieve 128 TB usable disk space. At first, you
 
 Here are different options to achieve it:
 
-- 1 server with 24x 16TB SSDs
-- 3 servers with 8x 16TB SSDs
-- 3 servers with 16x 8TB SSDs
-- 8 servers with 6x 8TB SSDs
+- 1 server with 24x 16TB SSDs each
+- 3 servers with 8x 16TB SSDs each
+- 3 servers with 16x 8TB SSDs each
+- 8 servers with 6x 8TB SSDs each
 - [...]
 
 From a pure resilience perspective, the last option would be the best: it provides the most machines with the most disks, meaning that if anything fails, only the smallest fraction of the cluster's data is affected. The loss may only be temporary (the time it takes to bring the server or disk back up), but while it is down, Ceph will re-copy data from the replicated fragments onto other disks, consuming significant private network bandwidth. Whether only 8 TB of data or 128 TB has to be recovered therefore makes a very different impact.
@@ -74,8 +74,8 @@ So let's recap ...
 
 Typical **Kaktus** instances for home labs or sandbox environments would look like:
 
-- 8-cores (16-threads) CPUs.
-- 32 GB RAM.
+- 4-cores (8-threads) CPUs.
+- 16 GB RAM.
 - 2x 1TB SATA or NVMe SSDs (shared between OS partition and Ceph ones)
 - 1 Gbps NIC
 
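Aside (not part of the changed file): a minimal sketch of the capacity arithmetic behind the options in the first hunk. The 3x replication factor is an assumption inferred from the 384 TB raw versus 128 TB usable figures, and the option labels simply mirror the list above; adjust both to your actual Ceph pool settings.

```python
# Hypothetical sketch, not taken from the docs: raw vs. usable capacity for the
# sizing options above, plus how much data sits on a single server (the amount
# Ceph would have to re-replicate if that server went down).
REPLICATION_FACTOR = 3  # assumed, inferred from 384 TB raw -> 128 TB usable

options = [
    # (label, servers, disks per server, disk size in TB)
    ("1 server with 24x 16TB SSDs", 1, 24, 16),
    ("3 servers with 8x 16TB SSDs", 3, 8, 16),
    ("3 servers with 16x 8TB SSDs", 3, 16, 8),
    ("8 servers with 6x 8TB SSDs", 8, 6, 8),
]

for label, servers, disks, disk_tb in options:
    raw_tb = servers * disks * disk_tb
    usable_tb = raw_tb / REPLICATION_FACTOR
    per_server_tb = disks * disk_tb
    print(f"{label}: {raw_tb} TB raw, ~{usable_tb:.0f} TB usable, "
          f"{per_server_tb} TB at risk per server")
```

All four options yield the same 384 TB raw and roughly 128 TB usable, but the per-server figure drops from 384 TB to 48 TB as the disks are spread over more machines, which is the resilience argument made in the paragraph above.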
