NOTE This is a learning experiment to get familiar with C++, with no guarantee of future maintenance or improvements. It's currently under active development.
Lobos (local object store) lets a user quickly deploy a local object store. Lobos can bind to 127.0.0.1 with no auth, or to any local IP when auth is enabled (see lobos.cfg in the repo for more info).
Lobos supports two backends:
- Filesystem
- SPDK's blobstore
The parsing is pretty naive, so things can break quickly, but the following operations seem to work with the AWS S3 CLI:
- AbortMultipartUpload
- CompleteMultipartUpload
- CreateBucket
- CreateMultipartUpload
- DeleteBucket
- DeleteObject
- GetObject (including Range requests; see the example after this list)
- HeadBucket
- HeadObject
- ListBuckets
- ListMultipartUploads
- ListObjectsV2 (no max-keys)
- PutObject
- UploadPart
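For example, a ranged GetObject via the AWS CLI (the bucket and key here are just examples, matching the demo later in this doc):
$ aws --endpoint http://localhost:8080 s3api get-object --bucket lobosdemo --key file1 --range "bytes=0-1023" /tmp/file1.partial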
SPDK support is still very experimental; while the whole project has a lot of shortcuts that need to be addressed, the SPDK path has even more.
Next areas of work, in no particular order:
- New index
- Blobstore object packing for small objects
- SPDK buffer pool
- Extend S3 support
- Checksums
- Object versioning
- Object tagging
- Object Copy
- ... probably more
- Install dependencies:
$ sudo apt install libboost-program-options-dev libboost-filesystem-dev libboost-url-dev libgrpc-dev libgrpc++-dev protobuf-compiler-grpc libprotobuf-dev
# If you want to use SPDK blobstore backend instead of filesystem you'll also need
$ sudo apt install prometheus-cpp-dev
- Clone Lobos and SPDK, and build SPDK:
$ git clone https://github.com/alram/lobos.git
$ cd lobos/src/
# Build SPDK if not disabled at compile time
$ git clone https://github.com/spdk/spdk --recursive
# Follow the steps documented here: https://spdk.io/doc/getting_started.html
- Build Lobos:
$ cmake -B build [-DENABLE_SPDK=OFF] # optionally disable SPDK
$ cmake --build build
- Run Lobos:
$ ./build/lobos -c lobos.cfg
starting in fs mode
Starting S3 HTTP server at 127.0.0.1:8080
Control plane listening on 127.0.0.1:50051
Lobos requires a configuration file. Check the commented lobos.cfg at the root of the repository for configuration options.
If you enable authentication, you'll need to create S3 users. Lobos has a control plane using gRPC. A quick way to create a user is via the grpcurl command:
$ grpcurl -plaintext -d '{"name": "alram"}' -proto src/controlplane/loboscontrol.proto localhost:50051 loboscontrol.ControlPlane/AddUser
{
"params": {
"name": "alram",
"key": "LB96D7QPTHK96NIM8UXU",
"secret": "fLxTLUF9b8Kcwp1lbBghuCQ1CcPCx+u5njBWqt1d",
"backend": "lobos"
}
}
A Go CLI for lobos is also available at https://github.com/alram/lobos-cli
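With auth enabled, the returned key/secret can then be fed to the AWS CLI, for example as a dedicated profile (the profile name below is just an example):
$ aws configure set aws_access_key_id LB96D7QPTHK96NIM8UXU --profile lobos
$ aws configure set aws_secret_access_key fLxTLUF9b8Kcwp1lbBghuCQ1CcPCx+u5njBWqt1d --profile lobos
$ aws --profile lobos --endpoint http://localhost:8080 s3 ls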
$ ./lobos -c lobos.cfg
starting in fs mode
loading all users
loading all buckets
Starting S3 HTTP server at 127.0.0.1:8080
Control plane listening on 127.0.0.1:50051
$ aws --endpoint http://localhost:8080 s3 mb s3://lobosdemo
make_bucket: lobosdemo
$ aws --endpoint http://localhost:8080 s3 ls
2026-02-05 13:22:41 lobosdemo
$ aws --endpoint http://localhost:8080 s3 cp Makefile s3://lobosdemo/file1
upload: ./Makefile to s3://lobosdemo/file1
$ aws --endpoint http://localhost:8080 s3 ls s3://lobosdemo
2026-02-05 13:23:21 2876 file1
NOTE If launching Lobos with SPDK, make sure huge pages are configured; more info at https://spdk.io/doc/getting_started.html
While Lobos supports malloc bdev, it's mostly for testing. You'll want a dedicated NVMe to use SPDK blobstore.
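A quick way to check that huge pages are configured and free before starting Lobos:
$ grep Huge /proc/meminfo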
Running:
$ sudo ./src/spdk/scripts/setup.sh
This should automatically pass through any unused NVMe. You can use the env vars PCI_ALLOWED or PCI_BLOCKED to explicitly allow or block vfio passthrough on devices.
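For example, to restrict binding to a single device (the BDF here is the one from the status output further below):
$ sudo PCI_ALLOWED="0000:c1:00.0" ./src/spdk/scripts/setup.sh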
If your NVMe isn't automatically added even with PCI_ALLOWED, it's most likely because it was used before and needs to be formatted. You can run:
$ sudo nvme format --ses=1 /dev/disk/by-id/<controller_nsid> --force
I highly recommend always using by-id instead of the kernel device name (e.g. nvme0n1) since, with passthrough, those names can change (I learned that the hard way, erasing my whole OS disk).
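To find the by-id path for a given kernel device:
$ ls -l /dev/disk/by-id/ | grep nvme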
Running:
$ sudo ./src/spdk/scripts/setup.sh status
0000:c2:00.0 (144d a808): Active devices: holder@nvme1n1p3:dm-0,mount@nvme1n1:ubuntu--vg-ubuntu--lv,mount@nvme1n1:nvme1n1p1,mount@nvme1n1:nvme1n1p2, so not binding PCI dev
Hugepages
node hugesize free / total
node0 1048576kB 0 / 0
node0 2048kB 1024 / 1024
Type BDF Vendor Device NUMA Driver Device Block devices
NVMe 0000:c1:00.0 15b7 5045 unknown vfio-pci - -
NVMe 0000:c2:00.0 144d a808 unknown nvme nvme1 nvme1n1
You can see which devices are passed through, in my case 0000:c1:00.0. That's the device to pass in lobos.cfg, under [spdk_blobstore].
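As a sketch only, the relevant lobos.cfg section might look like this; the option name below is hypothetical, check the commented lobos.cfg in the repo for the real key:
[spdk_blobstore]
# hypothetical option name, see the commented lobos.cfg for the actual one
nvme_pci_address = 0000:c1:00.0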
Once ready:
$ sudo ./lobos -c lobos.cfg
[2026-01-20 14:18:06.742840] Starting SPDK v26.01-pre git sha1 ef889f9dd / DPDK 25.07.0 initialization...
[2026-01-20 14:18:06.742887] [ DPDK EAL parameters: lobos_spdk --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92324 ]
EAL: '-c <coremask>' option is deprecated, and will be removed in a future release
EAL: Use '-l <corelist>' or '--lcores=<corelist>' option instead
[2026-01-20 14:18:06.850908] app.c: 970:spdk_app_start: *NOTICE*: Total cores available: 1
[2026-01-20 14:18:06.857697] reactor.c: 996:reactor_run: *NOTICE*: Reactor started on core 0
Passed a NVMe device.
didn't find existing blobstore, creating one
alloc done! io unit size: 4096
attempting to rebuild index if exist
index build complete
Starting S3 HTTP server at 127.0.0.1:8080
Control plane listening on 127.0.0.1:50051
Since SPDK passes through the device, classic monitoring methods are out the window. When in SPDK mode, a Prometheus collector is started and accessible at http://127.0.0.1:9091
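A quick way to pull the metrics with curl (the /metrics path is an assumption; it's the prometheus-cpp default):
$ curl -s http://127.0.0.1:9091/metrics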
# HELP spdk_bdev_read_ops_total Total read operations
# TYPE spdk_bdev_read_ops_total counter
spdk_bdev_read_ops_total 57826
# HELP spdk_bdev_write_ops_total Total write operations
# TYPE spdk_bdev_write_ops_total counter
spdk_bdev_write_ops_total 2023380
# HELP spdk_bdev_bytes_read_total Total bytes read
# TYPE spdk_bdev_bytes_read_total counter
spdk_bdev_bytes_read_total 236883968
# HELP spdk_bdev_bytes_written_total Total bytes written
# TYPE spdk_bdev_bytes_written_total counter
spdk_bdev_bytes_written_total 61328384000
# HELP spdk_bs_clusters_total Total blobstore clusters
# TYPE spdk_bs_clusters_total gauge
spdk_bs_clusters_total 13354145
# HELP spdk_bs_clusters_available Total blobstore clusters available
# TYPE spdk_bs_clusters_available gauge
spdk_bs_clusters_available 11503873
Note that the collector port is not configurable at the moment.
This was tested on a Framework Desktop (AMD Ryzen AI Max+ 395) with 32GB of OS RAM.
MinIO's warp was used for the testing. For each test, I used 8 HTTP threads; for the SPDK blobstore test they were pinned to cores 1-8, with core 0 used for the reactor.
All tests were performed on a WD_BLACK SN7100 500GB with a cluster_sz of 32KiB. A large cluster size will help performance on large IO (reached ~5GB/s reads and 4GiB/s peak writes) but will waste a lot of space on small IO. If you know your object size, I highly encourage tweaking cluster_sz accordingly.
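For reference, a warp invocation along the lines of the 1 MiB PUT test (results are in the table below) might look like this; endpoint, credentials, and flags are a sketch based on warp's docs, not the exact command used:
$ warp put --host 127.0.0.1:8080 --access-key LB96D7QPTHK96NIM8UXU --secret-key fLxTLUF9b8Kcwp1lbBghuCQ1CcPCx+u5njBWqt1d --obj.size 1MiB --concurrent 50 --duration 1m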
| Engine | IO Size | Concurrency | Method | Result |
|---|---|---|---|---|
| File | 1 MiB | 50 | PUT | 2432.04 MiB/s |
| File | 1 MiB | 50 | GET | 13711.60 MiB/s* |
| SPDK | 1 MiB | 50 | PUT | 3466.46 MiB/s** |
| SPDK | 1 MiB | 50 | GET | 4274.65 MiB/s |
| File | 32KiB | 200 | PUT | 10738 op/s - 335.56 MiB/s |
| File | 32KiB | 200 | GET | 84621 op/s - 2644 MiB/s* |
| SPDK | 32KiB | 200 | PUT | 34005 op/s - 1062.66 MiB/s ** |
| SPDK | 32KiB | 200 | GET | 83463 op/s - 2608.22 MiB/s*** |
* The filesystem GET results were (almost) all served from cache. Little to no disk I/O was observed.
** Performance degraded after ~30 seconds and dropped to ~900MiB/s. This is a consumer drive and I basically hit the write cliff, fast. This was confirmed by 1) re-running the benchmark immediately after the first run ended, which showed ~900MiB/s, and 2) letting the drive idle for 1h and re-running the benchmark, which showed the initial performance and then degraded again a few seconds later. The number shown above is pre-cliff.
The 32KiB tests showed the same pattern although less pronounced, starting at 51k op/s.
*** For the 32KiB GET test, the busiest processes were warp's, not Lobos', as evidenced by the similar numbers for the two backends.
I don't have an environment where I can easily benchmark using Lobos as an LMCache remote backend, but functionally it seems to work.
The configuration is similar to CoreWeave's in the official LMCache docs:
chunk_size: 256 # for functional testing I used 8, but that's way too low
local_cpu: False
save_unfull_chunk: False
enable_async_loading: True
remote_url: "s3://localhost:8080/bench"
remote_serde: "naive"
blocking_timeout_secs: 10
extra_config:
s3_num_io_threads: 320
s3_prefer_http2: False
s3_region: "US-WEST-04A"
s3_enable_s3express: False
save_chunk_meta: False
disable_tls: True
aws_access_key_id: "not"
aws_secret_access_key: "needed"
Saw hits:
(APIServer pid=37) INFO 01-07 20:17:18 [loggers.py:248] Engine 000: Avg prompt throughput: 4.6 tokens/s, Avg generation throughput: 2.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.3%, Prefix cache hit rate: 39.8%, External prefix cache hit rate: 11.6%
(APIServer pid=37) INFO 01-07 20:17:28 [loggers.py:248] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 7.3 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.6%, Prefix cache hit rate: 39.8%, External prefix cache hit rate: 11.6%
And validated that it hit Lobos:
$ aws --endpoint-url http://127.0.0.1:8080 s3 ls s3://bench/ | head
2026-01-07 12:16:00 786432 vllm%40Qwen_Qwen3-Coder-30B-A3B-Instruct%401%400%406c9faa6ae5af1bdf%40bfloat16
2026-01-07 12:16:00 786432 vllm%40Qwen_Qwen3-Coder-30B-A3B-Instruct%401%400%40-7f89f621536990ce%40bfloat16
2026-01-07 12:16:00 786432 vllm%40Qwen_Qwen3-Coder-30B-A3B-Instruct%401%400%40-49ba81e7d7a6fad%40bfloat16
2026-01-07 12:16:00 786432 vllm%40Qwen_Qwen3-Coder-30B-A3B-Instruct%401%400%4047af06aebe49e1e6%40bfloat16
[...]
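For context, a hedged sketch of how such a config is typically wired into vLLM per the LMCache docs (the connector name and env var come from LMCache; the config path is an example and the model name is taken from the listing above):
$ LMCACHE_CONFIG_FILE=lmcache-lobos.yaml vllm serve Qwen/Qwen3-Coder-30B-A3B-Instruct --kv-transfer-config '{"kv_connector":"LMCacheConnectorV1","kv_role":"kv_both"}'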