Version
3.5.0
Describe the problem you're encountering
In my case, a CouchDB database is used as configuration storage.
It holds about 1000 configuration docs, and some of them are actively created, deleted and then purged.
I found an issue where all configuration docs need about 1 MB on the filesystem, but the shard size is more than 500 MB.
To troubleshoot this I deleted all documents in the database and compacted it, but it still uses more than 500 MB on the filesystem.
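For clarity, the cleanup was done roughly like this; the document ID and revisions below are placeholders, not values from the real database.
# delete a doc (repeated for every doc listed by GET /test/_all_docs)
curl -s -X DELETE "http://admin:xxxx@127.0.0.1:5984/test/some-config-doc?rev=1-abc"
# purge the tombstone revision returned by the DELETE
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"some-config-doc": ["2-def"]}' \
  http://admin:xxxx@127.0.0.1:5984/test/_purge
# trigger database compaction
curl -s -X POST -H "Content-Type: application/json" \
  http://admin:xxxx@127.0.0.1:5984/test/_compact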
Here is database info
[root@host bin]# curl -s http://admin:xxxx@127.0.0.1:5984/test| jq
{
  "instance_start_time": "1768133824",
  "db_name": "test",
  "purge_seq": "4463449-g2wAAAABaANkAB9jb3VjaGRiQGVzcnAtZGIwYS5waC5uZ2E5MTEuY29tbAAAAAJhAG4EAP____9qYgBEG1lq",
  "update_seq": "8971555-g1AAAABueJzLYWBgYMxgTmGQT84vTc5ISXJILS4q0E1JMkjUK8jQy0tPtDQ01EvOz80BKmRKZMhjYfgPBFkZzEkMHU-Vc4Gi7OYGRilGxpaEzcgCABfAIew",
  "sizes": {
    "file": 1322799314,
    "external": 0,
    "active": 508996585
  },
  "props": {},
  "doc_del_count": 0,
  "doc_count": 0,
  "disk_format_version": 8,
  "compact_running": true,
  "cluster": {
    "q": 1,
    "n": 3,
    "w": 2,
    "r": 2
  }
}
Please be aware that there are about 8,971,555 document changes and 4,463,449 purged docs.
To compact, I used this command
[root@host bin]# curl -s -X POST -H "Content-Type: application/json" http://admin:xxxxx@127.0.0.1:5984/test/_compact| jq
{
  "ok": true
}
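Since compact_running is true in the database info above, compaction progress can also be followed through _active_tasks; this is just the command, output omitted here.
[root@host bin]# curl -s http://admin:xxxx@127.0.0.1:5984/_active_tasks | jq '.[] | select(.type == "database_compaction")'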
On the filesystem the empty database uses about 650 MB
[root@host bin]# ls -l /var/lib/couchdb/shards/00000000-ffffffff/test.1768133824.couch
-rw-r--r--. 1 couchdb couchdb 557266 Mar 21 16:38 /var/lib/couchdb/shards/00000000-ffffffff/test.1768133824.couch
[root@host bin]# du -hs /var/lib/couchdb/.shards/00000000-ffffffff/test.1768133824_design
157M /var/lib/couchdb/.shards/00000000-ffffffff/test.1768133824_design
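The .shards directory above contains the view index files, which are compacted separately from the database file. The corresponding calls would look like this, assuming a hypothetical design document named config:
[root@host bin]# curl -s -X POST -H "Content-Type: application/json" http://admin:xxxx@127.0.0.1:5984/test/_compact/config
[root@host bin]# curl -s -X POST -H "Content-Type: application/json" http://admin:xxxx@127.0.0.1:5984/test/_view_cleanup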
Shards info
[root@host bin]# curl -s http://admin:xxxxx@127.0.0.1:5984/test/_shards| jq
{
  "shards": {
    "00000000-ffffffff": [
      "couchdb@node0.example.com",
      "couchdb@node1.example.com",
      "couchdb@node2.example.com"
    ]
  }
}
Shard location
[root@node bin]# curl -s http://admin:xxxxx@127.0.0.1:5984/_node/_local/_dbs/test| jq
{
  "_id": "test",
  "_rev": "1-b562eb40b3a9250a1cee5d08d337a567",
  "shard_suffix": [
    46,
    49,
    55,
    54,
    56,
    49,
    51,
    51,
    56,
    50,
    52
  ],
  "changelog": [
    [
      "add",
      "00000000-ffffffff",
      "couchdb@node0.example.com"
    ],
    [
      "add",
      "00000000-ffffffff",
      "couchdb@node1.example.com"
    ],
    [
      "add",
      "00000000-ffffffff",
      "couchdb@node2.example.com"
    ]
  ],
  "by_node": {
    "couchdb@node0.example.com": [
      "00000000-ffffffff"
    ],
    "couchdb@node1.example.com": [
      "00000000-ffffffff"
    ],
    "couchdb@node2.example.com": [
      "00000000-ffffffff"
    ]
  },
  "by_range": {
    "00000000-ffffffff": [
      "couchdb@node0.example.com",
      "couchdb@node1.example.com",
      "couchdb@node2.example.com"
    ]
  },
  "props": {}
}
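For reference, shard_suffix is a list of ASCII byte values; decoded it matches the suffix of the shard file name shown earlier:
[root@node bin]# python3 -c 'print(bytes([46,49,55,54,56,49,51,51,56,50,52]).decode())'
.1768133824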
Cluster status
[root@host bin]# curl -s http://admin:xxxx@127.0.0.1:5984/_membership| jq
{
  "all_nodes": [
    "couchdb@node0.example.com",
    "couchdb@node1.example.com",
    "couchdb@node2.example.com"
  ],
  "cluster_nodes": [
    "couchdb@node0.example.com",
    "couchdb@node1.example.com",
    "couchdb@node2.example.com"
  ]
}
Could you suggest how to troubleshoot the disk space cleanup issue when all docs have been deleted and purged?
Expected Behaviour
When all documents are deleted from the database, the shard files should be compacted and the shard file size should be no more than 3 MB.
Steps to Reproduce
The issue seems to be related to cluster deployments where many documents are created, deleted and then purged; a rough sketch of such a workload follows.
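This is only an illustrative sketch of the churn against a test database; the doc count, payload and credentials are placeholders.
DB=http://admin:xxxx@127.0.0.1:5984/test
for i in $(seq 1 1000); do
  # create a small config doc
  REV=$(curl -s -X PUT -H "Content-Type: application/json" -d '{"value":"config"}' "$DB/doc-$i" | jq -r .rev)
  # delete it; the response carries the tombstone revision
  DREV=$(curl -s -X DELETE "$DB/doc-$i?rev=$REV" | jq -r .rev)
  # purge the tombstone so no trace of the doc remains
  curl -s -X POST -H "Content-Type: application/json" \
    -d "{\"doc-$i\": [\"$DREV\"]}" "$DB/_purge" > /dev/null
done
# compact afterwards and compare the reported sizes with the shard file on disk
curl -s -X POST -H "Content-Type: application/json" "$DB/_compact"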
Your Environment
Three-node cluster with CouchDB 3.5.0
[root@node bin]# curl -s http://admin:xxxx@127.0.0.1:5984| jq
{
  "couchdb": "Welcome",
  "version": "3.5.0",
  "git_sha": "11f0d36",
  "uuid": "3212a9b297b73cfe646f2c45dd7b8049",
  "features": [
    "access-ready",
    "partitioned",
    "pluggable-storage-engines",
    "reshard",
    "scheduler"
  ],
  "vendor": {
    "name": "The Apache Software Foundation"
  }
}
Additional Context
No response