diff --git a/blog/posts/2026-03-05-uncached-io.md b/blog/posts/2026-03-05-uncached-io.md
new file mode 100644
index 000000000..27905e9a2
--- /dev/null
+++ b/blog/posts/2026-03-05-uncached-io.md
@@ -0,0 +1,55 @@
---
title: "Uncached I/O in Prometheus"
created_at: 2026-03-05
kind: article
author_name: "Ayoub Mrini (@machine424)"
---

Do you find yourself constantly looking up the difference between `container_memory_usage_bytes`, `container_memory_working_set_bytes`, and `container_memory_rss`? Pick the wrong one and your memory limits lie to you, your benchmarks mislead you, and your container gets OOMKilled.

You're not alone. There is even a [9-year-old Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/43916) that captures users' frustration.

The explanation is simple: RAM is not used in just one way. One of the easiest things to miss is the [page cache](https://en.wikipedia.org/wiki/Page_cache). For some containers it can make up most of the reported memory usage, even though that memory is largely reclaimable, creating large gaps between those metrics.

> NOTE: The feature discussed here currently only supports Linux.

Prometheus writes a lot of data to disk. It is, after all, a database. But not every write benefits from sitting in the page cache. Compaction writes are the clearest example: once a block is written, only a fraction of that data is likely to be queried again soon, and since there is no way to predict which fraction, caching it all offers little return. The [use-uncached-io](https://prometheus.io/docs/prometheus/latest/feature_flags/#use-uncached-io) feature flag was built to address exactly this.

Bypassing the cache for those writes reduces Prometheus's page cache footprint, making its memory usage more predictable and easier to reason about. It also relieves pressure on that shared cache, lowering the risk of evicting hot data that queries and other reads actually depend on.
A potential bonus is reduced CPU overhead from cache allocations and evictions. The hard constraint throughout was to avoid any measurable regression in CPU or disk I/O.

The flag was introduced in Prometheus `v3.5.0` and currently only supports Linux. Under the hood, it uses direct I/O, which requires proper filesystem support and a kernel `v2.4.10` or newer, though you should be fine, as that version shipped nearly 25 years ago.

If direct I/O helps here, why was it not done earlier, and why is it not used everywhere it would help? Because direct I/O comes with strict alignment requirements. Unlike buffered I/O, you cannot simply write any chunk of memory to any position in a file. The file offset, the memory buffer address, and the transfer size must all be aligned to the logical sector size of the underlying storage device, typically 512 or 4096 bytes.

To satisfy those constraints, a [`bufio.Writer`](https://pkg.go.dev/bufio#Writer)-like writer, [`directIOWriter`](https://github.com/prometheus/prometheus/blob/ac12e30f99df9d2f68025f0238c0aef95146e94b/tsdb/fileutil/direct_io_writer.go#L46), was implemented. On Linux kernels `v6.1` or newer, Prometheus retrieves the exact alignment values via [statx](https://man7.org/linux/man-pages/man2/statx.2.html); on older kernels, conservative defaults are used.

The `directIOWriter` currently covers chunk writes during compaction only, but that alone accounts for a substantial portion of Prometheus's I/O. The results are tangible: benchmarks show a 20–50% reduction in page cache usage, as measured by `container_memory_cache`.

![container_memory_cache over time, baseline vs. use-uncached-io](/assets/blog/2026-03-05/benchmark1.png)
*`container_memory_cache` over time, baseline (top) vs. use-uncached-io (bottom)*