Bug summary
We're running Prefect server on our k8s cluster.
We needed to scale the Prefect background services, so we split the loop services and the background services into separate deployments.
But ever since we split them and scaled the background services, Redis memory usage has been climbing constantly, with nothing evicting or trimming it.
Should we set a trimming/eviction policy on the "events" Redis stream ourselves?
(Not sure if it's relevant, but "XINFO GROUPS events" returns an empty array, so maybe there's an issue with the consumer group? See the diagnostic sketch below.)
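For reference, this is roughly how we're inspecting the stream (a minimal sketch using redis-py; the host/port are placeholders for our cluster's Redis service, and "events" is the stream key we see growing):

```python
import redis

# Placeholder connection details for our cluster's Redis service.
r = redis.Redis(host="redis", port=6379, decode_responses=True)

# Entry count of the events stream -- this number only ever grows.
print("XLEN events:", r.xlen("events"))

# Stream metadata (length, last generated id, ...).
info = r.xinfo_stream("events")
print("length:", info["length"], "| last id:", info["last-generated-id"])

# Consumer groups registered on the stream. For us this returns an
# empty list, i.e. nothing appears to be consuming (and trimming) events.
print("XINFO GROUPS events:", r.xinfo_groups("events"))
```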
Is this a known bug?
It's only a matter of time until Redis hits its memory limit again (it has already happened once), which makes the whole separation and scaling of the background services unworkable.
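As a stopgap we could trim the stream ourselves with XTRIM MAXLEN (a sketch below; the 100k cap is an arbitrary number we'd tune, not anything documented by prefect-redis), but we'd rather understand why nothing is consuming/trimming it in the first place:

```python
import redis

r = redis.Redis(host="redis", port=6379)

# Cap the events stream at roughly 100k entries. approximate=True issues
# MAXLEN ~, letting Redis trim at macro-node boundaries, which is much
# cheaper than an exact trim.
removed = r.xtrim("events", maxlen=100_000, approximate=True)
print(f"trimmed {removed} entries")
```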
Version info
Version: 3.6.7
API version: 0.8.4
Python version: 3.12.12
Git commit: ebfef643
Built: Thu, Dec 18, 2025 07:57 PM
OS/Arch: linux/x86_64
Profile: ephemeral
Server type: server
Pydantic version: 2.12.5
Server:
  Database: postgresql
  PostgreSQL version: 16.8
Integrations:
  prefect-redis: 0.2.7
Additional context
No response