
Describe the bug
If a pod has more than one container and memory requests and limits are set for some but not all containers, then the calculation of what percentage of the pod's memory limit (the %MEM/L column) is being consumed can be incorrect.
Simplest example: a pod with two containers.
Both request 100Mi, but only one of them has a limit, set to 120Mi. Both allocate 90Mi, so the total memory usage of the pod is 180Mi. In this scenario k9s shows the pod's memory consumption as 150% of the memory limit (180/120 = 1.5).
It seemingly calculates (I have not checked the code) like this:
- Add the current memory usage of all containers
- Add the memory limits of all containers
- Divide value from step 1 by step 2
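The following Go sketch is my own illustration of that presumed calculation, not k9s's actual code; the Container type and field names are made up for the example. With two containers using 90Mi each and only one 120Mi limit, it reproduces the 150% reading.

```go
package main

import "fmt"

// Container holds observed memory usage and the configured memory limit
// for a single container, both in MiB. A LimitMiB of 0 means "no limit set".
// These are illustrative names, not k9s's internal types.
type Container struct {
	UsageMiB float64
	LimitMiB float64
}

// presumedPercentMemLimit mirrors what the %MEM/L column appears to do:
// sum all usages, sum all limits (unlimited containers contribute 0), divide.
func presumedPercentMemLimit(containers []Container) float64 {
	var usage, limit float64
	for _, c := range containers {
		usage += c.UsageMiB
		limit += c.LimitMiB // a container without a limit adds 0 here
	}
	if limit == 0 {
		return 0
	}
	return usage / limit * 100
}

func main() {
	// Two containers: both use 90Mi, only the first has a 120Mi limit.
	pod := []Container{{UsageMiB: 90, LimitMiB: 120}, {UsageMiB: 90, LimitMiB: 0}}
	fmt.Printf("%%MEM/L = %.0f%%\n", presumedPercentMemLimit(pod)) // prints 150%
}
```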
If any of the containers has no limit specified, this can lead to a reported pod memory usage of more than 100%, which is misleading because a container that tries to allocate more memory than its limit is oom-killed.
It is important to understand that memory limits are only set per container, not per pod. So the %MEM/L column in k9s's pod view is tricky and doesn't have an intuitive definition. Seeing values greater than 100% when the memory limit is supposed to be a hard one (i.e. you cannot exceed it without getting oom-killed) prompted me to raise this bug.
To Reproduce
Steps to reproduce the behavior:
- Go to pod view (command :pods)
- Look at the '%MEM/L' column and find a value greater than 100
- Select the pod and hit Enter to go into the container view
- Check the current memory consumption and memory limits of each container
Expected behavior
In pod view, values greater than 100 should never be shown in the %MEM/L column.
Containers that have no memory limit set should be omitted from the calculation (see the sketch below).
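A minimal sketch of the calculation I would expect instead, reusing the same illustrative Container type as above; containers without a limit are dropped from both the numerator and the denominator, so the example pod reports 75% instead of 150%.

```go
package main

import "fmt"

// Container holds observed memory usage and the configured memory limit,
// both in MiB; 0 means no limit. Illustrative only, not k9s internals.
type Container struct {
	UsageMiB float64
	LimitMiB float64
}

// percentMemLimitOmittingUnlimited counts only containers that actually have
// a memory limit, so the result can only exceed 100% when some container is
// genuinely above its own limit.
func percentMemLimitOmittingUnlimited(containers []Container) float64 {
	var usage, limit float64
	for _, c := range containers {
		if c.LimitMiB == 0 {
			continue // no limit set: exclude from numerator and denominator
		}
		usage += c.UsageMiB
		limit += c.LimitMiB
	}
	if limit == 0 {
		return 0 // no container has a limit; the column has no meaningful value
	}
	return usage / limit * 100
}

func main() {
	// Same example as above: only the limited container (90Mi of 120Mi) counts.
	pod := []Container{{UsageMiB: 90, LimitMiB: 120}, {UsageMiB: 90, LimitMiB: 0}}
	fmt.Printf("%%MEM/L = %.0f%%\n", percentMemLimitOmittingUnlimited(pod)) // prints 75%
}
```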
Versions:
Additional context
One might argue it's not a bug and that it all depends on what definition is used for the %MEM/L column in pod view, and that's fair. But since k8s oom-kills a container that tries to go above 100% of its limit, and k9s prints values greater than 100% in this column in red, it might confuse users as to what is going on, even though no (container) limit has actually been exceeded. To me, this feels wrong.