aws-ebs-csi-driver: csi-resizer getting OOMKilled
/kind bug
What happened? We are using version v1.6.0-eksbuild.1 of the aws-ebs-csi-driver addon. We have one environment where the csi-resizer container is using substantially more memory and getting OOMKilled. Since the memory limit cannot be configured through the addon, we have no way of increasing it. We can edit the deployment live, but it gets reverted by EKS. However, the real issue is that this container is using so much memory:
ebs-csi-controller-748586d5cc-9fbr5 csi-resizer 13m 148Mi
Our other environments look like:
ebs-csi-controller-5458b96c8d-bmgmf csi-resizer 1m 21Mi
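For reference, these are per-container readings from kubectl top; they can be reproduced with something like the following (assuming the add-on's standard install in kube-system and the chart's default `app=ebs-csi-controller` pod label):

```sh
# Per-container CPU/memory for the controller pods (requires metrics-server)
kubectl -n kube-system top pod -l app=ebs-csi-controller --containers
```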
The memory settings for csi-resizer are
Limits:
cpu: 100m
memory: 128Mi
Requests:
cpu: 10m
memory: 40Mi
which are baked into the chart.
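The live edit mentioned above amounts to something like the following patch, which EKS then reverts when it reconciles the managed add-on; the deployment and container names here are the add-on defaults:

```sh
# Raise the csi-resizer memory limit in place -- not a real fix, since EKS
# reverts manual edits to the managed add-on's deployment shortly afterwards.
kubectl -n kube-system patch deployment ebs-csi-controller --patch '
spec:
  template:
    spec:
      containers:
        - name: csi-resizer
          resources:
            limits:
              memory: 256Mi
'
```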
What you expected to happen? csi-resizer should not be using so much memory.
How to reproduce it (as minimally and precisely as possible)? This is unclear, but we would welcome pointers on how to troubleshoot (a starting point is sketched below).
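A few commands that could help narrow this down, assuming the standard kube-system install; the pod name is the one from the output above:

```sh
# Confirm the container was OOMKilled and inspect its last run
kubectl -n kube-system describe pod ebs-csi-controller-748586d5cc-9fbr5 | grep -A5 'Last State'
kubectl -n kube-system logs ebs-csi-controller-748586d5cc-9fbr5 -c csi-resizer --previous
```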
Anything else we need to know? This is occurring even though we aren't resizing any volumes.
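To double-check that no resize is actually in flight, one could look for PVCs carrying the in-progress resize conditions (Resizing / FileSystemResizePending); this sketch assumes jq is available:

```sh
# List any PVCs that currently have an in-progress resize condition
kubectl get pvc --all-namespaces -o json | jq -r '
  .items[]
  | select((.status.conditions // []) | any(.type == "Resizing" or .type == "FileSystemResizePending"))
  | "\(.metadata.namespace)/\(.metadata.name)"'
```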
Environment
- Kubernetes version (use kubectl version): 1.22
- Driver version: v1.6.0-eksbuild.1
About this issue
- State: closed
- Created 2 years ago
- Comments: 26 (10 by maintainers)
We have been running csi-resizer v1.4.0 for about a week. We are still seeing this container go out of memory; we have one container with 62 restarts.
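For anyone else hitting this, the restart count and last termination reason per container can be listed with something like the following (again assuming the chart's default `app=ebs-csi-controller` label):

```sh
# Show restart counts and the last termination reason (e.g. OOMKilled) per container
kubectl -n kube-system get pods -l app=ebs-csi-controller \
  -o custom-columns='POD:.metadata.name,CONTAINERS:.status.containerStatuses[*].name,RESTARTS:.status.containerStatuses[*].restartCount,LAST_REASON:.status.containerStatuses[*].lastState.terminated.reason'
```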