kubernetes: glusterfs log files may become very large if volume mounts fail
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened: When a pod starts and needs to mount a gluster volume, a log file is created to record errors from the glusterfs mount. The log path is /var/lib/kubelet/plugins/kubernetes.io/glusterfs/[volName]/[podName]-glusterfs.log
We encountered a scenario where mounting a glusterfs volume for a pod kept failing because the glusterfs server was unreachable for a long time. Eventually the log file grew quite large.
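To see how much space these logs consume, something like the following works (a sketch, assuming kubelet's default root dir of /var/lib/kubelet):

```sh
# List glusterfs mount logs by size, largest last.
# Adjust the path if kubelet runs with a non-default --root-dir.
du -h /var/lib/kubelet/plugins/kubernetes.io/glusterfs/*/*-glusterfs.log | sort -h
```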

When can these log files be cleaned up without requiring manual deletion?
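A possible stopgap until kubelet handles this itself is to periodically truncate oversized logs, e.g. from cron (a sketch only; the 100M threshold and the default kubelet path are assumptions):

```sh
# Truncate oversized glusterfs mount logs in place.
# Truncating (rather than deleting) is safer because a glusterfs
# client may still hold the file open.
find /var/lib/kubelet/plugins/kubernetes.io/glusterfs \
  -name '*-glusterfs.log' -size +100M -exec truncate -s 0 {} +
```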
What you expected to happen:
I think we should consider removing the log file after every mount attempt, whether the mount failed or succeeded.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version): v1.10
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. uname -a):
- Install tools:
- Others:
About this issue
- State: closed
- Created 6 years ago
- Comments: 28 (20 by maintainers)
I agree with dmoessne that this issue should be addressed, at least in the form of documentation recommending steps to mitigate it (e.g. logrotate). CC @humblec, any thoughts?
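For reference, a logrotate rule along these lines might be what such documentation recommends (a sketch only; the path assumes kubelet's default root dir, and the size/rotate values are arbitrary). copytruncate matters here because the glusterfs client keeps the log file open while the volume is mounted:

```sh
# Hypothetical drop-in: /etc/logrotate.d/kubelet-glusterfs
# copytruncate rotates without invalidating the open file
# handle held by the glusterfs client.
cat <<'EOF' | sudo tee /etc/logrotate.d/kubelet-glusterfs
/var/lib/kubelet/plugins/kubernetes.io/glusterfs/*/*-glusterfs.log {
    size 50M
    rotate 3
    compress
    missingok
    notifempty
    copytruncate
}
EOF
```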