kubernetes: cAdvisor leaking journalctl processes
Extracted from https://github.com/kubernetes/kubernetes/pull/23491#issuecomment-251777767
cAdvisor appears to be leaking journalctl processes, originating here: https://github.com/kubernetes/kubernetes/blob/master/vendor/github.com/google/cadvisor/utils/oomparser/oomparser.go#L169
https://github.com/kubernetes/kubernetes/pull/23491 changed the kubelet unit files to restart only the kubelet process, which ends up orphaning and leaking the journalctl processes launched by cAdvisor.
AFAIK, this behavior can also leak many other child processes, such as `du`, `ls`, and `mount`.
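The leak pattern is worth making concrete. cAdvisor's oomparser starts a long-running `journalctl -k -f` child via `os/exec`; if the parent is killed and restarted (as the new unit files do to the kubelet) without first killing that child, the child is re-parented to init and leaks. Below is a minimal, hedged Go sketch of the pattern and the cleanup it needs; `startFollower` is a hypothetical name, and `sleep 60` stands in for `journalctl -k -f` so the example is self-contained:

```go
package main

import (
	"fmt"
	"os/exec"
)

// startFollower mimics the pattern in oomparser.go: it launches a
// long-running child process ("journalctl -k -f" in cAdvisor; "sleep 60"
// here so the sketch runs anywhere) and returns a cleanup function.
// If the caller exits without invoking cleanup -- e.g. because systemd
// restarted only the kubelet process -- the child is re-parented to
// init and leaks.
func startFollower() (*exec.Cmd, func() error, error) {
	cmd := exec.Command("sleep", "60") // stand-in for: journalctl -k -f
	if err := cmd.Start(); err != nil {
		return nil, nil, err
	}
	cleanup := func() error {
		// Without this kill+reap, the child outlives its parent.
		if err := cmd.Process.Kill(); err != nil {
			return err
		}
		return cmd.Wait() // reap, so it does not linger as a zombie either
	}
	return cmd, cleanup, nil
}

func main() {
	cmd, cleanup, err := startFollower()
	if err != nil {
		fmt.Println("start failed:", err)
		return
	}
	fmt.Println("child pid:", cmd.Process.Pid)
	if err := cleanup(); err != nil {
		// Wait reports the kill signal; the child is gone either way.
		fmt.Println("reaped after kill:", err)
	}
}
```

Note that even this cleanup only helps when the parent shuts down gracefully; a SIGKILL of the kubelet still orphans the child, which is why restarting the whole unit (or avoiding a child process entirely) is the safer design.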
@kubernetes/sig-node we need to fix this.
I recommend reverting https://github.com/kubernetes/kubernetes/pull/23491 to begin with, as suggested by @wwwtyro.
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Comments: 22 (21 by maintainers)
Commits related to this issue
- Merge pull request #49640 from jsafrane/systemd-mount-service: Automatic merge from submit-queue. Run mount in its own systemd scope; kubelet needs to run /bin/mount in its own cgroup. - When kube... — committed to kubernetes/kubernetes by deleted user 7 years ago
- Fix journalctl leak This fixes the journalctl leak that occurs when a process that uses cadvisor exits. See issues #1725 and https://github.com/kubernetes/kubernetes/issues/34965. — committed to mtaufen/cadvisor by mtaufen 7 years ago
IMO we should read kmsg instead of journald / sd-journal, since cAdvisor only cares about OOM messages.
Duplicating my work from https://github.com/kubernetes/node-problem-detector/pull/41 against cadvisor should fix this, and I’ve been planning to do that regardless.
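The appeal of the kmsg approach is that reading `/dev/kmsg` directly needs no child process at all, so there is nothing left to orphan when the reader exits. A hedged sketch of the idea follows; `parseKmsgRecord` and the OOM regexp are illustrative (the exact kernel log format varies across versions, and the real node-problem-detector/cadvisor parsers are more thorough):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strconv"
	"strings"
)

// oomRegexp matches the kernel's classic OOM-kill log line, e.g.
//   Killed process 1234 (stress) total-vm:..., anon-rss:...
// This is illustrative; real parsers handle more message variants.
var oomRegexp = regexp.MustCompile(`Killed process (\d+) \(([^)]+)\)`)

// parseKmsgRecord splits one /dev/kmsg record, which has the form
// "<prefix>;<message>" where the prefix is "priority,seq,usec,flags",
// and returns the pid and name of an OOM-killed process, if any.
func parseKmsgRecord(record string) (pid int, name string, ok bool) {
	parts := strings.SplitN(record, ";", 2)
	if len(parts) != 2 {
		return 0, "", false
	}
	m := oomRegexp.FindStringSubmatch(parts[1])
	if m == nil {
		return 0, "", false
	}
	pid, err := strconv.Atoi(m[1])
	if err != nil {
		return 0, "", false
	}
	return pid, m[2], true
}

func main() {
	// No journalctl child: we read the kernel ring buffer ourselves.
	// Opening /dev/kmsg typically requires root on Linux.
	f, err := os.Open("/dev/kmsg")
	if err != nil {
		fmt.Println("open /dev/kmsg:", err)
		return
	}
	defer f.Close()
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		if pid, name, ok := parseKmsgRecord(scanner.Text()); ok {
			fmt.Printf("OOM kill: pid=%d name=%s\n", pid, name)
		}
	}
}
```

Because the reader is just a file descriptor in-process, a kubelet restart tears it down automatically, which is exactly the property the journalctl-based approach lacks.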