microk8s: "microk8s.kubectl" failed: cannot create transient scope: DBus error "System.Error.E2BIG"
Problem description
When trying to execute any of the microk8s commands I get:
internal error, please report: running "microk8s.kubectl" failed: cannot create transient scope: DBus error "System.Error.E2BIG": [Argument list too long]
After some time of cluster activity, microk8s fails with the above error. This error effectively prevents any operations with the microk8s command, including microk8s inspect, microk8s.start/stop, and microk8s.kubectl.
It seems that the only solution is to restart the system, after which the cluster comes back to a healthy state. However, after a couple of hours it falls back into the failed state again.
Interestingly enough, the services inside the cluster ARE working and responding – I can query them and they behave as expected – but I cannot check their state or logs.
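(One possible stop-gap, assuming a standalone kubectl binary is available: the failure happens in the snap wrapper before kubectl ever runs, so pointing a plain kubectl at microk8s's kubeconfig may still allow inspecting state and logs. The path below is the conventional microk8s location and may differ by version.)

```sh
# Bypass the failing snap wrapper with a standalone kubectl; the
# kubeconfig path is the usual microk8s location (an assumption here)
kubectl --kubeconfig /var/snap/microk8s/current/credentials/client.config get pods -A
kubectl --kubeconfig /var/snap/microk8s/current/credentials/client.config logs <pod-name>
```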
System data
microk8s: 1.15/1.17 stable (both had the same problem occurring)
System: Ubuntu 18.04.1
Comments
- Three of the services that run in the cluster mount on the disk.
- Other commands on that server are working fine; it’s just microk8s that crashes with this error.
- I tried to increase the ulimit size, but it didn’t help and I’m not too keen on raising it indefinitely (see the sketch after this list).
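For reference, a minimal sketch of how the open-file limit can be checked and raised; the 65536 value and the limits.conf entries are illustrative assumptions, not settings taken from this issue:

```sh
# Show the current soft and hard limits on open file descriptors
ulimit -Sn
ulimit -Hn

# Temporarily raise the soft limit for the current shell session only
ulimit -n 65536

# A persistent change would go in /etc/security/limits.conf, e.g.
# (values illustrative):
#   *  soft  nofile  65536
#   *  hard  nofile  65536
```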
Is there a way to at least recover the system permanently from that state?
I tried restarting each of the services from here: https://microk8s.io/docs/configuring-services
Only one helped: systemctl restart snap.microk8s.daemon-containerd, but the cluster falls back into the same failed state (DBus error) a couple of seconds after restarting the containerd service – so ultimately it didn’t help.
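For completeness, the restart attempt sketched as a loop; the daemon names below are the ones documented for the 1.15/1.17-era snaps (per the configuring-services page linked above), and the exact set varies between microk8s releases:

```sh
# Restart each microk8s systemd daemon in turn; daemon names follow the
# configuring-services docs for this microk8s generation and may differ
# on newer releases
for svc in snap.microk8s.daemon-apiserver \
           snap.microk8s.daemon-controller-manager \
           snap.microk8s.daemon-scheduler \
           snap.microk8s.daemon-kubelet \
           snap.microk8s.daemon-proxy \
           snap.microk8s.daemon-containerd; do
    sudo systemctl restart "$svc"
done
```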
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Reactions: 1
- Comments: 22
This started showing up for me again, and it also appears to be related to journalctl. (For reasons that make sense to a colleague who understands Linux better than I do, some users on a machine have this problem while others don’t, even though they share the same snap installation of kubectl.) I was able to fix it (without rebooting) with: sudo systemctl restart user@1000.service

Just ran into this issue in 1.25. I was seeing related messages in journalctl. Restarting user@1000.service solved this for me: sudo systemctl restart user@1000.service (I assume rebooting would have had a similar result.)

@robotrapta yes, you linked the issue here 😃
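A small generalization of the fix above (my own sketch, not from the thread): user@1000.service is simply the systemd user instance for UID 1000, so the affected user's UID can be substituted:

```sh
# Restart the systemd user instance for the current user's UID;
# replace $(id -u) with another UID if a different user is affected
sudo systemctl restart "user@$(id -u).service"
```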