longhorn: [BUG] Drain action fails on the nodes in a cluster when longhorn is deployed.
Describe the bug
Drain action fails on the nodes of a cluster when Longhorn is deployed.
To Reproduce
- Have longhorn deployed in a cluster
- From the Rancher UI, select a node and choose "Drain".
- Select the "Aggressive" option. The command sent is:
Draining node sowmya-test-longhorn-w-4 in c-w6fdn with flags --delete-local-data=true --force=false --grace-period=-1 --ignore-daemonsets=true --timeout=120s
- The drain fails with the error:
[ERROR] nodeDrain: kubectl error draining node [sowmya-test-longhorn-w-1] in cluster [c-w6fdn]:
sowmya-test-longhorn-w-1
error: cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override): default/longhorn-test-minio, longhorn-system/instance-manager-e-b6051d70, longhorn-system/instance-manager-r-2b2c5d04
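As the error message itself suggests, the eviction can be pushed through by adding --force to the drain command. A minimal sketch of the equivalent kubectl invocation (same flags Rancher sent, with --force flipped to true; note this is a workaround only, and force-evicting the unmanaged instance-manager pods may interrupt in-flight Longhorn engine/replica processes on that node):

```shell
# Workaround sketch: rerun the drain Rancher attempted, but force eviction of
# pods not managed by a controller (the error names longhorn-test-minio and
# the two instance-manager pods). Node/cluster names are from the report above.
kubectl drain sowmya-test-longhorn-w-1 \
  --delete-local-data \
  --force \
  --grace-period=-1 \
  --ignore-daemonsets \
  --timeout=120s
```

This does not address the underlying issue that Longhorn's instance-manager pods are standalone (not owned by a ReplicaSet/DaemonSet/StatefulSet), which is why a default drain refuses to evict them.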
Expected behavior
- Longhorn should be able to handle the "Drain" option from Rancher.
- Another scenario where this will fail is a Kubernetes cluster upgrade in Rancher, or any parameter change that triggers an `rke up`: the nodes are definitely cordoned, and users have the option to drain them during the cluster upgrade.
Environment:
- Longhorn version: master (build from 05/01/2020)
- Kubernetes version: 1.17.5
- Node OS type and version: RKE Linux cluster on DigitalOcean
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 17 (11 by maintainers)
@tulanian scheduled for v1.1.0 milestone (currently end of month).