submariner: Uninstalling submariner doesn't terminate the namespace
What happened: Uninstalling submariner doesn't terminate the submariner-operator namespace in either of the two clusters
What you expected to happen: The submariner-operator namespace should be terminated
How to reproduce it (as minimally and precisely as possible): Install submariner
subctl deploy-broker --operator-debug --kubeconfig=/tmp/rackA
subctl join --kubeconfig=/tmp/rackA --operator-debug --pod-debug --clusterid=rack-A broker-info.subm
subctl join --kubeconfig=/tmp/rackB --operator-debug --pod-debug --clusterid=rack-B broker-info.subm
Uninstall submariner
subctl uninstall --kubeconfig=/tmp/rackA
subctl uninstall --kubeconfig=/tmp/rackB
The namespace remains stuck in the Terminating state:
# oc get ns | grep subm
submariner-operator Terminating
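To help narrow down what is blocking deletion, the namespace finalizers and any leftover namespaced resources can be checked with generic Kubernetes commands along these lines (a troubleshooting sketch, not submariner-specific tooling):
oc get ns submariner-operator -o jsonpath='{.spec.finalizers}{"\n"}{.metadata.finalizers}{"\n"}'
oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n submariner-operator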
Anything else we need to know?: Same behaviour on OCP versions 4.10 and 4.12, and submariner versions 0.12 and 0.14.6
Environment:
- Diagnose information (use subctl diagnose all):
- Gather information (use subctl gather):
- Cloud provider or hardware configuration:
- Install tools:
- Others:
About this issue
- Original URL
- State: closed
- Created 10 months ago
- Comments: 18 (17 by maintainers)
A couple of things of note. You state that the 2 connected clusters are running submariner versions 0.12 and 0.14.6. I'm not sure we support a mixed submariner version environment. Also, you're running the latest devel version of subctl, at least to perform gather - I assume you used the same subctl to run uninstall. You really should use the same version of subctl as the version of submariner running in the cluster.
That said, as @sridhargaddam mentioned, since the submariner-operator deployment is gone, there isn't much we can tell without the pod logs. So, if you can try to reproduce, I'd suggest using kubectl to tail/follow the submariner-operator log, then run subctl uninstall and provide the log output and the subctl output.
Okay, so it's not consistent and is seen only intermittently. Anyway, without access to the logs, it's hard to understand what is going on. When you see this problem again, please attach the required logs.
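For reference, one way to capture the requested output (assuming the default deployment name submariner-operator in the submariner-operator namespace) is to follow the operator log in one terminal:
kubectl --kubeconfig=/tmp/rackA -n submariner-operator logs -f deploy/submariner-operator
and then run the uninstall in a second terminal while saving its output (the log filename here is just an example):
subctl uninstall --kubeconfig=/tmp/rackA | tee subctl-uninstall.log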