kubeadm: kubeadm panic during phase-based init
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:05:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/arm"}
Environment:
- Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/arm"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
- Cloud provider or hardware configuration: RPi3B+
- OS (e.g. from /etc/os-release):
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
HYPRIOT_OS="HypriotOS/armhf"
HYPRIOT_OS_VERSION="v2.0.1"
HYPRIOT_DEVICE="Raspberry Pi"
HYPRIOT_IMAGE_VERSION="v1.9.0"
- Kernel (e.g. uname -a):
Linux km01 4.14.34-hypriotos-v7+ #1 SMP Sun Apr 22 14:57:31 UTC 2018 armv7l GNU/Linux
- Others:
What happened?
I was trying to work around the race conditions mentioned in #413 and #1380 by executing kubeadm init in phases. Instead, it crashed on the second call to init.
What you expected to happen?
I expected to see the join information.
How to reproduce it (as minimally and precisely as possible)?
Fresh install of hypriotos-rpi-v1.9.0, then:
sudo bash <<EOF
curl -sSL https://packagecloud.io/Hypriot/rpi/gpgkey | apt-key add -
curl -sSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubeadm
sudo kubeadm reset
sudo kubeadm init phase control-plane all --pod-network-cidr 10.244.0.0/16
sudo sed -i 's/initialDelaySeconds: [0-9][0-9]/initialDelaySeconds: 240/g' /etc/kubernetes/manifests/kube-apiserver.yaml
sudo sed -i 's/failureThreshold: [0-9]/failureThreshold: 18/g' /etc/kubernetes/manifests/kube-apiserver.yaml
sudo sed -i 's/timeoutSeconds: [0-9][0-9]/timeoutSeconds: 20/g' /etc/kubernetes/manifests/kube-apiserver.yaml
sudo kubeadm init --skip-phases=control-plane --ignore-preflight-errors=all --pod-network-cidr 10.244.0.0/16
EOF
Anything else we need to know?
This is the output with the panic information:
$ sudo kubeadm init --skip-phases=control-plane --ignore-preflight-errors=all --pod-network-cidr 10.244.0.0/16
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.04.0-ce. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [km01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.178.43]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [km01 localhost] and IPs [192.168.178.43 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [km01 localhost] and IPs [192.168.178.43 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0xaab708]
goroutine 1 [running]:
k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig.validateKubeConfig(0xfb953a, 0xf, 0xfc3e7a, 0x17, 0x3034540, 0x68f, 0x7bc)
/workspace/anago-v1.13.3-beta.0.37+721bfa751924da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go:236 +0x120
k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig.createKubeConfigFileIfNotExists(0xfb953a, 0xf, 0xfc3e7a, 0x17, 0x3034540, 0x0, 0xf8160)
/workspace/anago-v1.13.3-beta.0.37+721bfa751924da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go:257 +0x90
k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig.createKubeConfigFiles(0xfb953a, 0xf, 0x3144b40, 0x3527c60, 0x1, 0x1, 0x0, 0x0)
/workspace/anago-v1.13.3-beta.0.37+721bfa751924da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go:120 +0xf4
k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig.CreateKubeConfigFile(0xfc3e7a, 0x17, 0xfb953a, 0xf, 0x3144b40, 0x99807501, 0xb9bfcc)
/workspace/anago-v1.13.3-beta.0.37+721bfa751924da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go:93 +0xe8
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases.runKubeConfigFile.func1(0xf76bc8, 0x32f2ff0, 0x0, 0x0)
/workspace/anago-v1.13.3-beta.0.37+721bfa751924da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/kubeconfig.go:155 +0x168
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1(0x336ee80, 0x0, 0x0)
/workspace/anago-v1.13.3-beta.0.37+721bfa751924da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235 +0x160
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll(0x34ecbe0, 0x3527d68, 0x32f2ff0, 0x0)
/workspace/anago-v1.13.3-beta.0.37+721bfa751924da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:416 +0x5c
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run(0x34ecbe0, 0x24, 0x35bbdb4)
/workspace/anago-v1.13.3-beta.0.37+721bfa751924da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:208 +0xc8
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1(0x3513400, 0x3227560, 0x0, 0x4)
/workspace/anago-v1.13.3-beta.0.37+721bfa751924da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:141 +0xfc
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0x3513400, 0x32274c0, 0x4, 0x4, 0x3513400, 0x32274c0)
/workspace/anago-v1.13.3-beta.0.37+721bfa751924da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:760 +0x20c
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x3512140, 0x3513400, 0x35123c0, 0x300c8e0)
/workspace/anago-v1.13.3-beta.0.37+721bfa751924da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:846 +0x210
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0x3512140, 0x300c0d8, 0x117dec0)
/workspace/anago-v1.13.3-beta.0.37+721bfa751924da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:794 +0x1c
k8s.io/kubernetes/cmd/kubeadm/app.Run(0x3034060, 0x0)
/workspace/anago-v1.13.3-beta.0.37+721bfa751924da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:48 +0x1b0
main.main()
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:29 +0x20
It seems like some nil checks are missing: https://github.com/kubernetes/kubernetes/blob/v1.13.3/cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go#L236
This can only happen if the kubeconfig phase reads a corrupted kubeconfig file with a missing cluster context, possibly a side effect of not calling reset.
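For illustration only, here is a minimal sketch of the kind of guard that would avoid the nil dereference when an on-disk kubeconfig's current-context does not resolve to a cluster entry. The types and names below are simplified stand-ins (not the actual kubeadm code or the clientcmdapi types), assuming the crash comes from an unchecked map lookup in validateKubeConfig:

```go
// A minimal sketch of the kind of guard that seems to be missing, using
// simplified stand-in types that mirror a kubeconfig's structure. This is
// NOT the kubeadm source; it only illustrates how a nil check turns the
// SIGSEGV above into a readable validation error.
package main

import "fmt"

type cluster struct {
	Server                   string
	CertificateAuthorityData []byte
}

type kubeContext struct {
	Cluster string
	User    string
}

type kubeConfig struct {
	CurrentContext string
	Contexts       map[string]*kubeContext
	Clusters       map[string]*cluster
}

// clusterForCurrentContext resolves the cluster referenced by the config's
// current-context. The unguarded equivalent,
//
//	cfg.Clusters[cfg.Contexts[cfg.CurrentContext].Cluster]
//
// panics with a nil pointer dereference when either map entry is missing.
func clusterForCurrentContext(cfg *kubeConfig) (*cluster, error) {
	ctx, ok := cfg.Contexts[cfg.CurrentContext]
	if !ok || ctx == nil {
		return nil, fmt.Errorf("current-context %q has no matching context entry", cfg.CurrentContext)
	}
	c, ok := cfg.Clusters[ctx.Cluster]
	if !ok || c == nil {
		return nil, fmt.Errorf("context %q references undefined cluster %q", cfg.CurrentContext, ctx.Cluster)
	}
	return c, nil
}

func main() {
	// Simulate a kubeconfig whose current-context points at a context entry
	// that was never written, e.g. a file left behind by an interrupted run.
	cfg := &kubeConfig{
		CurrentContext: "kubernetes-admin@kubernetes",
		Contexts:       map[string]*kubeContext{},
		Clusters:       map[string]*cluster{},
	}
	if _, err := clusterForCurrentContext(cfg); err != nil {
		fmt.Println("validation error instead of a panic:", err)
	}
}
```

Returning an error instead of panicking would at least report which kubeconfig file is stale when init is re-run over a half-written /etc/kubernetes.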
But the kubeconfig file looks OK to me:
$ sudo cat /etc/kubernetes/admin.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNU1ESXdNakF4TXpBeU5Wb1hEVEk1TURFek1EQXhNekF5TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTUIrCmJ1ckREUGtBR1V5NmM3ZkY5VWI5UlMyRWVqdmx2aHRwRlBGVzVPRGtRL3ZsbFU5b05MR0txdXZjRVVFekJmTnkKcURzQzBsVktoTkFBMWl6TnplZVJEWlRDZ2ZFYitxa3Zib0xGd25hd1A0ZkRKKzVnUndxN0JEM0xYdWNsTFNmeApmczkwb05RaXdYL0hXSjBOUkJZRnN6Zk1iaXZaUSsrRDJjS0FOZm9qSGx2Rm9oU1BqZkVlWmp1NnBtTEhXNlMyCmY4NjJGcnhwSEdOWmhmR3JaTmd1YUFkK0tIM1BCc1IxTThpUFpiMnFjTEN0LzNmMHY2ejc4bUVoL294UC9oUjEKdWVGWmZJWCtpbmxzVXZDM2N3WXZ3VFd6ZnlOT0NSMUJCcUNHRmd4bmt0VVRJd0M3Szc3VHZUcGpnazd5NnAzSQpHMVd3SmVUUERYRXRleGhFTDQwQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFId2NhY0RuK0ZhaldKekJpVExlZmxLeVRSNTgKVm9yKzluZmtMRkNrTWZBcC94b2pwekVKbEJEM0didmh5V21tS2tJNDgvSHZabml1c1g1THNRdXhzamV4bnZTYwppMG9keFZOMTdSOFFiWVovZ0hsREdRanlnYXhvUWN6M1J5MFU3NmJ0U0toQ1VTTko2NEZqeGp1bU9MemVYbkRLCjlsRElPZHZ4VXRXZDVaajc1YmZFRmNyNHJKbEJTK0dZRi9Da2RrdzZtUlpXNCsrYkNPd3RBUGVUemd6bEZtQ1EKZmptM28wQUlNSitvMk9YUjFrRXFlTXo2VDM3b2FsYWNNU1hEeHh1cjBZUmw3NUJ2M2lBOGk0NE5Oei9tNzhOdQpPaW1ONnBVMDFyUWJEVjVBRzJmbndwaURBcGxNbkQ2R0FyZ3R5b3VUREs2ZmlWOXpZaVdkQlBLeFQ5az0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.178.43:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJYk4rZTR0WFh4and3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB4T1RBeU1ESXdNVE13TWpWYUZ3MHlNREF5TURJd01UTXlNRFJhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXBnMzBYR0h4U1IwMWdLMGEKc3FFb1hTanFJNXloZVErZ21YcWNJeDRMYUNIWVZJM2VZc29SbTVSTCtDYTNRblJ5aE4vSHVvMkJYUE1MdGlIZwpIR1BlL3VKRkRHOHJxa2xVbHZZSXZDMkE4QVpLVENEUzBFRmNoQ0RhOHhDMGVQUG9jbXdLWTdVRHFkWGIvY2RHCk8yZG9LaWJLeGtGM3dEWjVCUXR4VXgzTDB1bWZDVFFNOWlQYk00aHF3N3N0Rzc5SXE1dUZXU1VxMFNRb0tad0oKbDFzRXpCQ3kveGV2bWIvTG1jLzR5QTVRVGNPK09yejFTdUZReVRxN0NIb1g1T1ZadDRqbk9jQUZpdFhDbWFROAp3OW0wRERvanJLakVrWlZNL1M4aWY3T3hUZ1d5MDVkaGE4VWZ2TStHaFBmVTF1cEJqdFJGcXB2VUIySkp6UEFmCnl4cFJCUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFFcmRYbEhyb2w1a0NQR1UyYUNJOUE5a3NuL0k2a1l3RlR6RAo5NFprNVFSQVhUNjlqSy9namY3c3dNL1JxY1RpRnNFQnY0bXpzeGRjNnBydXdHbytab1o3V2VGTTAvNFJNcFZJCm1qVitKbWdlNk14WUkyMWhOZnMydjNNN2RnbVpMRjJsN25yRTNMTVpiMHZMdUJuN2ZKZWxXb0lGSDd3WWFnQnIKeFlWVzZjYzJtWkkzWHVxYTcraWpjNHpJdmpDSjR6cTFiRUdSUlNEQWNwbjhnQjFXWXRoUWd2cHV0cGZGTGlDTApIK0dya1ZCR3FEY3VVbFRJMkJlZXVMMUduRXJsQzYremhDZnY1VStGR2pwS2RwaVN6UkV4T1F4bEJJOEYzQnZVCnp0VTRVVkR3S24vYUFOUm01N3d6dHlTL0FyOGJQUlRrR0psbGpOZVE0bEd1cWtFQTJKaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBcGczMFhHSHhTUjAxZ0swYXNxRW9YU2pxSTV5aGVRK2dtWHFjSXg0TGFDSFlWSTNlCllzb1JtNVJMK0NhM1FuUnloTi9IdW8yQlhQTUx0aUhnSEdQZS91SkZERzhycWtsVWx2WUl2QzJBOEFaS1RDRFMKMEVGY2hDRGE4eEMwZVBQb2Ntd0tZN1VEcWRYYi9jZEdPMmRvS2liS3hrRjN3RFo1QlF0eFV4M0wwdW1mQ1RRTQo5aVBiTTRocXc3c3RHNzlJcTV1RldTVXEwU1FvS1p3Smwxc0V6QkN5L3hldm1iL0xtYy80eUE1UVRjTytPcnoxClN1RlF5VHE3Q0hvWDVPVlp0NGpuT2NBRml0WENtYVE4dzltMEREb2pyS2pFa1pWTS9TOGlmN094VGdXeTA1ZGgKYThVZnZNK0doUGZVMXVwQmp0UkZxcHZVQjJKSnpQQWZ5eHBSQlFJREFRQUJBb0lCQURsWFlrV3drS2lsekg3MQp4OTFkWjFuY01oWkFGVVovemY2UjUyNzlCZ1ZjZ3A2WUt1NUVSeFpKZkg1aHFERHJrMHd0Rm9SbUx3RFE4UDlnCjdVb0FkdFhmZnVhUFVTM0ppc3RpaEp1dXZ2S2p5VzVHZTJYczNDeklSN05kMW1SYUhhKzlmVXozQ2gvUXVOb0cKd1Vyc0ozMCt6aER1TkpNTWZIZndmcDZzRUdGeE9yYnN5WWE3S0l1RWxuQ0FHWXQwalpjcmw2OENKcVJnZEhEbwpwRFZCL2Zub0ZBZi82Ym9Ga1JTckJkeUM5clpqYlZRbmtwT0VpQ0JONCtMS3RIRjlhUXhELytJWXRVeWFrb2tLClNJNWVTZEhhbkl0U2hxaTVCQmtjV3c5cmdhZDJjYWE5TjRMR1Q1N29LSFJmSFV2T1phTDlGZC9xbjJlb285RlAKTXplcVdCVUNnWUVBeGk5Y3FIOEo1eHFmWUlpRXU2Ly9HTjZETXo0ZWlaSUZNRGdzWjhRR21Fc0hhd1ZZWHlRRwpQNjVES0p1ZUI3SWk1eHV2Z2tGR2JJVnBBMmRleEo2YzhtQmo4N2Zoa2s5SGxDb1VpcDBEdU9uVnJJTVR5Uk02CkR5QWNQaUw2MEY4cGFoU2htZ21USHdXYS81N1Vscllxc1N6RW4vVDBuVFFwQ09uYVJFTlVvTzhDZ1lFQTFuOE8Kdkk1OGx4MzBOSTg3UXV3eVNoMmxKUG04bnJUc0ZBbXRNNXB6Z1ovaUc5TUVGU0V0RzZLbnBuNlRrQjR0YzEzQgpiN01SVWZWY0RIQTRwS09TNk1DZHVoTmJLN3pmNjNOMFpMeWtMdzN2aExRYlhrRlBScEtEQm0rc3J2M0V1MEVnCnQwODNSKzdaMjV1aGhYa2I4WU9kaTZpQXk1VytMS2FPRzh0OWhVc0NnWUVBc2dDeUdZalk3U0NsUzMveXI5ejQKbzI2ZnFyTzltOVJ5SW9naG9pV1h3c3VJNHgvTzZzMGhhNnJxR1J3RWlXYi9JRkptaGZoNDkxbXdJMldCNGRtUQpuOFhob0hKbEFST0I5OXIveml3T3Z0UVBuYjJ4VktXWFBTU2JHVmd6ckZuOGlaSDBQN1VmMWZvajZEblJPWGh1CnllbXF4UHl2aEU3b0dHQnFNV3ZFSkRNQ2dZQVYxV01ib0dsZ1BJVlNJRTVJOXAvNzJWNnBEOTY2VFBKRzYrRTgKZ25sRmRZL2ZnekJFTWxkVUc4OXk3Q2w3SHdkRFdnVEpxUEdYWlNGVWhzdk5QblZDeWZDRU0xb3hibzFnZXlVYQo1L1RTY1ZtektWNHJ6dndSMC9JUVlxZXlQRlNkTnZqc2o5eXhyc2R3U2p3N3lPTW1SMTV2Qzl6b1hEcTZjczIrCldJMVRWd0tCZ1FDbWRpeG9nTXM0WkR3dFl4c3BBZzRqYUdRMURJck52bWJEcEl4eFhYNXBqSmFSWXU2VnljZk0KQkZJZmFtTkJpODNadDBIWkdERkdOdUdCSEJFTys4L1k4NWZDNWgySlM0MTBjUGhoVkoxWSs5Q0NpOGgzREZ2Swo5SWRzNkR0MUlCRFlsejFuV2p4cVcyb01zaGxZSy9BSkpYbGxRVXR3ZEJhczc4bkRvdkplYWc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
@gbailey46 Can you please also add the exact environment you ran this on? Otherwise “works for me” is not exactly helpful.
@tcurdt
I’d be happy to help you with this. I’m Ed@kubernetes.slack.com.
I was out of action for the past few days, but it’s on my todo list to take another, closer look.