kubesphere: ks-account pod in CrashLoopBackOff after fresh install of kubesphere v2.1.1
Describe the Bug
I installed KubeSphere v2.1.1 on a fresh install of RKE v1.0.4.
Everything seems OK except the "ks-account" pod, which is in "CrashLoopBackOff" mode.
The pod fails with "create client certificate failed: <nil>".
I can display the console login page but can't log in; it fails with "unable to access backend services".
I did the procedure twice after resetting the nodes, and the RKE cluster is healthy and fully operational.
Versions Used
KubeSphere: 2.1.1
Kubernetes: rancher/rke v1.0.4 (fresh install)
Environment
3 masters (8 GB) + 3 workers (8 GB), all with CentOS 7.7 fully updated; SELinux and firewalld disabled
How To Reproduce
Steps to reproduce the behavior:
- Set up 6 nodes with CentOS 7.7 and 8 GB RAM each
- Install RKE with 3 masters and 3 workers (see the cluster.yml sketch after this list)
- Install KubeSphere by following the instructions here
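For context, a minimal cluster.yml sketch of the 3-master / 3-worker RKE layout described above; the addresses and SSH user are illustrative placeholders, not taken from this report:

# Hypothetical RKE cluster.yml node layout (addresses/user are placeholders)
nodes:
  - { address: 192.168.5.41, user: centos, role: [controlplane, etcd] }
  - { address: 192.168.5.42, user: centos, role: [controlplane, etcd] }
  - { address: 192.168.5.43, user: centos, role: [controlplane, etcd] }
  - { address: 192.168.5.45, user: centos, role: [worker] }
  - { address: 192.168.5.46, user: centos, role: [worker] }
  - { address: 192.168.5.47, user: centos, role: [worker] }
# Bring the cluster up with: rke up --config cluster.yml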
Expected behavior
All pods in the kubesphere-system namespace up and running, then being able to log in to the console
Logs
kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
kubectl get pods -n kubesphere-system
NAME READY STATUS RESTARTS AGE
ks-account-789cd8bbd5-nlvg9 0/1 CrashLoopBackOff 20 79m
ks-apigateway-5664c4b76f-8vsf4 1/1 Running 0 79m
ks-apiserver-75f468d48b-9dfwb 1/1 Running 0 79m
ks-console-78bddc5bfb-zlzq9 1/1 Running 0 79m
ks-controller-manager-d4788677-6pxhd 1/1 Running 0 79m
ks-installer-75b8d89dff-rl76c 1/1 Running 0 81m
openldap-0 1/1 Running 0 80m
redis-6fd6c6d6f9-6nfmd 1/1 Running 0 80m
kubectl logs -n kubesphere-system ks-account-789cd8bbd5-nlvg9
W0226 00:40:43.093650 1 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
E0226 00:40:44.709957 1 kubeconfig.go:62] create client certificate failed: <nil>
E0226 00:40:44.710030 1 im.go:1030] create user kubeconfig failed sonarqube create client certificate failed: <nil>
E0226 00:40:44.710057 1 im.go:197] user init failed sonarqube create client certificate failed: <nil>
E0226 00:40:44.710073 1 im.go:87] create default users user sonarqube init failed: create client certificate failed: <nil>
Error: user sonarqube init failed: create client certificate failed: <nil>
Usage:
ks-iam [flags]
Flags:
--add-dir-header If true, adds the file directory to the header
--admin-email string default administrator's email (default "admin@kubesphere.io")
--admin-password string default administrator's password (default "passw0rd")
{...}
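Diagnostic note (not part of the original report): given the resolution at the end, "create client certificate failed" points to the cluster not signing certificate signing requests (CSRs). Assuming that is the cause, a quick check on an RKE cluster could look like this:

# Look for CSRs that stay Pending instead of being approved and issued
kubectl get csr
# On a master node, check whether kube-controller-manager was started with the cluster-signing flags
docker inspect kube-controller-manager | grep cluster-signing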
kubectl describe pod ks-account-789cd8bbd5-nlvg9 -n kubesphere-system
Name: ks-account-789cd8bbd5-nlvg9
Namespace: kubesphere-system
Priority: 0
Node: worker3/192.168.5.47
Start Time: Tue, 25 Feb 2020 18:22:55 -0500
Labels: app=ks-account
pod-template-hash=789cd8bbd5
tier=backend
version=v2.1.1
Annotations: cni.projectcalico.org/podIP: 10.62.5.7/32
Status: Running
IP: 10.62.5.7
IPs:
IP: 10.62.5.7
Controlled By: ReplicaSet/ks-account-789cd8bbd5
Init Containers:
wait-redis:
Container ID: docker://1d63b336dac9e322155ee8cc31bc266df5ab4f734de5cf683b33d8cf6abc940b
Image: alpine:3.10.4
Image ID: docker-pullable://docker.io/alpine@sha256:7c3773f7bcc969f03f8f653910001d99a9d324b4b9caa008846ad2c3089f5a5f
Port: <none>
Host Port: <none>
Command:
sh
-c
until nc -z redis.kubesphere-system.svc 6379; do echo "waiting for redis"; sleep 2; done;
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 25 Feb 2020 18:22:56 -0500
Finished: Tue, 25 Feb 2020 18:22:56 -0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kubesphere-token-hk59s (ro)
wait-ldap:
Container ID: docker://b51a105434877c6a17cd4cc14bc6ad40e9d06c5542eadf1b62855a1c12cb847c
Image: alpine:3.10.4
Image ID: docker-pullable://docker.io/alpine@sha256:7c3773f7bcc969f03f8f653910001d99a9d324b4b9caa008846ad2c3089f5a5f
Port: <none>
Host Port: <none>
Command:
sh
-c
until nc -z openldap.kubesphere-system.svc 389; do echo "waiting for ldap"; sleep 2; done;
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 25 Feb 2020 18:22:57 -0500
Finished: Tue, 25 Feb 2020 18:23:13 -0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kubesphere-token-hk59s (ro)
Containers:
ks-account:
Container ID: docker://033c55c2a717e672d4abe256a9955f01d46ee47e08147a0660470ac0a9ae1055
Image: kubesphere/ks-account:v2.1.1
Image ID: docker-pullable://docker.io/kubesphere/ks-account@sha256:6fccef53ab7a269160ce7816dfe3583730ac7fe2064ea5c9e3ce5e366f3470eb
Port: 9090/TCP
Host Port: 0/TCP
Command:
ks-iam
--logtostderr=true
--jwt-secret=$(JWT_SECRET)
--admin-password=$(ADMIN_PASSWORD)
--enable-multi-login=False
--token-idle-timeout=40m
--redis-url=redis://redis.kubesphere-system.svc:6379
--generate-kubeconfig=true
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 25 Feb 2020 19:55:58 -0500
Finished: Tue, 25 Feb 2020 19:55:59 -0500
Ready: False
Restart Count: 23
Limits:
cpu: 1
memory: 500Mi
Requests:
cpu: 20m
memory: 100Mi
Environment:
KUBECTL_IMAGE: kubesphere/kubectl:v1.0.0
JWT_SECRET: <set to the key 'jwt-secret' in secret 'ks-account-secret'> Optional: false
ADMIN_PASSWORD: <set to the key 'admin-password' in secret 'ks-account-secret'> Optional: false
Mounts:
/etc/ks-iam from user-init (rw)
/etc/kubesphere from kubesphere-config (rw)
/etc/kubesphere/rules from policy-rules (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubesphere-token-hk59s (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
policy-rules:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: policy-rules
Optional: false
user-init:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: user-init
Optional: false
kubesphere-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kubesphere-config
Optional: false
kubesphere-token-hk59s:
Type: Secret (a volume populated by a Secret)
SecretName: kubesphere-token-hk59s
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 60s
node.kubernetes.io/unreachable:NoExecute for 60s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 3m46s (x413 over 93m) kubelet, worker3 Back-off restarting failed container
It works! Thanks. I uninstalled/reinstalled RKE + KubeSphere. For reference, I added the following to the RKE cluster config file:
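The exact snippet was not captured in this thread; my assumption, based on the CSR remark below and the standard RKE certificate paths, is that the cluster.yml addition looked roughly like this:

# Assumed cluster.yml addition (not the poster's verbatim config):
# point the built-in CSR signer at the cluster CA so client certificates can be issued
services:
  kube-controller:
    extra_args:
      cluster-signing-cert-file: "/etc/kubernetes/ssl/kube-ca.pem"
      cluster-signing-key-file: "/etc/kubernetes/ssl/kube-ca-key.pem"

A change like this takes effect after re-running rke up.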
As said before, the docs should be updated to list enabling the CSR feature in kube-apiserver as a prerequisite. (BTW, KubeSphere is fantastic, a great alternative to OpenShift IMHO.)