istio: istio-1.5.2 injected pod's status become Init:CrashLoopBackOff

Bug description

After running for some period of time, the injected pod's status becomes Init:CrashLoopBackOff.

manage-5c9b544cf4-fm5kf 0/2 Init:CrashLoopBackOff 37 19h

Name:         manage-5c9b544cf4-fm5kf
Namespace:    xxxx-prod
Priority:     0
Node:         node3/192.168.100.216
Start Time:   Fri, 08 May 2020 09:40:01 +0800
Labels:       app=manage
              pod-template-hash=5c9b544cf4
              security.istio.io/tlsMode=istio
              service.istio.io/canonical-name=manage
              service.istio.io/canonical-revision=production
              version=production
Annotations:  sidecar.istio.io/status: {"version":"fca84600f9d5ec316cf1cf577da902f38bac258ab0fd595ee208ec0203dc0c6d","initContainers":["istio-init"],"containers":["istio-proxy"]…
Status:       Running
IP:           10.233.92.202
Controlled By:  ReplicaSet/manage-5c9b544cf4
Init Containers:
  istio-init:
    Container ID:  docker://bfbaed86220d2fe065623b5ef0705206bf08f876f8f6f7fc2fa8660d1c8cc1fd
    Image:         docker.io/istio/proxyv2:1.5.2
    Image ID:      docker-pullable://istio/proxyv2@sha256:b569b3226624b63ca007f3252162390545643433770c79a9aadb1406687d767a
    Port:          <none>
    Host Port:     <none>
    Command:
      istio-iptables
      -p
      15001
      -z
      15006
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x

      -b
      *
      -d
      15090,15020
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Sat, 09 May 2020 04:50:01 +0800
      Finished:     Sat, 09 May 2020 04:50:02 +0800
    Ready:          False
    Restart Count:  38
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:        10m
      memory:     10Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kwxpk (ro)
Containers:
  manage:
    Container ID:   docker://75593b0259c9644cb40681abb9eaa541dfb9d222cab5c5ba5bdffdfe16ef476f
    Image:          docker.xxxx.cn:5000/xxxxx/manage:production
    Image ID:       docker-pullable://docker.xxxx.cn:5000/xxxx/manage@sha256:183416f398932d06bb50732c872df0f8d47ee0934371e276b68bd8b62f252cba
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 08 May 2020 09:41:03 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  50Mi
    Requests:
      cpu:     1m
      memory:  104857600m
    Liveness:   http-get http://:15020/app-health/manage/livez delay=3s timeout=30s period=5s #success=1 #failure=10
    Readiness:  http-get http://:15020/app-health/manage/readyz delay=3s timeout=30s period=5s #success=1 #failure=3
    Environment:
      TZ:            Asia/Shanghai
      group_name:    xxxx
      project_name:  manage
      SYSTEM_ENV:    production
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kwxpk (ro)
  istio-proxy:
    Container ID:  docker://efc88dfbb7910e364531d82b6fbf7aa7ed930a0102fc0075a6ec40edf22556ca
    Image:         docker.io/istio/proxyv2:1.5.2
    Image ID:      docker-pullable://istio/proxyv2@sha256:b569b3226624b63ca007f3252162390545643433770c79a9aadb1406687d767a
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --configPath
      /etc/istio/proxy
      --binaryPath
      /usr/local/bin/envoy
      --serviceCluster
      manage.$(POD_NAMESPACE)
      --drainDuration
      45s
      --parentShutdownDuration
      1m0s
      --discoveryAddress
      istiod.istio-system.svc:15012
      --zipkinAddress
      zipkin.istio-system:9411
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --connectTimeout
      10s
      --proxyAdminPort
      15000
      --concurrency
      2
      --controlPlaneAuthPolicy
      NONE
      --dnsRefreshRate
      300s
      --statusPort
      15020
      --trust-domain=cluster.local
      --controlPlaneBootstrap=false
    State:          Running
      Started:      Fri, 08 May 2020 09:41:03 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:     10m
      memory:  40Mi
    Readiness:  http-get http://:15020/healthz/ready delay=1s timeout=1s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                   first-party-jwt
      PILOT_CERT_PROVIDER:          istiod
      CA_ADDR:                      istio-pilot.istio-system.svc:15012
      POD_NAME:                     manage-5c9b544cf4-fm5kf (v1:metadata.name)
      POD_NAMESPACE:                xxxx-prod (v1:metadata.namespace)
      INSTANCE_IP:                  (v1:status.podIP)
      SERVICE_ACCOUNT:              (v1:spec.serviceAccountName)
      HOST_IP:                      (v1:status.hostIP)
      ISTIO_META_POD_PORTS:         [{"containerPort":80,"protocol":"TCP"}]
      ISTIO_META_APP_CONTAINERS:    [manage]
      ISTIO_META_CLUSTER_ID:        Kubernetes
      ISTIO_META_POD_NAME:          manage-5c9b544cf4-fm5kf (v1:metadata.name)
      ISTIO_META_CONFIG_NAMESPACE:  xxxx-prod (v1:metadata.namespace)
      ISTIO_META_INTERCEPTION_MODE: REDIRECT
      ISTIO_META_WORKLOAD_NAME:     manage
      ISTIO_META_OWNER:             kubernetes://apis/apps/v1/namespaces/xxxx-prod/deployments/manage
      ISTIO_META_MESH_ID:           cluster.local
      ISTIO_KUBE_APP_PROBERS:       {"/app-health/manage/livez":{"httpGet":{"path":"/","port":80,"scheme":"HTTP"},"timeoutSeconds":30},"/app-health/manage/readyz":{"httpGet":{"path":"/","port":80,"scheme":"HTTP"},"timeoutSeconds":30}}
    Mounts:
      /etc/istio/pod from podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kwxpk (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-kwxpk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kwxpk
    Optional:    false
  istio-envoy:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
QoS Class:       Burstable
Node-Selectors:  typenode=app-prod
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age                     From            Message
  ----     ------   ----                    ----            -------
  Warning  BackOff  4m25s (x784 over 174m)  kubelet, node3  Back-off restarting failed container
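For anyone hitting the same symptom, a sketch of diagnostic commands to capture why istio-init exits with code 2 (the pod and namespace names here match this report; adjust them for your cluster, and note these assume kubectl access to the affected node):

    # Logs from the last crashed run of the istio-init container
    kubectl -n xxxx-prod logs manage-5c9b544cf4-fm5kf -c istio-init --previous

    # Events recorded against this pod, including the back-off history
    kubectl -n xxxx-prod get events \
      --field-selector involvedObject.name=manage-5c9b544cf4-fm5kf

The `--previous` flag is needed because the container keeps restarting; without it, kubectl shows the current (possibly already-crashed-and-replaced) attempt.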

Expected behavior

Steps to reproduce the bug

Leave an injected pod running for some time; the istio-init container eventually starts crash-looping.

Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)

How was Istio installed?

Environment where bug was observed (cloud vendor, OS, etc)

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 15 (6 by maintainers)

Most upvoted comments

@howardjohn Any advice to solve this problem? thanks.