kubernetes: kubectl rollout status reports "timed out waiting for the condition"

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): reporting a bug

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):

kubectl rollout status

Is this a BUG REPORT or FEATURE REQUEST? (choose one): bug

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:57:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:52:01Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Amazon EC2
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools: kops
  • Others:

What happened: When checking the deployment status, kubectl waits for the rollout to finish. It reports that 1 of 2 updated replicas are available… then:

kubectl --context=k8s.k.dub.tropo.com -n staging -v=3  rollout status   deployment/portal-eu-deployment
Waiting for rollout to finish: 1 of 2 updated replicas are available...
I0213 10:50:10.679541   11023 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF
F0213 10:50:10.679865   11023 helpers.go:116] error: timed out waiting for the condition

Here is the status section of the deployment after calling get on it:

status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2017-02-09T22:30:09Z
    lastUpdateTime: 2017-02-09T22:30:09Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 34
  replicas: 2
  unavailableReplicas: 1
  updatedReplicas: 2
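
For reference, that output comes from a get call along these lines (a sketch reusing the same context, namespace, and deployment name shown above):

kubectl --context=k8s.k.dub.tropo.com -n staging get deployment portal-eu-deployment -o yaml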

Here is the deployment YAML that I send; it is a very simple Node web portal container that I run 2 pods of:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: portal-eu-deployment
  namespace: staging
spec:
  revisionHistoryLimit: 3
  replicas: 2
  selector:
    matchLabels:
      app: portal-eu
  minReadySeconds: 45
  template:
    metadata:
      labels:
        app: portal-eu
    spec:
      containers:
      - name: tropo-portal
        image: tropo/tropo-portal:en_GB-101
        livenessProbe:
          httpGet:
            path: /
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 30
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources:
          limits:
            cpu: "700m"
            memory: "768Mi"
          requests:
            cpu: "400m"
            memory: "768Mi"
      imagePullSecrets:
        - name: tropo-dockerhub-key
---
apiVersion: v1
kind: Service
metadata:
  name: portal-eu
  labels:
    name: portal-eu
  namespace: staging
spec:
  selector:
    app: portal-eu
  ### By using this type an ELB will be spun up.
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tropo-portal
  namespace: staging
spec:
  rules:
  - host: tpstaging.k.dub.tropo.com
    http:
      paths:
      - path:
        backend:
          serviceName: portal-eu
          servicePort: 80

What you expected to happen: I would expect that, if the deployment had started both containers and both are healthy, the command would return success instead of timing out. Perhaps I’m not doing the deployment right. Any time I update this deployment I just bump the container image version and run kubectl apply -f template.yml, as in the sketch below.
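
A minimal sketch of that update workflow, for clarity; the new image tag en_GB-102 is hypothetical, while the file name, context, namespace, and deployment name are taken from the report above:

# Bump the container image tag in the manifest (en_GB-102 is a hypothetical next version).
sed -i.bak 's|tropo/tropo-portal:en_GB-101|tropo/tropo-portal:en_GB-102|' template.yml
# Apply the updated manifest.
kubectl --context=k8s.k.dub.tropo.com apply -f template.yml
# Watch the rollout; this is the command that times out.
kubectl --context=k8s.k.dub.tropo.com -n staging rollout status deployment/portal-eu-deployment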

How to reproduce it (as minimally and precisely as possible): This is reproducible every single time.

Anything else we need to know:

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 19 (12 by maintainers)

Most upvoted comments

Same for me… it shows the following message and no response:

Waiting for rollout to finish: 0 of 1 updated replicas are available… error: watch closed before Until timeout

Eventually, the command times out. So how can I check the rollout status?

You can add a --timeout flag to the kubectl rollout status command:

kubectl rollout status deployment/app --namespace=app --timeout=60s

https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-status-em-
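
As a minimal sketch of using that in a script (the deployment name and namespace are reused from the example above): kubectl rollout status exits non-zero when the timeout is hit, so the exit code can gate the next step.

# Fail the script if the rollout does not complete within 60 seconds.
if kubectl rollout status deployment/app --namespace=app --timeout=60s; then
  echo "rollout complete"
else
  echo "rollout did not finish within 60s" >&2
  exit 1
fi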