kubernetes: "kubectl rollout status" returns error message before rollout finish
When we execute kubectl rollout status, we receive this error:
kubectl rollout status deployment/app --namespace=app
Waiting for rollout to finish: 2 out of 6 new replicas have been updated...
Waiting for rollout to finish: 2 out of 6 new replicas have been updated...
error: timed out waiting for the condition
I also tried adding the -w option, but it didn't change anything.
We've been seeing this since we upgraded to Kubernetes 1.5.2; on 1.4.6 it worked with no problems. I've tried updating kubectl and using previous versions too, but it seems to be a problem with the cluster, not with the kubectl command.
This is a problem for us because Jenkins uses this command to monitor the deployment status, so our builds fail at the end.
We're running the cluster on AWS, provisioned with Kops.
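A quick way to tell whether the rollout is genuinely stuck, or whether kubectl is just giving up early, is to inspect the Deployment and its ReplicaSets directly. Below is a minimal sketch reusing the deployment and namespace names from the report above; the retry loop is only a stop-gap for CI jobs on affected versions, and the retry count and sleep interval are arbitrary choices:

```sh
# Show rollout conditions and recent events for the Deployment.
kubectl describe deployment/app --namespace=app

# Compare desired vs. ready replicas across the old and new ReplicaSets.
kubectl get rs --namespace=app

# Stop-gap for CI (e.g. Jenkins): retry a few times, since the rollout
# often completes even though 'rollout status' timed out prematurely.
n=0
until kubectl rollout status deployment/app --namespace=app; do
  n=$((n + 1))
  [ "$n" -ge 3 ] && exit 1
  sleep 10
done
```

If the ReplicaSet counts keep moving after kubectl exits, you are hitting the premature timeout rather than a stuck rollout.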
About this issue
- State: closed
- Created 7 years ago
- Reactions: 34
- Comments: 42 (23 by maintainers)
Commits related to this issue
- Output extra logs to debug K8s issue See more https://github.com/kubernetes/kubernetes/issues/40224 Signed-off-by: Oleksandr Slynko <oslynko@pivotal.io> — committed to cloudfoundry-incubator/kubo-ci by mordebites 7 years ago
- Use helm to deploy istio in tests The normal deploy fails waiting for `kubectl rollout status` due to this bug: kubernetes/kubernetes#40224 Using helm takes 15 minutes extra to run the tests, which ... — committed to cloudfoundry-incubator/kubo-ci by deleted user 6 years ago
- cakephp-mysql-persistent: wait before rollout status There is an issue with 'oc rollout status', sometimes it fails with: error: watch closed before Until timeout See https://github.com/kubern... — committed to redhat-cop/agnosticd by fridim 6 years ago
- Merge pull request #67817 from tnozicka/fix-rollout-status-wait Automatic merge from submit-queue (batch tested with PRs 67986, 68210, 67817). If you want to cherry-pick this change to another branch... — committed to kubernetes/kubernetes by deleted user 6 years ago
+1
Fix is here https://github.com/kubernetes/kubernetes/pull/50102
/sig apps
@guineveresaenger there is a PR, https://github.com/kubernetes/kubernetes/pull/67817, linked just above your comment that fixes the issue; it is still in review and targeting v1.12.
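For anyone blocked on this in CI in the meantime, newer kubectl releases also let you bound the wait explicitly instead of relying on the default. A minimal sketch, assuming a kubectl version that supports the --timeout flag on rollout status:

```sh
# Wait up to 10 minutes for the rollout; exits non-zero on timeout.
kubectl rollout status deployment/app --namespace=app --timeout=10m
```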
@kubernetes/sig-cli-bugs @kubernetes/sig-api-machinery-bugs either fix watch.Until or remove it
Met the same problem.
Root cause: the Deployment was paused, so the rollout never progressed.
Solution: set .spec.paused to false. .spec.paused is an optional boolean field for pausing and resuming a Deployment; it defaults to false (a Deployment is not paused). xref: http://kubernetes.kansea.com/docs/user-guide/deployments/#paused
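To check for and undo that, a minimal sketch using the deployment and namespace names from the original report:

```sh
# Print .spec.paused; empty output means the field is unset (not paused).
kubectl get deployment/app --namespace=app -o jsonpath='{.spec.paused}'

# Resume a paused Deployment so the rollout (and 'rollout status') can finish.
kubectl rollout resume deployment/app --namespace=app

# Equivalent: clear the flag directly with a strategic merge patch.
kubectl patch deployment/app --namespace=app -p '{"spec":{"paused":false}}'
```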