kubernetes: Deployment doesn't create new replica set nor give any error
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):
I’m hoping that this issue will resolve the problem I’m seeing.
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):
deployment, replica set, OldReplicaSets
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Bug report
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:48:38Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.5", GitCommit:"5a0a696437ad35c133c0c8493f7e9d22b0f9b81b", GitTreeState:"clean", BuildDate:"2016-10-29T01:32:42Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Google Container Engine
What happened:
I ran kubectl apply -f as usual to apply changes to the Deployment, Service, and Ingress resources. The deployment did not create a new replica set with the updated container image. In the Kubernetes Dashboard I can confirm that the deployment was correctly updated with the new metadata.
The only difference when this issue started happening was a small code change.
I’ve tried multiple updates since and nothing changed. The nodes were on version 1.4.0; I’ve since updated them to 1.4.5, but the same issue keeps happening.
The old pods are still operational and serving requests normally.
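For context, a sketch of the commands involved (the manifest file name deployment.yaml is hypothetical; kubectl rollout status is a standard way to watch a rollout):

$ kubectl apply -f deployment.yaml
$ kubectl rollout status deployment/fika-io
# In a healthy rollout the second command returns once the new pods are
# ready; here it would wait indefinitely, since no new replica set is
# ever created.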
What you expected to happen:
The deployment should have created a new replica set with three pods, then rolled them out.
How to reproduce it (as minimally and precisely as possible):
I don’t know.
Anything else do we need to know:
I’ve attached a file with the deployment’s current configuration, as well as the deploy script I use to deploy: files.zip
Here’s the output of kubectl describe:
$ kubectl describe deployment/fika-io
Name: fika-io
Namespace: default
CreationTimestamp: Sun, 16 Oct 2016 10:58:42 -0400
Labels: app=fika-io
Selector: app=fika-io
Replicas: 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
OldReplicaSets: fika-io-749979362 (3/3 replicas created)
NewReplicaSet: <none>
No events.
Note that there is no NewReplicaSet.
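The same thing can be double-checked from the CLI by listing the replica sets the deployment owns via its label selector (app=fika-io, taken from the describe output above):

$ kubectl get rs -l app=fika-io
# With the deployment stuck as above, only the old replica set(s)
# appear; there is none for the updated container image.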
About this issue
- State: closed
- Created 8 years ago
- Reactions: 2
- Comments: 26 (8 by maintainers)
For anyone else running into this issue: if your deployment does not seem to work, do the following:
- Run kubectl get rs to find all replica sets for that deployment.
- For any with the Desired value at 0, delete them (see the sketch below).
Seems the issue might be that the old replica sets are not being cleaned up, and when their number exceeds the revisionHistoryLimit, the deployments stop working.
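A minimal sketch of that cleanup, assuming the default namespace (the jsonpath filter selects every replica set whose desired replica count is 0; review the first command's output before deleting, since this also discards rollback history for healthy deployments):

$ kubectl get rs
# Every replica set whose DESIRED column reads 0 is an old revision.
# One scripted way to remove them all at once:
$ kubectl get rs -o jsonpath='{range .items[?(@.spec.replicas==0)]}{.metadata.name}{"\n"}{end}' | xargs kubectl delete rs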
+1 to what blixt said. Changing revisionHistoryLimit is not enough to unstick the deployment. Only after I cleaned up all replica sets with kubectl delete $(kubectl -o name get rs) was a new replica set created, and then everything worked again.
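For reference, the same command with conventional flag ordering; note that it deletes every replica set in the namespace, active ones included (the deployment controller recreates the active one, but expect a brief gap in running pods):

$ kubectl delete $(kubectl get rs -o name)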
It seems the actual problem is that deployments are somehow confused by too many old replica sets.
Don’t forget to set revisionHistoryLimit to 5 or another reasonable value on all your deployments to avoid a recurrence of this problem in the future. The default value is “unlimited”, and it doesn’t work.
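One way to apply that setting without editing the manifest, using a standard strategic-merge patch (deployment name taken from this issue):

$ kubectl patch deployment fika-io -p '{"spec":{"revisionHistoryLimit":5}}'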
Same here. It’s currently happening in my project. I can’t reload anything. kubectl apply -f on any deployment doesn’t work, and neither does kubectl edit deployment. The changes I make are saved in the deployment, but they have no effect on the replica sets.