kubernetes: kubectl apply doesn't remove container args
I’m running into an oddity where, as the title says, an update to a Deployment does not fully apply the new PodTemplateSpec: parts of the previous version remain after the update.
I’m running v1.2.0-alpha8 on CoreOS 835.9.0.
Here’s an example of creating a Deployment (v1) and applying an update (v2) that simply removes part of the original spec (the args key in this case), to show that the new spec isn’t fully being applied, even though the Deployment’s last-applied-configuration annotation says it should have been.
git-deployment-v1.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: git-deployment
  labels:
    name: git-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: git-deployment
    spec:
      containers:
      - name: git-sync
        image: gcr.io/google_containers/git-sync
        imagePullPolicy: Always
        args:
        - -rev=origin/gh-pages
        volumeMounts:
        - name: markdown
          mountPath: /git
          readOnly: false
        env:
        - name: GIT_SYNC_REPO
          value: https://github.com/kubernetes/kubernetes
        - name: GIT_SYNC_DEST
          value: /git
        - name: GIT_SYNC_WAIT
          value: "120"
      volumes:
      - name: markdown
        emptyDir: {}
We create the Deployment:
kubectl create -f git-deployment-v1.yaml
Once the Pod is running, the Deployment should not have any last-applied-configuration annotation, since this was the initial creation and there is no history of updates. We can verify that it is not set by seeing that this command returns <no value>:
kubectl get deployment git-deployment -o go-template='{{index .metadata.annotations "kubectl.kubernetes.io/last-applied-configuration"}}'
…and we can check that the args key was set (in this case, to pass -rev=origin/gh-pages to the container’s entrypoint) by running:
kubectl get po git-deployment-<ID> -o yaml | grep -A 1 "args"
it returns:
- args:
- -rev=origin/gh-pages
…which is what we expect, since that is what the v1 Deployment specified.
After making one change in the PodTemplateSpec, namely removing the args key altogether (this is the only change made in the spec), we have a v2 Deployment to apply as an update to v1:
git-deployment-v2.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: git-deployment
  labels:
    name: git-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: git-deployment
    spec:
      containers:
      - name: git-sync
        image: gcr.io/google_containers/git-sync
        imagePullPolicy: Always
        volumeMounts:
        - name: markdown
          mountPath: /git
          readOnly: false
        env:
        - name: GIT_SYNC_REPO
          value: https://github.com/kubernetes/kubernetes
        - name: GIT_SYNC_DEST
          value: /git
        - name: GIT_SYNC_WAIT
          value: "60"
      volumes:
      - name: markdown
        emptyDir: {}
We apply the update:
kubectl apply -f git-deployment-v2.yaml
…and we can verify that the new PodTemplateSpec for the v2 Deployment is what we wanted it to be by examining the last-applied-configuration annotation on the newly updated Deployment:
kubectl get deployment git-deployment -o go-template='{{index .metadata.annotations "kubectl.kubernetes.io/last-applied-configuration"}}' | python -m json.tool
(this output and the git-deployment-v2.yaml manifest should match, and neither should contain the args key, as specified)
However, if you examine the new Pod’s args again using:
kubectl get po git-deployment-<NEW_ID> -o yaml | grep -A 1 "args"
…it still returns:
- args:
- -rev=origin/gh-pages
…which should not be the case 👎
So in my case, it’s still passing a flag via the args key, which I removed in v2 because I determined I did not want it, and the lingering flag is interfering with the updated Pod’s execution.
Any ideas what could be taking place? Thanks!
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Comments: 26 (18 by maintainers)
Commits related to this issue
- services/cluster/vkube: Fix for v.io/c/21197 Since v.io/c/21197, the directory specified by --v23.credentials is no longer created when it doesn't exist. We need to create it before starting 'claimab... — committed to vanadium-archive/go.ref by rthellend 8 years ago
- Merge pull request #25074 from AdoHe/remove_container_args Automatic merge from submit-queue update kubectl apply help info Please refer #22342 for more detail. @bgrant0607 ptal. Also I have open a... — committed to kubernetes/kubernetes by k8s-github-robot 8 years ago
Still happening for me
Very similar situation to what @relaxdiego posted. Had two volumes, volumeMounts in the original deployment. Updated deployment YAML to change one of them to a different PVC and mount point. Applied YAML, and now I have three volumes and three volume mount points!
Hi, it seems like this bug is still present. I’m using:
Running
kubectl get deploy XXXXX -o go-template='{{index .metadata.annotations "kubectl.kubernetes.io/last-applied-configuration"}}'
shows the applied config as follows (snipped to just the relevant parts):
However, running kubectl describe on the same deployment yields something different. I can’t reproduce it all the time, however; I’ve only observed it happening twice. Also note that I used kubectl apply to create the initial version of the deployment.

Oh, sorry @AdoHe @metral! I didn’t look at this carefully enough initially.
@metral If you want to use apply, please use apply to create the resource initially, as well, or use create --save-config.
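The two options the maintainer suggests above can be sketched as follows, using the manifests from this issue (a hedged sketch, not an official recipe):

```shell
# Option 1: use apply from the start, so last-applied-configuration is
# recorded at creation time and later applies can detect removed fields.
kubectl apply -f git-deployment-v1.yaml
kubectl apply -f git-deployment-v2.yaml

# Option 2: keep using create, but record the config explicitly with
# --save-config so subsequent applies have a baseline to diff against.
kubectl create -f git-deployment-v1.yaml --save-config
kubectl apply -f git-deployment-v2.yaml
```

Either way, the key point is that apply computes deletions by diffing the new manifest against the recorded last-applied-configuration; without that annotation, a field that simply disappears from the manifest cannot be recognized as a deletion.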
@metral as a workaround, you can apply twice to remove the args. Like this:
The first apply will create the deployment, the second will apply your new configuration.
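The comment’s inline example appears to have been lost in formatting; a plausible reconstruction of the two-apply workaround, based on the surrounding description, is:

```shell
# First apply: the Deployment was originally created with `kubectl create`,
# so last-applied-configuration is missing. This apply records v1 (including
# the args key) in the annotation.
kubectl apply -f git-deployment-v1.yaml

# Second apply: now the diff against the recorded v1 config shows that args
# was removed, so apply can actually delete it from the live object.
kubectl apply -f git-deployment-v2.yaml
```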
In terms of getting up to speed, take a look at the developer docs: https://github.com/kubernetes/kubernetes/tree/master/docs/devel
@AdoHe @metral That would be great. There are plenty of apply-related issues to go around if you’re interested in that specifically. It would be great to work out all the kinks for 1.3.
#15493, #19809, #19767, #17238, #16569, #13576, …
@ghodss and/or @jackgr should be able to help review
@kargakis
I just confirmed that using kubectl replace instead of kubectl apply does in fact eliminate the lingering args left over in v2 from using kubectl apply on v1, so it appears that kubectl apply is the culprit here. Thanks for the tip!