kubernetes: Allow forcing a put even when metadata.resourceVersion mismatches

At the moment, when the resourceVersion mismatches, kubectl pulls the latest version and then retries, which is pointless since it just repeats the same request afterwards. Our use case is similar: we use the API to PUT the expected state, not caring what the cluster thinks it is … but we still get metadata.resourceVersion: Invalid value errors, which is annoying and means we have to do a round-trip to fetch the latest version, which might fail again if someone else updated the resource in the meantime.

… so support ?force=true or ?noResourceVersion to disable this check
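
For context, here is a minimal sketch of the read-then-write round trip this forces on API clients today, using client-go (the ConfigMap type and names such as forcePut and cs are illustrative assumptions, not anything from this issue):

package forceput

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// forcePut overwrites a ConfigMap with the desired state, ignoring whatever the
// cluster currently holds. Because the API server rejects a PUT whose
// resourceVersion does not match, we first GET the live object just to copy its
// resourceVersion, and retry on conflict if someone else wrote in between.
func forcePut(ctx context.Context, cs *kubernetes.Clientset, desired *corev1.ConfigMap) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		live, err := cs.CoreV1().ConfigMaps(desired.Namespace).Get(ctx, desired.Name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		// The extra round trip the issue asks to eliminate: copy the server's
		// current resourceVersion so the unconditional PUT is accepted.
		desired.ResourceVersion = live.ResourceVersion
		_, err = cs.CoreV1().ConfigMaps(desired.Namespace).Update(ctx, desired, metav1.UpdateOptions{})
		return err
	})
}

With a ?force=true style parameter, the GET and the conflict retry would disappear and the single PUT would be enough.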

About this issue

  • State: open
  • Created 6 years ago
  • Reactions: 12
  • Comments: 33 (9 by maintainers)

Most upvoted comments

I'm hitting this issue too. I defined two APIs, one using an aggregated API server and the other using a CRD. I can update the aggregated one without a resourceVersion, but I get an error when I update the custom resource without a resourceVersion:

... is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update

Also had the same issue and found this helpful article: https://www.timcosta.io/kubernetes-service-invalid-clusterip-or-resourceversion/ (TL;DR: remove the last-applied-configuration annotation)
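
If you want to do that from the command line, one way (assuming a Service named my-service; the name is just a placeholder) is kubectl annotate with a trailing dash, which deletes the kubectl.kubernetes.io/last-applied-configuration annotation outright:

kubectl annotate service my-service kubectl.kubernetes.io/last-applied-configuration-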

This is still an issue today. But as a workaround, you can get the resourceVersion of the resource object that is currently deployed and edit the new resource to have that value. This worked for me and I can do it programmatically. Something like this:

if newResource.metadata.resourceVersion == "" {
  // copy the resourceVersion from the object currently deployed in the cluster
  version := oldResource.metadata.resourceVersion
  newResource.metadata.resourceVersion = version
}

In our case it turned out to be a different problem: the kubectl last-applied-configuration annotation contained the resourceVersion, which it should not. Simply running kubectl apply edit-last-applied to remove the resourceVersion was enough to make it work again.

--force works.
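
Presumably this means something along the lines of the following, which deletes and re-creates the object instead of updating it in place, so the resourceVersion check no longer applies (svc.yaml is just a placeholder manifest):

kubectl replace --force -f svc.yaml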

Is there a fix for this? It's impacting CRs and CRDs outside of the standard k8s objects. Has anybody figured out repro steps for getting into the bad state of a resourceVersion being in the last-applied-configuration annotation?

I resolved this problem by editing the svc and removing the resourceVersion and clusterIP from the annotation section:

kubectl edit svc svc-name   (remove resourceVersion and clusterIP from the annotation)
kubectl apply -f svc.yaml

I'm doing that already (copying the resourceVersion from the live object before updating), but it's still a race condition 😕
