kubernetes: kubectl apply fails with CRD

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

CRD:

apiVersion: apiextensions.k8s.io/v1beta1
description: Calico IP Pools
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool

Existing object:

apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"crd.projectcalico.org/v1","kind":"IPPool","metadata":{"annotations":{},"name":"172-29-0-0-16","namespace":""},"spec":{"cidr":"172.29.0.0/16"}}
  clusterName: ""
  creationTimestamp: 2017-10-03T10:09:47Z
  deletionGracePeriodSeconds: null
  deletionTimestamp: null
  generation: 0
  initializers: null
  name: 172-29-0-0-16
  namespace: ""
  resourceVersion: "18833705"
  selfLink: /apis/crd.projectcalico.org/v1/172-29-0-0-16
  uid: fc9418eb-a822-11e7-9fc7-4201ac1fd019
spec:
  cidr: 172.29.0.0/16

New object:

apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: 172-29-0-0-16
spec:
  cidr: 172.29.0.0/16

kubectl apply says:

Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{}}}
to:
&{0xc421420240 0xc4215597a0  172-29-0-0-16 build/manifests/calico-node.yaml 0xc421182890 0xc420f58ce0 18833705 false}
for: "build/manifests/calico-node.yaml": IPPool.crd.projectcalico.org "172-29-0-0-16" is invalid: apiVersion: Invalid value: "IPPool": must be crd.projectcalico.org/v1

If this is a problem with the new object definition rather than a bug with kubectl apply, I’d expect a better error message. (The new object looks fine to me, though; it should be identical to the old object.)
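For context, kubectl apply computes a three-way merge between the last-applied-configuration annotation, the live object, and the new manifest, which is where the patch shown in the error comes from. A very simplified sketch of that idea (plain map merges only — it ignores strategic-merge directives, lists, and the last-applied bookkeeping; three_way_patch is a hypothetical helper, not kubectl's actual code):

```python
def three_way_patch(last_applied, live, new):
    """Toy three-way merge: compute a patch from the last-applied config,
    the live object, and the newly supplied config."""
    patch = {}
    # Keys the user removed since the last apply are deleted (set to None).
    for key in last_applied:
        if key not in new:
            patch[key] = None
    # Keys that are new, or whose values differ from the live object.
    for key, value in new.items():
        live_value = live.get(key)
        if isinstance(value, dict) and isinstance(live_value, dict):
            sub = three_way_patch(last_applied.get(key, {}), live_value, value)
            if sub:
                patch[key] = sub
        elif live_value != value:
            patch[key] = value
    return patch
```

Applying the same manifest twice should yield an empty patch from this sketch; the bug discussed below is about what the server then does with such a (near-)empty patch.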

What you expected to happen:

I expected the object to be updated without an error.

How to reproduce it (as minimally and precisely as possible):

See description.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:46:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: GCE
  • OS (e.g. from /etc/os-release): CentOS Linux 7.4 (1708)
  • Kernel (e.g. uname -a): Linux master-head-1 4.13.4-1.el7.elrepo.x86_64 #1 SMP Wed Sep 27 13:32:23 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: Custom scripts
  • Others:

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 29 (22 by maintainers)

Most upvoted comments

From https://github.com/kubernetes/kubernetes/pull/54780#discussion_r150165278:

Hi @liggitt @sttts, I did some research. This logic is designed to make sure a no-op patch causes the object stored in etcd to be upgraded to the new preferred version. For example:

  • in etcd, the object is stored with apiVersion v1alpha1
  • we do patch -p '{}'
  • in etcd, the object is now stored as v1beta1

For CRDs there is no second loop, so we will never run into this if scope, and #53379 is totally fixed (or shadowed) as far as I can see.

Then I think we can officially close this.
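The behavior described in that comment — an otherwise no-op patch still triggering a write so the object is re-stored in the preferred version — can be sketched like this (a guess at the shape of the logic, not the actual apiserver code; apply_patch and the trivial dict merge are hypothetical stand-ins):

```python
def apply_patch(store, key, patch, preferred_version):
    """Toy model: a no-op patch still rewrites the object when its stored
    apiVersion differs from the preferred (storage) version."""
    obj = store[key]
    patched = {**obj, **patch}  # trivial merge stand-in for the real patcher
    if patched == obj and obj["apiVersion"] == preferred_version:
        return obj  # true no-op: nothing to write
    # Either the patch changed something, or the stored version is old:
    # write back, converting to the preferred version.
    patched["apiVersion"] = preferred_version
    store[key] = patched
    return patched
```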

sttts closed this in sttts/apiserver@3acb05e 11 hours ago

Fun, a fork closing issues.

@bjhaid hopefully yes, this should be merged soon.

/assign

Still investigating the issue, but I noticed that when we do a no-op patch for native resources, we get a "not patched" response. However, for non-native resources, it still tries to patch (and bumps the resourceVersion too).
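That asymmetry might be modeled like this (a sketch of the observed behavior, not the real code path; patch_resource is a hypothetical helper): native resources short-circuit when the patched object equals the live one, while the CRD path writes unconditionally, bumping resourceVersion.

```python
def patch_resource(obj, patch, is_native):
    """Toy model: native resources detect no-op patches; CRDs write anyway."""
    patched = {**obj, **patch}
    patched.pop("resourceVersion", None)
    current = {k: v for k, v in obj.items() if k != "resourceVersion"}
    if is_native and patched == current:
        return obj, "not patched"  # no write; resourceVersion unchanged
    new_obj = dict(patched)
    new_obj["resourceVersion"] = obj["resourceVersion"] + 1  # etcd write bumps it
    return new_obj, "patched"
```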

This just fixes the error message: https://github.com/kubernetes/kubernetes/pull/54218

But it doesn't fix the twice-patch issue.