cilium: Cilium v1.12.0-rc2 complains on startup: "Unable to patch node resource with annotation"

Is there an existing issue for this?

  • I have searched the existing issues

What happened?

I installed Cilium v1.9.x using the Helm charts into a kind environment by following the Kind GSG.

Afterwards, I upgraded to Cilium 1.12.0-rc2 by executing the following command:

helm upgrade -i cilium cilium/cilium --version 1.12.0-rc2 --namespace kube-system -f kind-values.yaml

Here is my kind-values.yaml:

kubeProxyReplacement: partial
hostServices:
  enabled: false
externalIPs:
  enabled: true
nodePort:
  enabled: true
hostPort:
  enabled: true
bpf:
  masquerade: false
image:
  pullPolicy: IfNotPresent
ipam:
  mode: kubernetes
extraConfig:
  mtu: "1280"
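After running the upgrade command above, the rollout and agent logs can be checked with standard kubectl commands (a sketch; the namespace and DaemonSet name match the Helm install above, and the label selector `k8s-app=cilium` is the one the chart applies by default):

```shell
# Wait for the upgraded Cilium DaemonSet to finish rolling out
kubectl -n kube-system rollout status daemonset/cilium --timeout=120s

# Tail the agent logs to see whether the annotation warning appears
kubectl -n kube-system logs -l k8s-app=cilium --tail=50 | grep "Unable to patch"
```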

The following warnings are then being regularly printed to the logs:

level=warning msg="Unable to patch node resource with annotation" error="nodes \"kind-control-plane\" is forbidden: User \"system:serviceaccount:kube-system:cilium\" cannot patch resource \"nodes/status\" in API group \"\" at the cluster scope" key=0 nodeName=kind-control-plane subsys=k8s v4CiliumHostIP.IPv4=10.244.0.245 v4Prefix=10.244.0.0/24 v4healthIP.IPv4=10.244.0.164 v6CiliumHostIP.IPv6="<nil>" v6Prefix="<nil>" v6healthIP.IPv6="<nil>"
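The error text says the `cilium` service account lacks `patch` on the `nodes/status` subresource. Whether the RBAC rule is actually missing can be confirmed by impersonating the service account from the log message (a sketch; `--subresource` requires a reasonably recent kubectl):

```shell
# Check whether the cilium service account may patch nodes/status,
# impersonating the account named in the warning
kubectl auth can-i patch nodes --subresource=status \
  --as=system:serviceaccount:kube-system:cilium
```

A `no` answer reproduces the forbidden error from the agent logs.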

Cilium Version

1.12.0-rc2

Kernel Version

5.14.0-1034-oem (Ubuntu Focal 20.04.4 LTS)

Kubernetes Version

1.21.1

Sysdump

No response

Relevant log output

No response

Anything else?

No response

Code of Conduct

  • I agree to follow this project’s Code of Conduct

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 15 (8 by maintainers)

Most upvoted comments

@AndreiHardziyenkaIR I just discovered the same issue in our GKE clusters. Did you get any more information about this? Is it required at all?

Reproduced in GKE 1.22.8-gke.201

Found a quick workaround:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cilium-node-patcher
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: "system:node"
subjects:
- apiGroup: ""
  kind: ServiceAccount
  name: cilium
  namespace: kube-system

The system:node ClusterRole is predefined, so there is no need to create it. It gives the cilium service account the ability to patch nodes.
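The workaround above can be applied and verified as follows (a sketch; `cilium-node-patcher.yaml` is an assumed filename for the manifest above):

```shell
# Save the ClusterRoleBinding above as cilium-node-patcher.yaml, then apply it
kubectl apply -f cilium-node-patcher.yaml

# Verify that the cilium service account can now patch nodes/status
kubectl auth can-i patch nodes --subresource=status \
  --as=system:serviceaccount:kube-system:cilium
```

Note that system:node is broader than the single missing rule, so a narrower custom ClusterRole granting only `patch` on `nodes` and `nodes/status` would follow least privilege more closely.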

@sayboras The version I am using was installed automatically with GKE; I believe it comes with Dataplane V2 enabled.

The same happened with v1.11.6 and Dataplane V2; I then used cilium uninstall, which crashed the whole cluster.

Out of curiosity, do we support upgrades across minor versions (e.g. 1.9 -> 1.11/1.12)? Or should the upgrade be done incrementally (e.g. 1.9.x -> 1.10.x -> 1.11.x -> …)?

We only officially support the latter; there are some steps that will get missed if you upgrade directly from 1.9 -> 1.11/1.12. As a developer I just like to live life on the edge 😁
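The incremental path can be sketched with the same Helm command from the report, upgrading one minor version at a time (the patch versions below are illustrative, not prescribed):

```shell
# Upgrade one minor version at a time; 1.10.x and 1.11.x patch
# versions here are illustrative examples
helm upgrade -i cilium cilium/cilium --version 1.10.12 \
  --namespace kube-system -f kind-values.yaml
helm upgrade -i cilium cilium/cilium --version 1.11.6 \
  --namespace kube-system -f kind-values.yaml
helm upgrade -i cilium cilium/cilium --version 1.12.0-rc2 \
  --namespace kube-system -f kind-values.yaml
```

Reading the Cilium upgrade notes for each intermediate minor release before moving on is the safest way to catch the per-version steps mentioned above.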