kyverno: [BUG] Error syncing deployment - Operation cannot be fulfilled on replicasets
Software version numbers
- Kubernetes version: 1.20
- Kubernetes platform (if applicable; ex., EKS, GKE, OpenShift): OpenShift 4.7
- Kyverno version: 1.4.2
Describe the bug
The generation field of the kyverno/kyverno deployment increases constantly, going from 1 to thousands after a few days. In the deployment-controller you see log messages like:
deployment_controller.go:490] "Error syncing deployment" deployment="kyverno/kyverno" err="Operation cannot be fulfilled on replicasets.apps \"kyverno-697786d947\": the object has been modified; please apply your changes to the latest version and try again"
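The "object has been modified" error is Kubernetes' optimistic concurrency check: every object carries a `resourceVersion`, and an update based on a stale copy is rejected with a conflict. A minimal Python sketch of that mechanism (an illustrative in-memory stand-in, not Kyverno or apiserver code):

```python
class Conflict(Exception):
    """Raised when an update carries a stale resourceVersion."""

class FakeAPIServer:
    """Tiny in-memory stand-in for the apiserver's optimistic locking."""

    def __init__(self, obj):
        self.obj = dict(obj, resourceVersion=1)

    def get(self):
        return dict(self.obj)

    def update(self, obj):
        # Reject writes that are based on an out-of-date copy.
        if obj["resourceVersion"] != self.obj["resourceVersion"]:
            raise Conflict("the object has been modified; please apply "
                           "your changes to the latest version and try again")
        self.obj = dict(obj, resourceVersion=self.obj["resourceVersion"] + 1)

server = FakeAPIServer({"name": "kyverno-697786d947"})

stale = server.get()            # controller A reads the object
fresh = server.get()            # controller B reads the same version
fresh["labels"] = {"a": "b"}
server.update(fresh)            # B writes first; resourceVersion bumps to 2

try:
    server.update(stale)        # A's write is now based on a stale copy
except Conflict as e:
    print(e)                    # the same message the deployment-controller logs
```

When two controllers repeatedly race on the same ReplicaSet like this, the loser retries, which is why the message repeats in the logs.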
To Reproduce
Install kyverno and inspect the generation field of the deployment. It will increase constantly.
Expected behavior
No increase of generation and no error messages in the deployment-controller logs.
Additional context
This does not happen with 1.3.6, but it does happen with 1.4.1.
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Reactions: 7
- Comments: 15 (10 by maintainers)
I think I know what’s happening.
For the Deployment, an annotation is updated on every reconcile, and each update creates a new version, aka "generation", of the resource.
This was introduced in PR #1931; it was not there in 1.3.6.
This "bug" is very annoying for people who deploy with ArgoCD: our application is always "Progressing" and never "Healthy" because of the constant increment of `generation` (see the docs, which explain the behavior if `generation` changes). To elaborate on the specific issue this is causing us: ArgoCD only considers the Deployment healthy once
`.metadata.generation == .status.observedGeneration`
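The gating on those two fields can be sketched as follows (an illustrative comparison, not ArgoCD's actual health-check code, which also inspects rollout conditions):

```python
def deployment_caught_up(deployment: dict) -> bool:
    """A rollout is settled only once the deployment controller has
    observed the latest generation of the spec."""
    generation = deployment["metadata"]["generation"]
    observed = deployment.get("status", {}).get("observedGeneration", 0)
    return observed >= generation

# Kyverno's constant spec updates keep bumping metadata.generation,
# so status.observedGeneration perpetually lags one step behind and
# the app stays "Progressing".
lagging = {"metadata": {"generation": 4123},
           "status": {"observedGeneration": 4122}}
settled = {"metadata": {"generation": 4123},
           "status": {"observedGeneration": 4123}}
print(deployment_caught_up(lagging))  # False
print(deployment_caught_up(settled))  # True
```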
For some additional context, see this similar issue about leader election. In particular:
Since the requirement here seems to be “an arbitrary place to store some annotation data” - could Kyverno follow the wider project here, and store those annotations on a Lease object?
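A Lease is the conventional low-churn object for this kind of bookkeeping; kube-controller-manager and kube-scheduler already use Leases for leader election. A sketch of what such an object could look like (the name, namespace, and annotation key are hypothetical, not what Kyverno actually ships):

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: kyverno-state        # hypothetical name
  namespace: kyverno
  annotations:
    # Arbitrary bookkeeping data lives here instead of on the
    # Deployment, so the Deployment's generation is left alone.
    example.kyverno.io/last-updated: "2021-08-01T00:00:00Z"
spec:
  holderIdentity: kyverno
```

Updating annotations on a Lease has no side effects on workload controllers, so neither the deployment-controller conflict loop nor the ArgoCD "Progressing" status would be triggered.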
@mritunjaysharma394 Sorry for the late answer, but it took some time until I was able to check it… I just updated to 1.6.2 and there are no more issues! Brilliant! Thx for this fix!
@foriequal0 You’re right, my mistake! Again I think this is NOT an ArgoCD issue… we see the same without ArgoCD!
I confirm we're facing the same behaviour on EKS 1.18.