kubernetes: PodDisruptionBudget updates are forbidden
PodDisruptionBudget is currently immutable. It would be a nice improvement to allow changes to it, at least to the minAvailable field. Not sure whether this counts as a bug or a feature request.
After creating a PodDisruptionBudget with the following manifest:
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  selector:
    matchLabels:
      app: elasticsearch
      role: master
  minAvailable: 2
When changing the spec to, say, minAvailable: 3 and re-running kubectl apply, it returns the following error:
The PodDisruptionBudget "elasticsearch-master" is invalid: spec: Forbidden: updates to poddisruptionbudget spec are forbidden.
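For context, the reproduction boils down to something like this (pdb.yaml is just a placeholder name for wherever the manifest above is saved):

```sh
# create the PDB from the manifest above
kubectl apply -f pdb.yaml

# edit pdb.yaml, changing minAvailable: 2 to minAvailable: 3, then re-apply
kubectl apply -f pdb.yaml
# -> ...spec: Forbidden: updates to poddisruptionbudget spec are forbidden.
```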
Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.6", GitCommit:"114f8911f9597be669a747ab72787e0bd74c9359", GitTreeState:"clean", BuildDate:"2017-03-28T13:36:31Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Google Container Engine 1.6.2 with cos image:
- OS (e.g. from /etc/os-release):
BUILD_ID=9000.84.2
NAME="Container-Optimized OS"
GOOGLE_CRASH_ID=Lakitu
VERSION_ID=56
BUG_REPORT_URL=https://crbug.com/new
PRETTY_NAME="Container-Optimized OS from Google"
VERSION=56
GOOGLE_METRICS_PRODUCT_ID=26
HOME_URL="https://cloud.google.com/compute/docs/containers/vm-image/"
ID=cos
- Kernel (e.g. uname -a):
Linux gke-production-europe-we-auto-scaling-917da0af-zzzj 4.4.21+ #1 SMP Fri Feb 17 15:34:45 PST 2017 x86_64 Intel(R) Xeon(R) CPU @ 2.50GHz GenuineIntel GNU/Linux
What happened:
Changing the value of spec.minAvailable in a PodDisruptionBudget failed due to updates being forbidden.
What you expected to happen:
At least spec.minAvailable being allowed to change, because it's often used in conjunction with a StatefulSet, which is likely to grow over time by incrementing the number of replicas; at that point you also want to grow minAvailable in the PodDisruptionBudget.
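To make that use case concrete, a hypothetical sequence when scaling such a StatefulSet (the StatefulSet name and replica counts here are illustrative, not from the report):

```sh
# scale the StatefulSet up
kubectl scale statefulset elasticsearch-master --replicas=5

# the PDB should grow with it, e.g. minAvailable: 2 -> 4,
# but applying the edited PDB manifest is rejected as shown above
kubectl apply -f pdb.yaml
```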
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 52
- Comments: 33 (14 by maintainers)
https://github.com/kubernetes/kubernetes/pull/69867 allows this update, and will be included in 1.15
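In practice that means that, assuming a cluster on Kubernetes 1.15 or later, the failing step from the original report should simply go through:

```sh
# on 1.15+, updates to the PDB spec are allowed
kubectl apply -f pdb.yaml   # minAvailable changed from 2 to 3; no longer rejected
```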
Either way, will PodDisruptionBudget be updatable?

FYI, I get the same error even if the update changes nothing. How do I update my kubernetes cluster in a CD pipeline with a PodDisruptionBudget in it? Normally, I can do kubectl apply -f somefile.yml, and it intelligently handles services, deployments, statefulsets, etc. But if there is a disruption budget in there, boom.

@kow3ns After reading this thread, I'm convinced we need an answer for mutating PDBs through kubectl apply (and thus Helm). For things like Pods, it's ok that you can't mutate them, because we have things like Deployment that allow you to declaratively transition to a new Pod template. There's no such thing for PDB, so we are essentially preventing users from managing PDBs declaratively. They would have to imperatively delete the old one when they make any change. Operationally you have a fair point that updating maxUnavailable should be uncommon, but making it immutable results in poor UX for the times when it's necessary (e.g. even if there's only one value that makes sense, maybe you used the wrong value by mistake and need to fix it). @foxish seems to think there's no semantic reason we can't make PDB mutable, but even if there were some reason that all changes needed to occur by deleting/recreating the PDB, then I'd argue we need to provide a "PDB Deployment" controller that contains a mutable PDB template and does the delete/recreate for you.
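The imperative delete/recreate described here amounts to the following (resource name taken from the manifest above; note the pods briefly have no disruption budget in between):

```sh
# delete the existing PDB, then recreate it from the updated manifest
kubectl delete poddisruptionbudget elasticsearch-master
kubectl apply -f pdb.yaml
```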
Our CI/CD pipeline is using helm and we hit this issue as well. I don't know how to avoid this problem other than removing the PodDisruptionBudget object.

EDIT: As a workaround (using Helm) I can enclose the object definition like this:
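The snippet itself is not preserved in this copy of the thread; going by the `{{- if .Release.IsInstall -}}` reference in the next comment, the workaround presumably renders the PDB only on the initial install, along these lines (chart values are illustrative):

```yaml
{{- if .Release.IsInstall }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: {{ .Release.Name }}-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: {{ .Release.Name }}
{{- end }}
```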
I have a solution that seems to be working well for me. It is similar to the {{- if .Release.IsInstall -}} workaround linked before, but with the ability to change on subsequent upgrades. If you leverage helm hooks to create the Pod Disruption Budget, you can delete and recreate it on every helm install/upgrade. Just add these annotations to the PDB like below in your chart:
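The annotations are likewise not preserved here; a plausible form, using standard Helm hook annotations (the exact hook and delete-policy values are an assumption):

```yaml
metadata:
  annotations:
    # run the PDB as a hook so Helm recreates it on every install/upgrade
    "helm.sh/hook": pre-install,pre-upgrade
    # delete the previous hook resource before creating the new one
    "helm.sh/hook-delete-policy": before-hook-creation
```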
Pinging @kubernetes/sig-apps-feature-requests please; we're hacking on top of helm charts, creating separate imperative workflows and hooks instead of declarative kubernetes resource management, just to remove and recreate PDBs every time. Any action on this would be very helpful for consumers, please…
For those having this issue with helm, you can use the --force argument during an upgrade to force a deletion/re-creation of the resource if apply fails. It isn't specific to PDBs, though.
https://github.com/helm/helm/issues/2914
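A sketch of that invocation (release and chart names are placeholders):

```sh
# --force deletes and recreates resources that cannot be updated in place
helm upgrade --force my-release ./my-chart
```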
This breaks upgrades for lots of helm charts that only have minAvailable, e.g.: https://github.com/helm/charts/blob/master/stable/nginx-ingress/templates/controller-poddisruptionbudget.yaml
kubernetes version v1.9.8
I agree, it would be really nice to be able to apply these files after creation for CD workflows.

On the linked thread it seems like this issue was chosen as the place to continue the discussion on this topic. Are there any further updates to this discussion elsewhere, or any issues implementing changes?
To add a further discussion point: is there any current reason that PDBs should not be mutable?

@foxish's comment seems to imply that this was made immutable for a potential feature that does not yet exist, whereas the current immutable implementation raises very real issues with kubectl apply and helm usage now, as noted in this thread. I'm perhaps oversimplifying, but I believe these should be considered bugs in the implementation without a controller, as @enisoc mentions. That said, is it not better to make this a mutable resource rather than work to add another resource/controller? This would somewhat depend on whether the other feature has been planned or dropped.

While I agree this restriction is tedious for helm charts, the fact that helm does not detect that the change will fail is also a problem: rather than rejecting the upgrade, helm installs the upgrade, only to have it fail, and then forces you to roll it back. If there are going to be immutable fields, then there should be a way for helm to determine that those fields are immutable prior to actually starting the upgrade. Ideally, a helm diff would even flag these.
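For what it's worth, one way to preview upgrade changes today is the third-party helm-diff plugin, although it shows the rendered diff and does not itself know which fields the API server treats as immutable; a usage sketch (release and chart names are placeholders):

```sh
helm plugin install https://github.com/databus23/helm-diff
helm diff upgrade my-release ./my-chart
```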