kubevela: Rollout failed when targetSize=totalRolloutSize
Describe the bug
The rollout failed, and the pod cannot be created.
To Reproduce

1. Apply the application YAML:
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: test-rolling
  annotations:
    "app.oam.dev/rolling-components": "metrics-provider"
    "app.oam.dev/rollout-template": "true"
spec:
  components:
    - name: metrics-provider
      type: worker
      properties:
        cmd:
          - ./podinfo
          - stress-cpu=1
        image: stefanprodan/podinfo:4.0.6
        port: 8080
        updateStrategyType: InPlaceIfPossible
        replicas: 2
2. Apply the rollout YAML:
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
  name: rolling-example
spec:
  targetAppRevisionName: test-rolling-v1
  componentList:
    - metrics-provider
  rolloutPlan:
    rolloutStrategy: "IncreaseFirst"
    rolloutBatches:
      - replicas: 1
      - replicas: 1
    targetSize: 2
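(Note that the two batches sum to 2, which equals targetSize — i.e. targetSize equals the total rollout size, the condition named in the title — and also matches the component's original replicas: 2.)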
3. Verify the AppRollout:

$ kubectl get approllout
NAME              TARGET   UPGRADED   READY   BATCH-STATE         ROLLING-STATE   AGE
rolling-example   2        0          0       batchInitializing   rolloutFailed   26s
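For anyone reproducing this, the controller's reason for the failure should be visible in the resource's status conditions; describing the object is the standard way to inspect them (the exact condition text depends on the kubevela version):

$ kubectl describe approllout rolling-example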
I think this may not be a bug, but just unreasonable end-user behavior. While authoring the rollout plan for an application, the user must know the original number (or the initial number, starting number, whatever it's named) of replicas of the application, take the expected final number of replicas into account, and then design the batches. The current logic uses .spec.replicas as the original number, which makes sense for both CloneSet and Deployment, because end-users don't know (and indeed don't have to know) how many replicas really exist at the moment; they just know the value of .spec.replicas.

The Deployment defined in a kube-schematic template should not have .spec.replicas: x, otherwise kubevela will fix its replicas to x forever, and rollout makes no sense for such a workload. Indeed, rollout is mutually exclusive with the scaler trait and with anything else that can set .spec.replicas: x. It's true that we need to refine this and add more documentation.
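For illustration, here is a minimal sketch of the kind of kube-schematic ComponentDefinition the comment is describing; the definition name, labels, and container are hypothetical, and the only point being made is that the embedded Deployment template leaves .spec.replicas unset so that rollout (or a scaler trait) can own the replica count:

apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: sample-kube-worker   # hypothetical name, for illustration only
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment
  schematic:
    kube:
      template:
        apiVersion: apps/v1
        kind: Deployment
        spec:
          # .spec.replicas is deliberately omitted here: hardcoding
          # replicas: x in the template would make kubevela reconcile
          # the Deployment back to x forever, defeating rollout.
          selector:
            matchLabels:
              app: sample
          template:
            metadata:
              labels:
                app: sample
            spec:
              containers:
                - name: main
                  image: stefanprodan/podinfo:4.0.6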