k8s-config-connector: error on ContainerCluster status: Update call failed: the desired mutation for the following field(s) is invalid: [nodeConfig.0.MachineType]
Describe the bug
The status for ContainerCluster complains about an invalid nodeConfig value, even though nodeConfig (which I believe is optional) was never specified.
ConfigConnector Version 1.20.1
To Reproduce
Steps to reproduce the behavior:
Not sure, but this is more or less what we did:
- created a ContainerCluster with an older KCC version (without a nodeConfig)
- set the deletion policy to abandon (see the annotation sketch after this list)
- probably deleted and recreated the ContainerCluster resource (without deleting the cluster) - not sure if this is relevant
- upgraded KCC to 1.20.0 (I believe the issue appeared then)
- handled CVE-2020-14386
- upgraded the master to 1.16.13-gke.401
- created a new ContainerNodePool to work around #260
- deleted the other (non-managed) node pool using the console
- upgraded KCC to 1.20.1 - no change
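For the deletion-policy step, the annotation can be set with kubectl instead of editing the manifest; a minimal sketch, assuming hypothetical resource and namespace names in place of the redacted ones:

# mark the resource so that deleting it leaves the GKE cluster running
kubectl annotate containercluster example-cluster -n example-ns \
  cnrm.cloud.google.com/deletion-policy=abandon --overwrite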
manifest:
apiVersion: container.cnrm.cloud.google.com/v1beta1
kind: ContainerCluster
metadata:
  annotations:
    cnrm.cloud.google.com/deletion-policy: abandon
    cnrm.cloud.google.com/remove-default-node-pool: 'true'
  name: <REDACTED>
  namespace: <REDACTED>
spec:
  authenticatorGroupsConfig:
    securityGroup: gke-security-groups@<REDACTED>
  initialNodeCount: 1
  ipAllocationPolicy:
    clusterIpv4CidrBlock: ''
  location: <REDACTED>
  loggingService: logging.googleapis.com/kubernetes
  maintenancePolicy:
    recurringWindow:
      endTime: '2020-01-01T04:30:00Z'
      recurrence: 'FREQ=WEEKLY;BYDAY=MO,TU,WE,TH'
      startTime: '2020-01-01T00:30:00Z'
  masterAuth:
    clientCertificateConfig:
      issueClientCertificate: false
  monitoringService: monitoring.googleapis.com/kubernetes
  releaseChannel:
    channel: REGULAR
  workloadIdentityConfig:
    identityNamespace: <REDACTED>.svc.id.goog
status from the live resource:
status:
  conditions:
  - lastTransitionTime: '2020-09-15T14:27:08Z'
    message: >-
      Update call failed: the desired mutation for the following field(s) is
      invalid: [nodeConfig.0.MachineType]
    reason: UpdateFailed
    status: 'False'
    type: Ready
  endpoint: <REDACTED>
  instanceGroupUrls:
  - <REDACTED>
  - <REDACTED>
  labelFingerprint: 14cb7bfd
  masterVersion: 1.16.13-gke.401
  servicesIpv4Cidr: <REDACTED>
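For reference, the failing Ready condition can be read directly from the live object; a sketch, again using hypothetical names in place of the redacted ones:

kubectl get containercluster example-cluster -n example-ns \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'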
I believe I have just reproduced this on Config Connector version 1.44.0, following steps very similar to @Synehan's. For me it is the oauthScopes that won't sync:
cnrm.cloud.google.com/version: 1.44.0
GKE version: 1.18.15-gke.1501
My Steps:
- the ContainerCluster status shows UpdateFailed
- kubectl get containercluster shows the cluster nodeConfig as having oauthScopes X
- gcloud container clusters describe shows the cluster as having oauthScopes Y (see the comparison sketch below)
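A sketch of that comparison, assuming a hypothetical zonal cluster (the real names and location were redacted):

# what KCC believes the scopes are
kubectl get containercluster example-cluster -n example-ns -o yaml | grep -A 10 oauthScopes
# what GKE actually has
gcloud container clusters describe example-cluster --zone us-central1-a \
  --format='value(nodeConfig.oauthScopes)'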
Unsuccessful Workaround:

Successful Workaround:
Setting the cnrm.cloud.google.com/deletion-policy: "abandon" annotation, deleting the ContainerCluster, and recreating it appears to have fixed the problem. oauthScopes now appears in the ContainerCluster as Y, even though I didn't set it explicitly. I'm going to test on a second cluster that has this problem and will report back if it does not fix the problem there.
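A minimal sketch of that sequence, assuming hypothetical names and that the original manifest is saved as containercluster.yaml:

# abandon: deleting the K8s resource must not delete the GKE cluster
kubectl annotate containercluster example-cluster -n example-ns \
  cnrm.cloud.google.com/deletion-policy=abandon --overwrite
# delete only the Config Connector resource
kubectl delete containercluster example-cluster -n example-ns
# re-create it; KCC re-acquires the existing cluster and refreshes its view
kubectl apply -f containercluster.yaml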