k8s-config-connector: Update call failed: the desired mutation for the following field(s) is invalid: [nodeConfig.0.OauthScopes.#...

Describe the bug
When deploying Kubeflow on GCP (https://github.com/kubeflow/gcp-blueprints), one user reported getting

      message: 'Update call failed: the desired mutation for the following field(s)
        is invalid: [nodeConfig.0.OauthScopes.# ipAllocationPolicy.# initialNodeCount
        nodeConfig.0.OauthScopes.3859019814 nodeConfig.0.MachineType nodeConfig.0.ServiceAccount]'
      reason: UpdateFailed

for a ContainerCluster resource. Ref: https://github.com/kubeflow/kubeflow/issues/5223#issuecomment-679242861

We asked them to delete the resource and recreate it (https://github.com/kubeflow/kubeflow/issues/5223#issuecomment-679198124), but the error persisted, so this seems like a bug.

ConfigConnector Version 1.7.1

To Reproduce
Steps to reproduce the behavior: not reproducible yet.

YAML snippets:

apiVersion: v1
items:
- apiVersion: container.cnrm.cloud.google.com/v1beta1
  kind: ContainerCluster
  metadata:
    annotations:
      cnrm.cloud.google.com/management-conflict-prevention-policy: resource
      cnrm.cloud.google.com/project-id: XXXX
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"container.cnrm.cloud.google.com/v1beta1","kind":"ContainerCluster","metadata":{"annotations":{},"clusterName":"XXXX/us-east1-b/xx","labels":{"kf-name":"xxf","mesh_id":"XXXX_us-east1-b_xx"},"name":"xx","namespace":"XXXX"},"spec":{"clusterAutoscaling":{"autoProvisioningDefaults":{"oauthScopes":["https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/devstorage.read_only"],"serviceAccountRef":{"name":"xx-vm"}},"enabled":true,"resourceLimits":[{"maximum":128,"resourceType":"cpu"},{"maximum":2000,"resourceType":"memory"},{"maximum":16,"resourceType":"nvidia-tesla-k80"}]},"initialNodeCount":2,"location":"us-east1-b","loggingService":"logging.googleapis.com/kubernetes","monitoringService":"monitoring.googleapis.com/kubernetes","nodeConfig":{"machineType":"n1-standard-8","metadata":{"disable-legacy-endpoints":"true"},"oauthScopes":["https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/devstorage.read_only"],"serviceAccountRef":{"name":"xx-vm"},"workloadMetadataConfig":{"nodeMetadata":"GKE_METADATA_SERVER"}},"releaseChannel":{"channel":"REGULAR"},"workloadIdentityConfig":{"identityNamespace":"XXXX.svc.id.goog"}}}
    creationTimestamp: "2020-08-24T16:00:09Z"
    generation: 20
    labels:
      kf-name: xx
      mesh_id: XXXX_us-east1-b_xx
    name: xx
    namespace: XXXX
    resourceVersion: "1290712"
    selfLink: /apis/container.cnrm.cloud.google.com/v1beta1/namespaces/XXXX/containerclusters/xx
    uid: 0c9868a3-d248-430a-aa42-7543fc2b6b09
  spec:
    clusterAutoscaling:
      autoProvisioningDefaults:
        oauthScopes:
        - https://www.googleapis.com/auth/logging.write
        - https://www.googleapis.com/auth/monitoring
        - https://www.googleapis.com/auth/devstorage.read_only
        serviceAccountRef:
          name: xx-vm
      enabled: true
      resourceLimits:
      - maximum: 128
        resourceType: cpu
      - maximum: 2000
        resourceType: memory
      - maximum: 16
        resourceType: nvidia-tesla-k80
    initialNodeCount: 2
    location: us-east1-b
    loggingService: logging.googleapis.com/kubernetes
    monitoringService: monitoring.googleapis.com/kubernetes
    nodeConfig:
      machineType: n1-standard-8
      metadata:
        disable-legacy-endpoints: "true"
      oauthScopes:
      - https://www.googleapis.com/auth/logging.write
      - https://www.googleapis.com/auth/monitoring
      - https://www.googleapis.com/auth/devstorage.read_only
      serviceAccountRef:
        name: xx-vm
      workloadMetadataConfig:
        nodeMetadata: GKE_METADATA_SERVER
    releaseChannel:
      channel: REGULAR
    workloadIdentityConfig:
      identityNamespace: XXXX.svc.id.goog
  status:
    conditions:
    - lastTransitionTime: "2020-08-24T16:00:12Z"
      message: 'Update call failed: the desired mutation for the following field(s)
        is invalid: [nodeConfig.0.OauthScopes.# ipAllocationPolicy.# initialNodeCount
        nodeConfig.0.OauthScopes.3859019814 nodeConfig.0.MachineType nodeConfig.0.ServiceAccount]'
      reason: UpdateFailed
      status: "False"
      type: Ready
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 15 (1 by maintainers)

Most upvoted comments

Hi, I would suggest upgrading KCC to the latest version, since it includes the fix for a similar issue, #165. I would also suggest not using the embedded nodeConfig, and instead managing node pools as a separate ContainerNodePool resource.
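As a sketch of that suggestion, the embedded nodeConfig above could be moved into a separate ContainerNodePool resource that references the cluster. Names such as xx, xx-vm, XXXX, and xx-pool are the redacted placeholders from the YAML above (xx-pool is an invented pool name); field names follow the container.cnrm.cloud.google.com/v1beta1 schema, so verify them against the reference docs for your KCC version:

```yaml
apiVersion: container.cnrm.cloud.google.com/v1beta1
kind: ContainerNodePool
metadata:
  name: xx-pool            # hypothetical pool name
  namespace: XXXX
spec:
  location: us-east1-b
  clusterRef:
    name: xx               # references the ContainerCluster above
  initialNodeCount: 2
  nodeConfig:              # moved out of the ContainerCluster spec
    machineType: n1-standard-8
    oauthScopes:
    - https://www.googleapis.com/auth/logging.write
    - https://www.googleapis.com/auth/monitoring
    - https://www.googleapis.com/auth/devstorage.read_only
    serviceAccountRef:
      name: xx-vm
```

With the node pool managed as its own resource, changes to fields like machineType or oauthScopes are applied to the node pool object instead of being pushed as an in-place mutation of the cluster's default pool, which is the kind of update the error above rejects.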