kustomize: kyaml is not respecting `$patch replace|retainKeys` directives

tree:

.
├── base
│   ├── kafka.yaml
│   └── kustomization.yaml
└── overlays
    ├── kustomization.yaml
    ├── output.yaml
    └── patch.yaml

base content:

# kustomization.yaml
resources:
    - kafka.yaml

# kafka.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker01
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: broker
          imagePullPolicy: Always
          image: kafka:cloudera-2.1.0
          args: ["start", "broker"]
          volumeMounts:
            - name: kafka-broker01
              mountPath: "/kafka/kafka-logs"
            - name: jaas-config
              mountPath: "/opt/jaas-config"
          env:
            - name: BROKER_ID
              value: "0"
      volumes:
        - name: kafka-broker01
          emptyDir: {}
        - name: jaas-config
          configMap:
            name: jaas-config
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker02
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: broker
          imagePullPolicy: Always
          image: kafka:cloudera-2.1.0
          args: ["start", "broker"]
          volumeMounts:
            - name: kafka-broker02
              mountPath: "/kafka/kafka-logs"
            - name: jaas-config
              mountPath: "/opt/jaas-config"
          env:
            - name: BROKER_ID
              value: "1"
      volumes:
        - name: kafka-broker02
          emptyDir: {}
        - name: jaas-config
          configMap:
            name: jaas-config

overlay contents:

# kustomization.yaml
bases:
  - ../base
patchesStrategicMerge:
  - patch.yaml

# patch.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker01
spec:
  template:
    spec:
      volumes:
        - name: kafka-broker01
          persistentVolumeClaim:
            claimName: kafka-broker01
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker02
spec:
  template:
    spec:
      volumes:
        - name: kafka-broker02
          persistentVolumeClaim:
            claimName: kafka-broker02

cd overlays && kustomize build . > output.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-broker01
spec:
  replicas: 1
  template:
    spec:
      containers:
      - args:
        - start
        - broker
        env:
        - name: BROKER_ID
          value: "0"
        image: kafka:cloudera-2.1.0
        imagePullPolicy: Always
        name: broker
        volumeMounts:
        - mountPath: /kafka/kafka-logs
          name: kafka-broker01
        - mountPath: /opt/jaas-config
          name: jaas-config
      volumes:
      - emptyDir: {} # NOTE: unexpected
        name: kafka-broker01
        persistentVolumeClaim:
          claimName: kafka-broker01
      - configMap:
          name: jaas-config
        name: jaas-config
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-broker02
spec:
  replicas: 1
  template:
    spec:
      containers:
      - args:
        - start
        - broker
        env:
        - name: BROKER_ID
          value: "1"
        image: kafka:cloudera-2.1.0
        imagePullPolicy: Always
        name: broker
        volumeMounts:
        - mountPath: /kafka/kafka-logs
          name: kafka-broker02
        - mountPath: /opt/jaas-config
          name: jaas-config
      volumes:
      - emptyDir: {} # NOTE: unexpected
        name: kafka-broker02
        persistentVolumeClaim:
          claimName: kafka-broker02
      - configMap:
          name: jaas-config
        name: jaas-config

In the output, both the emptyDir and persistentVolumeClaim fields exist.

How can I change a volume from emptyDir to a PVC using kustomize?

About this issue

  • State: open
  • Created 4 years ago
  • Reactions: 7
  • Comments: 27 (3 by maintainers)

Most upvoted comments

After searching for information and testing, I found two methods:

# patch.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker01
spec:
  template:
    spec:
      volumes:
        - name: kafka-broker01
          emptyDir: null # method 1
          persistentVolumeClaim:
            claimName: kafka-broker01
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker02
spec:
  template:
    spec:
      volumes:
        - name: kafka-broker02
          $patch: delete # method 2
        - name: kafka-broker02
          persistentVolumeClaim:
            claimName: kafka-broker02
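
A third option, not from the comment above and only an untested sketch, is to sidestep strategic merge entirely with a JSON 6902 patch that replaces the whole list entry by index. The /0 index assumes kafka-broker01 is the first volume in the base, and the file name pvc-patch.yaml is arbitrary:

# kustomization.yaml (overlay)
bases:
  - ../base
patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: kafka-broker01
    path: pvc-patch.yaml

# pvc-patch.yaml
# Replacing the entire list element means no leftover emptyDir key can survive.
- op: replace
  path: /spec/template/spec/volumes/0
  value:
    name: kafka-broker01
    persistentVolumeClaim:
      claimName: kafka-broker01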

I dug into this a bit more, and I now believe the underlying problem is that the kyaml implementation of strategic merge patch does not respect the retainKeys strategy communicated in the OpenAPI's x-kubernetes-patch-strategy field (confirmed to be present in kustomize's embedded copy). That strategy only appears in two places: pod volumes (here) and the Deployment update strategy. I confirmed the latter is affected too:

# kustomization
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- input.yaml

patchesStrategicMerge:
  - patch.yaml
# input.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: foo
  labels:
    app: foo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: foo
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
# patch.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: foo
spec:
  strategy:
    type: Recreate

Result:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: foo
  name: foo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: foo
  strategy:
    rollingUpdate: # nonsensical, should have been cleared
      maxSurge: 1
      maxUnavailable: 1
    type: Recreate
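
For context, the relevant annotations in the Kubernetes OpenAPI schema look roughly like the excerpt below (paraphrased; the exact serialization in kustomize's embedded copy may differ):

# io.k8s.api.core.v1.PodSpec, volumes field (excerpt)
x-kubernetes-patch-merge-key: name
x-kubernetes-patch-strategy: merge,retainKeys

# io.k8s.api.apps.v1.DeploymentSpec, strategy field (excerpt)
x-kubernetes-patch-strategy: retainKeys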

Here are the PRs that implemented this in k/k, for reference: https://github.com/kubernetes/kubernetes/pull/50296 and https://github.com/kubernetes/kubernetes/pull/44597

/triage accepted
/area openapi
/area kyaml

cc @mengqiy @natasha41575

Same here. I have also worked around the problem with `key: null`.
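
For the Deployment strategy example above, that workaround would presumably look like this (sketch, not from the original comment):

# patch.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: foo
spec:
  strategy:
    rollingUpdate: null # explicitly clear what the retainKeys strategy should have dropped
    type: Recreate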

This is very much an issue; I was able to reproduce it with Kustomize 3.8.1:

kustomize version
{Version:3.8.1 GitCommit:0b359d0ef0272e6545eda0e99aacd63aef99c4d0 BuildDate:2020-07-16T05:11:04+01:00 GoOs:darwin GoArch:amd64}

Raw Deployment:

---
# Source: rancher/templates/deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: rancher
  labels:
    app: rancher
    chart: rancher-2.4.6
    heritage: Helm
    release: rancher
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rancher
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: rancher
        release: rancher
    spec:
      serviceAccountName: rancher
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - rancher
              topologyKey: kubernetes.io/hostname
      containers:
      - image: rancher/rancher:v2.4.6
        imagePullPolicy: IfNotPresent
        name: rancher
        ports:
        - containerPort: 80
          protocol: TCP
        args:
        # Private CA - don't clear ca certs
        - "--http-listen-port=80"
        - "--https-listen-port=443"
        - "--add-local=auto"
        env:
        - name: CATTLE_NAMESPACE
          value: rancher-system
        - name: CATTLE_PEER_SERVICE
          value: rancher
        - name: AUDIT_LEVEL
          value: "1"
        - name: AUDIT_LOG_MAXAGE
          value: "1"
        - name: AUDIT_LOG_MAXBACKUP
          value: "1"
        - name: AUDIT_LOG_MAXSIZE
          value: "100"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 60
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 30
        resources:
          {}
        volumeMounts:
        # Pass CA cert into rancher for private CA
        - mountPath: /etc/rancher/ssl/cacerts.pem
          name: tls-ca-volume
          subPath: cacerts.pem
          readOnly: true
        - mountPath: /var/log/auditlog
          name: audit-log
      # Make audit logs available for Rancher log collector tools.
      - image: busybox
        name: rancher-audit-log
        command: ["tail"]
        args: ["-F", "/var/log/auditlog/rancher-api-audit.log"]
        volumeMounts:
        - mountPath: /var/log/auditlog
          name: audit-log
      volumes:
      - name: tls-ca-volume
        secret:
          defaultMode: 0400
          secretName: tls-ca
      - name: audit-log
        emptyDir: {}

Patch:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: rancher
  # namespace: rancher-system
spec:
  template:
    spec:
      containers:
        - name: rancher
          volumeMounts:
            - name: secrets-store-inline
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "azure-tls"
            nodePublishSecretRef:
              name: secrets-store-creds
        - name: tls-ca-volume
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "azure-root-ca"
            nodePublishSecretRef:
              name: secrets-store-creds

Unexpected output:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: rancher
    chart: rancher-2.4.6
    heritage: Helm
    release: rancher
  name: rancher
  namespace: rancher-system
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rancher
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: rancher
        release: rancher
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - rancher
              topologyKey: kubernetes.io/hostname
            weight: 100
      containers:
      - args:
        - --http-listen-port=80
        - --https-listen-port=443
        - --add-local=auto
        env:
        - name: CATTLE_NAMESPACE
          value: rancher-system
        - name: CATTLE_PEER_SERVICE
          value: rancher
        - name: AUDIT_LEVEL
          value: "1"
        - name: AUDIT_LOG_MAXAGE
          value: "1"
        - name: AUDIT_LOG_MAXBACKUP
          value: "1"
        - name: AUDIT_LOG_MAXSIZE
          value: "100"
        image: rancher/rancher:v2.4.6
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 60
          periodSeconds: 30
        name: rancher
        ports:
        - containerPort: 80
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 30
        volumeMounts:
        - mountPath: /etc/rancher/ssl/cacerts.pem
          name: tls-ca-volume
          readOnly: true
          subPath: cacerts.pem
        - mountPath: /var/log/auditlog
          name: audit-log
        - mountPath: /mnt/secrets-store
          name: secrets-store-inline
          readOnly: true
      - args:
        - -F
        - /var/log/auditlog/rancher-api-audit.log
        command:
        - tail
        image: busybox
        name: rancher-audit-log
        volumeMounts:
        - mountPath: /var/log/auditlog
          name: audit-log
      serviceAccountName: rancher
      volumes:
      - csi:
          driver: secrets-store.csi.k8s.io
          nodePublishSecretRef:
            name: secrets-store-creds
          readOnly: true
          volumeAttributes:
            secretProviderClass: azure-root-ca
        name: tls-ca-volume
        secret:
          defaultMode: 256
          secretName: tls-ca
      - name: audit-log
      - csi:
          driver: secrets-store.csi.k8s.io
          nodePublishSecretRef:
            name: secrets-store-creds
          readOnly: true
          volumeAttributes:
            secretProviderClass: azure-tls
        name: secrets-store-inline

Applying this output produces the following error:

The Deployment "rancher" is invalid:
* spec.template.spec.volumes[1].csi: Forbidden: may not specify more than 1 volume type
* spec.template.spec.containers[0].volumeMounts[1].name: Not found: "tls-ca-volume"
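
Until this is fixed, the `key: null` workaround from the earlier comment should presumably apply here as well, by explicitly clearing the original secret source on the volume being replaced (untested sketch based on the patch above):

# patch.yaml (tls-ca-volume entry only)
kind: Deployment
apiVersion: apps/v1
metadata:
  name: rancher
spec:
  template:
    spec:
      volumes:
        - name: tls-ca-volume
          secret: null # clear the base's secret source so only csi remains
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "azure-root-ca"
            nodePublishSecretRef:
              name: secrets-store-creds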