argo-cd: `kube-prometheus-stack` stuck in `OutOfSync`

Checklist:

  • I’ve searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
  • I’ve included steps to reproduce the bug.
  • I’ve pasted the output of argocd version.

Describe the bug

Hi, I am deploying the kube-prometheus-stack helm chart using ArgoCD:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: monitoring
  namespace: argocd
spec:
  project: default
  source:
    chart: kube-prometheus-stack
    repoURL: https://prometheus-community.github.io/helm-charts
    targetRevision: 41.6.1
  destination:
    namespace: monitoring
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - ServerSideApply=true
      - CreateNamespace=true

It creates all the resources, but the Application stays in Current sync status: OutOfSync because of this resource:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: monitoring-kube-prometheus-kubelet
...

If I click the resource in the Argo CD web UI and open Summary and Diff, I get:

(screenshot of the ServiceMonitor diff in the web UI)

Expected behavior

Current sync status: Synced

Version

Argo CD: v2.5.0+b895da4
Build Date: 2022-10-25T14:40:01Z
Go Version: go1.18.7
Go Compiler: gc
Platform: linux/amd64
jsonnet: v0.18.0
kustomize: v4.5.7 2022-08-02T16:35:54Z
Helm: v3.10.1+g9f88ccb
kubectl: v0.24.2

Logs

The logs from the argocd-application-controller-0 pod show:

time="2022-10-26T13:03:09Z" level=info msg="Adding resource result, status: 'Synced', phase: 'Running', message: 'servicemonitor.monitoring.coreos.com/monitoring-kube-prometheus-kubelet serverside-applied'" application=argocd/monitoring kind=ServiceMonitor name=monitoring-kube-prometheus-kubelet namespace=monitoring phase=Sync syncId=00106-mKqpp
time="2022-10-26T13:03:14Z" level=info msg="Initialized new operation: {&SyncOperation{Revision:41.6.1,Prune:true,DryRun:false,SyncStrategy:nil,Resources:[]SyncOperationResource{SyncOperationResource{Group:monitoring.coreos.com,Kind:ServiceMonitor,Name:monitoring-kube-prometheus-kubelet,Namespace:,},},Source:nil,Manifests:[],SyncOptions:[ServerSideApply=true CreateNamespace=true],} { true} [] {5 nil}}" application=argocd/monitoring
time="2022-10-26T13:03:14Z" level=info msg="Tasks (dry-run)" application=argocd/monitoring syncId=00107-rGhzV tasks="[Sync/0 resource monitoring.coreos.com/ServiceMonitor:monitoring/monitoring-kube-prometheus-kubelet obj->obj (,,)]"
time="2022-10-26T13:03:14Z" level=info msg="Applying resource ServiceMonitor/monitoring-kube-prometheus-kubelet in cluster: https://10.100.0.1:443, namespace: monitoring"
time="2022-10-26T13:03:14Z" level=info msg="Applying resource ServiceMonitor/monitoring-kube-prometheus-kubelet in cluster: https://10.100.0.1:443, namespace: monitoring"
time="2022-10-26T13:03:14Z" level=info msg="Adding resource result, status: 'Synced', phase: 'Running', message: 'servicemonitor.monitoring.coreos.com/monitoring-kube-prometheus-kubelet serverside-applied'" application=argocd/monitoring kind=ServiceMonitor name=monitoring-kube-prometheus-kubelet namespace=monitoring phase=Sync syncId=00107-rGhzV
time="2022-10-26T13:03:19Z" level=info msg="Initialized new operation: {&SyncOperation{Revision:41.6.1,Prune:true,DryRun:false,SyncStrategy:nil,Resources:[]SyncOperationResource{SyncOperationResource{Group:monitoring.coreos.com,Kind:ServiceMonitor,Name:monitoring-kube-prometheus-kubelet,Namespace:,},},Source:nil,Manifests:[],SyncOptions:[ServerSideApply=true CreateNamespace=true],} { true} [] {5 nil}}" application=argocd/monitoring
time="2022-10-26T13:03:19Z" level=info msg="Tasks (dry-run)" application=argocd/monitoring syncId=00108-awxqW tasks="[Sync/0 resource monitoring.coreos.com/ServiceMonitor:monitoring/monitoring-kube-prometheus-kubelet obj->obj (,,)]"
time="2022-10-26T13:03:19Z" level=info msg="Applying resource ServiceMonitor/monitoring-kube-prometheus-kubelet in cluster: https://10.100.0.1:443, namespace: monitoring"
time="2022-10-26T13:03:19Z" level=info msg="Applying resource ServiceMonitor/monitoring-kube-prometheus-kubelet in cluster: https://10.100.0.1:443, namespace: monitoring"
time="2022-10-26T13:03:19Z" level=info msg="Adding resource result, status: 'Synced', phase: 'Running', message: 'servicemonitor.monitoring.coreos.com/monitoring-kube-prometheus-kubelet serverside-applied'" application=argocd/monitoring kind=ServiceMonitor name=monitoring-kube-prometheus-kubelet namespace=monitoring phase=Sync syncId=00108-awxqW
time="2022-10-26T13:03:24Z" level=info msg="Initialized new operation: {&SyncOperation{Revision:41.6.1,Prune:true,DryRun:false,SyncStrategy:nil,Resources:[]SyncOperationResource{SyncOperationResource{Group:monitoring.coreos.com,Kind:ServiceMonitor,Name:monitoring-kube-prometheus-kubelet,Namespace:,},},Source:nil,Manifests:[],SyncOptions:[ServerSideApply=true CreateNamespace=true],} { true} [] {5 nil}}" application=argocd/monitoring
time="2022-10-26T13:03:24Z" level=info msg="Tasks (dry-run)" application=argocd/monitoring syncId=00109-rqkbn tasks="[Sync/0 resource monitoring.coreos.com/ServiceMonitor:monitoring/monitoring-kube-prometheus-kubelet obj->obj (,,)]"
time="2022-10-26T13:03:24Z" level=info msg="Applying resource ServiceMonitor/monitoring-kube-prometheus-kubelet in cluster: https://10.100.0.1:443, namespace: monitoring"
time="2022-10-26T13:03:24Z" level=info msg="Applying resource ServiceMonitor/monitoring-kube-prometheus-kubelet in cluster: https://10.100.0.1:443, namespace: monitoring"
time="2022-10-26T13:03:24Z" level=info msg="Adding resource result, status: 'Synced', phase: 'Running', message: 'servicemonitor.monitoring.coreos.com/monitoring-kube-prometheus-kubelet serverside-applied'" application=argocd/monitoring kind=ServiceMonitor name=monitoring-kube-prometheus-kubelet namespace=monitoring phase=Sync syncId=00109-rqkbn

Thanks

About this issue

  • State: closed
  • Created 2 years ago
  • Reactions: 14
  • Comments: 21 (7 by maintainers)

Most upvoted comments

As mentioned above, there are a few approaches that can be used to address this issue in Argo CD:

  1. If deploying with Kustomize, patch the CRDs with the Replace=true sync-option annotation.
  2. If deploying with Helm, first wrap the chart inside a Kustomize project so you can patch the CRDs as in 1.
  3. Create multiple Argo CD Applications: one without the CRDs that syncs normally, and another containing only the CRDs that syncs with Replace=true (sketched below).

All of the approaches above fix the problem but require some amount of work.
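
For approach 3, a minimal sketch of the CRD-only Application could look like the following. The repoURL, targetRevision and path are illustrative assumptions; point them at wherever the chart's CRD manifests live in your setup:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: monitoring-crds
  namespace: argocd
spec:
  project: default
  source:
    # Illustrative source: a Git path that contains only the CRD manifests.
    repoURL: https://github.com/prometheus-community/helm-charts
    targetRevision: kube-prometheus-stack-41.6.1
    path: charts/kube-prometheus-stack/crds
  destination:
    namespace: monitoring
    server: https://kubernetes.default.svc
  syncPolicy:
    syncOptions:
      # Replace=true works around the "metadata.annotations: Too long" error on the big CRDs.
      - Replace=true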

In Argo CD 2.5 you can now use ServerSideApply to avoid the error with big CRDs while syncing. However, Argo CD is currently unable to take CRD default values into account during diff calculation, which causes it to show resources as out-of-sync when in fact they aren't. To address this with the minimal amount of work, users can leverage the ignoreDifferences configuration.

To deploy the Prometheus stack with Argo CD you can apply this Application resource:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: monitoring
  namespace: argocd
spec:
  project: default
  source:
    chart: kube-prometheus-stack
    repoURL: https://prometheus-community.github.io/helm-charts
    targetRevision: 41.6.1
  destination:
    namespace: monitoring
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - ServerSideApply=true
      - CreateNamespace=true
  ignoreDifferences:
  - group: monitoring.coreos.com
    kind: ServiceMonitor
    jqPathExpressions:
    - .spec.endpoints[]?.relabelings[]?.action

With this approach, users don't need to create an additional project to patch the CRDs; everything can be configured from within the Application resource. Note that any defaulted field can be added to the jqPathExpressions list, as in the example above, and it will then be ignored during diff calculation.
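
If you want to confirm a jq path before adding it, one quick check (assuming kubectl and jq are available, and using the resource names from this thread) is to run the expression against the live object:

kubectl -n monitoring get servicemonitor monitoring-kube-prometheus-kubelet -o json \
  | jq '.spec.endpoints[]?.relabelings[]?.action'

On the live object this prints "replace" for each relabeling entry, while the manifest rendered from the chart omits the field entirely, which is exactly the difference the ignoreDifferences entry hides.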

Ideally, Argo CD should be able to retrieve all schemas from the target cluster with the proper structure so that CRD default values can be considered during diff calculation. I created the following issue to track this enhancement: https://github.com/argoproj/argo-cd/issues/11139. Please vote for it if you want to see it implemented.

Closing this issue for now.

I installed it this way:

helmCharts:
- name: kube-prometheus-stack
  repo: https://prometheus-community.github.io/helm-charts
  version: 41.7.0
  releaseName: kube-prometheus-stack
  namespace: kube-prometheus-stack
  includeCRDs: true
  valuesFile: values.yml

patches:
  - patchAnnotationTooLong.yml

Where patchAnnotationTooLong.yml contains:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    argocd.argoproj.io/sync-options: Replace=true
  name: prometheuses.monitoring.coreos.com

This fixes the "annotation too long" error.
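
Note that the helmCharts generator above only works if Argo CD runs kustomize build with Helm support enabled. If that isn't already the case, a sketch of the relevant argocd-cm setting (adjust to however you manage Argo CD's configuration) is:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Lets kustomize's helmCharts field inflate charts during manifest generation.
  kustomize.buildOptions: --enable-helm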

I have just reinstalled from scratch; unfortunately, ServerSideApply is required.

one or more objects failed to apply, reason: CustomResourceDefinition.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes,resource mapping not found for name: "system-kube-prometheus-sta-prometheus" namespace: "system" from "/dev/shm/3789263989": no matches for kind "Prometheus" in version "monitoring.coreos.com/v1" ensure CRDs are installed first. Retrying attempt #5 at 2:58PM.

Yes, I think so. At the very least I want to use ServerSideApply because it fixes other problems, like being able to apply large CRDs.
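
If app-wide ServerSideApply ever turns out to be too broad, Argo CD also supports setting the sync option as a per-resource annotation (check the server-side apply docs for your version), so it could be limited to just the oversized CRD. A sketch, assuming the annotation is added through whatever patching mechanism you already use:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: prometheuses.monitoring.coreos.com
  annotations:
    # Per-resource alternative to the app-wide syncOptions entry.
    argocd.argoproj.io/sync-options: ServerSideApply=true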

@leoluz

Live manifest
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"write","app.kubernetes.io/instance":"loki","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"loki","app.kubernetes.io/part-of":"memberlist","app.kubernetes.io/version":"2.6.1","argocd.argoproj.io/instance":"logging","helm.sh/chart":"loki-3.2.0"},"name":"loki-write","namespace":"logging"},"spec":{"podManagementPolicy":"Parallel","replicas":3,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app.kubernetes.io/component":"write","app.kubernetes.io/instance":"loki","app.kubernetes.io/name":"loki"}},"serviceName":"loki-write-headless","template":{"metadata":{"annotations":{"checksum/config":"dc4356fb9c8ae2285982e39f348eaa3087a7bd09084224adb6915903fdf04574"},"labels":{"app.kubernetes.io/component":"write","app.kubernetes.io/instance":"loki","app.kubernetes.io/name":"loki","app.kubernetes.io/part-of":"memberlist"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"app.kubernetes.io/component":"write","app.kubernetes.io/instance":"loki","app.kubernetes.io/name":"loki"}},"topologyKey":"kubernetes.io/hostname"}]}},"automountServiceAccountToken":true,"containers":[{"args":["-config.file=/etc/loki/config/config.yaml","-target=write"],"env":[{"name":"AWS_ACCESS_KEY_ID","valueFrom":{"secretKeyRef":{"key":"AWS_ACCESS_KEY_ID","name":"loki-s3"}}},{"name":"AWS_SECRET_ACCESS_KEY","valueFrom":{"secretKeyRef":{"key":"AWS_SECRET_ACCESS_KEY","name":"loki-s3"}}}],"image":"docker.io/grafana/loki:2.6.1","imagePullPolicy":"IfNotPresent","name":"write","ports":[{"containerPort":3100,"name":"http-metrics","protocol":"TCP"},{"containerPort":9095,"name":"grpc","protocol":"TCP"},{"containerPort":7946,"name":"http-memberlist","protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/ready","port":"http-metrics"},"initialDelaySeconds":30,"timeoutSeconds":1},"resources":{},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":true},"volumeMounts":[{"mountPath":"/etc/loki/config","name":"config"},{"mountPath":"/var/loki","name":"data"}]}],"securityContext":{"fsGroup":10001,"runAsGroup":10001,"runAsNonRoot":true,"runAsUser":10001},"serviceAccountName":"loki","terminationGracePeriodSeconds":300,"volumes":[{"configMap":{"name":"loki"},"name":"config"}]}},"updateStrategy":{"rollingUpdate":{"partition":0}},"volumeClaimTemplates":[{"metadata":{"name":"data"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"10Gi"}},"storageClassName":"openebs-hostpath"}}]}}
  creationTimestamp: '2022-10-26T09:13:13Z'
  generation: 1
  labels:
    app.kubernetes.io/component: write
    app.kubernetes.io/instance: loki
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: loki
    app.kubernetes.io/part-of: memberlist
    app.kubernetes.io/version: 2.6.1
    argocd.argoproj.io/instance: logging
    helm.sh/chart: loki-3.2.0
  managedFields:
    - apiVersion: apps/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:labels':
            'f:app.kubernetes.io/component': {}
            'f:app.kubernetes.io/instance': {}
            'f:app.kubernetes.io/managed-by': {}
            'f:app.kubernetes.io/name': {}
            'f:app.kubernetes.io/part-of': {}
            'f:app.kubernetes.io/version': {}
            'f:argocd.argoproj.io/instance': {}
            'f:helm.sh/chart': {}
        'f:spec':
          'f:podManagementPolicy': {}
          'f:replicas': {}
          'f:revisionHistoryLimit': {}
          'f:selector': {}
          'f:serviceName': {}
          'f:template':
            'f:metadata':
              'f:annotations':
                'f:checksum/config': {}
              'f:labels':
                'f:app.kubernetes.io/component': {}
                'f:app.kubernetes.io/instance': {}
                'f:app.kubernetes.io/name': {}
                'f:app.kubernetes.io/part-of': {}
            'f:spec':
              'f:affinity':
                'f:podAntiAffinity':
                  'f:requiredDuringSchedulingIgnoredDuringExecution': {}
              'f:automountServiceAccountToken': {}
              'f:containers':
                'k:{"name":"write"}':
                  .: {}
                  'f:args': {}
                  'f:env':
                    'k:{"name":"AWS_ACCESS_KEY_ID"}':
                      .: {}
                      'f:name': {}
                      'f:valueFrom':
                        'f:secretKeyRef': {}
                    'k:{"name":"AWS_SECRET_ACCESS_KEY"}':
                      .: {}
                      'f:name': {}
                      'f:valueFrom':
                        'f:secretKeyRef': {}
                  'f:image': {}
                  'f:imagePullPolicy': {}
                  'f:name': {}
                  'f:ports':
                    'k:{"containerPort":3100,"protocol":"TCP"}':
                      .: {}
                      'f:containerPort': {}
                      'f:name': {}
                      'f:protocol': {}
                    'k:{"containerPort":7946,"protocol":"TCP"}':
                      .: {}
                      'f:containerPort': {}
                      'f:name': {}
                      'f:protocol': {}
                    'k:{"containerPort":9095,"protocol":"TCP"}':
                      .: {}
                      'f:containerPort': {}
                      'f:name': {}
                      'f:protocol': {}
                  'f:readinessProbe':
                    'f:httpGet':
                      'f:path': {}
                      'f:port': {}
                    'f:initialDelaySeconds': {}
                    'f:timeoutSeconds': {}
                  'f:resources': {}
                  'f:securityContext':
                    'f:allowPrivilegeEscalation': {}
                    'f:capabilities':
                      'f:drop': {}
                    'f:readOnlyRootFilesystem': {}
                  'f:volumeMounts':
                    'k:{"mountPath":"/etc/loki/config"}':
                      .: {}
                      'f:mountPath': {}
                      'f:name': {}
                    'k:{"mountPath":"/var/loki"}':
                      .: {}
                      'f:mountPath': {}
                      'f:name': {}
              'f:securityContext':
                'f:fsGroup': {}
                'f:runAsGroup': {}
                'f:runAsNonRoot': {}
                'f:runAsUser': {}
              'f:serviceAccountName': {}
              'f:terminationGracePeriodSeconds': {}
              'f:volumes':
                'k:{"name":"config"}':
                  .: {}
                  'f:configMap':
                    'f:name': {}
                  'f:name': {}
          'f:updateStrategy':
            'f:rollingUpdate':
              'f:partition': {}
          'f:volumeClaimTemplates': {}
      manager: argocd-controller
      operation: Apply
      time: '2022-10-28T07:36:52Z'
    - apiVersion: apps/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations': {}
          'f:labels':
            .: {}
            'f:app.kubernetes.io/component': {}
            'f:app.kubernetes.io/managed-by': {}
            'f:app.kubernetes.io/name': {}
            'f:app.kubernetes.io/part-of': {}
            'f:app.kubernetes.io/version': {}
            'f:helm.sh/chart': {}
        'f:spec':
          'f:podManagementPolicy': {}
          'f:replicas': {}
          'f:revisionHistoryLimit': {}
          'f:selector': {}
          'f:serviceName': {}
          'f:template':
            'f:metadata':
              'f:annotations':
                .: {}
                'f:checksum/config': {}
              'f:labels':
                .: {}
                'f:app.kubernetes.io/component': {}
                'f:app.kubernetes.io/instance': {}
                'f:app.kubernetes.io/name': {}
                'f:app.kubernetes.io/part-of': {}
            'f:spec':
              'f:affinity':
                .: {}
                'f:podAntiAffinity':
                  .: {}
                  'f:requiredDuringSchedulingIgnoredDuringExecution': {}
              'f:automountServiceAccountToken': {}
              'f:containers':
                'k:{"name":"write"}':
                  .: {}
                  'f:args': {}
                  'f:env':
                    .: {}
                    'k:{"name":"AWS_ACCESS_KEY_ID"}':
                      .: {}
                      'f:name': {}
                      'f:valueFrom':
                        .: {}
                        'f:secretKeyRef': {}
                    'k:{"name":"AWS_SECRET_ACCESS_KEY"}':
                      .: {}
                      'f:name': {}
                      'f:valueFrom':
                        .: {}
                        'f:secretKeyRef': {}
                  'f:image': {}
                  'f:imagePullPolicy': {}
                  'f:name': {}
                  'f:ports':
                    .: {}
                    'k:{"containerPort":3100,"protocol":"TCP"}':
                      .: {}
                      'f:containerPort': {}
                      'f:name': {}
                      'f:protocol': {}
                    'k:{"containerPort":7946,"protocol":"TCP"}':
                      .: {}
                      'f:containerPort': {}
                      'f:name': {}
                      'f:protocol': {}
                    'k:{"containerPort":9095,"protocol":"TCP"}':
                      .: {}
                      'f:containerPort': {}
                      'f:name': {}
                      'f:protocol': {}
                  'f:readinessProbe':
                    .: {}
                    'f:failureThreshold': {}
                    'f:httpGet':
                      .: {}
                      'f:path': {}
                      'f:port': {}
                      'f:scheme': {}
                    'f:initialDelaySeconds': {}
                    'f:periodSeconds': {}
                    'f:successThreshold': {}
                    'f:timeoutSeconds': {}
                  'f:resources': {}
                  'f:securityContext':
                    .: {}
                    'f:allowPrivilegeEscalation': {}
                    'f:capabilities':
                      .: {}
                      'f:drop': {}
                    'f:readOnlyRootFilesystem': {}
                  'f:terminationMessagePath': {}
                  'f:terminationMessagePolicy': {}
                  'f:volumeMounts':
                    .: {}
                    'k:{"mountPath":"/etc/loki/config"}':
                      .: {}
                      'f:mountPath': {}
                      'f:name': {}
                    'k:{"mountPath":"/var/loki"}':
                      .: {}
                      'f:mountPath': {}
                      'f:name': {}
              'f:dnsPolicy': {}
              'f:restartPolicy': {}
              'f:schedulerName': {}
              'f:securityContext':
                .: {}
                'f:fsGroup': {}
                'f:runAsGroup': {}
                'f:runAsNonRoot': {}
                'f:runAsUser': {}
              'f:serviceAccount': {}
              'f:serviceAccountName': {}
              'f:terminationGracePeriodSeconds': {}
              'f:volumes':
                .: {}
                'k:{"name":"config"}':
                  .: {}
                  'f:configMap':
                    .: {}
                    'f:defaultMode': {}
                    'f:name': {}
                  'f:name': {}
          'f:updateStrategy':
            'f:rollingUpdate':
              .: {}
              'f:partition': {}
            'f:type': {}
      manager: argocd-application-controller
      operation: Update
      time: '2022-10-26T09:13:13Z'
    - apiVersion: apps/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          'f:availableReplicas': {}
          'f:collisionCount': {}
          'f:currentReplicas': {}
          'f:currentRevision': {}
          'f:observedGeneration': {}
          'f:readyReplicas': {}
          'f:replicas': {}
          'f:updateRevision': {}
          'f:updatedReplicas': {}
      manager: kube-controller-manager
      operation: Update
      subresource: status
      time: '2022-10-26T09:18:53Z'
    - apiVersion: apps/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            'f:kubectl.kubernetes.io/last-applied-configuration': {}
          'f:labels':
            'f:app.kubernetes.io/instance': {}
            'f:argocd.argoproj.io/instance': {}
      manager: argocd-controller
      operation: Update
      time: '2022-10-28T07:19:13Z'
  name: loki-write
  namespace: logging
  resourceVersion: '46346521'
  uid: 159449f2-01c3-4ee1-8b91-2e1e90c1e9eb
spec:
  podManagementPolicy: Parallel
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: write
      app.kubernetes.io/instance: loki
      app.kubernetes.io/name: loki
  serviceName: loki-write-headless
  template:
    metadata:
      annotations:
        checksum/config: dc4356fb9c8ae2285982e39f348eaa3087a7bd09084224adb6915903fdf04574
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: write
        app.kubernetes.io/instance: loki
        app.kubernetes.io/name: loki
        app.kubernetes.io/part-of: memberlist
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/component: write
                  app.kubernetes.io/instance: loki
                  app.kubernetes.io/name: loki
              topologyKey: kubernetes.io/hostname
      automountServiceAccountToken: true
      containers:
        - args:
            - '-config.file=/etc/loki/config/config.yaml'
            - '-target=write'
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  key: AWS_ACCESS_KEY_ID
                  name: loki-s3
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  key: AWS_SECRET_ACCESS_KEY
                  name: loki-s3
          image: 'docker.io/grafana/loki:2.6.1'
          imagePullPolicy: IfNotPresent
          name: write
          ports:
            - containerPort: 3100
              name: http-metrics
              protocol: TCP
            - containerPort: 9095
              name: grpc
              protocol: TCP
            - containerPort: 7946
              name: http-memberlist
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /ready
              port: http-metrics
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources: {}
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /etc/loki/config
              name: config
            - mountPath: /var/loki
              name: data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 10001
        runAsGroup: 10001
        runAsNonRoot: true
        runAsUser: 10001
      serviceAccount: loki
      serviceAccountName: loki
      terminationGracePeriodSeconds: 300
      volumes:
        - configMap:
            defaultMode: 420
            name: loki
          name: config
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        creationTimestamp: null
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: openebs-hostpath
        volumeMode: Filesystem
      status:
        phase: Pending
status:
  availableReplicas: 3
  collisionCount: 0
  currentReplicas: 3
  currentRevision: loki-write-68f4b7bcfc
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
  updateRevision: loki-write-68f4b7bcfc
  updatedReplicas: 3

Desired manifest
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/component: write
    app.kubernetes.io/instance: loki
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: loki
    app.kubernetes.io/part-of: memberlist
    app.kubernetes.io/version: 2.6.1
    argocd.argoproj.io/instance: logging
    helm.sh/chart: loki-3.2.0
  name: loki-write
  namespace: logging
spec:
  podManagementPolicy: Parallel
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: write
      app.kubernetes.io/instance: loki
      app.kubernetes.io/name: loki
  serviceName: loki-write-headless
  template:
    metadata:
      annotations:
        checksum/config: dc4356fb9c8ae2285982e39f348eaa3087a7bd09084224adb6915903fdf04574
      labels:
        app.kubernetes.io/component: write
        app.kubernetes.io/instance: loki
        app.kubernetes.io/name: loki
        app.kubernetes.io/part-of: memberlist
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/component: write
                  app.kubernetes.io/instance: loki
                  app.kubernetes.io/name: loki
              topologyKey: kubernetes.io/hostname
      automountServiceAccountToken: true
      containers:
        - args:
            - '-config.file=/etc/loki/config/config.yaml'
            - '-target=write'
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  key: AWS_ACCESS_KEY_ID
                  name: loki-s3
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  key: AWS_SECRET_ACCESS_KEY
                  name: loki-s3
          image: 'docker.io/grafana/loki:2.6.1'
          imagePullPolicy: IfNotPresent
          name: write
          ports:
            - containerPort: 3100
              name: http-metrics
              protocol: TCP
            - containerPort: 9095
              name: grpc
              protocol: TCP
            - containerPort: 7946
              name: http-memberlist
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /ready
              port: http-metrics
            initialDelaySeconds: 30
            timeoutSeconds: 1
          resources: {}
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
          volumeMounts:
            - mountPath: /etc/loki/config
              name: config
            - mountPath: /var/loki
              name: data
      securityContext:
        fsGroup: 10001
        runAsGroup: 10001
        runAsNonRoot: true
        runAsUser: 10001
      serviceAccountName: loki
      terminationGracePeriodSeconds: 300
      volumes:
        - configMap:
            name: loki
          name: config
  updateStrategy:
    rollingUpdate:
      partition: 0
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: openebs-hostpath

I also found this old issue, https://github.com/argoproj/argo-cd/issues/4126, which looks like the same problem.

Apart from that, I now also see another issue with a ServiceMonitor from the Loki Helm chart, probably the same issue as with the kube-prometheus-stack chart.

(screenshot of the Loki ServiceMonitor diff)
Live manifest for the Loki ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"monitoring.coreos.com/v1","kind":"ServiceMonitor","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"loki","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"loki","app.kubernetes.io/version":"2.6.1","argocd.argoproj.io/instance":"logging","helm.sh/chart":"loki-3.2.0"},"name":"loki","namespace":"logging"},"spec":{"endpoints":[{"path":"/metrics","port":"http-metrics","relabelings":[{"replacement":"logging/$1","sourceLabels":["job"],"targetLabel":"job"},{"replacement":"loki","targetLabel":"cluster"}],"scheme":"http"}],"selector":{"matchExpressions":[{"key":"prometheus.io/service-monitor","operator":"NotIn","values":["false"]}],"matchLabels":{"app.kubernetes.io/instance":"loki","app.kubernetes.io/name":"loki"}}}}
  creationTimestamp: '2022-10-27T21:13:07Z'
  generation: 1
  labels:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: loki
    app.kubernetes.io/version: 2.6.1
    argocd.argoproj.io/instance: logging
    helm.sh/chart: loki-3.2.0
  managedFields:
    - apiVersion: monitoring.coreos.com/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:labels':
            'f:app.kubernetes.io/instance': {}
            'f:app.kubernetes.io/managed-by': {}
            'f:app.kubernetes.io/name': {}
            'f:app.kubernetes.io/version': {}
            'f:argocd.argoproj.io/instance': {}
            'f:helm.sh/chart': {}
        'f:spec':
          'f:endpoints': {}
          'f:selector': {}
      manager: argocd-controller
      operation: Apply
      time: '2022-10-28T07:50:32Z'
    - apiVersion: monitoring.coreos.com/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:kubectl.kubernetes.io/last-applied-configuration': {}
          'f:labels':
            .: {}
            'f:app.kubernetes.io/instance': {}
            'f:app.kubernetes.io/managed-by': {}
            'f:app.kubernetes.io/name': {}
            'f:app.kubernetes.io/version': {}
            'f:argocd.argoproj.io/instance': {}
            'f:helm.sh/chart': {}
        'f:spec':
          .: {}
          'f:selector': {}
      manager: argocd-controller
      operation: Update
      time: '2022-10-28T07:19:13Z'
  name: loki
  namespace: logging
  resourceVersion: '46358803'
  uid: a7df54d3-fa2a-4e63-b1a2-b2a643ff06bb
spec:
  endpoints:
    - path: /metrics
      port: http-metrics
      relabelings:
        - action: replace
          replacement: logging/$1
          sourceLabels:
            - job
          targetLabel: job
        - action: replace
          replacement: loki
          targetLabel: cluster
      scheme: http
  selector:
    matchExpressions:
      - key: prometheus.io/service-monitor
        operator: NotIn
        values:
          - 'false'
    matchLabels:
      app.kubernetes.io/instance: loki
      app.kubernetes.io/name: loki

Desired manifest for the Loki ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: loki
    app.kubernetes.io/version: 2.6.1
    argocd.argoproj.io/instance: logging
    helm.sh/chart: loki-3.2.0
  name: loki
  namespace: logging
spec:
  endpoints:
    - path: /metrics
      port: http-metrics
      relabelings:
        - replacement: logging/$1
          sourceLabels:
            - job
          targetLabel: job
        - replacement: loki
          targetLabel: cluster
      scheme: http
  selector:
    matchExpressions:
      - key: prometheus.io/service-monitor
        operator: NotIn
        values:
          - 'false'
    matchLabels:
      app.kubernetes.io/instance: loki
      app.kubernetes.io/name: loki

The ignoreDifferences solution above worked for me, except I had to specify a different path to match:

      ignoreDifferences:
      - group: monitoring.coreos.com
        kind: ServiceMonitor
        jqPathExpressions:
        - .metadata.annotations

@Cowboy-coder yes please… Just copy/paste the StatefulSet details from your previous comment into the new ticket: https://github.com/argoproj/argo-cd/issues/11074#issuecomment-1294634254

@Cowboy-coder The client-side-apply diff is based on patches calculated with a 3-way diff using the desired state, the live state, and the last-applied-configuration annotation. The server-side-apply diff is a brand-new implementation that uses the same library Kubernetes uses when applying resources server-side, which leverages managedFields to inspect and define field ownership.
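
Since the server-side-apply diff is driven by field ownership, one way to see what it is working with (assuming kubectl 1.21 or newer, which hides managed fields by default) is:

kubectl -n logging get statefulset loki-write -o yaml --show-managed-fields

The managedFields entries in the output (the same ones pasted in the live manifest above) show which manager, such as argocd-controller or kube-controller-manager, owns each field.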

@leoluz I have the same issue / background. Currently I'm syncing kube-prometheus-stack with Replace=true for the too-big CRDs, and I'm testing SSA to get rid of that hack.

prometheus-operator-kubelet live manifest (after recreating it with ServerSideApply=true on 2.5.0)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  creationTimestamp: "2022-10-27T16:03:32Z"
  generation: 1
  labels:
    app: prometheus-operator-kubelet
    app.kubernetes.io/instance: prometheus-operator
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: prometheus-operator
    app.kubernetes.io/version: 40.5.0
    chart: kube-prometheus-stack-40.5.0
    heritage: Helm
    release: prometheus-operator
  name: prometheus-operator-kubelet
  namespace: services
  resourceVersion: "97862637"
  uid: f8e4315e-ff4c-46ae-b86b-9a3c51cfd9c1
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    honorLabels: true
    port: https-metrics
    relabelings:
    - action: replace
      sourceLabels:
      - __metrics_path__
      targetLabel: metrics_path
    scheme: https
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      insecureSkipVerify: true
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    honorLabels: true
    metricRelabelings:
    - action: drop
      regex: container_cpu_(cfs_throttled_seconds_total|load_average_10s|system_seconds_total|user_seconds_total)
      sourceLabels:
      - __name__
    - action: drop
      regex: container_fs_(io_current|io_time_seconds_total|io_time_weighted_seconds_total|reads_merged_total|sector_reads_total|sector_writes_total|writes_merged_total)
      sourceLabels:
      - __name__
    - action: drop
      regex: container_memory_(mapped_file|swap)
      sourceLabels:
      - __name__
    - action: drop
      regex: container_(file_descriptors|tasks_state|threads_max)
      sourceLabels:
      - __name__
    - action: drop
      regex: container_spec.*
      sourceLabels:
      - __name__
    - action: drop
      regex: .+;
      sourceLabels:
      - id
      - pod
    path: /metrics/cadvisor
    port: https-metrics
    relabelings:
    - action: replace
      sourceLabels:
      - __metrics_path__
      targetLabel: metrics_path
    scheme: https
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      insecureSkipVerify: true
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    honorLabels: true
    path: /metrics/probes
    port: https-metrics
    relabelings:
    - action: replace
      sourceLabels:
      - __metrics_path__
      targetLabel: metrics_path
    scheme: https
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      insecureSkipVerify: true
  jobLabel: k8s-app
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      app.kubernetes.io/name: kubelet
      k8s-app: kubelet

Desired resource definition (generated locally via helm template, but synced via Kustomize by Argo CD)
---
# Source: kube-prometheus-stack/templates/exporters/kubelet/servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-operator-kubelet
  namespace: services
  labels:
    app: prometheus-operator-kubelet    
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: prometheus-operator
    app.kubernetes.io/version: "40.5.0"
    app.kubernetes.io/part-of: prometheus-operator
    chart: kube-prometheus-stack-40.5.0
    release: "prometheus-operator"
    heritage: "Helm"
spec:
  endpoints:
  - port: https-metrics
    scheme: https
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      insecureSkipVerify: true
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    honorLabels: true
    relabelings:
    - sourceLabels:
      - __metrics_path__
      targetLabel: metrics_path
  - port: https-metrics
    scheme: https
    path: /metrics/cadvisor
    honorLabels: true
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      insecureSkipVerify: true
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    metricRelabelings:
    - action: drop
      regex: container_cpu_(cfs_throttled_seconds_total|load_average_10s|system_seconds_total|user_seconds_total)
      sourceLabels:
      - __name__
    - action: drop
      regex: container_fs_(io_current|io_time_seconds_total|io_time_weighted_seconds_total|reads_merged_total|sector_reads_total|sector_writes_total|writes_merged_total)
      sourceLabels:
      - __name__
    - action: drop
      regex: container_memory_(mapped_file|swap)
      sourceLabels:
      - __name__
    - action: drop
      regex: container_(file_descriptors|tasks_state|threads_max)
      sourceLabels:
      - __name__
    - action: drop
      regex: container_spec.*
      sourceLabels:
      - __name__
    - action: drop
      regex: .+;
      sourceLabels:
      - id
      - pod
    relabelings:
    - sourceLabels:
      - __metrics_path__
      targetLabel: metrics_path
  - port: https-metrics
    scheme: https
    path: /metrics/probes
    honorLabels: true
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      insecureSkipVerify: true
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    relabelings:
    - sourceLabels:
      - __metrics_path__
      targetLabel: metrics_path
  jobLabel: k8s-app
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      app.kubernetes.io/name: kubelet
      k8s-app: kubelet

I suspect the default value from the CRD plays a role here.
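
One way to verify that suspicion (assuming the prometheus-operator CRDs are installed and jq is available; the schema path below is a best guess at the CRD layout) is to look for a declared default on relabelings[].action:

kubectl get crd servicemonitors.monitoring.coreos.com -o json \
  | jq '.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.endpoints.items.properties.relabelings.items.properties.action'

If the output includes a "default": "replace" entry, the API server fills in the field on write, which would explain the permanent diff.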