argo-workflows: Cannot submit workflow from workflow template with output artifact

Checklist

  • Double-checked my configuration.
  • Tested using the latest version.
  • Used the Emissary executor.

Summary

What happened/what you expected to happen?

I have the following WorkflowTemplate with an output artifact named messagejson. I am trying to configure it to use S3 from the calling Workflow, similar to how an input artifact of a WorkflowTemplate can be configured.

However, I get Error (exit code 1): You need to configure artifact storage. More information on how to do this can be found in the docs: https://argoproj.github.io/argo-workflows/configure-artifact-repository/.

I should be able to configure the details of output artifacts (in this case the S3 endpoint, credentials, etc.) just like input artifacts can be configured. Instead, the error above is shown.
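
For reference, the configuration the error message asks for is the controller-level default artifact repository. A minimal sketch of that ConfigMap, reusing the same placeholder endpoint and credentials secret as in the Workflow below, would look something like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  artifactRepository: |
    # default repository used when an artifact does not specify its own location
    s3:
      endpoint: 1.2.3.4
      bucket: mybucket
      insecure: true
      accessKeySecret:
        name: my-s3-credentials
        key: accessKey
      secretKeySecret:
        name: my-s3-credentials
        key: secretKey

However, I would expect not to need this default repository at all when the output artifact already carries its own S3 configuration.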

What version are you running? 3.3.1, 3.3.2, and 3.3.3

Diagnostics

Paste the smallest workflow that reproduces the bug. We must be able to run the workflow.

apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: file-output
spec:
  entrypoint: writefile
  templates:
    - name: writefile
      container:
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["echo hello | tee /tmp/message.json; ls -l /tmp; cat /tmp/message.json"]
      outputs:
        artifacts:
        - name: messagejson
          path: /tmp/message.json
---
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: read-file-
spec:
  entrypoint: read-file
  templates:
  - name: read-file
    steps:
    - - name: print-file-content
        templateRef:
          name: file-output
          template: writefile
        arguments:
          artifacts:
          - name: messagejson
            s3:
              endpoint: 1.2.3.4
              bucket: mybucket
              key: "/rabbit/message.json"
              insecure: true 
              accessKeySecret:
                name: my-s3-credentials
                key: accessKey
              secretKeySecret:
                name: my-s3-credentials
                key: secretKey
# Logs from the workflow controller:
kubectl logs -n argo deploy/workflow-controller | grep ${workflow} 


time="2022-04-24T03:36:20.827Z" level=info msg="Processing workflow" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:20.844Z" level=info msg="Updated phase  -> Running" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:20.844Z" level=info msg="Steps node read-file-cp2c6 initialized Running" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:20.844Z" level=info msg="StepGroup node read-file-cp2c6-1929381673 initialized Running" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:20.844Z" level=info msg="Pod node read-file-cp2c6-2982165601 initialized Pending" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:20.860Z" level=info msg="Created pod: read-file-cp2c6[0].print-file-content (read-file-cp2c6-2982165601)" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:20.860Z" level=info msg="Workflow step group node read-file-cp2c6-1929381673 not yet completed" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:20.860Z" level=info msg="TaskSet Reconciliation" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:20.860Z" level=info msg=reconcileAgentPod namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:20.869Z" level=info msg="Workflow update successful" namespace=default phase=Running resourceVersion=969154 workflow=read-file-cp2c6
time="2022-04-24T03:36:30.862Z" level=info msg="Processing workflow" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.863Z" level=info msg="Task-result reconciliation" namespace=default numObjs=0 workflow=read-file-cp2c6
time="2022-04-24T03:36:30.863Z" level=warning msg="workflow uses legacy/insecure pod patch, see https://argoproj.github.io/argo-workflows/workflow-rbac/" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.863Z" level=info msg="node changed" new.message="Error (exit code 1): You need to configure artifact storage. More information on how to do this can be found in the docs: https://argoproj.github.io/argo-workflows/configure-artifact-repository/" new.phase=Error new.progress=0/1 nodeID=read-file-cp2c6-2982165601 old.message= old.phase=Pending old.progress=0/1
time="2022-04-24T03:36:30.864Z" level=info msg="Step group node read-file-cp2c6-1929381673 deemed failed: child 'read-file-cp2c6-2982165601' failed" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.864Z" level=info msg="node read-file-cp2c6-1929381673 phase Running -> Failed" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.864Z" level=info msg="node read-file-cp2c6-1929381673 message: child 'read-file-cp2c6-2982165601' failed" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.864Z" level=info msg="node read-file-cp2c6-1929381673 finished: 2022-04-24 03:36:30.864594939 +0000 UTC" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.864Z" level=info msg="step group read-file-cp2c6-1929381673 was unsuccessful: child 'read-file-cp2c6-2982165601' failed" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.864Z" level=info msg="Outbound nodes of read-file-cp2c6-2982165601 is [read-file-cp2c6-2982165601]" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.864Z" level=info msg="Outbound nodes of read-file-cp2c6 is [read-file-cp2c6-2982165601]" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.864Z" level=info msg="node read-file-cp2c6 phase Running -> Failed" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.864Z" level=info msg="node read-file-cp2c6 message: child 'read-file-cp2c6-2982165601' failed" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.864Z" level=info msg="node read-file-cp2c6 finished: 2022-04-24 03:36:30.864881661 +0000 UTC" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.864Z" level=info msg="Checking daemoned children of read-file-cp2c6" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.864Z" level=info msg="TaskSet Reconciliation" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.864Z" level=info msg=reconcileAgentPod namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.864Z" level=info msg="Updated phase Running -> Failed" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.865Z" level=info msg="Updated message  -> child 'read-file-cp2c6-2982165601' failed" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.865Z" level=info msg="Marking workflow completed" namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.865Z" level=info msg="Checking daemoned children of " namespace=default workflow=read-file-cp2c6
time="2022-04-24T03:36:30.870Z" level=info msg="cleaning up pod" action=deletePod key=default/read-file-cp2c6-1340600742-agent/deletePod
time="2022-04-24T03:36:30.889Z" level=info msg="Workflow update successful" namespace=default phase=Failed resourceVersion=969215 workflow=read-file-cp2c6
time="2022-04-24T03:36:30.900Z" level=info msg="cleaning up pod" action=labelPodCompleted key=default/read-file-cp2c6-2982165601/labelPodCompleted


# If the workflow's pods have not been created, you can skip the rest of the diagnostics.

# The workflow's pods that are problematic:
kubectl get pod -o yaml -l workflows.argoproj.io/workflow=${workflow},workflow.argoproj.io/phase!=Succeeded

apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      cni.projectcalico.org/containerID: 7655f1d31a2eee80e86083add52df808e1c01764521a56630619a0578c6d7923
      cni.projectcalico.org/podIP: ""
      cni.projectcalico.org/podIPs: ""
      kubectl.kubernetes.io/default-container: main
      workflows.argoproj.io/node-id: read-file-cp2c6-2982165601
      workflows.argoproj.io/node-name: read-file-cp2c6[0].print-file-content
      workflows.argoproj.io/outputs: '{"artifacts":[{"name":"messagejson","path":"/tmp/message.json","s3":{}}]}'
    creationTimestamp: "2022-04-24T03:36:20Z"
    labels:
      workflows.argoproj.io/completed: "true"
      workflows.argoproj.io/workflow: read-file-cp2c6
    name: read-file-cp2c6-2982165601
    namespace: default
    ownerReferences:
    - apiVersion: argoproj.io/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: Workflow
      name: read-file-cp2c6
      uid: e15d4f08-2393-497b-bcba-f2adbd1cd129
    resourceVersion: "969220"
    uid: 65ae857f-1f04-4583-a5bf-7ef7fed157f5
  spec:
    containers:
    - command:
      - argoexec
      - wait
      - --loglevel
      - info
      env:
      - name: ARGO_POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: ARGO_POD_UID
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.uid
      - name: GODEBUG
        value: x509ignoreCN=0
      - name: ARGO_WORKFLOW_NAME
        value: read-file-cp2c6
      - name: ARGO_CONTAINER_NAME
        value: wait
      - name: ARGO_TEMPLATE
        value: '{"name":"writefile","inputs":{},"outputs":{"artifacts":[{"name":"messagejson","path":"/tmp/message.json","s3":{}}]},"metadata":{},"container":{"name":"","image":"alpine:latest","command":["/bin/sh","-c"],"args":["echo
          hello | tee /tmp/message.json; ls -l /tmp; cat /tmp/message.json"],"resources":{}},"archiveLocation":{"archiveLogs":false}}'
      - name: ARGO_NODE_ID
        value: read-file-cp2c6-2982165601
      - name: ARGO_INCLUDE_SCRIPT_OUTPUT
        value: "false"
      - name: ARGO_DEADLINE
        value: "0001-01-01T00:00:00Z"
      - name: ARGO_PROGRESS_FILE
        value: /var/run/argo/progress
      - name: ARGO_PROGRESS_PATCH_TICK_DURATION
        value: 1m0s
      - name: ARGO_PROGRESS_FILE_TICK_DURATION
        value: 3s
      image: quay.io/argoproj/argoexec:latest
      imagePullPolicy: Always
      name: wait
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/argo
        name: var-run-argo
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-lwbsb
        readOnly: true
    - args:
      - echo hello | tee /tmp/message.json; ls -l /tmp; cat /tmp/message.json
      command:
      - /var/run/argo/argoexec
      - emissary
      - --
      - /bin/sh
      - -c
      env:
      - name: ARGO_CONTAINER_NAME
        value: main
      - name: ARGO_TEMPLATE
        value: '{"name":"writefile","inputs":{},"outputs":{"artifacts":[{"name":"messagejson","path":"/tmp/message.json","s3":{}}]},"metadata":{},"container":{"name":"","image":"alpine:latest","command":["/bin/sh","-c"],"args":["echo
          hello | tee /tmp/message.json; ls -l /tmp; cat /tmp/message.json"],"resources":{}},"archiveLocation":{"archiveLogs":false}}'
      - name: ARGO_NODE_ID
        value: read-file-cp2c6-2982165601
      - name: ARGO_INCLUDE_SCRIPT_OUTPUT
        value: "false"
      - name: ARGO_DEADLINE
        value: "0001-01-01T00:00:00Z"
      - name: ARGO_PROGRESS_FILE
        value: /var/run/argo/progress
      - name: ARGO_PROGRESS_PATCH_TICK_DURATION
        value: 1m0s
      - name: ARGO_PROGRESS_FILE_TICK_DURATION
        value: 3s
      image: alpine:latest
      imagePullPolicy: Always
      name: main
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/argo
        name: var-run-argo
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-lwbsb
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    initContainers:
    - command:
      - argoexec
      - init
      - --loglevel
      - info
      env:
      - name: ARGO_POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: ARGO_POD_UID
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.uid
      - name: GODEBUG
        value: x509ignoreCN=0
      - name: ARGO_WORKFLOW_NAME
        value: read-file-cp2c6
      - name: ARGO_CONTAINER_NAME
        value: init
      - name: ARGO_TEMPLATE
        value: '{"name":"writefile","inputs":{},"outputs":{"artifacts":[{"name":"messagejson","path":"/tmp/message.json","s3":{}}]},"metadata":{},"container":{"name":"","image":"alpine:latest","command":["/bin/sh","-c"],"args":["echo
          hello | tee /tmp/message.json; ls -l /tmp; cat /tmp/message.json"],"resources":{}},"archiveLocation":{"archiveLogs":false}}'
      - name: ARGO_NODE_ID
        value: read-file-cp2c6-2982165601
      - name: ARGO_INCLUDE_SCRIPT_OUTPUT
        value: "false"
      - name: ARGO_DEADLINE
        value: "0001-01-01T00:00:00Z"
      - name: ARGO_PROGRESS_FILE
        value: /var/run/argo/progress
      - name: ARGO_PROGRESS_PATCH_TICK_DURATION
        value: 1m0s
      - name: ARGO_PROGRESS_FILE_TICK_DURATION
        value: 3s
      image: quay.io/argoproj/argoexec:latest
      imagePullPolicy: Always
      name: init
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/argo
        name: var-run-argo
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-lwbsb
        readOnly: true
    nodeName: vchang-dt1
    preemptionPolicy: PreemptLowerPriority
    priority: 0
    restartPolicy: Never
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: default
    serviceAccountName: default
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - emptyDir: {}
      name: var-run-argo
    - name: kube-api-access-lwbsb
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-04-24T03:36:25Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-04-24T03:36:20Z"
      message: 'containers with unready status: [wait main]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-04-24T03:36:20Z"
      message: 'containers with unready status: [wait main]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-04-24T03:36:20Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: containerd://94851832c44dee902fbf897fcd445fb8b39da876d527434c18ffa7b3fb7b7547
      image: docker.io/library/alpine:latest
      imageID: docker.io/library/alpine@sha256:4edbd2beb5f78b1014028f4fbb99f3237d9561100b6881aabbf5acce2c4f9454
      lastState: {}
      name: main
      ready: false
      restartCount: 0
      started: false
      state:
        terminated:
          containerID: containerd://94851832c44dee902fbf897fcd445fb8b39da876d527434c18ffa7b3fb7b7547
          exitCode: 0
          finishedAt: "2022-04-24T03:36:27Z"
          reason: Completed
          startedAt: "2022-04-24T03:36:27Z"
    - containerID: containerd://161e4fb9af5179f13c9c033749f57329092f5b177e3dc7ee8431178eb5a00619
      image: quay.io/argoproj/argoexec:latest
      imageID: quay.io/argoproj/argoexec@sha256:0bba746f58d8fde71d86ff3a72bdb829a4a73366c7483f24788b86bf62afa2d7
      lastState: {}
      name: wait
      ready: false
      restartCount: 0
      started: false
      state:
        terminated:
          containerID: containerd://161e4fb9af5179f13c9c033749f57329092f5b177e3dc7ee8431178eb5a00619
          exitCode: 1
          finishedAt: "2022-04-24T03:36:28Z"
          message: 'You need to configure artifact storage. More information on how
            to do this can be found in the docs: https://argoproj.github.io/argo-workflows/configure-artifact-repository/'
          reason: Error
          startedAt: "2022-04-24T03:36:26Z"
    hostIP: 10.20.13.58
    initContainerStatuses:
    - containerID: containerd://57ec10648e76128d12ac51cacd0d25c9e1b6df207e96fb6ccb53d5b91a530f8e
      image: quay.io/argoproj/argoexec:latest
      imageID: quay.io/argoproj/argoexec@sha256:0bba746f58d8fde71d86ff3a72bdb829a4a73366c7483f24788b86bf62afa2d7
      lastState: {}
      name: init
      ready: true
      restartCount: 0
      state:
        terminated:
          containerID: containerd://57ec10648e76128d12ac51cacd0d25c9e1b6df207e96fb6ccb53d5b91a530f8e
          exitCode: 0
          finishedAt: "2022-04-24T03:36:25Z"
          reason: Completed
          startedAt: "2022-04-24T03:36:25Z"
    phase: Failed
    podIP: 192.168.29.25
    podIPs:
    - ip: 192.168.29.25
    qosClass: BestEffort
    startTime: "2022-04-24T03:36:20Z"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

# Logs from your workflow's wait container, something like:
kubectl logs -c wait -l workflows.argoproj.io/workflow=${workflow},workflow.argoproj.io/phase!=Succeeded

time="2022-04-24T03:36:28.585Z" level=info msg="Staging artifact: messagejson"
time="2022-04-24T03:36:28.585Z" level=info msg="Copying /tmp/message.json from container base image layer to /tmp/argo/outputs/artifacts/messagejson.tgz"
time="2022-04-24T03:36:28.585Z" level=info msg="/var/run/argo/outputs/artifacts/tmp/message.json.tgz -> /tmp/argo/outputs/artifacts/messagejson.tgz"
time="2022-04-24T03:36:28.586Z" level=error msg="executor error: You need to configure artifact storage. More information on how to do this can be found in the docs: https://argoproj.github.io/argo-workflows/configure-artifact-repository/"
time="2022-04-24T03:36:28.602Z" level=info msg="Create workflowtaskresults 403"
time="2022-04-24T03:36:28.603Z" level=warning msg="failed to patch task set, falling back to legacy/insecure pod patch, see https://argoproj.github.io/argo-workflows/workflow-rbac/" error="workflowtaskresults.argoproj.io is forbidden: User \"system:serviceaccount:default:default\" cannot create resource \"workflowtaskresults\" in API group \"argoproj.io\" in the namespace \"default\""
time="2022-04-24T03:36:28.617Z" level=info msg="Patch pods 200"
time="2022-04-24T03:36:28.623Z" level=info msg="Killing sidecars []"
time="2022-04-24T03:36:28.623Z" level=info msg="Alloc=6334 TotalAlloc=11022 Sys=19154 NumGC=3 Goroutines=8"
time="2022-04-24T03:36:28.623Z" level=fatal msg="You need to configure artifact storage. More information on how to do this can be found in the docs: https://argoproj.github.io/argo-workflows/configure-artifact-repository/"

Message from the maintainers:

Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Reactions: 9
  • Comments: 16 (5 by maintainers)

Most upvoted comments

I came here searching for "failed to patch task set". Does anyone know what that means? My service account already has the patch role for workflowtasksets (see the Role sketch after the snippets below).

level=info msg="Create workflowtaskresults 403"
level=warning msg="failed to patch task set, falling back to legacy/insecure pod patch, see https://argoproj.github.io/argo-workflows/workflow-rbac/" error="workflowtaskresults.argoproj.io is forbidden: User \"system:serviceaccount:argo:default\" cannot create resource \"workflowtaskresults\" in API group \"argoproj.io\" in the namespace \"argo\""
level=info msg="Patch pods 403"

Error (exit code 1): You need to configure artifact storage. More information on how to do this can be found in the docs: https://argoproj.github.io/argo-workflows/configure-artifact-repository/

    outputs:
      artifacts:
        - name: file
          path: /tmp/file.txt
          archive:
            none: {}
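
For what it's worth, the warning above refers to workflowtaskresults, which is a different resource from workflowtasksets. A minimal sketch of the executor Role described in the workflow-rbac docs, which would then need a RoleBinding to the workflow's service account (default in the logs above), looks like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: executor
rules:
- apiGroups:
  - argoproj.io
  resources:
  - workflowtaskresults   # note: workflowtaskresults, not workflowtasksets
  verbs:
  - create
  - patch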

What is the solution, and why was this closed? I ran into this issue when using Argo Workflows on minikube.

This week.

@mocsharp re-reading this, I’m not sure this a bug. You can help me by running workflow-controller:latest. If that does not work either, can you please share your artifact repository configuration?
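
For anyone who wants to try that, a hypothetical one-liner to switch the controller image to :latest, assuming the standard install in the argo namespace and the default container name:

kubectl -n argo set image deployment/workflow-controller workflow-controller=quay.io/argoproj/workflow-controller:latest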

@alexec Just tried v3.3.3 and am still getting the same error.

I was not clear: it is not fixed in :v3.3.2. It is fixed in :latest. It will be fixed in :v3.3.3.