argo-workflows: High RAM Usage and Frequent Crashes with Argo Workflows Controller on Large Scale Workflows

Pre-requisites

  • I have double-checked my configuration
  • I can confirm the issue exists when I tested with :latest
  • I’d like to contribute the fix myself (see contributing guide)

What happened/what you expected to happen?

Description: We are observing significant issues with the Argo Workflows Controller while handling a large number of workflows in parallel.

Environment:

  • Argo Workflows version: 3.4.11
  • Node count: 300+
  • Parallel workflows: 5000+

What happened: The Argo Workflows Controller’s memory consumption grows rapidly, sometimes surpassing 100 GB, and despite this large memory footprint the controller still crashes frequently. Notably, workflows are not deleted after being archived, which may be contributing to the memory usage. The controller does not log any specific error messages prior to these crashes, making it challenging to pinpoint the underlying cause.

What you expected to happen: We expected the Argo Workflows Controller to handle the parallel execution of 5000+ workflows across 300+ nodes without such a drastic increase in RAM consumption. We also expected a more resilient behavior, not prone to unexpected crashes, and better error logging for troubleshooting.

How to reproduce it (as minimally and precisely as possible):

  • Set up an environment with 300+ nodes.
  • Launch 5000+ workflows in parallel.
  • Monitor the RAM usage of the Argo Workflows Controller and note any unexpected crashes.

Additional context: Given the scale at which we are operating, it’s critical for our operations that Argo can handle such workloads efficiently. Any assistance in resolving this issue or guidance on potential optimizations would be greatly appreciated.

[Screenshot: controller RAM usage]

Version

v3.4.11

Paste a small workflow that reproduces the issue. We must be able to run the workflow; don’t enter a workflow that uses private images.

-

Logs from the workflow controller

-

Logs from in your workflow’s wait container

-

About this issue

  • State: closed
  • Created 9 months ago
  • Reactions: 4
  • Comments: 38 (16 by maintainers)

Most upvoted comments

Please try out v3.4.14 which fixes both this issue and the node missing issue.

We can revert that in future 3.4 patches. It doesn’t affect usage. Tracking in https://github.com/argoproj/argo-workflows/issues/11851.

@terrytangyuan sounds good!

After the build is ready, I can arrange the version upgrade, maybe tomorrow (11/7). With this new version, I will check whether the k8s API calls and the function work as expected and get back to you. However, our high load usually starts Thursday night (Taipei time), so we might need to wait until 11/10 to confirm whether the issue of succeeded workflows accumulating is gone.

@terrytangyuan

I discovered a discrepancy in the watch requests generated by wfInformer in workflow/controller/controller compared to workflow/cron/controller when communicating with the k8s API server. Specifically, the resourceVersion parameter is missing in the requests from workflow/controller/controller. This omission seems to prompt the k8s API server to return WatchEvents from the latest version, potentially leading to missed WatchEvents by wfInformer, such as delete events. As a result, succeeded workflows may remain in wfInformer indefinitely.

workflow/controller/controller missing resourceVersion:

__source__=log_service __time__=1698635337 __topic__=k8s_api_server _container_name_=kube-apiserver _source_=stderr _time_=2023-10-30T03:08:56.831235179Z content=I1030 11:08:56.831175       1 httplog.go:132] "HTTP" verb="WATCH" URI="/apis/argoproj.io/v1alpha1/namespaces/canal-flow/workflows?allowWatchBookmarks=true&labelSelector=%21workflows.argoproj.io%2Fcontroller-instanceid&timeoutSeconds=557&watch=true" latency="9m17.000861432s" userAgent="workflow-controller/v0.0.0 (linux/amd64) kubernetes/$Format/argo-workflows/v3.4.11 argo-controller" audit-ID="57565d21-1654-4807-8c2c-c34e5bc1af04" srcIP="172.22.160.38:35015" resp=200 contentType="application/json" apf_pl="workload-low" apf_fs="service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_init_latency="539.269772ms" apf_execution_time="539.27239ms" resp=200
__source__=log_service __time__=1698634780 __topic__=k8s_api_server _container_name_=kube-apiserver _source_=stderr _time_=2023-10-30T02:59:39.829672376Z content=I1030 10:59:39.829600       1 httplog.go:132] "HTTP" verb="WATCH" URI="/apis/argoproj.io/v1alpha1/namespaces/canal-flow/workflows?allowWatchBookmarks=true&labelSelector=%21workflows.argoproj.io%2Fcontroller-instanceid&timeoutSeconds=486&watch=true" latency="8m6.00118578s" userAgent="workflow-controller/v0.0.0 (linux/amd64) kubernetes/$Format/argo-workflows/v3.4.11 argo-controller" audit-ID="1a771981-2665-42de-9b63-8bf1366d8730" srcIP="172.22.160.38:56251" resp=200 contentType="application/json" apf_pl="workload-low" apf_fs="service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_init_latency="661.519019ms" apf_execution_time="661.521015ms" resp=200

workflow/cron/controller:

__source__=log_service __time__=1698635529 __topic__=k8s_api_server _container_name_=kube-apiserver _source_=stderr _time_=2023-10-30T03:12:08.448808764Z content=I1030 11:12:08.448734       1 httplog.go:132] "HTTP" verb="WATCH" URI="/apis/argoproj.io/v1alpha1/namespaces/canal-flow/workflows?allowWatchBookmarks=true&labelSelector=%21workflows.argoproj.io%2Fcontroller-instanceid%2Cworkflows.argoproj.io%2Fcron-workflow&resourceVersion=6422307129&timeoutSeconds=498&watch=true" latency="8m18.001101775s" userAgent="workflow-controller/v0.0.0 (linux/amd64) kubernetes/$Format/argo-workflows/v3.4.11 argo-controller" audit-ID="a93bff01-e817-461f-a13b-eba47e2b72ef" srcIP="172.22.160.38:50343" resp=200 contentType="application/json" apf_pl="workload-low" apf_fs="service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_init_latency="356.215µs" apf_execution_time="357.388µs" resp=200
__source__=log_service __time__=1698635031 __topic__=k8s_api_server _container_name_=kube-apiserver _source_=stderr _time_=2023-10-30T03:03:50.446786285Z content=I1030 11:03:50.446711       1 httplog.go:132] "HTTP" verb="WATCH" URI="/apis/argoproj.io/v1alpha1/namespaces/canal-flow/workflows?allowWatchBookmarks=true&labelSelector=%21workflows.argoproj.io%2Fcontroller-instanceid%2Cworkflows.argoproj.io%2Fcron-workflow&resourceVersion=6422260990&timeoutSeconds=543&watch=true" latency="9m3.000783408s" userAgent="workflow-controller/v0.0.0 (linux/amd64) kubernetes/$Format/argo-workflows/v3.4.11 argo-controller" audit-ID="a726b820-8da9-4e4b-be97-79d1ad6f395d" srcIP="172.22.160.38:54040" resp=200 contentType="application/json" apf_pl="workload-low" apf_fs="service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_init_latency="350.437µs" apf_execution_time="351.577µs" resp=200

This behavior appears to have been introduced in PR #11343 (v3.4.9) where tweakListOptions is utilized for both list and watch requests. To validate this, we reverted to v3.4.8 and observed that the watch requests now include the resourceVersion parameter:

__source__=log_service __time__=1699011092 __topic__=k8s_api_server _container_name_=kube-apiserver _source_=stderr _time_=2023-11-03T11:31:31.353724739Z content=I1103 19:31:31.353665       1 httplog.go:132] "HTTP" verb="WATCH" URI="/apis/argoproj.io/v1alpha1/namespaces/canal-flow/workflows?allowWatchBookmarks=true&labelSelector=%21workflows.argoproj.io%2Fcontroller-instanceid&resourceVersion=6457248239&timeoutSeconds=533&watch=true" latency="8m53.000581908s" userAgent="workflow-controller/v0.0.0 (linux/amd64) kubernetes/$Format/argo-workflows/v3.4.8 argo-controller" audit-ID="e6506332-99e2-4401-b28b-f16b48ddccfa" srcIP="172.22.160.38:31549" resp=200 contentType="application/json" apf_pl="workload-low" apf_fs="service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_init_latency="303.127µs" apf_execution_time="304.331µs" resp=200

We are currently monitoring to see if the count of succeeded workflows continues to rise over time.
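For context, here is a minimal client-go sketch (an illustrative reconstruction, not Argo’s actual informer code) of how a tweak hook applied to both list and watch requests can interfere with the reflector’s resourceVersion bookkeeping. The reflector fills in options.ResourceVersion from the previous list before starting each watch; if a hook rebuilds or clears those options, the watch URL ends up without a resourceVersion parameter, as in the logs above, and the apiserver starts the watch from “now”, silently dropping intermediate events such as deletions.

package example

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/tools/cache"

    wfclientset "github.com/argoproj/argo-workflows/v3/pkg/client/clientset/versioned"
)

// newWorkflowListWatch is a hypothetical helper (names and structure are illustrative
// only) showing how a tweak hook applied to both list and watch requests can defeat
// the reflector's resourceVersion handling.
func newWorkflowListWatch(client wfclientset.Interface, namespace string, tweak func(*metav1.ListOptions)) *cache.ListWatch {
    return &cache.ListWatch{
        ListFunc: func(options metav1.ListOptions) (runtime.Object, error) {
            tweak(&options) // adding a label selector here is harmless
            return client.ArgoprojV1alpha1().Workflows(namespace).List(context.Background(), options)
        },
        WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
            // The reflector sets options.ResourceVersion to the version returned by the
            // previous list before calling this function. If the tweak hook overwrites or
            // clears it, the watch request carries no resourceVersion parameter (as in the
            // kube-apiserver logs above) and starts from the latest state, so events emitted
            // in between, such as workflow deletions, are never delivered to the informer.
            tweak(&options)
            return client.ArgoprojV1alpha1().Workflows(namespace).Watch(context.Background(), options)
        },
    }
}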

Yea GC/TTL sounds like what you’d want to tune for deleting archived workflows from your cluster. There are more detailed docs in the Operator Guide, such as for GC: https://argoproj.github.io/argo-workflows/cost-optimisation/#limit-the-total-number-of-workflows-and-pods and other scalability tuneables: https://argoproj.github.io/argo-workflows/scaling/.

I’m also working on horizontally scaling the Controller in #9990
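To make the GC/TTL suggestion above concrete, here is a hedged sketch using Argo’s Go API types; the durations are placeholders, not values from this thread. The same settings correspond to the ttlStrategy and podGC fields of a Workflow spec in YAML (or to cluster-wide defaults via workflowDefaults in the workflow-controller ConfigMap), which is how most users would set them.

package example

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/utils/pointer"

    wfv1 "github.com/argoproj/argo-workflows/v3/pkg/apis/workflow/v1alpha1"
)

// exampleGCSettings shows where workflow TTL and pod GC live on a Workflow spec so
// that completed Workflow objects and their pods are removed from the cluster while
// the workflow archive keeps the history. Durations are placeholders.
func exampleGCSettings() wfv1.Workflow {
    return wfv1.Workflow{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "example-"},
        Spec: wfv1.WorkflowSpec{
            // Delete the Workflow resource shortly after it finishes; the archived copy
            // remains queryable through the workflow archive.
            TTLStrategy: &wfv1.TTLStrategy{
                SecondsAfterCompletion: pointer.Int32(300),
                SecondsAfterFailure:    pointer.Int32(3600),
            },
            // Delete pods as soon as they complete to keep etcd and the informer cache
            // small. "OnPodCompletion" is the documented strategy name.
            PodGC: &wfv1.PodGC{Strategy: "OnPodCompletion"},
        },
    }
}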

Adding the following settings to the controller config fixed the cleanup of the already-archived workflows and the RAM consumption:

      extraArgs:
        # more workers for TTL-based deletion of completed Workflow resources
        - '--workflow-ttl-workers'
        - '32'
        # more workers for deleting completed pods
        - '--pod-cleanup-workers'
        - '32'
      extraEnv: # https://github.com/argoproj/argo-workflows/blob/master/docs/environment-variables.md
        # requeue workflows less often to reduce load on the controller and the k8s API
        - name: DEFAULT_REQUEUE_TIME
          value: "1m"

Anyway, the controller is still restarting repeatedly. I will update k8s to 1.27.3 and give you feedback on whether that fixes this issue as well.

I was curious whether this note also appears in v3.4.13, so I tried it and found that in v3.4.13 the Argo UI does display the v3.5 release note.

Thanks for pointing this out! It looks like the v3.5 upgrade note got accidentally cherry-picked into 3.4.12+: https://github.com/argoproj/argo-workflows/commit/46297cad798cb48627baf23d548a2c7e595ed316 @terrytangyuan not sure how you want to handle that since the history and 2 versions are now affected.

Thank you! We can track the node missing issue separately as I don’t think that’s related to this issue. https://github.com/argoproj/argo-workflows/issues/12165

Hi @terrytangyuan, here are some updates: I’ve confirmed that both argo-server and workflow-controller are using the same image version, dev-fix-informer-3.4.13. I was curious whether this note also appears in v3.4.13, so I tried it and found that in v3.4.13 the Argo UI does display the v3.5 release note.

In addition to this weird behavior, we noticed another UI bug in this new version. Some nodes of the workflows are missing in the Argo UI graph view, but the workflows actually appear to be running correctly and we can see the pods in the timeline view. I guess this issue was introduced recently, since we didn’t see it when using v3.4.11.

Sorry, but given the above concern, which will impact our service users, I’m afraid I cannot arrange the version upgrade to our production environment at this moment to monitor whether it fixes the succeeded-workflow accumulation issue. But with build dev-fix-informer-3.4.13 in our dev env, I did see that the k8s API query for the workflow watch now has resourceVersion added.

// watch workflows with resourceVersion=5849977253, with userAgent "... argo-workflows/untagged ..." which indicates the request came from your untagged dev build
__source__=log_service __time__=1699331612 __topic__=k8s_api_server _container_name_=kube-apiserver _source_=stderr _time_=2023-11-07T04:33:31.930048025Z content=I1107 12:33:31.929973       1 httplog.go:132] "HTTP" verb="WATCH" URI="/apis/argoproj.io/v1alpha1/namespaces/canal-flow/workflows?allowWatchBookmarks=true&labelSelector=%21workflows.argoproj.io%2Fcontroller-instanceid&resourceVersion=5849977253&timeoutSeconds=471&watch=true" latency="7m51.000562646s" userAgent="workflow-controller/v0.0.0 (linux/amd64) kubernetes/$Format/argo-workflows/untagged argo-controller" audit-ID="8dda91a9-8d89-46b6-929f-65974b1131e3" srcIP="172.25.16.27:38487" resp=200 contentType="application/json" apf_pl="workload-low" apf_fs="service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_init_latency="324.407µs" apf_execution_time="325.738µs" resp=200
// watch workflows with resourceVersion=5849988603
__source__=log_service __time__=1699332150 __topic__=k8s_api_server _container_name_=kube-apiserver _source_=stderr _time_=2023-11-07T04:42:29.931940422Z content=I1107 12:42:29.931865       1 httplog.go:132] "HTTP" verb="WATCH" URI="/apis/argoproj.io/v1alpha1/namespaces/canal-flow/workflows?allowWatchBookmarks=true&labelSelector=%21workflows.argoproj.io%2Fcontroller-instanceid&resourceVersion=5849988603&timeoutSeconds=538&watch=true" latency="8m58.000866255s" userAgent="workflow-controller/v0.0.0 (linux/amd64) kubernetes/$Format/argo-workflows/untagged argo-controller" audit-ID="6151f625-c8f7-449c-8d2e-4dd6539236ba" srcIP="172.25.16.27:21991" resp=200 contentType="application/json" apf_pl="workload-low" apf_fs="service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_init_latency="417.471µs" apf_execution_time="419.107µs" resp=200

@carolkao I guess you forgot to update the image for argo-server. Here’s the only difference from v3.4.13 https://github.com/argoproj/argo-workflows/compare/v3.4.13...dev-fix-informer-3.4.13

@carolkao Good catch. It looks like not all changes were included when cherry-picking. I am rebuilding dev-fix-informer-3.4.13. Once all builds finish, a new image with the tag dev-fix-informer-3.4.13 should be available.

This issue becomes worse when the number of workflow resources keeps increasing while archiving can’t keep up. At a certain point, the k8s API is overwhelmed and calling get workflows.argoproj.io is very slow. When the workflow controller restarts, it fetches all workflow resources. If the k8s API is overwhelmed like this, the controller fails:

"2023-10-30T07:38:43.298Z" level=fatal msg="the server was unable to return a response in the time allotted, but may still be processing the request (get workflows.argoproj.io)"

Because the request to the k8s API times out, the workflow controller crashes and tries to restart. The same problem occurs again and the workflow controller ends up in a restart loop. The only way to recover from this is to manually delete workflow resources until the k8s API response times for get workflows.argoproj.io decrease.
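For readers wondering why a slow LIST turns into a crash loop: controllers built on client-go must complete a full list to populate their informer caches before reconciling, and the usual startup pattern aborts the process when that initial sync cannot finish. The sketch below is a generic client-go illustration of that pattern (not Argo’s exact code); it is consistent with the fatal log above and the “Timed out waiting for caches to sync” messages reported later in this thread.

package example

import (
    "context"
    "time"

    log "github.com/sirupsen/logrus"
    "k8s.io/client-go/tools/cache"
)

// waitForInformers is a generic sketch of the startup pattern used by controllers
// built on client-go informers. The initial cache sync needs a successful full LIST
// of the watched resource; if the apiserver answers too slowly (for example because
// a huge number of Workflow objects is still stored in etcd), the sync times out,
// the process exits, and Kubernetes restarts it, producing the crash loop described
// above. The timeout value is illustrative.
func waitForInformers(ctx context.Context, informers ...cache.SharedIndexInformer) {
    ctx, cancel := context.WithTimeout(ctx, 2*time.Minute)
    defer cancel()

    synced := make([]cache.InformerSynced, 0, len(informers))
    for _, inf := range informers {
        go inf.Run(ctx.Done())
        synced = append(synced, inf.HasSynced)
    }

    if !cache.WaitForCacheSync(ctx.Done(), synced...) {
        log.Fatal("Timed out waiting for caches to sync")
    }
}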

@jkldrr

I observed a similar issue in our environment after upgrading from 3.3.6 to 3.4.11:

  • high memory usage, OOMKilled by k8s
  • the number of succeeded workflows increases over time
  • most succeeded workflows cannot be found via kubectl get workflow; these succeeded workflows are re-deleted every 20 minutes but only produce the message “Workflow already deleted” (20 minutes is the wfInformer’s resyncPeriod)
  • some workflows pending archiving block succeeded workflows from being deleted, and the workflow controller fails to catch up; in this case, kubectl does find these workflows
  • some workflows are archived multiple times
  • lots of client-side throttling logs like the following (see the sketch after this list):
Waited for ... due to client-side throttling, not priority and fairness, request: GET:https://.../apis/argoproj.io/v1alpha1/namespaces/argo/...
  • all workflow workers (32) are busy at peak time
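A note on the throttling message quoted in the list above: it comes from client-go’s client-side rate limiter, which is configured per rest.Config; the workflow-controller exposes these limits as its --qps and --burst flags. The sketch below (generic client-go code with placeholder values, not Argo’s own) shows where the numbers live and why exceeding them delays requests rather than failing them.

package example

import (
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/clientcmd"
)

// newThrottledConfig shows where client-side throttling is configured in client-go.
// When a client issues more requests than QPS allows (beyond the Burst bucket),
// calls are delayed and client-go logs lines like
// "Waited for ... due to client-side throttling, not priority and fairness".
// The values below are placeholders for raising the limits.
func newThrottledConfig(kubeconfig string) (*rest.Config, error) {
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        return nil, err
    }
    config.QPS = 50    // sustained requests per second allowed by the rate limiter
    config.Burst = 100 // short-term burst allowance above QPS
    return config, nil
}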

Based on the above observations, I have a hypothesis:

  • the workflow workers fail to keep up with incoming tasks due to QPS throttling
  • since the incoming tasks are queued, handling of workflow state is delayed
  • the delayed, stale workflow state is written back to wfInformer, causing succeeded-workflow leakage and redundant archiving
  • succeeded-workflow leakage and redundant archiving quickly make things worse by putting more load on the workflow controller

I made some changes and it looks normal now:

Hope these changes help in your case. Good luck.

---- Update ---- The above changes didn’t work well. After a few days, the succeeded workflows started increasing again and the workflow controller tries to delete them every 20 minutes.

Here is an example of the workflow controller repeatedly deleting an already-deleted workflow: workflow_controller_logs_delete_workflows.txt

We analyzed the issue further after applying your suggested changes. Now we see frequent restarts of the controller and found out that it is restarted due to failing health checks, which are caused by the same issue described here: Liveness probe fails with 500 and “workflow never reconciled”

Running out of CPU doesn’t seem to be the problem; it uses at most 20% of the CPU resources. We adjusted the workflow workers to match the cleanup workers (both are now at 32). That didn’t change the behaviour. Some new fatal logs appeared last night:

time="2023-10-11T03:47:02.429Z" level=fatal msg="Timed out waiting for caches to sync"
time="2023-10-11T02:58:13.413Z" level=fatal msg="Timed out waiting for caches to sync"
time="2023-10-11T02:46:01.699Z" level=fatal msg="the server was unable to return a response in the time allotted, but may still be processing the request (get workflows.argoproj.io)"

Do you have any idea how to debug this further? I’m running out of ideas…

It seems that, over the weekend, the workflow controller again stopped cleaning up the archived workflows… After it crashed and restarted, it was cleaning up the workflows again. We found nothing unusual at log levels error & fatal in the logs.

PS: I think it was OOM killed by K8s.

Does it crash after this error? Or was that just another error you found in the logs?

Yes, it seems to be crashing now after this log message.

I think the root cause could be GC not clearing some data. Maybe this will help you: https://argoproj.github.io/argo-workflows/running-at-massive-scale/

In my use case, we are trying to migrate from Azkaban to Argo Workflows. Before that, we ran concurrent workflows just like the Pipekit article describes, and I think it managed the resources pretty well (using GKE 1.24).

But yeah, you can try to run a test scenario to load test it first and track where the issue comes from.

What kind of workflows do you run? Did you load test it, maybe with 5000 simple concurrent cowsay workflows?
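For anyone who wants to run that kind of load test, here is a rough sketch that submits many trivial cowsay Workflows through the Argo Go clientset (namespace, image, and count are placeholders, and error handling is kept minimal):

package example

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    wfv1 "github.com/argoproj/argo-workflows/v3/pkg/apis/workflow/v1alpha1"
    wfclientset "github.com/argoproj/argo-workflows/v3/pkg/client/clientset/versioned"
)

// submitCowsayLoad creates count trivial workflows so controller memory and API usage
// can be observed under load.
func submitCowsayLoad(ctx context.Context, client wfclientset.Interface, namespace string, count int) error {
    wf := wfv1.Workflow{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "load-test-"},
        Spec: wfv1.WorkflowSpec{
            Entrypoint: "cowsay",
            Templates: []wfv1.Template{{
                Name: "cowsay",
                Container: &corev1.Container{
                    Image:   "docker/whalesay:latest",
                    Command: []string{"cowsay", "hello"},
                },
            }},
        },
    }
    for i := 0; i < count; i++ {
        if _, err := client.ArgoprojV1alpha1().Workflows(namespace).Create(ctx, &wf, metav1.CreateOptions{}); err != nil {
            return fmt.Errorf("workflow %d: %w", i, err)
        }
    }
    return nil
}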

I think this may help you with that https://pipekit.io/blog/upgrade-to-kubernetes-127-take-advantage-of-performance-improvements