airflow: Airflow progressive slowness

Apache Airflow version

Other Airflow 2 version (please specify below)

What happened

We are running Airflow on EKS with version 2.5.3. Airflow has been experiencing progressive slowness over a period of 2-3 weeks, where DAGs start getting queued without ever executing, leading us to restart the scheduler pod. After the pod restart, the problem goes away for a few days and then slowly starts to creep back up.

The pods, the logs and the dashboards all look healthy, the UI shows that no tasks are currently running and that there are no worker pods alive, and the resource usage graphs (CPU, memory) look the way they should if no DAGs are actually executing.

During one such outage, we disabled all the DAGs and marked all the tasks as success just to see if the scheduler was able to spin up new worker pods. The scheduler never recovered, and we restarted the scheduler pod.

However, one dashboard shows metrics named Executor running tasks and Executor open slots, and we noticed that it accurately tracked the slowness. Over time, the number of open slots would decrease while the number of running tasks increased. Neither would reset, even during the long window every day between 10:00 PM and 8:00 AM when nothing is running.

These metrics are coming from base_executor:

        Stats.gauge("executor.open_slots", open_slots)
        Stats.gauge("executor.queued_tasks", num_queued_tasks)
        Stats.gauge("executor.running_tasks", num_running_tasks)

and num_running_tasks is defined as num_running_tasks = len(self.running) in base_executor.
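
For context, here is a minimal, runnable sketch of how these gauges relate to the running set (simplified from what base_executor does when parallelism is set; not the exact upstream code):

    # Simplified model of the gauges above: open_slots shrinks one-for-one
    # with every key that stays stuck in the running set.
    def gauge_values(parallelism: int, running: set, queued: dict) -> dict:
        return {
            "executor.open_slots": parallelism - len(running),
            "executor.queued_tasks": len(queued),
            "executor.running_tasks": len(running),
        }

    # With parallelism=10 and 7 stale keys left behind in the running set,
    # only 3 slots remain even though nothing is actually executing.
    print(gauge_values(10, running=set(range(7)), queued={}))
    # {'executor.open_slots': 3, 'executor.queued_tasks': 0, 'executor.running_tasks': 7}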

(Screenshot from 2023-07-28 at 3:11:30 PM)

So we enabled some debug logging in this method of KubernetesExecutor to see what was in self.running:

    def sync(self) -> None:
        """Synchronize task state."""
        # ...
        if self.running:
            self.log.debug("self.running: %s", self.running)  # <-- the log we added
        # ...
        self.kube_scheduler.sync()

where self.running is defined as self.running: set[TaskInstanceKey] = set(). The log showed that tasks that had already completed successfully in the past somehow still exist in self.running. For example, a snippet of the log output from the 28th is still holding on to tasks that completed successfully on the 24th and 27th:

**time: Jul 28, 2023 @ 15:07:01.784**
self.running: {TaskInstanceKey(dag_id='flight_history.py', task_id='load_file', run_id='manual__2023-07-24T01:06:18+00:00', try_number=1, map_index=17), TaskInstanceKey(dag_id='emd_load.py', task_id='processing.emd', run_id='scheduled__2023-07-25T07:30:00+00:00', try_number=1, map_index=-1),

We validated from the UI and from the Postgres DB (which we use as the metadata backend) that these tasks had completed without any issue.
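
For reference, this is roughly the kind of cross-check we ran against the metadata DB (a sketch using psycopg2; the connection string is a placeholder, and the key values are taken from the log snippet above):

    # Sketch: confirm that a key the executor still holds in self.running
    # is actually finished in the metadata DB.
    import psycopg2

    conn = psycopg2.connect("dbname=airflow user=airflow host=metadata-db")  # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT state, start_date, end_date
            FROM task_instance
            WHERE dag_id = %s AND task_id = %s AND run_id = %s AND map_index = %s
            """,
            ("flight_history.py", "load_file", "manual__2023-07-24T01:06:18+00:00", 17),
        )
        print(cur.fetchone())  # shows 'success' even though the key is still in self.running
    conn.close()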

Once the scheduler pod is restarted, the problem goes away, the metrics in the Grafana dashboard reset, and tasks start executing.

What you think should happen instead

Airflow’s scheduler keeps track of currently running tasks and their state in memory, and in some cases that state is not getting cleared. Tasks that have completed should eventually be removed from the running set in KubernetesExecutor once the worker pod exits.
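
For reference, the cleanup we would expect happens once the executor learns the pod has finished, roughly like this (a simplified sketch of base_executor's change_state, not the exact upstream code):

    # Simplified sketch of the expected cleanup path: once the executor is told
    # the task finished, the key should leave the running set and free a slot.
    def change_state(self, key, state, info=None):
        try:
            self.running.remove(key)  # the step that never happens for the stuck keys
        except KeyError:
            self.log.debug("Could not find key: %s", key)
        self.event_buffer[key] = state, info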

How to reproduce

Beats me. Our initial assumption was that this was a DAG implementation issue and that some particular DAG was misbehaving. But the problem has occurred with all sorts of DAGs, happens for both scheduled and manual runs, and is sporadic. There seems to be some edge scenario that causes this, but we are unable to nail it down any further.

Operating System

Debian GNU/Linux 11 (bullseye)

Versions of Apache Airflow Providers

aiofiles==23.1.0
aiohttp==3.8.4
airflow-dbt>=0.4.0
airflow-exporter==1.5.3
anytree==2.8.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-http>=2.0.3
apache-airflow-providers-microsoft-mssql==2.1.3
apache-airflow-providers-snowflake>=4.0.4
apache-airflow-providers-hashicorp==3.3.0
apache-airflow-providers-cncf-kubernetes==5.2.2
apache-airflow>=2.2.3
asgiref==3.5.0
Authlib==0.15.5
dbt-snowflake==1.5.2
flatdict==4.0.1
hvac==0.11.2
jsonschema>=4.17.3
pandas==1.3.5
psycopg2-binary==2.9.3
pyOpenSSL==23.1.1
pysftp==0.2.9
pysmbclient==0.1.5
python-gnupg==0.5.0
PyYAML~=5.4.1
requests~=2.26.0
smbprotocol==1.9.0
snowflake-connector-python==3.0.4
snowflake-sqlalchemy==1.4.7
statsd==3.3.0
py7zr==0.20.5

Deployment

Official Apache Airflow Helm Chart

Deployment details

Airflow is deployed via the Helm chart on EKS in AWS. There are two scheduler pods with AIRFLOW__CORE__PARALLELISM set to 10.

Anything else

N/A

Are you willing to submit PR?

  • Yes I am willing to submit a PR!

About this issue

  • State: closed
  • Created a year ago
  • Reactions: 2
  • Comments: 38 (15 by maintainers)

Most upvoted comments

Hi, everyone!

I think I found the root cause of the problem.

Short answer: The KubernetesExecutor._adopt_completed_pods function is not compatible with concurrently running schedulers.

Long answer: I encountered an issue described here after updating my Airflow instance from 2.4.3 to 2.7.3. After the update, the executor.open_slots metric started decreasing, as shown in the picture below.

(Screenshot from 2023-11-21 10:04:04: executor.open_slots decreasing over time)

After restarting the schedulers, the open_slots metric resets, but then it starts to decline again. After some investigation, I discovered two things:

  1. My KubernetesExecutor.running set has TaskInstances that were completed a while ago.
  2. These TaskInstances should not be in this scheduler because they were completed by my other scheduler.

After digging through the logs, I found that pods belonging to those task instances were adopted. The log entry “Attempting to adopt pod” (with a capital “A”) corresponds to this line in the code.

(Screenshot from 2023-11-20 17:40:21: scheduler logs with “Attempting to adopt pod” entries)

The first error in the function occurs here: even if the scheduler fails to adopt a pod, the pod is still added to the KubernetesExecutor.running set. This piece of code was changed in #28871 and merged into 2.5.2.

In my case, the pod failed to be adopted because it had already been deleted (error 404). I decided to fix it by simply adding continue in the except block. After the fix, the situation improved a lot, but I still saw some TaskInstances in the KubernetesExecutor.running set that didn’t belong to that scheduler.

Then I found the second problem with this function. KubernetesExecutor._adopt_completed_pods is called unconditionally in KubernetesExecutor.try_adopt_task_instances, which is called by SchedulerJobRunner. SchedulerJobRunner sends a list of TaskInstances that need to be adopted because their Job.last_healthcheck missed the timeout. KubernetesExecutor.try_adopt_task_instances iterates through that list and tries to adopt (patch one of the labels on) the pods that belong to these TaskInstances; if adoption is successful, it adds the TaskInstance to the KubernetesExecutor.running set. However, KubernetesExecutor._adopt_completed_pods, which is called during try_adopt_task_instances, does no such filtering - it just gets the list of all completed pods and tries to adopt every completed pod that is not bound to the current scheduler.

This results in the constant adoption of completed pods between schedulers: scheduler 1 adopts the completed pods of scheduler 2 and vice versa.
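
To illustrate the first fix, this is roughly the shape of the adoption loop with the continue added (a simplified sketch, not the exact upstream code; the helper for deriving the TaskInstanceKey is a placeholder):

    # Sketch of the patched adoption loop: a pod we failed to adopt must not
    # end up in the running set. Simplified; helper names are illustrative.
    from kubernetes.client.rest import ApiException

    def adopt_completed_pods(kube_client, scheduler_job_id, running, completed_pods, log):
        for pod in completed_pods:
            log.info("Attempting to adopt pod %s", pod.metadata.name)
            pod.metadata.labels["airflow-worker"] = scheduler_job_id
            try:
                kube_client.patch_namespaced_pod(
                    name=pod.metadata.name,
                    namespace=pod.metadata.namespace,
                    body=pod,
                )
            except ApiException as e:
                log.info("Failed to adopt pod %s. Reason: %s", pod.metadata.name, e)
                continue  # the fix: skip pods whose adoption failed (e.g. already deleted, 404)
            running.add(ti_key_for(pod))  # ti_key_for is a placeholder for key-from-annotations logic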

So, how can we fix this?

I think the _adopt_completed_pods function needs to be removed, because the presence of completed pods after a scheduler failure is pretty harmless, and the airflow cleanup-pods CLI command can take care of that. Plus, this function has already caused problems before (#26778). But I might be wrong, so if some maintainer can give advice on this situation, it would be great.

I pinged the #development channel of Airflow’s Slack, as we are now gearing up for the 2.8.0 release, which might be a good opportunity to have some maintainers take a close look. Thanks a lot for the detailed analysis - I think it might be super-helpful in quickly diagnosing and remedying that issue. https://apache-airflow.slack.com/archives/CCPRP7943/p1700574306533159

I am having the same issue with Airflow 2.6.3. The database and the Airflow UI show that no tasks are running, but the Kubernetes executor thinks there are 64 (parallelism=64) running tasks and skips over queuing/running any more tasks.

My initial thought is that, within kubernetes_executor.py, the process_watcher_task sometimes fails to add a finished pod to the result_queue. Because of this, _change_state is never called on the task instance, so it is never removed from the self.running set of running task instances, and in turn the “critical section” of queueing tasks is never attempted (a rough sketch of the expected flow is below).

I have posted more details about it here in the airflow slack: https://apache-airflow.slack.com/archives/CCQ7EGB1P/p1694033145572849
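
For anyone digging into the same area, this is my rough mental model of the path a finished pod is supposed to take (heavily simplified; not real Airflow code, just the shape of the flow as I understand it):

    # 1. The watcher sees the pod reach a terminal state and puts a result
    #    on the result_queue.
    # 2. sync() drains the queue and calls _change_state for each result.
    # 3. _change_state reports the state to the scheduler and removes the key
    #    from self.running, freeing a parallelism slot.
    # If step 1 silently never happens for a pod, the key stays in self.running
    # forever and the executor slowly runs out of open slots.
    def process_finished_pod(result_queue, running, event_buffer):
        key, state = result_queue.get()   # steps 1-2: a finished pod was reported
        event_buffer[key] = state         # step 3a: the scheduler will pick this up
        running.discard(key)              # step 3b: the slot is freed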

I have created a new issue #36335 for the Celery executor. Let’s keep this one for the Kubernetes executor.

P.S. Can you share the secret of how you managed to get a review from the maintainer so fast?

I can share a secret. Be kind, be considerate, but also … be persistent. The thing is that maintainers sometimes have multiple tens of PRs to look at DAILY, and - more often than not - they do it in their free time, or things like MySQL breaking all images released during the last 3 years force them to scramble to fix that… Or they have a new release coming and they want to make sure it’s good.

And to them, your PR might simply be one of those many PRs, where it is easy to forget that it is important.

But on the other hand, you have one PR - important, useful, and solving a real problem - and you are eager to get it merged. You definitely remember it and are a bit disappointed it has not been reviewed so far. And well, you are the driving force there if you understand the asymmetry.

So … keep on reminding us if you see no response after a few days. It’s far easier for you to notice that “nothing has happened yet to that one PR in Airflow I care about” than for maintainers to look through 170+ open PRs daily and decide where to spend the free time they have to spare on Airflow, and to pick just this one.

I think that’s the best recipe.

Worst case, the PR will be closed and another one will supersede it - and everyone will learn from the comments in the first one 😃

I prepared PR #35245 and started a proposal on the devlist to add a description of the process/approach we are using. See https://lists.apache.org/thread/05njmmqvwl0gn20f2go9d420xhzptrw2 - feel free to chime in there as well, in the PR or on the devlist @harshg0910

@llamageddon83 We have faced a similar issue in production, identified the root cause, and provided a fix in #36240. If you are looking for an immediate fix, you can try that patch. As a workaround, you can increase the parallelism to a large number (10K).

@dirrao This is cool, but I already submitted PR #35800 almost a month ago. I think we are fixing different problems. Your PR addresses the issue when adoption is performed on a live scheduler that just skipped the heartbeat. We encountered this problem, but we resolved it by simply increasing the scheduler_health_check_threshold. However, there is another problem in the adoption cycle. The “adoption” of completed pods is unconditional, so even if the scheduler didn’t skip the heartbeat, another scheduler will try to adopt “completed” pods from it. This results in a bloated running set. For more information, you can read our analysis of the situation here (https://github.com/apache/airflow/issues/32928#issuecomment-1820413530). Please feel free to check out our PR.

P.S. Can you share the secret of how you managed to get a review from the maintainer so fast?

@potiuk Thank you. Should I submit a PR with the _adopt_completed_pods removal or is it better to wait for the maintainers decision on how to fix this problem?

Submitting a PR is ALWAYS good … This way it might get people to the right code faster and their comments might be better - the closer to the code and the change you are, the better!

Hi all,

Thank you to everyone who provided input, and to the Airflow team for following up on this. While going through some of @potiuk’s old Stack Overflow posts answering an Airflow v1 question, we saw that he mentioned the AIRFLOW__SCHEDULER__NUM_RUNS configuration, which restarts the scheduler loop after the specified number of runs.

We have added this configuration to our Airflow platform and set it to `100000`, which works out to a restart roughly every 2 hours. We are still performing validations, since it usually takes 2-3 weeks for the slowness to creep up, but posting it here in case anyone else wants to give it a shot as well.

I’m also seeing this issue on our setup: Airflow 2.7.1, KubernetesExecutor, running 3 scheduler pods. Things start going downhill after about 5 days; restarting the schedulers gets things moving again.