airflow: Can't read S3 remote logs when using gevent/eventlet webserver workers.
Hey everyone. I've upgraded to 1.10.9. It appears that the logging changes broke the functionality for reading S3 remote logs in the Web UI (writing is OK). The changelog mentions that Airflow's logging mechanism has been refactored to use Python's builtin logging module:
[AIRFLOW-1611] Customize logging
I followed the directions in the changelog and created the following log config:
import os
import six
from airflow import AirflowException
from airflow.configuration import conf
from airflow.utils.file import mkdirs
from typing import Dict, Any

LOG_LEVEL = conf.get('core', 'LOGGING_LEVEL').upper()
FAB_LOG_LEVEL = conf.get('core', 'FAB_LOGGING_LEVEL').upper()
LOG_FORMAT = conf.get('core', 'LOG_FORMAT')
COLORED_LOG_FORMAT = conf.get('core', 'COLORED_LOG_FORMAT')
COLORED_LOG = conf.getboolean('core', 'COLORED_CONSOLE_LOG')
COLORED_FORMATTER_CLASS = conf.get('core', 'COLORED_FORMATTER_CLASS')
BASE_LOG_FOLDER = conf.get('core', 'BASE_LOG_FOLDER')
PROCESSOR_LOG_FOLDER = conf.get('scheduler', 'CHILD_PROCESS_LOG_DIRECTORY')
DAG_PROCESSOR_MANAGER_LOG_LOCATION = conf.get('core', 'DAG_PROCESSOR_MANAGER_LOG_LOCATION')
FILENAME_TEMPLATE = conf.get('core', 'LOG_FILENAME_TEMPLATE')
PROCESSOR_FILENAME_TEMPLATE = conf.get('core', 'LOG_PROCESSOR_FILENAME_TEMPLATE')

FORMATTER_CLASS_KEY = '()' if six.PY2 else 'class'

#
# Getting this from environment because the changelog for 1.10.9 says to set
# the path of `REMOTE_BASE_LOG_FOLDER` explicitly in the config. The
# `REMOTE_BASE_LOG_FOLDER` key is not used anymore.
#
REMOTE_BASE_LOG_FOLDER = os.environ.get('AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER')

LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'airflow.task': {
            'format': LOG_FORMAT,
        },
        'airflow.processor': {
            'format': LOG_FORMAT,
        },
        'airflow_coloured': {
            'format': COLORED_LOG_FORMAT if COLORED_LOG else LOG_FORMAT,
            FORMATTER_CLASS_KEY: COLORED_FORMATTER_CLASS if COLORED_LOG else 'logging.Formatter'
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'airflow.task',
            'stream': 'ext://sys.stdout'
        },
        'file.task': {
            'class': 'airflow.utils.log.file_task_handler.FileTaskHandler',
            'formatter': 'airflow.task',
            'base_log_folder': os.path.expanduser(BASE_LOG_FOLDER),
            'filename_template': FILENAME_TEMPLATE,
        },
        'file.processor': {
            'class': 'airflow.utils.log.file_processor_handler.FileProcessorHandler',
            'formatter': 'airflow.processor',
            'base_log_folder': os.path.expanduser(PROCESSOR_LOG_FOLDER),
            'filename_template': PROCESSOR_FILENAME_TEMPLATE,
        },
        's3.task': {
            'class': 'airflow.utils.log.s3_task_handler.S3TaskHandler',
            'formatter': 'airflow.task',
            'base_log_folder': os.path.expanduser(BASE_LOG_FOLDER),
            's3_log_folder': REMOTE_BASE_LOG_FOLDER,
            'filename_template': FILENAME_TEMPLATE,
        },
    },
    'loggers': {
        '': {
            'handlers': ['console'],
            'level': LOG_LEVEL
        },
        'airflow': {
            'handlers': ['console'],
            'level': LOG_LEVEL,
            'propagate': False,
        },
        'airflow.processor': {
            'handlers': ['file.processor'],
            'level': LOG_LEVEL,
            'propagate': True,
        },
        'airflow.task': {
            'handlers': ['s3.task'],
            'level': LOG_LEVEL,
            'propagate': False,
        },
        'airflow.task_runner': {
            'handlers': ['s3.task'],
            'level': LOG_LEVEL,
            'propagate': True,
        },
    }
}  # type: Dict[str, Any]
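For completeness, a dict like this only takes effect when logging_config_class points at it. The module name below is just an illustrative assumption; use whatever importable module actually holds the dict:

```
[core]
# assumes the LOGGING_CONFIG dict above lives in a module named log_config on the PYTHONPATH
logging_config_class = log_config.LOGGING_CONFIG
```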
However, the task log reader always defaults to the FileTaskHandler. This should not happen, because I have the following settings in airflow.cfg:
remote_logging = True
remote_base_log_folder = s3://my-bucket-name
remote_log_conn_id = aws_default
task_log_reader = s3.task
The s3.task handler passed to the task_log_reader setting should create an instance of the S3TaskHandler class to read the task logs from S3. This happens when the get_logs_with_metadata view in www/views.py is rendered.
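A quick way to check which handler the reader actually resolves (a minimal sketch to run inside the webserver environment; with the config above the list should contain S3TaskHandler):

```python
# Print the handlers attached to the "airflow.task" logger; the
# task_log_reader setting is matched against these handlers by name.
import logging

task_logger = logging.getLogger("airflow.task")
print([(h.name, type(h).__name__) for h in task_logger.handlers])
```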
Apache Airflow version: 1.10.9
Kubernetes version: 1.15
Environment:
- Cloud provider or hardware configuration: AWS
- OS (e.g. from /etc/os-release): python:3.7.6-buster docker image
What happened: Logs did not appear in the Airflow Web UI. The FileTaskHandler tries to fetch the log file locally or from the worker on port 8793. However, the logs do not exist in either location since we are using the Kubernetes Executor. This produces the following error messages:
*** Log file does not exist: /usr/local/airflow/logs/MY_DAG_NAME/MY_TASK_NAME/2020-04-07T20:59:19.312402+00:00/6.log
*** Fetching from: http://MY_DAG_NAME-0dde5ff5a786437cb14234:8793/log/MY_DAG_NAME/MY_TASK_NAME/2020-04-07T20:59:19.312402+00:00/6.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='MY_DAG_NAME-0dde5ff5a786437cb14234', port=8793): Max retries exceeded with url: /log/MY_DAG_NAME/MY_TASK_NAME/2020-04-07T20:59:19.312402+00:00/6.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f708332fc90>: Failed to establish a new connection: [Errno -2] Name or service not known'))
What you expected to happen:
The logs should be rendered in the Web UI using the S3TaskHandler class.
Commits related to this issue
- Make sure pach_all for gevent worker happens before everything else This change makes sure that gevent performs monkey_patching before everything else. Gevent should make sure to patch all the classe... — committed to potiuk/airflow by potiuk 2 years ago
- Trigger gevent monkeypatching via environment variable Gevent needs to monkeypatch a number of system libraries as soon as possible when Python interpreter starts, in order to avoid other libraries m... — committed to potiuk/airflow by potiuk 2 years ago
- Trigger gevent monkeypatching via environment variable (#28283) Gevent needs to monkeypatch a number of system libraries as soon as possible when Python interpreter starts, in order to avoid other ... — committed to apache/airflow by potiuk 2 years ago
- Trigger gevent monkeypatching via environment variable (#28283) Gevent needs to monkeypatch a number of system libraries as soon as possible when Python interpreter starts, in order to avoid other li... — committed to apache/airflow by potiuk 2 years ago
Previously I had to manually patch botocore to fix this, by removing its bypassing of the patched SSLContext, but that broke in Airflow 1.10.12.
I ran into IOErrors on the scheduler when using monkey.patch_all(), similar to this, and the suggested fix monkey.patch_all(thread=False, socket=False) caused warnings on threading. In the end I figured the issue was only with monkey-patching SSL, as somehow it is imported earlier than gunicorn expects, as seen from the warning in the webserver logs.
In the end, adding this to the top of airflow_local_settings.py worked for me.

If anyone stumbles on this issue, I wanted to mention that as of 2.1.3 the issue is still present, and the solution that still works is prepending the same snippet to the very top of airflow/config_templates/airflow_local_settings.py (or your respective actual file if you have made one) of the Airflow Webserver component. Make sure not to touch any other Airflow components in your cluster, like the Airflow Worker or Airflow Scheduler (so if you have a single-node setup, this may not work for you), as those do not behave correctly with the SSL monkeypatches.
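The snippet itself was not quoted above; a minimal sketch of the kind of lines being prepended, assuming gevent's standard monkey-patching API:

```python
# Sketch of the prepended lines (the exact snippet was not quoted above):
# make gevent patch the ssl module before anything else imports it.
from gevent import monkey

monkey.patch_ssl()
```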
Unfortunately, the hack mentioned above worked in 2.1.4 but no longer works on 2.2.0. Some code refactoring has affected the loading flow of the Airflow Webserver, so we're no longer patching SSL early enough (and monkeypatching exceptions are popping up). I'm still digging through the code to identify where the hack should be plugged in now.

UPD a few hours later: now you need to add it to the very top of airflow/settings.py instead of airflow_local_settings.py.
The issue seems to be infinite recursion due to an interaction between gevent’s monkey patching and the botocore library used by S3TaskHandler:
I still need to dive deeper into this
@potiuk will test this out
I faced a similar issue when I tried to get a secret from a secrets backend (HashiCorp Vault) in webserver_config.py. So, based on the answers from @dimon222, I had to patch airflow/settings.py during my custom Docker image build; below is the proof of concept. And it works fine! I hope this patch will be included in the official Airflow code.
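The proof-of-concept code itself did not come through above; a rough sketch of the idea, run during the image build, with the patch text assumed to be the gevent SSL monkey-patching discussed earlier:

```python
# Rough sketch (not the original PoC): prepend gevent SSL monkey-patching
# to the installed airflow/settings.py at image build time.
import airflow.settings

SETTINGS_PATH = airflow.settings.__file__
PATCH = "from gevent import monkey\nmonkey.patch_ssl()\n"

with open(SETTINGS_PATH, "r+") as f:
    original = f.read()
    if not original.startswith("from gevent import monkey"):
        f.seek(0)
        f.write(PATCH + original)
```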
The solution I mentioned above no longer works in 1.10.12. Perhaps something gets loaded earlier again and messes up urllib3. The issue is still present.
How can I read logs from worker pods? This is important since we need to see logs in real time to see what's happening. S3 logs are available only once a task completes. I am currently getting the error below:
Can confirm that using a threaded gunicorn worker (gevent in our case) breaks the web component's ability to show task logs that are in S3. Moving back to sync works for us, and since we're running on Kubernetes, a few more web pods to absorb the ALB/ELB health checks isn't (so far) a performance concern. Would love to see this fixed, however.
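For anyone who wants the same workaround: the webserver worker class is the worker_class option under [webserver] (or the matching AIRFLOW__WEBSERVER__WORKER_CLASS environment variable), so switching back looks like this:

```
[webserver]
worker_class = sync
```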
Using the Airflow default aws_default connection for S3. The relevant configuration shared among the web, scheduler, and worker pods is below the fold.