fluent-bit: DNS resolution timeout/failure in 1.8.9

Bug Report

Describe the bug Hi, I am facing DNS resolution timeouts/failures using 1.8.9 with the forward module to Stackdriver.

To Reproduce

  • upgrade from 1.8.3 to 1.8.9; the following errors then appear:
[2021/10/29 18:43:37] [error] [input:emitter:fluent_log_emitted] error registering chunk with tag: st01.fluent
[2021/10/29 18:43:37] [error] [input:emitter:fluent_log_emitted] error registering chunk with tag: st01.fluent
[2021/10/29 18:43:37] [error] [input:emitter:fluent_log_emitted] error registering chunk with tag: st01.fluent
[2021/10/29 18:43:38] [ warn] [net] getaddrinfo(host='logging.googleapis.com', err=12): Timeout while contacting DNS servers
[2021/10/29 18:43:38] [ warn] [net] getaddrinfo(host='logging.googleapis.com', err=12): Timeout while contacting DNS servers
[2021/10/29 18:43:38] [ warn] [net] getaddrinfo(host='logging.googleapis.com', err=12): Timeout while contacting DNS servers
[2021/10/29 18:43:38] [ warn] [engine] chunk '1-1635532763.608049522.flb' cannot be retried: task_id=41, input=standard_log_emitted > output=stackdriver.1
[2021/10/29 18:43:41] [ warn] [input] emitter.8 paused (mem buf overlimit)
[2021/10/29 18:43:41] [error] [input:emitter:standard_log_emitted] error registering chunk with tag: st01.kubernetes.order-queue-processor-st01
[2021/10/29 18:43:41] [error] [input:emitter:standard_log_emitted] error registering chunk with tag: st01.kubernetes.order-queue-processor-st01
[2021/10/29 18:43:41] [error] [input:emitter:standard_log_emitted] error registering chunk with tag: st01.kubernetes.order-queue-processor-st01
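The "mem buf overlimit" pause and "error registering chunk" lines above suggest the rewrite_tag emitter's memory buffer filled up while the output was blocked on DNS. The fragment below is not from the original report; it is a hedged sketch using fluent-bit's documented rewrite_tag Emitter_Mem_Buf_Limit and output Retry_Limit options, with Match patterns and the emitter name assumed from the tags visible in the logs:

```
# Hypothetical fragment; emitter name and Match patterns are assumptions
# based on the "standard_log_emitted" / "st01.*" tags in the logs above.
[FILTER]
    Name                   rewrite_tag
    Match                  kube.*
    Emitter_Name           standard_log_emitted
    # Raise the emitter's buffer so it is not paused as quickly when the
    # output stalls (documented rewrite_tag option)
    Emitter_Mem_Buf_Limit  50M

[OUTPUT]
    Name         stackdriver
    Match        st01.*
    # Allow more retries before a chunk is discarded with
    # "cannot be retried" (documented output option)
    Retry_Limit  5
```

This only mitigates the buffering symptom; it does not address the underlying DNS failures.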

Your Environment

  • Version used: 1.8.9
  • Configuration: stackdriver output and tail input
  • Environment name and version (e.g. Kubernetes? What version?): K8S 1.19
  • Filters and plugins: stackdriver & tail
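Not part of the original report, but for anyone hitting this: fluent-bit exposes documented per-output DNS options (net.dns.mode, net.dns.resolver) that have been used to work around resolver regressions in the 1.8.x async DNS client. Whether they fix this specific issue is unconfirmed; a sketch:

```
[OUTPUT]
    Name              stackdriver
    Match             *
    # Perform DNS lookups over TCP instead of UDP
    # (documented net.dns.mode option)
    net.dns.mode      TCP
    # Fall back to the blocking system resolver instead of the
    # async one (documented net.dns.resolver option, in versions
    # that support it; values LEGACY / ASYNC)
    net.dns.resolver  LEGACY
```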

Additional context Some fluent-bit pods eventually emit errors such as "Resource temporarily unavailable" and give up:

[2021/10/29 18:43:58] [error] [input:emitter:standard_log_emitted] error registering chunk with tag: st01.kubernetes.order-queue-processor-st01
[2021/10/29 18:43:58] [error] [input:emitter:standard_log_emitted] error registering chunk with tag: st01.kubernetes.order-queue-processor-st01
[2021/10/29 18:43:58] [error] [input:emitter:standard_log_emitted] error registering chunk with tag: st01.kubernetes.order-queue-processor-st01
[2021/10/29 18:43:58] [error] [input:emitter:standard_log_emitted] error registering chunk with tag: st01.kubernetes.order-queue-processor-st01
[2021/10/29 18:43:58] [error] [input:emitter:standard_log_emitted] error registering chunk with tag: st01.kubernetes.order-queue-processor-st01
[2021/10/29 18:43:58] [error] [input:emitter:standard_log_emitted] error registering chunk with tag: st01.kubernetes.order-queue-processor-st01
[2021/10/29 18:43:58] [error] [input:emitter:standard_log_emitted] error registering chunk with tag: st01.kubernetes.order-queue-processor-st01
[2021/10/29 18:43:58] [error] [input:emitter:standard_log_emitted] error registering chunk with tag: st01.kubernetes.order-queue-processor-st01
[2021/10/29 18:43:58] [error] [src/flb_http_client.c:1172 errno=11] Resource temporarily unavailable
[2021/10/29 18:43:58] [ warn] [output:stackdriver:stackdriver.1] http_do=-1
[2021/10/29 18:43:58] [error] [src/flb_http_client.c:1172 errno=11] Resource temporarily unavailable
[2021/10/29 18:43:58] [ warn] [output:stackdriver:stackdriver.1] http_do=-1 

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Reactions: 7
  • Comments: 28 (14 by maintainers)

Most upvoted comments

@leonardo-albertovich I will bring this up the next time we have a community meeting or similar. IMO we should not have special hidden settings that only some maintainers understand and know about. If a setting needs a warning or some caveats attached to it, sure, that makes sense, but anything that exists should be documented, IMO.

@030 This issue can still be seen with the latest version. Pinging for some attention. Could you confirm whether the PR mentioned above fixes this issue, and when do you plan to release it?