telegraf: Allow agent to start when input or output cannot be connected to

I have installed telegraf v1.4.4 via rpm and configured an input for kafka_consumer as follows:

## Read metrics from Kafka topic(s)
[[inputs.kafka_consumer]]
  brokers = ["localhost:9092"]
  topics = ["telegraf"]

It works well for gathering Kafka metrics. Unfortunately, when the Kafka broker is down unexpectedly, telegraf fails to restart. The telegraf log shows this:

$ service telegraf status
 
Redirecting to /bin/systemctl status telegraf.service
● telegraf.service - The plugin-driven server agent for reporting metrics into InfluxDB
   Loaded: loaded (/usr/lib/systemd/system/telegraf.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Fri 2018-01-26 20:18:16 CST; 45min ago
     Docs: https://github.com/influxdata/telegraf
  Process: 8954 ExecStart=/usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d $TELEGRAF_OPTS (code=exited, status=0/SUCCESS)
 Main PID: 8954 (code=exited, status=0/SUCCESS)

/var/log/telegraf/telegraf.log

2018-01-26T10:33:56Z I! Starting Telegraf v1.4.4
2018-01-26T10:33:56Z I! Loaded outputs: influxdb opentsdb prometheus_client
2018-01-26T10:33:56Z I! Loaded inputs: inputs.kernel inputs.system inputs.filestat inputs.mongodb inputs.kafka_consumer inputs.diskio inputs.swap inputs.disk inputs.docker inputs.internal inputs.kernel_vmstat inputs.cpu inputs.mem inputs.processes inputs.net inputs.zookeeper inputs.logparser
2018-01-26T10:33:56Z I! Tags enabled: host=127.0.0.1 user=telegraf
2018-01-26T10:33:56Z I! Agent Config: Interval:10s, Quiet:false, Flush Interval:10s 
2018-01-26T10:33:57Z E! Error when creating Kafka Consumer, brokers: [localhost:9092], topics: [telegraf]
2018-01-26T10:33:57Z E! Service for input inputs.kafka_consumer failed to start, exiting
kafka: client has run out of available brokers to talk to (Is your cluster reachable?)

Expected behavior:

Telegraf restarts successfully regardless of an internal error in any input.

Actual behavior:

Telegraf fails to restart because of an internal error in the kafka_consumer input plugin.

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 15
  • Comments: 23 (10 by maintainers)

Most upvoted comments

Hmm… +1

Same issue for missing InfluxDB connections as well.

Ideally, a flag to switch this on or off, with a configurable retry count (-1 for infinite), would be nice.
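A hypothetical sketch of what such an option could look like in the agent config; the setting name startup_connect_retries is made up purely for illustration and does not exist in Telegraf:

[agent]
  ## Hypothetical setting (not a real Telegraf option): keep the agent running
  ## when a plugin fails to connect at startup and retry the connection this
  ## many times; -1 would mean retry forever.
  # startup_connect_retries = -1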

@reimda new issue submitted for this particular situation: https://github.com/influxdata/telegraf/issues/9778

I personally think that this goes to show the value of this issue (albeit it's 4 years old). Playing whack-a-mole with each plugin to catch every possible failure (with each one being treated as its own totally separate issue) is not optimal. This example shows that even InfluxData developers trying to fix a specific plugin failing in a specific and trivial-to-reproduce case find this difficult to get right.

Telegraf could really benefit from an architectural change that prevents plugin A from blocking plugin B, regardless of missed exception handling deep in the third-party dependencies of plugin A, because at scale you really don't want your CPU metrics to stop because some other third-party system (of the huge number that Telegraf now has plugins for) started doing something odd. The alternative, I guess, is to run one Telegraf per plugin, but the overhead of that for us would be enormous.

In https://github.com/influxdata/telegraf/pull/12111 a new config option was added to the kafka plugins to allow for retrying connections on failure. This means that the plugin can start even if the connection is not successful.

While we will not add a global config option to let any and all plugins start on failure, we are more than happy to see plugin-by-plugin options to allow connection failures on start. If there is another plugin you are interested in seeing this for, please open a new issue (assuming one does not already exist) requesting something similar.
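For reference, a minimal kafka_consumer config using the option from #12111 might look like the sketch below; if I am reading that PR correctly the setting is called connection_strategy, but check the plugin README for the exact name and accepted values:

[[inputs.kafka_consumer]]
  brokers = ["localhost:9092"]
  topics = ["telegraf"]
  ## "defer" should let the plugin start even when the brokers are
  ## unreachable, instead of failing at startup as described in this issue.
  connection_strategy = "defer"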

As a result I am going to go ahead and close this issue due to #12111 landing. Thanks!

Plus one for this feature

This was by design; however, I think we should provide a config/CLI option that tells Telegraf to continue if any ServiceInput or Output does not successfully connect.

@daviesalex Thanks for your report. Could you open a new issue specific to the problems you’re having? Continuing to comment on this three year old issue isn’t a good way to track what you’re seeing and plan a fix. Please mention this issue and #9051 for context in the new issue you open.

Given your example I would expect the cpu data to appear on stdout. I would also expect the kafka output to retry 100 times since you have the max_retry set to 100. After a quick look at the code, I see that the setting is passed to sarama, the kafka library telegraf uses. What you’re seeing may be a problem with sarama. I’m not sure what retry values it allows.
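For anyone digging into this, a minimal sketch of how the plugin's max_retry presumably ends up in sarama's producer config (simplified, not the actual Telegraf code; whether those retries also cover the initial broker connection is up to the library):

package main

import (
	"fmt"

	"github.com/Shopify/sarama"
)

func main() {
	// Sketch: map the Telegraf kafka output settings onto sarama's config.
	cfg := sarama.NewConfig()
	cfg.Producer.Retry.Max = 100                    // telegraf: max_retry = 100
	cfg.Producer.RequiredAcks = sarama.WaitForLocal // telegraf: required_acks = 1
	cfg.Producer.Return.Errors = true

	fmt.Println("sarama producer retries:", cfg.Producer.Retry.Max)
}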

This will not be fixed in 1.20.0 GA. That release was scheduled for Sept 15, so it is already two days late, and we are currently working on getting it officially released. Since there is also no fix ready for this issue, it is unreasonable to ask for 1.20.0 GA to be held up for this.

1.20.1 is the absolute earliest you could expect a fix to be in an official release; 1.20.1 is scheduled for Oct 6. You'll be able to test an alpha build as soon as someone is able to debug your issue and provide a PR that passes CI tests.

There are two failures that we see in Telegraf 1.20-rc0 just in the Kafka plugin, despite https://github.com/influxdata/telegraf/pull/9051, which was supposed to fix this plugin:

1. If the Kafka backends are just down

Use this config to test:

[agent]
  interval = "1s"
  flush_interval = "1s"
  omit_hostname = true
  collection_jitter = "0s"
  flush_jitter = "0s"

[[outputs.kafka]]
  brokers = ["server1:9092","server2:9092","server3:9092"]
  topic = "xx"
  client_id = "telegraf-metrics-foo"
  version = "2.4.0"
  routing_tag = "host"
  required_acks = 1
  max_retry = 100
  sasl_mechanism = "SCRAM-SHA-256"
  sasl_username = "foo"
  sasl_password = "bar"
  exclude_topic_tag = true
  compression_codec = 4
  data_format = "msgpack"

[[inputs.cpu]]

[[outputs.file]]
  files = ["stdout"]

Make sure the client can't talk to server[1-3]; we did ip route add x via 127.0.0.1 to null-route it, but you could use a firewall or just point it at IPs that are not running Kafka.

What we expect:

  • Kafka output fails and tries to reconnect 100 times
  • I can still see the CPU input plugin sending data to stdout
  • Once Kafka manages to connect, I see the data there as well

What actually happens:

  • Kafka tries to connect a couple of times
  • CPU input plugin data is never passed to stdout
  • After kafka fails, Telegraf exits with an error

2. If the Kafka sasl_password is wrong and SASL auth is enabled

This is trivial to reproduce: just change the sasl_password in an otherwise-working config.

What we expect:

  • Kafka output fails and tries to reconnect X times
  • Everything else works fine

What actually happens:

  • Telegraf immediately fails to start with this error (process exits):
[root@x ~]# /usr/local/telegraf/bin/telegraf -config /etc/telegraf/telegraf.conf --config-directory /etc/telegraf/conf.d
2021-09-16T11:23:51Z I! Starting Telegraf build-50
...
2021-09-16T11:23:51Z E! [agent] Failed to connect to [outputs.kafka], retrying in 15s, error was 'kafka server: SASL Authentication failed.'

It would be really awesome to

  • Fix these two particular cases in Telegraf 1.20 before GA
  • Get some sort of generic change in so this can't happen (this game of whack-a-mole, where we see these failures in production, Telegraf stops working, and we wait for a release to fix each case, is not good)

After some more thought, maybe we should change outputs so that Connect does not return an error unless the plugin will never be able to start up. Outputs all need to have reconnection logic anyway; the idea of connecting once at startup is quite a large simplification, and the initial connection does not indicate that metrics will be sendable when the time comes. It is nice to catch misconfigured outputs that will never work, but I think the test shouldn't be run during normal startup.

Perhaps in the next version of the Output interface we should have both a Check() and a Start() function instead. Check would optionally connect or do some other network validation, but would only be run when specifically asked for. Start could be used to start needed background goroutines but would only error on fatal issues that cannot be recovered from.
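A rough Go sketch of what that split could look like; the names and signatures are only there to illustrate the proposal, not an agreed-upon Telegraf API:

package telegraf

// Metric stands in for telegraf.Metric to keep this sketch self-contained.
type Metric interface{}

// Output is roughly today's shape: Connect is called once at startup and an
// error there aborts the agent.
type Output interface {
	Connect() error
	Close() error
	Write(metrics []Metric) error
}

// StartableOutput sketches the proposed split: Check does optional network
// validation and is only run when specifically requested (e.g. a test mode),
// while Start only returns an error for unrecoverable problems such as an
// invalid configuration.
type StartableOutput interface {
	Check() error
	Start() error
	Close() error
	Write(metrics []Metric) error
}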

+1. Ran into a similar issue while testing with the Jolokia plugin when the Jolokia agent wasn't started before Telegraf.

A good retry period would likely be the (flush_)interval. In the case of output plugins, it would also be ideal if points were still buffered.
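For reference, the existing agent knobs closest to this (the flush interval and the metric buffer that holds points while an output is unreachable) look roughly like the sketch below; the values shown are the documented defaults as I remember them, so check the agent docs:

[agent]
  ## Outputs are flushed at this interval, which is also the natural retry
  ## period suggested above.
  flush_interval = "10s"
  ## Metrics that fail to be written are kept in a per-output buffer of up to
  ## this many points and retried on the next flush.
  metric_buffer_limit = 10000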