kafka-python: kafka.common.KafkaTimeoutError: ('Failed to update metadata after %s secs.', 60.0)

kafka version: 0.8.2.0-1.kafka1.3.2.p0.15 (cloudera released)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/archimonde/lib/python2.6/site-packages/kafka/producer/kafka.py", line 357, in send
    self._wait_on_metadata(topic, self.config['max_block_ms'] / 1000.0)
  File "/opt/archimonde/lib/python2.6/site-packages/kafka/producer/kafka.py", line 465, in _wait_on_metadata
    "Failed to update metadata after %s secs.", max_wait)
kafka.common.KafkaTimeoutError: ('Failed to update metadata after %s secs.', 60.0)

But it’s ok on 2.0.0-1.kafka2.0.0.p0.12.

About this issue

  • State: closed
  • Created 8 years ago
  • Comments: 51 (3 by maintainers)

Most upvoted comments

Also wondering about a fix. I see this error when running Kafka via Docker, but not when installing from the binary.

EDIT: I'm wondering what the root cause may be, to help me pinpoint differences between environments.

I got this message when trying to publish to a topic that doesn’t exist.

Another cause for this issue is using a bytes object as the topic name instead of a string. In Python 3, the topic name can arrive as a bytes object (e.g. a = b'test_string'). If that happens, just decode the topic name to a UTF-8 string and it may start working. It worked for me.

from kafka import KafkaProducer

# Topic names must be str, not bytes; decode before sending.
if isinstance(kafka_topic, bytes):
    kafka_topic = kafka_topic.decode('utf-8')
kafka_producer = KafkaProducer(bootstrap_servers=kafka_brokers)
kafka_producer.send(kafka_topic, payload)
kafka_producer.flush()

What's the fix for this issue, guys?

I’ve also noticed that using kafka-python in the interpreter works just fine, with the exact same bootstrap_servers and everything else. The bug only surfaces when executed as part of a Python program.

I had the same problem. To solve it, I increased request_timeout_ms (I had been using 10 ms before).
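
A minimal sketch of that change (the broker address is a placeholder; max_block_ms is the producer setting that bounds the metadata wait shown in the traceback above):

from kafka import KafkaProducer

# request_timeout_ms defaults to 30000 ms in kafka-python; 10 ms is far too low.
# max_block_ms (default 60000 ms) is what produces the 60.0 s in this error.
producer = KafkaProducer(
    bootstrap_servers='localhost:9092',  # placeholder
    request_timeout_ms=30000,
    max_block_ms=60000,
)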

Steps to reproduce the issue (a minimal stand-in for the producer call is sketched below):

  1. Clone the Kafka Docker repo: git clone https://github.com/wurstmeister/kafka-docker
  2. Clone the kafka-python repo: git clone https://github.com/dpkp/kafka-python
  3. Start Kafka: cd kafka-docker; docker-compose up -d
  4. Find the Kafka broker port: docker ps
  5. Set the correct broker port in example.py: cd kafka-python; vi example.py (lines 17 and 35)
  6. Run the example: python example.py
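
The producer side reduces to something like this (a sketch; the hostname, port, and topic are placeholders, with the port being whatever docker ps reports in step 4):

from kafka import KafkaProducer

# Placeholder bootstrap address: substitute the host port that `docker ps`
# shows for the broker container.
producer = KafkaProducer(bootstrap_servers='localhost:9092')
producer.send('test-topic', b'hello')  # raises KafkaTimeoutError if metadata never arrives
producer.flush()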

You will also see this if you disallow topic creation on publish on the broker (auto.create.topics.enable=false) and then try to produce to a topic that hasn’t been created yet.
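
In that case, one way around it is to create the topic explicitly first. A sketch, assuming kafka-python 1.4.4 or newer (which added KafkaAdminClient) and a placeholder broker address:

from kafka.admin import KafkaAdminClient, NewTopic

# Create the topic up front when auto.create.topics.enable=false on the broker.
admin = KafkaAdminClient(bootstrap_servers='localhost:9092')  # placeholder
admin.create_topics([NewTopic(name='my-topic', num_partitions=1, replication_factor=1)])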

To enable Python logging in its simplest form:

import logging
logging.basicConfig(level=logging.DEBUG)

I solved the problem by setting KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1 in docker-compose.yml for wurstmeister/kafka image.

I solved this problem after starting ZooKeeper on my macOS:

brew services start zookeeper
brew services start kafka

By the way, you can enable debug logging for more information:

import logging
logging.basicConfig(level=logging.DEBUG)

Because the producer complained like this: DEBUG:kafka.client:Give up sending metadata request since no node is available

I am re-opening; I also saw the same issue in production with kafka-python 1.3.5 and Kafka broker 0.10.0.1. The producer was a VM and the broker was bare metal; no Docker here.

The error showed up after rolling the Kafka cluster, and the KafkaProducer instance never recovered; it just continually threw this error once a minute. The brokers were rolled by stopping and restarting the broker processes; the underlying machines did not change and DNS was not affected.

I watched the TCP stream and the producer never even tried to talk to Kafka, so it wasn't timing out on the cluster side.

When I restarted the service, it immediately started working.

So what is the fix here?

Resolved: you must set all brokers in bootstrap_servers.
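
A sketch with placeholder hostnames: listing every broker lets the client fetch metadata even when the single broker you happened to list is down.

from kafka import KafkaProducer

# Placeholder broker hostnames: list all brokers so one unreachable node
# does not block the initial metadata fetch.
producer = KafkaProducer(
    bootstrap_servers=['broker1:9092', 'broker2:9092', 'broker3:9092'],
)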

If useful for anyone: I started seeing this error after auto.create.topics.enable was set to false. Creating the topic manually resolved the issue.

I had this issue when porting code from Python 2 to 3.

Solved by changing

kafkaValue = json.dumps(someDict, separators=(',', ':'))
producer.send(
    topic=bytes(topicNameAsString),  # works in Python 2; TypeError in Python 3
    key=bytes(someString),
    value=kafkaValue)

to

kafkaValue = json.dumps(someDict, separators=(',', ':'))
producer.send(
    topic=topicNameAsString, # <-- just string
    key=bytes(someString, encoding='utf8'), # <-- encoding='utf8'
    value=bytes(kafkaValue, encoding='utf8')) # <-- bytes
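
An alternative sketch using the producer's built-in key_serializer/value_serializer parameters, which keeps the send() calls free of manual encoding (the broker address is a placeholder; the variable names are the same ones from the snippet above):

import json
from kafka import KafkaProducer

# The serializers convert keys and values to bytes on every send().
producer = KafkaProducer(
    bootstrap_servers='localhost:9092',  # placeholder
    key_serializer=lambda k: k.encode('utf-8'),
    value_serializer=lambda v: json.dumps(v, separators=(',', ':')).encode('utf-8'),
)
producer.send(topic=topicNameAsString, key=someString, value=someDict)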

The 1.4.5 release includes some commits that may fix some unconfirmed edge cases related to this. So if you’re following this ticket, please retry with the latest release and let us know if you’re still seeing this.

For others in this thread: this error means the client could not connect to a node, most likely because the node is not reachable. Please make sure the advertised.listeners config is valid. For Kubernetes users, please make sure you advertise the same DNS name that you bootstrap from.
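
A quick way to check that the advertised address is reachable from where the client runs (a sketch; host and port are placeholders):

import socket

# The hostname the broker advertises must resolve and accept connections
# from the client's network, not just from inside the broker's own network.
sock = socket.create_connection(('my-kafka.example.com', 9092), timeout=5)  # placeholders
sock.close()
print('advertised listener is reachable')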

@archiechen I am getting the same error too. What did you do?

I used docker-compose, in which the Kafka advertised listeners are: KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092

Then I changed the port in the Python code to 29092 and that solved the problem.
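
For that listener setup, which address to use depends on where the client runs (a sketch; the names come from the compose config above):

from kafka import KafkaProducer

# Inside the compose network, use the internal listener; from the host,
# use the PLAINTEXT_HOST listener ('localhost:9092') instead.
producer = KafkaProducer(bootstrap_servers='broker:29092')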

Ran into this issue in a Kubernetes environment (incubator/kafka Helm chart). The issue was that the number of replicas was less than the value of default.replication.factor.

If running the Kafka brokers inside a container, make sure they advertise hostnames that are accessible to the clients. If not specified, the broker will use the canonical hostname of the container, which may be an internal one that cannot be reached from outside the container.

You can set the advertised hosts with advertised.listeners in the server properties, or, if you just want to test with an unmodified Kafka Docker image, you can override the default setting with:

bin/kafka-server-start.sh config/server.properties --override advertised.listeners=PLAINTEXT://<accessible-hostname>:9092

I am having the same issue while using minikube and https://github.com/Yolean/kubernetes-kafka (Kubernetes version 1.9.0, Kafka 1.0).