kafka-python: kafka.common.KafkaTimeoutError: ('Failed to update metadata after %s secs.', 60.0)
kafka version: 0.8.2.0-1.kafka1.3.2.p0.15 (cloudera released)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/archimonde/lib/python2.6/site-packages/kafka/producer/kafka.py", line 357, in send
self._wait_on_metadata(topic, self.config['max_block_ms'] / 1000.0)
File "/opt/archimonde/lib/python2.6/site-packages/kafka/producer/kafka.py", line 465, in _wait_on_metadata
"Failed to update metadata after %s secs.", max_wait)
kafka.common.KafkaTimeoutError: ('Failed to update metadata after %s secs.', 60.0)
But it works fine on 2.0.0-1.kafka2.0.0.p0.12.
Also wondering about a fix. I see this error when running Kafka via Docker but not when installing from the binary.
EDIT: I'm wondering what the root cause may be, to help me pinpoint differences between the environments.
I got this message when trying to publish to a topic that doesn’t exist.
Another cause for this issue is using a bytes object as the topic name, instead of a string. In Python 3, the topic name can come in as a bytes object (a = b'test_string'). If this happens, you can just convert the topic name to a UTF-8 string and it might start working. It worked for me.
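A minimal sketch of that workaround, assuming kafka-python's KafkaProducer and a topic variable that may arrive as bytes (the broker address is a placeholder):

```python
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092')  # assumed broker address

topic = b'test_string'  # topic name that arrived as bytes (Python 3)
if isinstance(topic, bytes):
    topic = topic.decode('utf-8')  # send() expects the topic name as str

producer.send(topic, b'some message')
producer.flush()
```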
What's the fix for this issue, guys?
I’ve also noticed that using kafka-python in the interpreter works just fine, with the exact same bootstrap_servers and everything else. The bug only surfaces when executed as part of a Python program.
I had the same problem. To solve it, I increased request_timeout_ms (I was using 10 ms before).
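A sketch of bumping those timeouts on the producer; the values here are illustrative, and note that the metadata wait in the traceback above is governed by max_block_ms:

```python
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers='localhost:9092',  # assumed broker address
    request_timeout_ms=30000,            # per-request timeout (illustrative value)
    max_block_ms=120000,                 # how long send() waits for metadata before KafkaTimeoutError
)
```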
Steps to reproduce the issue:
git clone https://github.com/wurstmeister/kafka-docker
git clone https://github.com/dpkp/kafka-python
( cd kafka-docker; docker-compose up -d )
docker ps
cd kafka-python; vi example.py (lines 17 and 35)
python example.py

You will also see this if you disallow topic creation on publish on the broker (auto.create.topics.enable=false) and then try to produce to a topic that hasn't been created yet.
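A sketch of how that surfaces from kafka-python: with auto.create.topics.enable=false on the broker, metadata for a topic that was never created never arrives, and send() eventually raises KafkaTimeoutError. The topic name and broker address below are assumptions:

```python
from kafka import KafkaProducer
from kafka.errors import KafkaTimeoutError

producer = KafkaProducer(bootstrap_servers='localhost:9092', max_block_ms=10000)

try:
    # 'missing-topic' is a hypothetical topic that was never created on the broker
    producer.send('missing-topic', b'payload').get(timeout=15)
except KafkaTimeoutError as exc:
    print('metadata never arrived for the topic:', exc)
```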
To enable Python logging in its simplest form:
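A minimal sketch of that, turning on DEBUG output globally, which includes the kafka client's logger:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
```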
I solved the problem by setting KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1 in docker-compose.yml for the wurstmeister/kafka image.

I solved this problem after starting ZooKeeper on my macOS.
Btw, you can enable debug logging for more information:
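For example (a sketch; this scopes DEBUG output to the kafka package only):

```python
import logging

logging.basicConfig()
logging.getLogger('kafka').setLevel(logging.DEBUG)
```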
Because the producer complained like this:
DEBUG:kafka.client:Give up sending metadata request since no node is available

I am re-opening; I also saw the same issue in production with kafka-python 1.3.5 and Kafka broker 0.10.0.1. The producer was a VM, the broker was bare metal (no Docker here). The error showed up after rolling the Kafka cluster, and the KafkaProducer instance never recovered; it just continually threw this error once a minute. The brokers were rolled by stopping/restarting the broker processes; the underlying machines did not change and DNS was not affected. I watched the TCP stream and the producer never even tried to talk to Kafka, so it wasn't timing out on the cluster side.
When I restarted the service, it immediately started working.
so what is the fix here?
Resolved. You must set all brokers in bootstrap_servers.
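A sketch of that; the broker hostnames here are placeholders for the full set of brokers in the cluster:

```python
from kafka import KafkaProducer

# list every broker, not just one, so metadata can still be fetched if a node is down
producer = KafkaProducer(
    bootstrap_servers=['kafka-1:9092', 'kafka-2:9092', 'kafka-3:9092'],
)
```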
If useful for anyone, I started seeing this error after auto.create.topics.enable was set to false… creating the topic manually resolved the issue.

I had this issue when porting the code from Python 2 to 3.
Solved by changing
to
The 1.4.5 release includes some commits that may fix some unconfirmed edge cases related to this. So if you're following this ticket, please retry with the latest release and let us know if you're still seeing this.

For others in the thread, this error means that the client could not connect to a node; most likely it isn't visible to the client. Please make sure the advertised.listeners config is valid. For Kubernetes users, please make sure you advertise the same DNS name that you bootstrap from.

@archiechen I am getting the same error too, what did you do?
I used docker-compose, in which the Kafka advertised listeners are: KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
Then I changed the port in the Python code to 29092 and that solved the problem.
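A sketch of how that maps to the client side; which address to use depends on where the client runs relative to the Docker network (the hostnames come from the compose file above):

```python
from kafka import KafkaProducer

# Inside the compose network, the advertised listener is broker:29092;
# from the host machine, localhost:9092 would typically be the reachable one instead.
producer = KafkaProducer(bootstrap_servers='broker:29092')
```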
Ran into this issue in a Kubernetes environment (incubator/kafka Helm chart). The issue was that the number of replicas was less than the value of default.replication.factor.
If running the kafka brokers inside a container, make sure that it advertises the correct hostnames that are accessible by the clients. If not specified it will use the canonical hostname of the container and that may be an internal one that cannot be used outside of the container.
You can set the advertised hosts with advertised.listeners in the server properties, or, if you just want to test with an unmodified Kafka Docker image, you can override the default setting with an environment variable when starting the container.

I am having the same issue while using minikube and https://github.com/Yolean/kubernetes-kafka (Kubernetes version 1.9.0, Kafka 1.0).