kafka-docker: WARN Connection to node 1001 could not be established. Broker may not be available.

Running this on Mac Sierra

Started this up using this config:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "lambda1:1:1"
    volumes:
      - /Users/<user>/projects/aa_dev_kafka/docker.sock:/var/run/docker.sock

Everything appears to start up; however, the kafka log is full of these errors:

WARN Connection to node 1001 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

I saw a couple of posts indicating it might be a firewall issue; however, I am not running a firewall locally.

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 20
  • Comments: 18

Most upvoted comments

For me, the issue was that I had not exposed the port to the host machine. I had it as:

  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092"

as opposed to:

  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"

Well, I’ve figured it out.

Fixing the port binding as @dattatre263 suggested is wrong, since you then can’t scale Kafka out anymore. At the moment, the Kafka version on master is 1.0.0, and the problem lies with the deprecated advertised.host.name setting in docker-compose.yml.

Solution: clone the repo and change the environment variable in docker-compose.yml from KAFKA_ADVERTISED_HOST_NAME to KAFKA_ADVERTISED_LISTENERS, and everything will work as expected.
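As a sketch of that change (the IP address below is a placeholder for your own Docker host address, and KAFKA_LISTENERS is shown only as the companion bind-address setting — adjust both to your setup):

```yaml
kafka:
  image: wurstmeister/kafka
  ports:
    - "9092:9092"
  environment:
    # Replaces the deprecated KAFKA_ADVERTISED_HOST_NAME; the IP is a placeholder
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.99.100:9092
    # Address the broker actually binds to inside the container
    KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```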

I was able to resolve this issue on my setups. It appears that this behavior is Kafka correctly reporting that it is unable to open a TCP connection to the advertised host (the Docker host). There must be code changes in Kafka 0.11.0 that make this more likely to occur.

I initially observed this issue under CentOS 7. I eventually found the default firewall in CentOS 7 did not allow connection to port 9092/tcp from the Default Zone used with Docker containers. When I modified the firewall, the Kafka warnings ceased and code worked as expected. This CentOS machine worked, with the default firewall, for the previous kafka-docker (v 0.10.1.1).

I initially did NOT observe this issue under Ubuntu 16.04, but was able to make it occur by entering an incorrect ip address for KAFKA_ADVERTISED_HOST_NAME, so the host never responded.

I found it helpful to ping the Docker Host from the kafka containers to prove the host was reachable, and to run “netstat -tlpa” on the containers. Working setups showed that a TCP connection to [host id]:9092 was ESTABLISHED. Setups exhibiting the “Broker may not be available” issue showed the status to be SYN_SENT, indicating the container had tried to establish the connection but the host did not complete the process.
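The same check can be scripted without netstat by probing the advertised listener directly (a sketch using bash's /dev/tcp; the host and port are placeholders taken from the compose file above):

```shell
# Probe whether a TCP listener accepts connections on host:port.
check_port() {
  host=$1; port=$2
  # bash's /dev/tcp redirection fails if nothing completes the handshake
  if timeout 2 bash -c ">/dev/tcp/$host/$port" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

# Placeholder advertised host/port — substitute your own values.
check_port 192.168.99.100 9092
```

On a working setup this prints "reachable"; a setup stuck in SYN_SENT (or firewalled) prints "unreachable".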

By the way, I found working with the CentOS 7 firewall a little confusing. It seems that a new “connection” is created, in the firewall “Default Zone”, when “docker-compose up” is run. Whatever zone is set as the Default Zone must be set to allow connections to 9092/tcp. Since the Default Zone is used by default, that applies the permissions more widely than I would prefer.
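For reference, the firewalld adjustment described above looks roughly like this on CentOS 7 (run as root; the zone name "public" is an assumption — check yours with firewall-cmd --get-default-zone):

```shell
# Allow connections to the Kafka port in the default zone (assumed "public")
firewall-cmd --permanent --zone=public --add-port=9092/tcp
firewall-cmd --reload
```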

Of course, there could be many reasons why a container might not be able to establish a TCP connection to its host. I hope my results will be helpful to others dealing with the same issue.

I am new to this docker image so I assumed it had to be me doing it wrong. For the past two days I have been trying to get this running on AWS ECS. No matter what I try with every docker-compose up, it seems that the kafka broker alternates between shutting down or throwing this “warning” indefinitely.

I was running into the exact same issue as I was changing the original kafka-docker Dockerfile as provided by @wurstmeister to use Ubuntu 16.04 LTS as the base (due to requirements of my company):

#Dockerfile
FROM ubuntu:xenial
...

I think you guys have tried this already — have you turned it off and on again? ;-) — but I was able to solve this strange issue by just stopping and then deleting all containers/images. Maybe this is also the solution in your case:

docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker rmi $(docker images -q)