docker-elk: Elasticsearch can't be reached

Problem description

I just installed this ELK stack on my CentOS 8 machine. I followed your instructions, but when I started the stack, Kibana and Logstash could not reach Elasticsearch.

Extra information

I set SELinux to disabled.
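For reference, the active and persistent SELinux modes can be checked like this (a sketch; `getenforce` and the config path are standard on CentOS 8):

```shell
# Print the currently active SELinux mode (Enforcing, Permissive, or Disabled)
getenforce

# The persistent setting applied at boot lives in /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config
```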

Stack configuration

The only change I made was switching the X-Pack license from trial to basic:

$ cat docker-elk/elasticsearch/config/elasticsearch.yml
---
## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/config/elasticsearch.yml
#
cluster.name: "docker-cluster"
network.host: 0.0.0.0

## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
#
xpack.license.self_generated.type: basic
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true

Docker setup

This is my Docker version:

Client: Docker Engine - Community
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        4484c46d9d
 Built:             Wed Sep 16 17:02:36 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       4484c46d9d
  Built:            Wed Sep 16 17:01:11 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.7
  GitCommit:        8fba4e9a7d01810a393d5d25a3621dc101981175
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

and this is my Docker Compose version:

docker-compose version 1.27.4, build 40524192
docker-py version: 4.3.1
CPython version: 3.7.7
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019

Docker logs

These are my Kibana logs:

kibana_1         | {"type":"log","@timestamp":"2020-10-12T13:34:22Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}
kibana_1         | {"type":"log","@timestamp":"2020-10-12T13:34:25Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1         | {"type":"log","@timestamp":"2020-10-12T13:34:25Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}
kibana_1         | {"type":"log","@timestamp":"2020-10-12T13:34:27Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1         | {"type":"log","@timestamp":"2020-10-12T13:34:27Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}
kibana_1         | {"type":"log","@timestamp":"2020-10-12T13:34:30Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1         | {"type":"log","@timestamp":"2020-10-12T13:34:30Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}
kibana_1         | {"type":"log","@timestamp":"2020-10-12T13:34:32Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1         | {"type":"log","@timestamp":"2020-10-12T13:34:32Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}

and these are my Logstash logs:

logstash_1       | [2020-10-12T13:39:44,914][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketException] No route to host (Host unreachable)"}
logstash_1       | [2020-10-12T13:39:48,434][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketException] No route to host (Host unreachable)"}
logstash_1       | [2020-10-12T13:39:49,971][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketException] No route to host (Host unreachable)"}
logstash_1       | [2020-10-12T13:39:55,025][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketException] No route to host (Host unreachable)"}
logstash_1       | [2020-10-12T13:40:00,082][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketException] No route to host (Host unreachable)"}

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 21

Most upvoted comments

Glad you figured it out!

These are the firewalld logs. There are many WARNING: COMMAND_FAILED messages.

@tombo0 I think I know what happened. firewalld probably tries to back up the existing iptables rules when it stops, and restore them when it starts.

The problem is that Docker dynamically manages its own set of rules (all the chains starting with DOCKER*), so those should be excluded from firewalld’s control. I’m not sure whether that’s possible, because the Stack Overflow discussions you linked do not suggest such a thing, but hopefully the solution you found will be durable.
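To see the rules in question, the Docker-managed chains can be listed directly (a sketch; requires root, and assumes the default iptables backend):

```shell
# List chain definitions in the filter table and keep the Docker-managed ones
sudo iptables -S | grep '^-N DOCKER'

# Docker also manages chains in the NAT table (port publishing, masquerading)
sudo iptables -t nat -S | grep 'DOCKER'
```

If firewalld restores a saved rule set over these chains, the container-to-container forwarding rules disappear, which matches the "No route to host" errors above.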

Anyway, I discovered a similar issue in this link and this link, which say that firewalld is the main culprit.

So I added a custom firewalld rule as they suggested; the IP range depends on your stack’s bridge network:

$ firewall-cmd --permanent --zone=public --add-rich-rule='rule family=ipv4 source address=172.18.0.0/16 accept'
$ firewall-cmd --zone=public --add-masquerade --permanent
$ firewall-cmd --reload
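The source address in the rich rule has to match the Compose project’s bridge network. It can be looked up like this (a sketch; the network name is an assumption based on Compose’s default `<project>_default` naming, so adjust it to your project):

```shell
# List all Docker networks to find the one belonging to the stack
docker network ls

# Print the subnet of the stack's bridge network
# (replace docker-elk_default with your actual network name)
docker network inspect docker-elk_default \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
```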

After I rebooted the machine, everything worked just fine. Thank you so much @antoineco for being awesome. Keep up the good work!

PS: I changed SELinux back to enforcing.

Useful info: Add custom rule · Also add custom rule · Awesome info