kubernetes: Elasticsearch stops logging on AWS cluster

This is the second time this has happened to me: after a while, Kibana stops showing new log messages from my pods:

[screenshot: Kibana no longer showing new log entries]

My pods continue to work normally and I can see their logs with kubectl logs -f. The kube-system pods seem to be working fine too:

felipejfc@MyMac ~/dev/tfg/kubernetes-prod-cluster$ kubectl-prod get pods --namespace kube-system
NAME                                                 READY     STATUS    RESTARTS   AGE
elasticsearch-logging-v1-e5jj5                       1/1       Running   0          6d
elasticsearch-logging-v1-wkn9m                       1/1       Running   0          6d
fluentd-elasticsearch-ip-172-20-0-120.ec2.internal   1/1       Running   0          12d
fluentd-elasticsearch-ip-172-20-0-21.ec2.internal    1/1       Running   0          10d
fluentd-elasticsearch-ip-172-20-0-56.ec2.internal    1/1       Running   0          12d
fluentd-elasticsearch-ip-172-20-1-108.ec2.internal   1/1       Running   0          12d
fluentd-elasticsearch-ip-172-20-1-112.ec2.internal   1/1       Running   0          10d
fluentd-elasticsearch-ip-172-20-1-57.ec2.internal    1/1       Running   0          12d
heapster-v1.0.2-980111707-92myy                      4/4       Running   0          12d
kibana-logging-v1-zfljc                              1/1       Running   2          12d
kube-dns-v11-uzype                                   4/4       Running   2          12d
kube-proxy-ip-172-20-0-120.ec2.internal              1/1       Running   0          12d
kube-proxy-ip-172-20-0-21.ec2.internal               1/1       Running   0          10d
kube-proxy-ip-172-20-0-56.ec2.internal               1/1       Running   0          12d
kube-proxy-ip-172-20-1-108.ec2.internal              1/1       Running   0          12d
kube-proxy-ip-172-20-1-112.ec2.internal              1/1       Running   0          10d
kube-proxy-ip-172-20-1-57.ec2.internal               1/1       Running   0          12d
kubernetes-dashboard-v1.0.1-xmwq0                    1/1       Running   1          12d
monitoring-influxdb-grafana-v3-9v0vz                 2/2       Running   0          6d

The first time this happened I deleted the elasticsearch pods, and once the new ones came up they started receiving logs again…

Any tips on how I can debug this?
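
For reference, a few generic checks that can help narrow down where the pipeline breaks (the pod names below are taken from the listing above; kubectl-prod is just the kubectl alias for this cluster):

# Look for buffer/connection errors on the fluentd side
kubectl-prod logs --namespace kube-system fluentd-elasticsearch-ip-172-20-0-120.ec2.internal | tail -n 50

# Look for errors on the Elasticsearch side
kubectl-prod logs --namespace kube-system elasticsearch-logging-v1-e5jj5 | tail -n 50

# Port-forward to one elasticsearch pod and query the cluster directly
kubectl-prod port-forward --namespace kube-system elasticsearch-logging-v1-e5jj5 9200:9200 &
curl 'http://localhost:9200/_cluster/health?pretty'   # is the cluster red/yellow, any unassigned shards?
curl 'http://localhost:9200/_cat/indices?v'           # do today's logstash-* indices exist and keep growing?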

About this issue

  • Original URL
  • State: closed
  • Created 8 years ago
  • Reactions: 1
  • Comments: 23 (5 by maintainers)

Most upvoted comments

@drora Elasticsearch is going to stop being part of core Kubernetes, because it’s not about Kubernetes per se. The current setup in this repo is far from perfect, e.g. there’s no persistent storage. I encourage you to take a look at the various open-source integrations, or even at Elastic’s own offerings.
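
On the persistent-storage point: the bundled elasticsearch-logging pods keep their indices in an ephemeral volume, so a rescheduled pod comes back empty. A rough sketch of the alternative (the claim name and size here are made up for illustration) would back the data directory with a PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # Hypothetical claim name, not part of the addon manifests
  name: elasticsearch-logging-data
  namespace: kube-system
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi

The Elasticsearch pod spec would then mount this claim at the image’s data path instead of the ephemeral volume it uses today.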

We hope to polish our integration with different logging solutions and then publish a clear list of recommended sources of information about them, instead of keeping those integrations inside this repo. Stay tuned!

@therc, I’m now using aws-es-proxy with an Elasticsearch cluster on AWS Elasticsearch Service. It seems that my fluentd pods stop sending logs to Elasticsearch just like they did when I was using the elasticsearch pods provided by get.k8s… I will try setting the option you mentioned before (reload_connections)… but if that’s the cause, I think it should be set by default… @a-robinson I do think it is an issue with the default k8s fluentd pod config…
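
For anyone trying the same workaround, a sketch of what the relevant fluent-plugin-elasticsearch output section might look like (the match pattern, host, and port are placeholders for whatever the cluster’s config actually uses; reload_connections is the option under discussion):

<match **>
  @type elasticsearch
  host elasticsearch-logging
  port 9200
  logstash_format true
  # By default the plugin periodically re-discovers cluster nodes; behind a
  # proxy or a managed endpoint that discovery can return addresses the
  # client cannot reach, after which sends start failing. Pinning the
  # original connection avoids that.
  reload_connections false
</match>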