kubernetes: Kibana doesn't load any data

Hello, I am trying to add fluentd-elasticsearch to my CoreOS cluster, but I end up in the state shown in the screenshot below (2015-08-13_1640): Kibana doesn't show anything.

Some info about my setup:

Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.1", GitCommit:"6a5c06e3d1eb27a6310a09270e4a5fb1afa93e74", GitTreeState:"clean"}

When I open http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging/#/settings/indices/?_g=() I get a console error like "Error: IndexPattern's configured pattern does not match any indices", as shown in the screenshot above.

http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging/elasticsearch/ gives me something like this:

{
  "status": 200,
  "name": "Caiman",
  "cluster_name": "kubernetes-logging",
  "version": {
    "number": "1.5.2",
    "build_hash": "62ff9868b4c8a0c45860bebb259e21980778ab1c",
    "build_timestamp": "2015-04-27T09:21:06Z",
    "build_snapshot": false,
    "lucene_version": "4.10.4"
  },
  "tagline": "You Know, for Search"
}
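That Kibana error usually means Elasticsearch holds no logstash-* indices at all. One way to check is to list the indices through the same API-server proxy path used above; here is a minimal sketch (the `es_url` helper is hypothetical, just to keep the long URL readable):

```shell
#!/bin/sh
# Hypothetical helper: compose the API-server proxy URL for the
# elasticsearch-logging service, matching the URLs quoted above.
APISERVER="http://localhost:8080"
es_url() {
  echo "$APISERVER/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging$1"
}

# Print the indices-listing URL; pass it to curl to query the cluster.
# If no logstash-YYYY.MM.DD entries come back, fluentd has shipped nothing,
# which would explain the Kibana index-pattern error.
es_url "/_cat/indices?v"
# curl -s "$(es_url '/_cat/indices?v')"
```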

Cluster info:

Kubernetes master is running at http://192.168.20.100:8080
Elasticsearch is running at http://192.168.20.100:8080/api/v1beta3/proxy/namespaces/kube-system/services/elasticsearch-logging
Kibana is running at http://192.168.20.100:8080/api/v1beta3/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at http://192.168.20.100:8080/api/v1beta3/proxy/namespaces/kube-system/services/kube-dns

elasticsearch:1.7 docker log

I0813 09:44:50.709621       6 elasticsearch_logging_discovery.go:42] Kubernetes Elasticsearch logging discovery
I0813 09:44:50.749147       6 elasticsearch_logging_discovery.go:75] Found ["20.0.5.5" "20.0.96.4"]
I0813 09:45:00.753193       6 elasticsearch_logging_discovery.go:75] Found ["20.0.5.5" "20.0.96.4"]
I0813 09:45:00.753256       6 elasticsearch_logging_discovery.go:87] Endpoints = ["20.0.5.5" "20.0.96.4"]
[2015-08-13 09:45:01,822][INFO ][node                     ] [Caiman] version[1.5.2], pid[12], build[62ff986/2015-04-27T09:21:06Z]
[2015-08-13 09:45:01,823][INFO ][node                     ] [Caiman] initializing ...
[2015-08-13 09:45:01,828][INFO ][plugins                  ] [Caiman] loaded [], sites []
[2015-08-13 09:45:05,113][INFO ][node                     ] [Caiman] initialized
[2015-08-13 09:45:05,114][INFO ][node                     ] [Caiman] starting ...
[2015-08-13 09:45:05,306][INFO ][transport                ] [Caiman] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/20.0.5.5:9300]}
[2015-08-13 09:45:05,319][INFO ][discovery                ] [Caiman] kubernetes-logging/xw-Jftk_RLefH6P_gUVvSQ
[2015-08-13 09:45:08,402][INFO ][cluster.service          ] [Caiman] detected_master [Conquer Lord][Py2pJLfeR1aEuFlyj6pF_g][elasticsearch-logging-v1-wxmik][inet[/20.0.96.4:9300]]{master=true}, added {[Conquer Lord][Py2pJLfeR1aEuFlyj6pF_g][elasticsearch-logging-v1-wxmik][inet[/20.0.96.4:9300]]{master=true},}, reason: zen-disco-receive(from master [[Conquer Lord][Py2pJLfeR1aEuFlyj6pF_g][elasticsearch-logging-v1-wxmik][inet[/20.0.96.4:9300]]{master=true}])
[2015-08-13 09:45:08,442][INFO ][http                     ] [Caiman] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/20.0.5.5:9200]}
[2015-08-13 09:45:08,443][INFO ][node                     ] [Caiman] started

kibana:1.3 docker log

{"@timestamp":"2015-08-13T10:04:11.613Z","level":"info","message":"No existing kibana index found","node_env":"production"}
{"@timestamp":"2015-08-13T10:04:11.630Z","level":"info","message":"Listening on 0.0.0.0:5601","node_env":"production"}

Please let me know if there is any other info you want me to share. (I'm tagging @satnam6502 here, as I saw you did a lot of work on this addon.) Please let me know what's wrong here.

About this issue

  • Original URL
  • State: closed
  • Created 9 years ago
  • Comments: 18 (10 by maintainers)

Most upvoted comments

Hi,

We've got the same problem, and as far as I understand it, it's in the fluentd-elasticsearch bridge. It mounts /var/log from the host, which holds no logs because CoreOS logs via journald. It also mounts /var/lib/docker/containers, which shows no data because on CoreOS that directory is root:root with mode 700.
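The second point is easy to reproduce anywhere: a root-owned directory with mode 700 hides its contents from every other user, which is what the fluentd container runs into. A minimal illustration (uses a throwaway temp directory, not the real Docker path):

```shell
#!/bin/sh
# Illustration only: mimic the /var/lib/docker/containers permissions
# described above (owner-only, mode 700) with a temp directory.
dir=$(mktemp -d)
touch "$dir/abc123-json.log"   # stand-in for a container log file
chmod 700 "$dir"

# Mode 700 means owner-only access: any non-owner process (such as a
# log collector running as a different user) cannot list or read inside.
stat -c '%a' "$dir"
rm -rf "$dir"
```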

Kind regards,
Marian Schwarz

edit: never mind, got it running