kubernetes: Accessing dashboard results in i/o timeout
I’m starting my cluster with the following:
export KUBE_AWS_ZONE=us-east-1b
export NUM_NODES=2
export MASTER_SIZE=t2.small
export NODE_SIZE=t2.small
export AWS_S3_REGION=us-east-1
export AWS_S3_BUCKET=cw-users
export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash
The cluster is set up with no errors. I can access the master with the IP printed out by the script. However, when I attempt to access /ui I receive the following error:
Error: 'dial tcp 10.244.0.6:9090: i/o timeout'
Trying to reach: 'http://10.244.0.6:9090/'
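The error means the apiserver cannot reach the dashboard pod at its pod IP (10.244.0.6:9090), which points at node-to-pod routing rather than the dashboard itself. One way to confirm this (a diagnostic sketch, not from the original report; the pod name below is taken from the listing further down and may differ on your cluster) is to bypass the apiserver proxy with a port-forward:

```shell
# Show which node each kube-system pod landed on:
kubectl --namespace=kube-system get pods -o wide

# Tunnel directly to the dashboard pod, bypassing the
# apiserver -> pod-IP route that is timing out:
kubectl --namespace=kube-system port-forward kubernetes-dashboard-v1.0.0-0y0mp 9090:9090
# Then browse to http://localhost:9090/
```

If the dashboard responds over the port-forward but not via /ui, the pod is healthy and the problem is the route from the master to the pod network.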
Listing the kube-system pods shows me that something is wrong with kibana-logging:
NAME                                                 READY     STATUS             RESTARTS   AGE
elasticsearch-logging-v1-kmiom                       1/1       Running            0          7m
elasticsearch-logging-v1-oljhw                       1/1       Running            0          7m
fluentd-elasticsearch-ip-172-20-0-151.ec2.internal   1/1       Running            0          7m
fluentd-elasticsearch-ip-172-20-0-152.ec2.internal   1/1       Running            0          7m
heapster-v1.0.2-qvya4                                2/2       Running            0          7m
kibana-logging-v1-8ciwe                              0/1       CrashLoopBackOff   1          48s
kube-dns-v11-0uz05                                   4/4       Running            0          7m
kube-proxy-ip-172-20-0-151.ec2.internal              1/1       Running            0          7m
kube-proxy-ip-172-20-0-152.ec2.internal              1/1       Running            0          7m
kubernetes-dashboard-v1.0.0-0y0mp                    1/1       Running            0          7m
monitoring-influxdb-grafana-v3-16sii                 2/2       Running            0          7m
Logs for kibana-logging:
{"@timestamp":"2016-04-07T22:18:12.301Z","level":"error","message":"Request Timeout after 1500ms","node_env":"production","error":{"message":"Request Timeout after 1500ms","name":"Error","stack":"Error: Request Timeout after 1500ms\n at [object Object].<anonymous> (/kibana-4.0.2-linux-x64/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)\n"}}
{"@timestamp":"2016-04-07T22:18:12.303Z","level":"fatal","message":"Request Timeout after 1500ms","node_env":"production","error":{"message":"Request Timeout after 1500ms","name":"Error","stack":"Error: Request Timeout after 1500ms\n at [object Object].<anonymous> (/kibana-4.0.2-linux-x64/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)\n"}}
I don’t know whether the kibana-logging errors are related to my dashboard i/o timeouts or not.
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Reactions: 1
- Comments: 19 (5 by maintainers)
@nebirhos Can you report an issue to the documentation repo? This looks like a problem many people encounter, so I suspect you’re doing everything correctly and the guide is invalid. This is not the first time I’ve seen people report problems with the Ubuntu-based setup.
I found that the minion node route tables are created by the createRoute function in https://github.com/kubernetes/kubernetes/blob/c6e995a824094a96f7d43a25e897283f83a12997/pkg/cloudprovider/providers/aws/aws_routes.go#L112. Looking at what that function does, I figured out that the Source/Dest check was enabled on my minions, even though it shouldn’t have been.
Now, I can manually fix my cluster after bringing it up by doing the following:
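The exact commands weren’t preserved in this copy of the comment; a plausible sketch of the manual fix (the instance ID and the tag filter are assumptions about how kube-up labels AWS nodes, not taken from the original) is to disable the EC2 Source/Dest check on each minion with the AWS CLI:

```shell
# List the minion instance IDs (the Role tag value is an assumption
# about how kube-up tags its nodes; adjust to match your cluster):
aws ec2 describe-instances \
  --filters "Name=tag:Role,Values=kubernetes-minion" \
  --query "Reservations[].Instances[].InstanceId" --output text

# Disable the Source/Dest check so the node can forward pod traffic
# that isn't addressed to its own IP (instance ID is illustrative):
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --no-source-dest-check
```

With the check disabled, the VPC route table entries that createRoute installs for the pod CIDRs can actually deliver traffic to the nodes.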
I’m still trying to figure out where createRoute is called from and why it fails to perform these steps automatically.