kubeedge: `kubectl exec` and `kubectl logs` failed
What happened: I deployed the kubeedge cluster successfully, but the cluster network seems to have issues: connections to pods and services are refused.
What you expected to happen: Pods and services can be accessed normally.
How to reproduce it (as minimally and precisely as possible): I deployed the kubeedge cluster successfully:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
172.31.40.171 Ready <none> 88m 2.0.0 172.31.40.171 <none> Ubuntu 18.04.2 LTS 4.15.0-1032-aws docker://4.15.0-1032-aws
172.31.43.104 Ready <none> 6h20m 2.0.0 172.31.43.104 <none> Ubuntu 18.04.2 LTS 4.15.0-1032-aws docker://4.15.0-1032-aws
ip-172-31-37-250 Ready master 7h16m v1.13.4 172.31.37.250 <none> Ubuntu 18.04.2 LTS 4.15.0-1032-aws docker://18.9.3
I am trying to create a simple app; here is the template I used to create it.
# Source: emqx-helm/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-emqx-helm-env
  namespace: default
  labels:
    app.kubernetes.io/name: emqx-helm
    helm.sh/chart: emqx-helm-v1.0
    app.kubernetes.io/instance: test
    app.kubernetes.io/managed-by: Tiller
data:
---
# Source: emqx-helm/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: test-emqx-helm
  namespace: default
  labels:
    app.kubernetes.io/name: emqx-helm
    helm.sh/chart: emqx-helm-v1.0
    app.kubernetes.io/instance: test
    app.kubernetes.io/managed-by: Tiller
spec:
  type: NodePort
  ports:
    - name: mqtt
      port: 1883
      protocol: TCP
      targetPort: 1883
    - name: mqttssl
      port: 8883
      protocol: TCP
      targetPort: 8883
    - name: mgmt
      port: 8080
      protocol: TCP
      targetPort: 8080
    - name: websocket
      port: 8083
      protocol: TCP
      targetPort: 8083
    - name: wss
      port: 8084
      protocol: TCP
      targetPort: 8084
    - name: dashboard
      port: 18083
      protocol: TCP
      targetPort: 18083
  selector:
    app.kubernetes.io/name: emqx-helm
    app.kubernetes.io/instance: test
---
# Source: emqx-helm/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-emqx-helm
  namespace: default
  labels:
    app.kubernetes.io/name: emqx-helm
    helm.sh/chart: emqx-helm-v1.0
    app.kubernetes.io/instance: test
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: emqx-helm
      app.kubernetes.io/instance: test
  template:
    metadata:
      labels:
        app.kubernetes.io/name: emqx-helm
        app.kubernetes.io/instance: test
    spec:
      nodeSelector:
        name: edge-node
      containers:
        - name: emqx
          image: emqx/emqx:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 1883
              hostPort: 1883
            - containerPort: 8883
              hostPort: 8883
            - containerPort: 8080
              hostPort: 8080
            - containerPort: 8083
              hostPort: 8083
            - containerPort: 8084
              hostPort: 8084
            - containerPort: 18083
              hostPort: 18083
          envFrom:
            - configMapRef:
                name: test-emqx-helm-env
          env:
            - name: EMQX_NAME
              value: emqx
            - name: EMQX_CLUSTER__K8S__APP_NAME
              value: emqx
            - name: EMQX_CLUSTER__DISCOVERY
              value: k8s
            - name: EMQX_CLUSTER__K8S__SERVICE_NAME
              value: test-emqx-helm
            - name: EMQX_CLUSTER__K8S__APISERVER
              value: 172.31.37.250
            - name: EMQX_CLUSTER__K8S__NAMESPACE
              value: default
            - name: EMQX_CLUSTER__K8S__ADDRESS_TYPE
              value: ip
          tty: true
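One thing the manifest relies on: the Deployment pins its pods to nodes via nodeSelector (name: edge-node), so the edge nodes must carry that label or the pods will stay Pending. For reference, the label can be applied like this (a sketch using the edge node names from the kubectl get nodes output above):

```shell
# Label both edge nodes so the Deployment's nodeSelector
# (name: edge-node) matches them.
kubectl label node 172.31.40.171 name=edge-node
kubectl label node 172.31.43.104 name=edge-node

# Verify that the label is in place.
kubectl get nodes -l name=edge-node
```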
Viewing with kubectl, you can see that the service and pods have been created successfully.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h21m
test-emqx-helm NodePort 10.104.124.221 <none> 1883:30761/TCP,8883:30163/TCP,8080:30021/TCP,8083:32216/TCP,8084:31446/TCP,18083:31907/TCP 44m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test-emqx-helm-6c56d88ff-h8b44 1/1 Running 0 44m
test-emqx-helm-6c56d88ff-jvxgr 1/1 Running 0 44m
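Since the Service is of type NodePort, one extra sanity check (not part of the original report) is to hit the mapped NodePort on a node IP directly, bypassing the cluster IP. The port mapping below is read from the kubectl get svc output above (dashboard 18083 maps to NodePort 31907):

```shell
# If this succeeds while the cluster IP does not, the problem lies in
# kube-proxy / overlay routing rather than in the pods themselves.
curl http://172.31.40.171:31907
```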
But using curl to access the service cluster IP, or using kubectl exec to enter the pods, reports Connection refused:
$ curl 10.104.124.221:18083
curl: (7) Failed to connect to 10.104.124.221 port 18083: Connection refused
$ kubectl exec -it test-emqx-helm-6c56d88ff-h8b44 sh
Error from server: error dialing backend: dial tcp 172.31.40.171:10250: connect: connection refused
$ kubectl exec -it test-emqx-helm-6c56d88ff-jvxgr sh
Error from server: error dialing backend: dial tcp 172.31.43.104:10250: connect: connection refused
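The "error dialing backend: dial tcp <node>:10250" message means the API server is trying to reach a kubelet on the edge node, since kubectl exec/logs are proxied through the kubelet's port 10250. On a KubeEdge edge node there is no kubelet listening there, which can be confirmed with a quick check (a diagnostic sketch, assuming shell access to the master and edge nodes):

```shell
# From the master: can the kubelet port on the edge node be reached at all?
nc -zv -w 2 172.31.40.171 10250 || echo "port 10250 refused"

# On the edge node itself: list listeners on 10250
# (expected to be empty, because edged does not run a kubelet server).
ss -tlnp | grep 10250 || echo "nothing listening on 10250"
```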
Accessing the pods directly with curl works fine:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-emqx-helm-6c56d88ff-h8b44 1/1 Running 0 52m <none> 172.31.40.171 <none> <none>
test-emqx-helm-6c56d88ff-jvxgr 1/1 Running 0 52m <none> 172.31.43.104 <none> <none>
$ curl 172.31.40.171:18083
<!DOCTYPE html><html class=dark-themes><head><meta charset=UTF-8><meta name=viewport content="width=device-width,user-scalable=no,initial-scale=1,maximum-scale=1,minimum-scale=1"><meta http-equiv=X-UA-Compatible content="ie=edge"><meta charset=utf-8><meta name=renderer content=webkit><meta http-equiv=X-UA-Compatible content="IE=Edge"><link rel="shortcut icon" type=image/x-icon href=/static/emq.ico><link rel=stylesheet href=/static/css/font-awesome.min.css><link rel=stylesheet href=/static/css/icon-font.css><!--[if lt IE 8]><script type="text/javascript" src="/static/js/upgrade.js"></script><![endif]--><script src=/static/js/env.js></script><title>Dashboard</title><!--[if lte IE 9]>
<script src="/static/js/base64.min.js"></script>
<![endif]--><link href=/static/css/app.22f05f31024b205fa95684c117d41d0d.css rel=stylesheet></head><body><div id=app></div><script type=text/javascript src=/static/js/manifest.bfb23c1c1daf4db12257.js></script><script type=text/javascript src=/static/js/vendor.c5267157a23ae3361b4d.js></script><script type=text/javascript src=/static/js/app.20f9a02f303cbb0201f0.js></script></body></html>
The master node and the edge nodes can also ping each other:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
172.31.40.171 Ready <none> 100m 2.0.0 172.31.40.171 <none> Ubuntu 18.04.2 LTS 4.15.0-1032-aws docker://4.15.0-1032-aws
172.31.43.104 Ready <none> 6h33m 2.0.0 172.31.43.104 <none> Ubuntu 18.04.2 LTS 4.15.0-1032-aws docker://4.15.0-1032-aws
ip-172-31-37-250 Ready master 7h29m v1.13.4 172.31.37.250 <none> Ubuntu 18.04.2 LTS 4.15.0-1032-aws docker://18.9.3
$ ping 172.31.43.104
PING 172.31.43.104 (172.31.43.104) 56(84) bytes of data.
64 bytes from 172.31.43.104: icmp_seq=1 ttl=64 time=0.422 ms
64 bytes from 172.31.43.104: icmp_seq=2 ttl=64 time=0.407 ms
64 bytes from 172.31.43.104: icmp_seq=3 ttl=64 time=0.439 ms
$ ping 172.31.40.171
PING 172.31.40.171 (172.31.40.171) 56(84) bytes of data.
64 bytes from 172.31.40.171: icmp_seq=1 ttl=64 time=0.422 ms
64 bytes from 172.31.40.171: icmp_seq=2 ttl=64 time=0.379 ms
64 bytes from 172.31.40.171: icmp_seq=3 ttl=64 time=0.368 ms
64 bytes from 172.31.40.171: icmp_seq=4 ttl=64 time=0.361 ms
I am using AWS EC2; the security group allows traffic on all ports.
Anything else we need to know?:
The kubeedge cluster also has the kube-flannel plugin deployed; I am not sure whether this is necessary.
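To see whether flannel is actually running on the edge nodes (typically it cannot, since KubeEdge edge nodes do not run the full kubelet/kube-proxy stack that the DaemonSet expects), one can check where the flannel pods were scheduled:

```shell
# Show the flannel pods and the nodes they landed on.
kubectl get pods -n kube-system -o wide | grep flannel
```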
Environment:
- KubeEdge version: kubeedge/kubeedge master branch
- Hardware configuration: AWS EC2 t2.medium
- OS (e.g. from /etc/os-release): ubuntu 18.04.2 LTS (Bionic Beaver)
- Kernel (e.g. uname -a): Linux ip-172-31-37-250 4.15.0-1032-aws #34-Ubuntu SMP Thu Jan 17 15:18:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- Others:
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Comments: 16 (10 by maintainers)
Yes, this is expected.
The kubelet exposes a port (10250) that serves apiserver requests for kubectl exec/logs etc. These exist for debugging/monitoring, so that an IT admin or user can operate remotely against an edge node from the cloud side. We were planning to use a separate mechanism for collecting logs and debugging. In addition, we did not implement edged as a server, hence there is no support for this.
@tedli, can you please share the customer scenarios that you are trying with exec/log? Thanks