kubeedge: Unable to run cloud as k8s deployments

What happened: Followed the guide to deploy the cloud side as k8s deployments. “edgecontroller” started successfully on the host, then I created an edge node, but the edge node never becomes Ready.

What you expected to happen: The node should become Ready, just as it does when deploying with binaries, and there should be a way to find out what happened on the controller node.

How to reproduce it (as minimally and precisely as possible):

# ps -ef | grep edge
root     21894 13963  0 18:10 pts/1    00:00:00 grep --color=auto edge
root     24088 24053  0 13:49 ?        00:00:00 edgecontroller
Create the edge node with the following Node object:
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "10.169.41.219",
    "labels": {
      "name": "edge-node"
    }
  }
}
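As a sanity check before applying, the manifest above can be saved to a file and validated; the file name edge-node.json is an assumption:

```shell
# Save the Node manifest from the issue and verify it is well-formed JSON.
cat > edge-node.json <<'EOF'
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "10.169.41.219",
    "labels": {
      "name": "edge-node"
    }
  }
}
EOF
python3 -m json.tool edge-node.json > /dev/null && echo "valid JSON"
# → valid JSON
```

It can then be registered with `kubectl apply -f edge-node.json`.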
  • kubectl get nodes shows the node is NotReady and never becomes Ready.
# kubectl get nodes
NAME            STATUS     ROLES    AGE    VERSION
10.169.41.219   NotReady   <none>   158m
dave-desktop    Ready      master   6d2h   v1.14.1
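A quick way to spot nodes stuck in NotReady is to filter the STATUS column of `kubectl get nodes`; the sketch below runs against the sample output from this issue as a stand-in for the live command:

```shell
# Sample `kubectl get nodes` output from this issue; in a live cluster,
# pipe the real command into the same awk filter.
kubectl_output='NAME            STATUS     ROLES    AGE    VERSION
10.169.41.219   NotReady   <none>   158m
dave-desktop    Ready      master   6d2h   v1.14.1'

# Skip the header row, print names of nodes whose STATUS is NotReady.
not_ready=$(printf '%s\n' "$kubectl_output" | awk 'NR>1 && $2=="NotReady" {print $1}')
echo "$not_ready"
# → 10.169.41.219
```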

BTW, if the master is deployed with the binary, everything works fine. I would also like to know how to debug this case: where can I get the logs when the controller is deployed in a k8s cluster?
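If the controller runs as an in-cluster Deployment, its logs are reachable through kubectl; a minimal sketch, where the namespace "kubeedge" and Deployment name "edgecontroller" are assumptions, substitute whatever your manifests use:

```shell
# Assumed names -- adjust to your actual namespace/Deployment.
ns=kubeedge
deploy=edgecontroller

# Build the log-fetching command; run it on a machine with cluster access.
cmd="kubectl logs -n $ns deployment/$deploy --tail=100"
echo "$cmd"

# Only execute if kubectl is actually available here.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n "$ns"
  eval "$cmd"
fi
```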

Anything else we need to know?:

Environment:

  • KubeEdge version: latest
  • Hardware configuration: x86_64 / arm64
  • OS (e.g. from /etc/os-release): Ubuntu 18.04
  • Kernel (e.g. uname -a):
  • Others:

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 19 (18 by maintainers)

Most upvoted comments

Thanks, the issue has been resolved after exposing the service externally. I still have a question about the error message in the edge’s log: if the image is ephemeral, it should be treated as expected rather than as an error.

Anyway, I will look into the source later to sort it out; closing the issue.
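For readers hitting the same problem: one way to expose the controller to edge nodes outside the cluster is a NodePort Service. The sketch below builds the command; the Deployment name, namespace, and port 10000 are all assumptions to be matched against your own manifests:

```shell
# Hedged sketch: expose the (assumed) edgecontroller Deployment via NodePort
# so edge nodes outside the cluster can reach it. Verify the name, namespace,
# and port against your deployment before running.
cmd="kubectl expose deployment edgecontroller -n kubeedge --type=NodePort --port=10000"
echo "$cmd"

# Only execute against a real cluster.
if command -v kubectl >/dev/null 2>&1; then
  eval "$cmd"
fi
```

Afterwards, `kubectl get svc -n kubeedge` shows the allocated node port the edge side should connect to.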

What is your k8s Service type? Are there any significant logs on the edge side?