ingress-nginx: ingress doesn't work on docker desktop kubernetes cluster after reset

Hi there!

I’m using the Docker Desktop Kubernetes cluster for local development on Windows with WSL 2. I ran into an interesting issue, but I’m not sure whether it’s an ingress or a Docker Desktop problem; either way, I want to share it here. Consider the following scenario:

  1. install Docker Desktop for Windows
  2. in settings, enable Kubernetes
  3. install Ingress Nginx with the command: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.1/deploy/static/provider/cloud/deploy.yaml. At this point everything works fine: accessing 127.0.0.1 returns the nginx 404 Not Found page, which is expected because there are no services or even an Ingress resource yet. Let’s continue:
  4. in Docker Desktop settings there is an option ‘Reset Kubernetes Cluster’, which wipes all k8s resources and recreates the cluster from scratch.
  5. as everything has been wiped, install ingress again the same way as before. And now there is a problem: ingress doesn’t work anymore. Accessing 127.0.0.1 results in an ERR_EMPTY_RESPONSE error (see the verification commands below). Further attempts to reset the cluster don’t help; the only way to fix the issue is to reinstall Docker entirely.
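To verify the reproduction, a quick sketch (assuming the default ingress-nginx namespace and service name created by the manifest above) is to check the controller state and the response from the host:

$ kubectl get pods -n ingress-nginx
$ kubectl get svc ingress-nginx-controller -n ingress-nginx
$ curl -v http://127.0.0.1/
# a healthy install answers with the controller's default 404 page;
# ERR_EMPTY_RESPONSE / "Empty reply from server" means the request never
# reached nginx inside the cluster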

I tested it on two Windows machines and the result was the same, so it should be easily reproducible.

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 2
  • Comments: 21 (10 by maintainers)

Most upvoted comments

Hi, the code in the ingress-nginx controller only runs processes inside the cluster. For connecting from outside the cluster to anything inside it, a Service of type LoadBalancer or type NodePort is created. The IP address of that LoadBalancer Service, or the NodePort, needs to be reachable from wherever you are sending the HTTP request.
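As a sketch of the two paths described above (using the service name the default cloud manifest creates; adjust to your install):

$ kubectl get svc ingress-nginx-controller -n ingress-nginx
# TYPE LoadBalancer: send the request to the EXTERNAL-IP (on Docker Desktop this
#                    is usually published on localhost / 127.0.0.1)
# TYPE NodePort:     send the request to <node-ip>:<node-port>, i.e. the high
#                    ports shown in PORT(S), e.g. 80:3xxxx/TCP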

The “wherever” part of the sentence above is not the same on Windows, Linux, and Mac. On Linux, the Docker network and the LoadBalancer/NodePort IP address end up on the same TCP/IP stack: the host’s. So the client from which you send the HTTP request and the destination to which you send it are relatively easy to connect, because they share one TCP/IP stack.

On Windows & Mac, Docker is implemented inside a virtual machine, because Windows & macOS cannot run Linux containers natively. On top of that, port-forwarding is configured to connect the client TCP/IP stack of the Windows/macOS host to the TCP/IP stack of the virtual machine. That is why users experience seamless connectivity when sending HTTP requests from Windows/macOS to a destination like 127.0.0.1: the request is forwarded into the virtual machine’s TCP/IP stack. You can check this by looking at the output of kubectl cluster-info, and you can dig deeper by looking at the packet-filtering/manipulating software of Windows/macOS (Windows Firewall or macOS pfctl, etc.).
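As a concrete check (a sketch; the exact addresses depend on the setup):

$ kubectl cluster-info
# on Docker Desktop the control plane is typically reported at an address such as
# https://kubernetes.docker.internal:6443, i.e. requests are forwarded into the VM
$ netstat -ano | findstr ":80"
# run from a Windows prompt: shows whether anything on the host is (still)
# listening on the forwarded port 80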

All of the above is sort of obvious to some and a new perspective to others. It’s relevant here because this is the information you need to look at when you hit problems similar to the one this issue was opened for.

The restarts or re-inits of the Docker Desktop cluster effectively re-establish the port-forwarding and the networking, so this problem is out of scope for the ingress controller. It is better handled by the developers & support of Docker Desktop. If there are specific questions about the ingress-nginx controller, they can be directed here.

Thanks, Long

Another thing I realized: it’s not necessary to reinstall Docker, but it is necessary to restart it.

For anyone interested, I’ve switched to k3d instead of Docker Desktop. It works well for this use case and I’m able to create multiple clusters, so it’s a major win for me, and no more ingress issues like this.
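For reference, a minimal k3d setup for this use case could look like the sketch below (k3d v5 syntax; the cluster name and host port are arbitrary, and Traefik is disabled here only because ingress-nginx is installed instead):

$ k3d cluster create dev -p "8080:80@loadbalancer" --k3s-arg "--disable=traefik@server:0"
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.1/deploy/static/provider/cloud/deploy.yaml
# ingress traffic should then be reachable on http://localhost:8080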

If you quit Docker Desktop and then manually start it again, the ingress controller started working again for me.

One thing I just realized while continuing the Google search that led me to this issue:

$ kubectl get service -n ingress-nginx
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.106.238.5   <pending>     80:32479/TCP,443:30407/TCP   43m

So it seems that the ingress-controller is not getting any external IP, which would indeed prevent any outside request from reaching it…
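One way to tell whether only the LoadBalancer plumbing is broken (a sketch; the local port 8080 is arbitrary) is to bypass it with a direct port-forward to the controller service:

$ kubectl port-forward -n ingress-nginx svc/ingress-nginx-controller 8080:80
$ curl -v http://127.0.0.1:8080/
# if this answers (e.g. with the default 404) while plain 127.0.0.1 still returns
# ERR_EMPTY_RESPONSE, the controller itself is healthy and only Docker Desktop's
# localhost port mapping is broken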