rancher: Nginx ingress controller unable to obtain endpoints for service(s)

When deploying a fresh installation of Rancher v2.2.2, I create a new project and namespace, create a new workload with a simple nginx:latest pod, and attach a L7 load balancer to the workload with a hostname/URL and SSL certificate.

Expectation The NGINX Ingress Controller will allow traffic to reach the workload/pods and display the NGINX welcome page.

Actual Result Traffic to the URL times out. The NGINX ingress controller logs the following warning: W0425 22:08:03.044641 7 controller.go:753] Error obtaining Endpoints for Service "onboarding/ingress-b9f5db46e8a39e9a7cfb5fcd09aaf56e": no object matching key "onboarding/ingress-b9f5db46e8a39e9a7cfb5fcd09aaf56e" in local store
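A first diagnostic step (a sketch only; it assumes `kubectl` access to the affected cluster, and the `onboarding` namespace is taken from the log message above) is to check whether the Service and Endpoints the controller is looking for actually exist:

```shell
# List the ingresses in the namespace named in the warning:
kubectl -n onboarding get ingress

# Check the Services and their Endpoints; a Service with no Endpoints
# (or a missing Service) matches the "no object matching key" warning:
kubectl -n onboarding get svc,endpoints

# Describe the ingress to see which backend Service/port it references:
kubectl -n onboarding describe ingress
```

If the Endpoints exist and list healthy pod IPs, the warning is likely transient and the timeout has another cause, such as the load balancing in front of the nodes.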

As a result, no workload in any project/namespace exposed via NGINX Ingress can be reached from the outside world.

Note: This exact same configuration worked without issue in build 2.2.1.

Other Testing Performed

  • I’ve tried deleting and redeploying the LB, both with and without the SSL cert, with the same results.
  • I’ve tried downgrading the versions/configs of the nginx ingress controllers and the nginx HTTP backend to match those in version 2.2.1, to no avail.
  • I’ve tried other Ingress configurations, also to no avail.

Other Notes Whenever I attempt to edit the YAML in the UI, the editor comes up blank. This is true for the project namespace Ingress, the system ingress-nginx workload, and the system ingress-nginx ConfigMaps.
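When the UI editor comes up blank, the same objects can usually still be read and edited directly with `kubectl` (a sketch assuming cluster access; `<ingress-name>` is a placeholder, and `ingress-nginx` is the namespace RKE-provisioned clusters typically use for the controller):

```shell
# Dump the ingress objects the UI refuses to show:
kubectl -n onboarding get ingress -o yaml

# Dump the ingress controller's ConfigMaps:
kubectl -n ingress-nginx get configmap -o yaml

# Edit a specific ingress in place (opens $EDITOR):
kubectl -n onboarding edit ingress <ingress-name>
```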


Useful Info
Versions: Rancher v2.2.2, UI v2.2.41
Route: containers.index

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 16

Most upvoted comments

I have the same problem with nginx ingress. I installed a new cluster from Rancher v2.2.2 and get the same errors; it is not possible to access any services via ingress. Another cluster, created two days ago, works fine. Both were created as Rancher Launched Clusters with Rancher 2.2.2.

Problem fixed. It was a misconfiguration of the layer 4 load balancer in front of the worker nodes. The warning itself turned out to be harmless: it is logged informationally when an ingress is created for a new service.
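A quick way to isolate an L4 load balancer misconfiguration like this is to compare a request sent directly to a worker node against the same request sent through the LB. This is only a sketch: the IPs and hostname below are placeholders, and it assumes the ingress controller listens on ports 80/443 on every worker node (the RKE default).

```shell
# Placeholders -- substitute your own values:
WORKER_IP=203.0.113.10   # a worker node behind the L4 load balancer
LB_IP=203.0.113.1        # the layer 4 load balancer itself
HOST=app.example.com     # hostname configured on the L7 ingress

# 1) Hitting a worker node directly should reach the ingress controller:
curl -v -H "Host: $HOST" "http://$WORKER_IP/"

# 2) The same request through the L4 LB should behave identically.
#    A timeout here, but not in step 1, points at the LB configuration
#    (wrong backend pool, wrong ports, or health checks marking nodes down):
curl -v -H "Host: $HOST" "http://$LB_IP/"
```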