kubernetes: [GKE] Ingress does not connect to NodePort Service

Is this a request for help?: No

What keywords did you search in Kubernetes issues before filing this one?: is:issue is:open gke


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: GKE

What happened: Hitting the static IP assigned to my Ingress results in an HTTP 502

What you expected to happen: 200 OK

How to reproduce it (as minimally and precisely as possible): I followed steps adapted from here.

I have a service that looks like this:

kind: Service
apiVersion: v1
metadata:
  name: myapp-service
spec:
  ports:
    # Accept traffic sent to port 80
    - name: http
      port: 80
      targetPort: 5000
  selector:
    # Loadbalance traffic across Pods matching
    # this label selector
    app: myapp-web
  type: NodePort

And an Ingress that looks like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "myapp-qa-staticip"
spec:
  backend:
    serviceName: myapp-service
    servicePort: 80

The static IP is a global static IP assigned via GCP Networking.

When I create the Service and the Ingress, I keep getting a 502, despite giving GCP up to an hour to provision whatever it needs to provision.

Anything else we need to know: If I change the Service type from NodePort to LoadBalancer and use the ephemeral external IP, my app comes up beautifully.
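For reference, the working variant described above is the same Service with its type switched to LoadBalancer and no Ingress involved; a minimal sketch (everything else matches the manifest earlier in this report):

kind: Service
apiVersion: v1
metadata:
  name: myapp-service
spec:
  # Switching from NodePort to LoadBalancer provisions a GCP network
  # load balancer with an ephemeral external IP; no Ingress is involved.
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 5000
  selector:
    app: myapp-web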

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 47 (16 by maintainers)

Most upvoted comments

@nicksardo appreciate your help. I will try your steps and let you know. Please realize that this defect ticket was opened after a fair amount of searching on Stack Overflow. Also realize that both the GCP and k8s UIs do not give sufficient feedback, leading to defect reports; maybe something to bring up with product management.

Here’s my full working example. I just tested it and it took ~3 minutes to start returning.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-echo-deploy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echoserver
        image: nicksardo/echoserver:latest
        imagePullPolicy: Always
        env:
        - name: namespace
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: podname
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: nodename
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-echo-svc
  labels:
    app: echo
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    name: my-http-port
  selector:
    app: echo
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-echo-ingress
spec:
  backend:
    serviceName: my-echo-svc
    servicePort: my-http-port

I ran across this same issue recently, and the cause was the backend service responding to base URL requests with a 404. The default health check endpoint for GCP load balancers is /, which caused all the nodes in the cluster to report an unhealthy status, resulting in no traffic making it to the GKE cluster. The solution is to have your backend service return a 200 for /, or to change the health check URL in the load balancer to a valid path.
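As an illustration, the usual way to steer this is a readinessProbe on the serving container, since the GCE ingress controller derives the load balancer health check path from the pod’s readiness probe. A minimal sketch, assuming a container listening on port 5000 with a /healthz endpoint that returns 200 (the image, port, and path are illustrative, not from this thread):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp-web
    spec:
      containers:
      - name: web
        image: myapp:latest        # placeholder image
        ports:
        - containerPort: 5000
        readinessProbe:
          httpGet:
            path: /healthz         # must return 200; the GCE controller uses this path for the LB health check
            port: 5000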

The same is happening for me. The Ingress is not able to find a healthy backend, and removing the cluster does not help.

@breinken The GCE ingress controller code specifically uses /healthz for the health check endpoint on default-http-backend. I just tested a GKE cluster and the backends are healthy for the default-http-backend.

1- I would assume not. Most people don’t let alpha clusters sit around as they cost money and the time limit prevents people from using them for production.

2- No, it’s not sufficient. As I just said, you need to delete the ingresses, then all GCP network resources that have the k8s- prefix. With orphaned resources, the controller can get into a bad state in several areas. In your case, I recommend purging them. You should have no problems creating an ingress after this is done.

Since there is nothing wrong with the controller and this is a problem specific to your project, this ticket should be closed. In the future, Stack Overflow would be a better venue for configuration problems.

By the way, nginx-ingress on GCE works with Services exposed as ClusterIP.
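A minimal sketch of that setup, assuming an nginx ingress controller is already running in the cluster; the kubernetes.io/ingress.class annotation keeps the GCE controller from claiming the Ingress (all names here are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: myapp-clusterip
spec:
  type: ClusterIP            # no NodePort required; nginx proxies to the pods directly
  ports:
  - port: 80
    targetPort: 5000
  selector:
    app: myapp-web
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-nginx-ingress
  annotations:
    # Hand this Ingress to nginx instead of the GCE controller
    kubernetes.io/ingress.class: "nginx"
spec:
  backend:
    serviceName: myapp-clusterip
    servicePort: 80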

I’ve had the same issue as @warent with GCE and nginx-ingress. After hours of head-banging, everything started to work when I changed the name of the chart. I was not able to reproduce the issue, hence I have not created a new issue on GitHub.

Thanks @warent, I had forgotten the readiness probe for one deployment and kept investigating on the Service and Ingress side. It was very annoying because all the others worked except this one 😂

@thockin here is the output from when I SSH into one of my nodes. I also use a NodePort Service with an Ingress pointing at it, and get 502s.

gke-showlist-1-default-pool-87ac5143-scwj ~ # curl -I  127.0.0.1:32414/healthz  
HTTP/1.1 200 OK
Date: Thu, 14 Sep 2017 23:24:03 GMT
Content-Length: 2
Content-Type: text/plain; charset=utf-8

For some reason, the health check is failing… but when I SSH in, I can do a GET just fine and get a 200 back. No idea why the load balancer is telling me that the health check is failing.

I have exactly the same problem reproduced in this issue and can’t find a solution. I followed the same tutorial and tried to create a cluster for WordPress + MySQL. Using a LoadBalancer with an ephemeral external IP it works fine; using a NodePort + Ingress it does not work and returns a 502 error. The only error reported is the unhealthy-backends one, and I have spent days trying different combinations without being able to expose my cluster on a static public IP.

$ kubectl describe ing basic-ingress
Name:             basic-ingress
Namespace:        default
Address:          35.186.244.227
Default backend:  wordpress:80 (10.44.0.6:80)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     wordpress:80 (10.44.0.6:80)
Annotations:
  backends:         {"k8s-be-30597--e012d73b802ed1b0":"UNHEALTHY"}
  forwarding-rule:  k8s-fw-default-basic-ingress--e012d73b802ed1b0
  target-proxy:     k8s-tp-default-basic-ingress--e012d73b802ed1b0
  url-map:          k8s-um-default-basic-ingress--e012d73b802ed1b0
Events:
  FirstSeen  LastSeen  Count  From                     SubObjectPath  Type    Reason   Message
  ---------  --------  -----  ----                     -------------  ----    ------   -------
  1h         7m        14     loadbalancer-controller                 Normal  Service  default backend set to wordpress:30597