ingress-gce: GCE health check does not pick up changes to pod readinessProbe

From @ConradIrwin on April 11, 2017 4:22

Importing this issue from https://github.com/kubernetes/kubernetes/issues/43773 as requested.

Kubernetes version (use kubectl version): 1.4.9

Environment:

  • Cloud provider or hardware configuration: GKE

What happened:

I created an Ingress of type GCE:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: admin-proxy
  labels:
    name: admin-proxy
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
    - secretName: ingress-admin-proxy
  backend:
    serviceName: admin-proxy
    servicePort: 80

This set up the health check on the Google Cloud backend service to call "/", as documented.
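
For context, the Ingress above references a Service named admin-proxy that is not shown in the report. A minimal sketch of what it might look like, with the pod label and target port assumed (the GCE ingress controller routes to a NodePort Service):

apiVersion: v1
kind: Service
metadata:
  name: admin-proxy
spec:
  type: NodePort        # GCE ingress routes traffic through node ports
  selector:
    name: admin-proxy   # assumed pod label, not shown in the original report
  ports:
    - port: 80          # servicePort referenced by the Ingress
      targetPort: 8080  # hypothetical container port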

Unfortunately my service doesn't return 200 on "/", so I added a readinessProbe to the pod, as suggested by the documentation.

What you expected to happen:

I expected the health check to be automatically updated.

Instead, I had to delete the Ingress and re-create it for the health check to update.

How to reproduce it (as minimally and precisely as possible):

  1. Create a deployment with no readiness probe.
  2. Create an ingress pointing to the pods created by that deployment.
  3. Add a readiness probe to the deployment (a minimal sketch follows).
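
A minimal sketch of step 3, assuming a Deployment named admin-proxy and a hypothetical /healthz path; the expectation is that the GCE controller updates the backend health check to use this path:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: admin-proxy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: admin-proxy
    spec:
      containers:
        - name: admin-proxy
          image: example/admin-proxy:1.0   # hypothetical image
          ports:
            - containerPort: 8080
          readinessProbe:      # added after the Ingress already exists (step 3)
            httpGet:
              path: /healthz   # hypothetical path; the GCE health check stays at "/" until the Ingress is recreated
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10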

Anything else we need to know:

Copied from original issue: kubernetes/ingress-nginx#582

Most upvoted comments

I’ve been bitten by this issue more than any other issue regarding k8s and GKE. I think it happens due to an incorrect abstraction level. Looking at this from 30,000 ft, it seems wrong that the ingress controller looks through services into pods to see their probe configuration. What if there are different pods with different probe configurations, or even non-pod backends? A service-level health check should be defined at the Service level. Ingress-level configuration would work too but would be suboptimal (services can belong to multiple ingresses and it would be bad to duplicate).

Furthermore, the ingress is actually checking whether the service is reachable via a node, not that a pod is alive - it might make sense to have totally separate health checks for these. Also, this would be useful service metadata for any service clients, not just ingress.

2 cents as a user - I can appreciate the desire to not break things on upgrade, but the ingress not updating the health check is a real nuisance.

Would annotations be a solution to this? We use them already for specifying a static IP for the load balancer (for example), so we could extend that to other resources. For upgrade compatibility we could assign whatever backend/load balancer/health check/etc. is currently in use to said annotation. Then, in future, if the annotation isn’t set the ingress knows it has carte blanche to do whatever, but if it is set it knows it’s been human-specified. This would also be future-proof if we decide to allow specifying an existing load balancer on creation, to balance between clusters.
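
For reference, the existing annotation pattern the comment refers to looks roughly like this (the static-IP name admin-proxy-ip is an assumed example; a health-check annotation in the same spirit would be a new addition):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: admin-proxy
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    # existing annotation: attach the load balancer to a pre-reserved global static IP
    kubernetes.io/ingress.global-static-ip-name: admin-proxy-ip
spec:
  backend:
    serviceName: admin-proxy
    servicePort: 80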

Adding to the docs that we need to deploy the pods first and then roll out the Ingress would help, along with a warning. I wasted quite a bit of time on this one.

From @kilianc on May 14, 2017 21:50

To answer my own question, and for everybody landing here with the same problem: rotate the name of the service in your rule or backend and the controller will pick up the path correctly. I assume this could be done with zero downtime by creating a new rule and then deleting the old one.
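
A sketch of that workaround, assuming the Service is duplicated under a new, hypothetical name (admin-proxy-v2) so the controller builds a fresh GCE backend and health check from the current readinessProbe:

# 1. Duplicate the existing Service under a new name.
apiVersion: v1
kind: Service
metadata:
  name: admin-proxy-v2      # hypothetical rotated name
spec:
  type: NodePort
  selector:
    name: admin-proxy       # same pods as before
  ports:
    - port: 80
      targetPort: 8080
---
# 2. Point the Ingress backend at the new Service. The controller creates a new
#    backend and health check, this time using the readinessProbe path. Once
#    traffic flows through the new backend, the old Service can be deleted.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: admin-proxy
spec:
  backend:
    serviceName: admin-proxy-v2
    servicePort: 80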