ingress-nginx: service default/... does not have any active endpoints

Hi,

I’ve upgraded gcr.io/google_containers/nginx-ingress-controller from 0.8.3 to 0.9.0-beta.1, but now no service is reachable through the ingress controller; every request results in a 503 response.

Log looks like this:

W0208 15:59:58.498949       6 controller.go:806] service default/webapp-master does not have any active endpoints
W0208 15:59:58.499025       6 controller.go:560] service default/ingress-nginx-default-backend does not have any active endpoints
(repeating every 10 seconds)

The generated nginx.conf contains only upstream entries with 127.0.0.1:8181 as the backend endpoint:

    upstream default-webapp-master-80 {
        least_conn;
        server 127.0.0.1:8181 max_fails=0 fail_timeout=0;
    }
    upstream upstream-default-backend {
        least_conn;
        server 127.0.0.1:8181 max_fails=0 fail_timeout=0;
    }
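
As far as I understand, 127.0.0.1:8181 is the controller's built-in default backend, and the controller falls back to it (answering with 503) whenever a service has no ready endpoints. For comparison, a service whose selector matches ready pods should have a populated Endpoints object, roughly like this (the pod IP is a made-up placeholder):

apiVersion: v1
kind: Endpoints
metadata:
  name: webapp-master
subsets:
- addresses:
  - ip: 10.244.1.23   # hypothetical pod IP
  ports:
  - port: 80          # resolved from the containerPort the pod names "http"
    protocol: TCP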

Any idea what is going wrong?

I only updated the ingress controller; the ingress resources were not changed. One example ingress resource looks like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-master
  annotations:
    proxy-body-size: 100m
spec:
  rules:
  - host: hostname.domain.example
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp-master
          servicePort: 80
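
Side note, probably unrelated to the 503s: as far as I know, the 0.9.0-beta controllers read their nginx annotations with the ingress.kubernetes.io/ prefix, so if the bare proxy-body-size key is not being picked up, the prefixed form would look like this (please double-check the prefix against the controller version in use):

metadata:
  name: webapp-master
  annotations:
    ingress.kubernetes.io/proxy-body-size: 100m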

The matching service looks like this:

kind: Service
apiVersion: v1
metadata:
  name: webapp-master
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: webapp-master
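
For comparison, the same service with a numeric targetPort (assuming the webapp container actually listens on port 80) would look like the sketch below; further down in this thread, switching from a named to a numeric targetPort is reported to work around the problem:

kind: Service
apiVersion: v1
metadata:
  name: webapp-master
spec:
  ports:
  - port: 80
    targetPort: 80   # numeric port instead of the named port "http"
  selector:
    app: webapp-master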

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 34 (13 by maintainers)

Most upvoted comments

I had the same error and changed my path from /* to / and it worked (I see you have / above, though).

This is not fixed in 0.9.0-beta.4

2017/04/27 09:19:30 [warn] 12477#12477: *39354 using uninitialized "proxy_upstream_name" variable while logging request

@aledbf I have the same problem.

It works with gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.5, but with gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.7 I get the error

W0527 15:27:27.667881       1 controller.go:842] service external-logging/kibana-svc does not have any active endpoints

Here are the Deployment, Ingress, and Service:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana-deployment
  namespace: external-logging
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: kibana
    spec:
      containers:
      - name: kibana
        image: example.net/kibana:5.2.2
        imagePullPolicy: IfNotPresent
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        env:
        - name: "ELASTICSEARCH_URL"
          value: "http://elasticsearch-external-logging:9200"
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: external-logging
spec:
  rules:
  - host: kibana.example.net
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana-svc
          servicePort: 5601
---
apiVersion: v1
kind: Service
metadata:
  name: kibana-svc
  namespace: external-logging
  labels:
    component: kibana-svc
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: 5601
  selector:
    component: kibana

I’ve encountered the same issue with the fluentd-elasticsearch service, and it turned out to be RBAC-related.

I saw the following in my logs:

W0601 18:40:36.085507 1 controller.go:1106] error mapping service ports: error syncing service kube-system/kibana-logging: User "system:serviceaccount:nginx-ingress:nginx-ingress-serviceaccount" cannot update services in the namespace "kube-system". (put services kibana-logging)

An easy fix was to change the targetPort parameter from ui to 5601 in the kibana-logging service definition. The proper fix was updating the permissions.
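
For reference, a minimal sketch of the permission the log above says is missing, assuming the controller runs as the nginx-ingress-serviceaccount service account; the role name is a placeholder, and the full rule set should follow the RBAC example shipped with the controller version in use:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole   # placeholder name
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch", "update"]   # "update" is the verb the error complains about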