ingress-nginx: TCP Proxy Not Listening (tcp-services)

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

NGINX Ingress controller version:

quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

Baremetal: kubeadm install on RedHat 7.6

What happened:

Created a TCP proxy and the port is not being listened on.

What you expected to happen:

TCP port to be exposed.

How to reproduce it (as minimally and precisely as possible):

  1. Controller installed via the mandatory YAML file in the docs.

  2. Nginx Service created like so

---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
    - name: logstash-port-9615
      port: 9615
      targetPort: 9615
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
  3. Have an application Service running:
logstash-service   ClusterIP   10.106.112.104   <none>        9615/TCP   48m
  4. tcp-services configured like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  9615: "logstash/logstash-service:9615"

namespace is correct.
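For reference, the ConfigMap name above has to match the flag the controller was started with. The mandatory manifest for 0.24.x passes it explicitly (args abridged from the upstream deployment; verify against your installed copy, since a mismatched name or namespace means the ConfigMap is silently ignored):

```yaml
# Abridged args from the 0.24.x mandatory deployment manifest
args:
  - /nginx-ingress-controller
  - --configmap=$(POD_NAMESPACE)/nginx-configuration
  - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
  - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
  - --publish-service=$(POD_NAMESPACE)/ingress-nginx
```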

Anything else we need to know:

Logs:

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.24.1
  Build:      git-ce418168f
  Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------

W0619 21:58:39.281513       6 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: nginx/1.15.10
W0619 21:58:39.284806       6 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0619 21:58:39.284992       6 main.go:205] Creating API client for https://10.96.0.1:443
I0619 21:58:39.292838       6 main.go:249] Running in Kubernetes cluster version v1.13 (v1.13.0) - git (clean) commit ddf47ac13c1a9483ea035a79cd7c10005ff21a6d - platform linux/amd64
I0619 21:58:39.495093       6 main.go:124] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
I0619 21:58:39.512678       6 nginx.go:265] Starting NGINX Ingress controller
I0619 21:58:39.517114       6 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"11ebd92e-92d7-11e9-a2da-0050569d3226", APIVersion:"v1", ResourceVersion:"2269814", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/nginx-configuration
I0619 21:58:39.520390       6 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"1224cc7e-92d7-11e9-a2da-0050569d3226", APIVersion:"v1", ResourceVersion:"2269816", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0619 21:58:39.520420       6 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"1207f437-92d7-11e9-a2da-0050569d3226", APIVersion:"v1", ResourceVersion:"2273791", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0619 21:58:40.713729       6 nginx.go:311] Starting NGINX process
I0619 21:58:40.713828       6 leaderelection.go:217] attempting to acquire leader lease  ingress-nginx/ingress-controller-leader-nginx...
I0619 21:58:40.714474       6 controller.go:170] Configuration changes detected, backend reload required.
I0619 21:58:40.716560       6 status.go:86] new leader elected: nginx-ingress-controller-689498bc7c-5sb9h
I0619 21:58:40.807289       6 controller.go:188] Backend successfully reloaded.
I0619 21:58:40.807330       6 controller.go:202] Initial sync, sleeping for 1 second.
E0619 21:58:41.605302       6 checker.go:57] healthcheck error: 500
[19/Jun/2019:21:58:41 +0000] TCP 200 0 0 0.000
[19/Jun/2019:21:58:57 +0000] TCP 200 0 0 0.000
I0619 21:59:30.943330       6 leaderelection.go:227] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I0619 21:59:30.943592       6 status.go:86] new leader elected: nginx-ingress-controller-689498bc7c-5kb28

I’ve tried various combinations of PROXY:PROXY, ::PROXY, and :PROXY as well, but nada.

It’s close to https://github.com/kubernetes/ingress-nginx/issues/3984 however in that one the proxy seems to actually happen. I’m not even getting that far. netstat -an |grep 9615 is empty.
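To narrow it down, it may help to check inside the controller pod rather than on the host, which separates "the stream server was never rendered" from "the NodePort was never wired up". A sketch, where the pod name is a placeholder:

```shell
# 1) Did the controller render anything for 9615 into nginx.conf at all?
kubectl -n ingress-nginx exec <controller-pod> -- \
  grep -n "9615" /etc/nginx/nginx.conf

# 2) Is nginx inside the pod actually listening on it?
kubectl -n ingress-nginx exec <controller-pod> -- \
  sh -c 'netstat -tln | grep 9615'
```

If (1) is empty, the ConfigMap is not being consumed; if (1) matches but (2) is empty, the render happened without a reload.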

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 6
  • Comments: 21 (1 by maintainers)

Most upvoted comments

Similar issue here and updating some ingress rule fixed it. Looks like editing the tcp-services configmap by adding/removing the ‘PROXY’ field(s) doesn’t trigger a re-generation of the nginx.conf file.

Setup:

  1. AWS/EKS with proxy-protocol enabled on ELB (classic load balancer and all listeners are TCP)
  2. ingress nginx with the ‘use-proxy-protocol: “true”’ in the ‘nginx-configuration’ configmap

Steps to reproduce:

  1. edit the ‘tcp-services’ configmap to add a TCP service entry 8000: namespace/service:8000.
  2. edit the nginx-controller service to add a port (port: 8000 --> targetPort: 8000) for the TCP service in step 1.
  3. check /etc/nginx/nginx.conf in the nginx controller pod and confirm it contains a ‘server’ block with the correct listen 8000; directive for the tcp/8000 service.
  4. edit the ‘tcp-services’ configmap again to add the proxy-protocol decode directive, so the k/v for the tcp/8000 service becomes 8000: namespace/service:8000:PROXY.
  5. check /etc/nginx/nginx.conf in the nginx controller pod again: there isn’t any change compared with step 3; it is still listen 8000;.
  6. edit some ingress rule (make some change like updating the host).
  7. check /etc/nginx/nginx.conf in the nginx controller pod once more: the listen directive for the tcp/8000 service has become listen 8000 proxy_protocol;, which is correct.
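The net effect on the rendered config, for illustration (directives as produced by the 0.24.x stream template):

```nginx
listen 8000;                 # steps 3 and 5: the PROXY change is not picked up
listen 8000 proxy_protocol;  # step 7: correct, but only after an unrelated ingress edit
```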

I ran into the same issue, and the problem was only that nginx doesn’t reload automatically when the tcp config changes. Deleting the pod fixed the problem.
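As a sketch of that workaround (label selector taken from the mandatory manifest; adjust for your install): deleting the controller pod lets its Deployment recreate it, and the fresh pod re-renders nginx.conf from scratch.

```shell
# Force a fresh controller pod; the Deployment recreates it automatically
kubectl -n ingress-nginx delete pod -l app.kubernetes.io/name=ingress-nginx
```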

I followed the above steps but still no luck. The Service has an exposed port 5671 (RabbitMQ) and I applied a tcp-services YAML on top of my existing ingress-nginx namespace. Then I went into the ingress-nginx container to check nginx.conf, and I see something strange in the “stream” section - as if the upstream server did not get configured and was left as a placeholder:

        upstream upstream_balancer {
                server 0.0.0.1:1234; # placeholder
                balancer_by_lua_block {
                        tcp_udp_balancer.balance()
                }
        }
...
        # TCP services
        server {
                preread_by_lua_block {
                        ngx.var.proxy_upstream_name="tcp-dev-rabbitmq-rabbitmq-ha-5671";
                }
                listen                  5671;
                proxy_timeout           600s;
                proxy_pass              upstream_balancer;
        }
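One note on the snippet above: the server 0.0.0.1:1234; # placeholder line is expected with the dynamic (Lua) backend, since the real endpoints are pushed into balancer_by_lua_block at runtime rather than written into nginx.conf. So the rendered stream block itself looks correct; the next thing to check would be whether nginx is actually listening on 5671 inside the pod (pod name is a placeholder):

```shell
kubectl -n ingress-nginx exec <controller-pod> -- \
  sh -c 'netstat -tln | grep 5671'
```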

I have the same behaviour here 😕

I have the same problem: the tcp-services ConfigMap doesn’t seem to be used. Moreover, in the YAML for the DaemonSet there is an arg for the nginx config (-nginx-configmaps=$(POD_NAMESPACE)/nginx-config) but nothing about the tcp-services ConfigMap. Some old tutorials like here or here have the arg --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services, but it doesn’t seem to exist anymore. Can someone explain the reason?

Something like this works for Helm with values.yaml:

controller:
  extraArgs:
    tcp-services-configmap: $(POD_NAMESPACE)/my-tcp-services
    udp-services-configmap: $(POD_NAMESPACE)/my-udp-services
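Applied with something like the following (release and chart names are assumptions; at the time this issue was open the chart lived at stable/nginx-ingress, while newer installs use ingress-nginx/ingress-nginx):

```shell
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx -f values.yaml
```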

I have the same behavior. I am using an AWS load balancer with multiple services deployed in the cluster; the HTTP/S routes are working (80/443) but TCP is not.

I have a Mosquitto broker running in the cluster which uses TCP port 8883 (configured in the nginx tcp-services ConfigMap and Service). But if I publish, I get a confusing message which says “Error: Success”.

Bug?