ingress-nginx: TCP Proxy Not Listening (tcp-services)
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
NGINX Ingress controller version:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Baremetal: kubeadm install on RedHat 7.6
What happened:
Created a TCP proxy and the port is not being listened on.
What you expected to happen:
TCP port to be exposed.
How to reproduce it (as minimally and precisely as possible):
- Controller installed via the mandatory YAML file in the docs.
- Nginx Service created like so:
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
    - name: logstash-port-9615
      port: 9615
      targetPort: 9615
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
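To confirm the Service admits the port and to see which NodePort was allocated for 9615 (service name and namespace as in the manifest above):

kubectl -n ingress-nginx get svc ingress-nginx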
- Have an Application service running:
logstash-service ClusterIP 10.106.112.104 <none> 9615/TCP 48m
- tcp-services configured like so:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  9615: "logstash/logstash-service:9615"

The namespace is correct.
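Note that this ConfigMap does nothing unless the controller was started with the flag that references it. A quick sanity check (deployment name assumed from the mandatory manifest; adjust to your install):

kubectl -n ingress-nginx get deploy nginx-ingress-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}' | grep tcp-services-configmap

If --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services is absent from the args, the controller never reads the ConfigMap at all.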
Anything else we need to know:
Logs:
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.24.1
Build: git-ce418168f
Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------
W0619 21:58:39.281513 6 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: nginx/1.15.10
W0619 21:58:39.284806 6 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0619 21:58:39.284992 6 main.go:205] Creating API client for https://10.96.0.1:443
I0619 21:58:39.292838 6 main.go:249] Running in Kubernetes cluster version v1.13 (v1.13.0) - git (clean) commit ddf47ac13c1a9483ea035a79cd7c10005ff21a6d - platform linux/amd64
I0619 21:58:39.495093 6 main.go:124] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
I0619 21:58:39.512678 6 nginx.go:265] Starting NGINX Ingress controller
I0619 21:58:39.517114 6 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"11ebd92e-92d7-11e9-a2da-0050569d3226", APIVersion:"v1", ResourceVersion:"2269814", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/nginx-configuration
I0619 21:58:39.520390 6 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"1224cc7e-92d7-11e9-a2da-0050569d3226", APIVersion:"v1", ResourceVersion:"2269816", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0619 21:58:39.520420 6 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"1207f437-92d7-11e9-a2da-0050569d3226", APIVersion:"v1", ResourceVersion:"2273791", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0619 21:58:40.713729 6 nginx.go:311] Starting NGINX process
I0619 21:58:40.713828 6 leaderelection.go:217] attempting to acquire leader lease ingress-nginx/ingress-controller-leader-nginx...
I0619 21:58:40.714474 6 controller.go:170] Configuration changes detected, backend reload required.
I0619 21:58:40.716560 6 status.go:86] new leader elected: nginx-ingress-controller-689498bc7c-5sb9h
I0619 21:58:40.807289 6 controller.go:188] Backend successfully reloaded.
I0619 21:58:40.807330 6 controller.go:202] Initial sync, sleeping for 1 second.
E0619 21:58:41.605302 6 checker.go:57] healthcheck error: 500
[19/Jun/2019:21:58:41 +0000] TCP 200 0 0 0.000
[19/Jun/2019:21:58:57 +0000] TCP 200 0 0 0.000
I0619 21:59:30.943330 6 leaderelection.go:227] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I0619 21:59:30.943592 6 status.go:86] new leader elected: nginx-ingress-controller-689498bc7c-5kb28
I've tried various combinations of the PROXY suffix as well (:PROXY:PROXY, ::PROXY, :PROXY), but nada.
It's close to https://github.com/kubernetes/ingress-nginx/issues/3984, however in that one the proxy seems to actually happen. I'm not even getting that far: netstat -an | grep 9615 is empty.
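A quick way to tell whether the controller ever rendered a listener for the port at all (pod name is a placeholder) is to grep the generated config inside the controller pod:

kubectl -n ingress-nginx exec <controller-pod> -- grep "listen 9615" /etc/nginx/nginx.conf

If that returns nothing, the ConfigMap entry never made it into nginx.conf, which matches the empty netstat output.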
About this issue
- State: closed
- Created 5 years ago
- Reactions: 6
- Comments: 21 (1 by maintainers)
Similar issue here, and updating some ingress rule fixed it. Looks like editing the tcp-services configmap by adding/removing the 'PROXY' field(s) doesn't trigger a re-generation of the nginx.conf file.
Setup:
Steps to reproduce:
- Configure tcp-services with 8000: namespace/service:8000. The generated nginx.conf contains the listen 8000; directive for the tcp/8000 service.
- Change the entry to 8000: namespace/service:8000:PROXY. nginx.conf still contains listen 8000; instead of listen 8000 proxy_protocol;.
- Delete the controller pod. The regenerated nginx.conf contains listen 8000 proxy_protocol;, which is correct.

I ran into the same issue, and the problem was only that nginx doesn't reload automatically when changing the tcp config. Deleting the pod fixed the problem.
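In concrete terms, a minimal workaround sketch (assuming the controller pods carry the app.kubernetes.io/name=ingress-nginx label from the mandatory manifest):

# Recreate the controller pod so it renders nginx.conf from the current tcp-services ConfigMap
kubectl -n ingress-nginx delete pod -l app.kubernetes.io/name=ingress-nginx

The Deployment replaces the pod, and the fresh pod picks up the PROXY change on startup.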
I have the same behaviour here 😕
I have the same probleme the configmap
tcp-servicesdon’t seems to be used. Morever in the yml for the daemonset there are an arg for the nginx config (-nginx-configmaps=$(POD_NAMESPACE)/nginx-config) but nothing about the config maptcp-services. Some old tutorial like here or here have the arg--tcp-services-configmap=$(POD_NAMESPACE)/tcp-servicesbut now it don’t seems to exist anymore. Someone can explain the reason ?something like this works for helm with values.yaml
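A minimal sketch, assuming the stable/nginx-ingress chart (its top-level tcp: map wires the entries into the tcp-services ConfigMap, sets the --tcp-services-configmap flag, and exposes the matching Service ports):

# values.yaml
tcp:
  9615: "logstash/logstash-service:9615"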
I have the same behavior. I am using an AWS load balancer with multiple services deployed in the cluster; the HTTP/S routes are working (80/443) but TCP is not working.
I have a mosquitto broker running in the cluster which is using TCP port 8883 (configured in the nginx tcp-services ConfigMap and the Service). But if I publish, I get a confusing message which says "Error: Success".
Bug?