ingress-nginx: helm upgrade of controller from v0.51.0 to v1.4.0 causes "10.0.0.2:0: invalid port while connecting to upstream" error
What happened:
I’m trying to upgrade from helm chart 3.41.0 (ingress-controller 0.51.0) to helm chart 4.3.0 (ingress-controller 1.4.0), on Kubernetes 1.21.14 in GCP.
After the upgrade I get an error about ports from lua/balancer.lua, line 348:
[error] 31#31: *2450 [lua] balancer.lua:348: balance(): error while setting current upstream peer 10.0.0.2:0: invalid port while connecting to upstream, client: 10.x.x.x, server: OUR_URL, request: "GET / HTTP/1.1", host: "OUR_URL"
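To see what the Lua balancer actually thinks its upstreams are, the controller image ships a /dbg helper that dumps the dynamic backend configuration; entries whose port is "0" are the ones balancer.lua then rejects. A diagnostic sketch, using one of the pod names shown further down:
$ kubectl -n ingress-nginx exec ingress-nginx-controller-5bf7cf4684-v5hg6 \
    -- /dbg backends all
# In the resulting JSON, look for endpoint entries like:
#   "endpoints": [
#     { "address": "10.0.0.2", "port": "0" }
#   ]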
What you expected to happen:
I expected it to just work 😃
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version):
$ pod1
NGINX Ingress controller
Release: v1.4.0
Build: 50be2bf95fd1ef480420e2aa1d6c5c7c138c95ea
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.10
$ pod2
NGINX Ingress controller
Release: v1.4.0
Build: 50be2bf95fd1ef480420e2aa1d6c5c7c138c95ea
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.10
Kubernetes version (use kubectl version):
1.21.14
Environment:
- Cloud provider or hardware configuration: GCP
- OS (e.g. from /etc/os-release):
/etc/nginx $ cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.16.2
PRETTY_NAME="Alpine Linux v3.16"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"
- Kernel (e.g. uname -a): Linux ingress-nginx-controller-5bf7cf4684-v5hg6 5.4.202+ #1 SMP Sat Jul 16 10:06:38 PDT 2022 x86_64 Linux
- Install tools (how/where was the cluster created, e.g. kubeadm/kops/minikube/kind): we use Terraform, Helm, and kubectl via GitHub Actions
- Basic cluster related info:
  - kubectl version: v1.21.14-gke.2700
  - kubectl get nodes -o wide: 3 nodes, Container-Optimized OS from Google, kernel version 5.4.202+
- How was the ingress-nginx-controller installed:
- If helm was used then please show output of helm ls -A | grep -i ingress:
$ helm ls -A | grep -i ingress
ingress-nginx ingress-nginx 311 2022-10-12 13:31:17.525389834 +0000 UTC deployed ingress-nginx-4.3.0 1.4.0
- If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>:
This is for our staging instance
$ helm -n ingress-nginx get values ingress-nginx
USER-SUPPLIED VALUES:
controller:
  config:
    add-headers: ingress-nginx/custom-response-headers
    hide-headers: x-powered-by
    hsts: "false"
    hsts-include-subdomains: "false"
    http-redirect-code: "301"
  replicaCount: 2
  service:
    loadBalancerIP: x.x.x.x
defaultBackend:
  enabled: false
rbac:
  create: true
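(For completeness, the chart defaults that get merged with these user-supplied values can be inspected as well:)
$ helm -n ingress-nginx get values ingress-nginx --all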
- If helm was not used, then copy/paste the complete precise command used to install the controller, along with the flags and options used
- If you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all the instances
This is how we install it in the GitHub Action (HELM_VERSION=3.9.0):
install_nginx_ingress() {
  echo "Adding Nginx Ingress@${NGINX_INGRESS_VERSION}"
  touch "${DIR}/nginx-ingress/${CLUSTER}.yaml"
  kubectl create namespace ingress-nginx || true
  helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/psp/psp.yaml
  kubectl apply -n ingress-nginx -f "${DIR}/nginx-ingress/configMaps/response-headers.yaml"
  helm repo update
  helm upgrade \
    --reset-values \
    --install \
    --wait \
    --atomic \
    --cleanup-on-fail \
    --namespace ingress-nginx \
    --set controller.service.loadBalancerIP="${K8S_LOAD_BALANCER_IP}" \
    --values "${DIR}/nginx-ingress/common.yaml" \
    --values "${DIR}/nginx-ingress/${CLUSTER}.yaml" \
    --version="${NGINX_INGRESS_VERSION}" \
    ingress-nginx \
    ingress-nginx/ingress-nginx

  # ConfigMap changes aren't picked up via Helm
  kubectl rollout restart -n ingress-nginx deployment ingress-nginx-controller
}
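One thing worth auditing before this kind of chart 3.x → 4.x upgrade: the 4.x charts move ingress selection to IngressClass resources, and our ingress (shown further down) still uses the deprecated kubernetes.io/ingress.class annotation. A quick check, assuming jq is available alongside kubectl:
# List every Ingress that still relies on the deprecated annotation
# instead of spec.ingressClassName:
$ kubectl get ingress -A -o json \
    | jq -r '.items[]
        | select(.metadata.annotations["kubernetes.io/ingress.class"] != null)
        | "\(.metadata.namespace)/\(.metadata.name)"'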
Response headers file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-response-headers
data: {}
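(The data block is empty here, possibly sanitised. For illustration only: the ConfigMap referenced by add-headers maps response-header names to values, so a hypothetical populated version might look like this — header names/values are examples, not from our config:)
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-response-headers
data:
  X-Frame-Options: "DENY"
  X-Content-Type-Options: "nosniff"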
- Current State of the controller:
kubectl describe ingressclasses
$ kubectl describe ingressclasses -A
Name: nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.4.0
helm.sh/chart=ingress-nginx-4.3.0
Annotations: meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: ingress-nginx
Controller: k8s.io/ingress-nginx
Events: <none>
kubectl -n <ingresscontrollernamespace> get all -A -o wide
kubectl -n ingress-nginx get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/ingress-nginx-controller-5bf7cf4684-cr2fr 1/1 Running 0 11m 10.0.2.13 gke-ps-ew2-primary-s-ps-ew2-node1-sta-c5dd251a-vdrk <none> <none>
pod/ingress-nginx-controller-5bf7cf4684-v5hg6 1/1 Running 0 11m 10.0.3.12 gke-ps-ew2-primary-s-ps-ew2-node1-sta-4f1e8dd9-h4rj <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ingress-nginx-controller LoadBalancer 10.1.46.16 x.x.x.x 80:31065/TCP,443:31770/TCP 461d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission ClusterIP 10.1.73.105 <none> 443/TCP 461d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/ingress-nginx-controller 2/2 1 2 461d controller registry.k8s.io/ingress-nginx/controller:v1.4.0@sha256:34ee929b111ffc7aa426ffd409af44da48e5a0eea1eb2207994d9e0c0882d143 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/ingress-nginx-controller-5bf7cf4684 1 1 0 6s controller registry.k8s.io/ingress-nginx/controller:v1.4.0@sha256:34ee929b111ffc7aa426ffd409af44da48e5a0eea1eb2207994d9e0c0882d143 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5bf7cf4684
replicaset.apps/ingress-nginx-controller-6fc96df5cd 2 2 2 55s controller registry.k8s.io/ingress-nginx/controller:v1.4.0@sha256:34ee929b111ffc7aa426ffd409af44da48e5a0eea1eb2207994d9e0c0882d143 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=6fc96df5cd
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
$ kubectl -n ingress-nginx describe svc ingress-nginx
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.4.0
helm.sh/chart=ingress-nginx-4.3.0
Annotations: cloud.google.com/neg: {"ingress":true}
meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: ingress-nginx
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.1.46.16
IPs: 10.1.46.16
IP: x.x.x.x
LoadBalancer Ingress: x.x.x.x
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31065/TCP
Endpoints: 10.0.2.12:80,10.0.3.10:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31770/TCP
Endpoints: 10.0.2.12:443,10.0.3.10:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: ingress-nginx-controller-admission
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.4.0
helm.sh/chart=ingress-nginx-4.3.0
Annotations: cloud.google.com/neg: {"ingress":true}
meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: ingress-nginx
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.1.73.105
IPs: 10.1.73.105
Port: https-webhook 443/TCP
TargetPort: webhook/TCP
Endpoints: 10.0.2.12:8443,10.0.3.10:8443
Session Affinity: None
Events: <none>
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
$ kubectl -n ingress-nginx describe pod ingress-nginx-controller
Name: ingress-nginx-controller-5bf7cf4684-cr2fr
Namespace: ingress-nginx
Priority: 0
Node: gke-xxxxxxxxxxxxx-node1-sta-c5dd251a-vdrk/10.154.15.202
Start Time: Wed, 12 Oct 2022 14:32:40 +0100
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
pod-template-hash=5bf7cf4684
Annotations: kubectl.kubernetes.io/restartedAt: 2022-10-12T13:32:20Z
Status: Running
IP: 10.0.2.13
IPs:
IP: 10.0.2.13
Controlled By: ReplicaSet/ingress-nginx-controller-5bf7cf4684
Containers:
controller:
Container ID: containerd://74912ebb89ebc5f3dde3105c6c12172f4e788350c5018dc71ec398ae39a36f6b
Image: registry.k8s.io/ingress-nginx/controller:v1.4.0@sha256:34ee929b111ffc7aa426ffd409af44da48e5a0eea1eb2207994d9e0c0882d143
Image ID: registry.k8s.io/ingress-nginx/controller@sha256:34ee929b111ffc7aa426ffd409af44da48e5a0eea1eb2207994d9e0c0882d143
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-controller-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
State: Running
Started: Wed, 12 Oct 2022 14:32:41 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-5bf7cf4684-cr2fr (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pr9ts (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-pr9ts:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned ingress-nginx/ingress-nginx-controller-5bf7cf4684-cr2fr to gke-ps-ew2-primary-s-ps-ew2-node1-sta-c5dd251a-vdrk
Normal Pulled 13m kubelet Container image "registry.k8s.io/ingress-nginx/controller:v1.4.0@sha256:34ee929b111ffc7aa426ffd409af44da48e5a0eea1eb2207994d9e0c0882d143" already present on machine
Normal Created 13m kubelet Created container controller
Normal Started 13m kubelet Started container controller
Normal RELOAD 13m nginx-ingress-controller NGINX reload triggered due to a change in configuration
Name: ingress-nginx-controller-5bf7cf4684-v5hg6
Namespace: ingress-nginx
Priority: 0
Node: gke-xxxxxxxxxxx-node1-sta-4f1e8dd9-h4rj/10.154.15.208
Start Time: Wed, 12 Oct 2022 14:32:20 +0100
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
pod-template-hash=5bf7cf4684
Annotations: kubectl.kubernetes.io/restartedAt: 2022-10-12T13:32:20Z
Status: Running
IP: 10.0.3.12
IPs:
IP: 10.0.3.12
Controlled By: ReplicaSet/ingress-nginx-controller-5bf7cf4684
Containers:
controller:
Container ID: containerd://1865e56a30576a0f2aa4259eac458ea4c42a2ba2fc5a775990bb7b04f1d96e63
Image: registry.k8s.io/ingress-nginx/controller:v1.4.0@sha256:34ee929b111ffc7aa426ffd409af44da48e5a0eea1eb2207994d9e0c0882d143
Image ID: registry.k8s.io/ingress-nginx/controller@sha256:34ee929b111ffc7aa426ffd409af44da48e5a0eea1eb2207994d9e0c0882d143
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-controller-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
State: Running
Started: Wed, 12 Oct 2022 14:32:21 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-5bf7cf4684-v5hg6 (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wjwfc (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-wjwfc:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 14m default-scheduler Successfully assigned ingress-nginx/ingress-nginx-controller-5bf7cf4684-v5hg6 to gke-ps-ew2-primary-s-ps-ew2-node1-sta-4f1e8dd9-h4rj
Normal Pulled 14m kubelet Container image "registry.k8s.io/ingress-nginx/controller:v1.4.0@sha256:34ee929b111ffc7aa426ffd409af44da48e5a0eea1eb2207994d9e0c0882d143" already present on machine
Normal Created 14m kubelet Created container controller
Normal Started 14m kubelet Started container controller
Normal RELOAD 14m nginx-ingress-controller NGINX reload triggered due to a change in configuration
- Current state of ingress object, if applicable:
kubectl -n <appnnamespace> get all,ing -o wide
$ kubectl -n app get all,ing -o wide [OUTPUT TRIMMED]
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/app-admin-server-6675986868-2zbpm 1/1 Running 0 16m 10.0.3.14 gke-xxxxxxxxxxxxx-node1-sta-4f1e8dd9-h4rj <none> <none>
pod/app-admin-server-6675986868-44mgj 1/1 Running 0 16m 10.0.1.11 gke-xxxxxxxxxxxxx-node1-sta-c5dd251a-5w89 <none> <none>
pod/app-api-server-5dcb755ddd-2z7hp 1/1 Running 0 11h 10.0.0.4 gke-xxxxxxxxxxxxx-node1-sta-a6f0d61e-7p1m <none> <none>
pod/app-api-server-5dcb755ddd-mhlq5 1/1 Running 0 5h5m 10.0.2.4 gke-xxxxxxxxxxxxx-node1-sta-c5dd251a-vdrk <none> <none>
pod/app-cloud-sql-proxy-89d4d47dc-4gnbw 1/1 Running 0 11h 10.0.0.6 gke-xxxxxxxxxxxxx-node1-sta-a6f0d61e-7p1m <none> <none>
pod/app-a-worker-688c98759d-qqvnt 1/1 Running 0 5h5m 10.0.2.5 gke-xxxxxxxxxxxxx-node1-sta-c5dd251a-vdrk <none> <none>
pod/app-b-worker-6f8785fcbb-5vlb4 1/1 Running 0 5h17m 10.0.0.17 gke-xxxxxxxxxxxxx-node1-sta-a6f0d61e-7p1m <none> <none>
pod/app-c-worker-f78c455d7-l8lfm 1/1 Running 0 5h5m 10.0.1.8 gke-xxxxxxxxxxxxx-node1-sta-c5dd251a-5w89 <none> <none>
pod/app-d-worker-5c48cc764b-4w4m7 1/1 Running 0 11h 10.0.0.7 gke-xxxxxxxxxxxxx-node1-sta-a6f0d61e-7p1m <none> <none>
pod/app-migration-up-76hpg 0/2 Completed 0 16m 10.0.3.13 gke-xxxxxxxxxxxxx-node1-sta-4f1e8dd9-h4rj <none> <none>
pod/app-nginx-cache-buster-864db5b7df-l7ww5 1/1 Running 0 5h5m 10.0.2.3 gke-xxxxxxxxxxxxx-node1-sta-c5dd251a-vdrk <none> <none>
pod/app-nginx-cache-buster-864db5b7df-x2qvv 1/1 Running 0 11h 10.0.0.5 gke-xxxxxxxxxxxxx-node1-sta-a6f0d61e-7p1m <none> <none>
pod/app-e-worker-77996d66d4-zrk5p 1/1 Running 0 5h17m 10.0.0.13 gke-xxxxxxxxxxxxx-node1-sta-a6f0d61e-7p1m <none> <none>
pod/app-f-worker-567fcdfcfd-bp6cf 1/1 Running 0 5h5m 10.0.1.7 gke-xxxxxxxxxxxxx-node1-sta-c5dd251a-5w89 <none> <none>
pod/app-web-nginx-0 1/1 Running 0 5h17m 10.0.1.3 gke-xxxxxxxxxxxxx-node1-sta-c5dd251a-5w89 <none> <none>
pod/app-web-nginx-1 1/1 Running 0 5h5m 10.0.3.3 gke-xxxxxxxxxxxxx-node1-sta-4f1e8dd9-h4rj <none> <none>
pod/app-web-server-5c88776fd9-bpqjd 1/1 Running 0 5h17m 10.0.0.18 gke-xxxxxxxxxxxxx-node1-sta-a6f0d61e-7p1m <none> <none>
pod/app-web-server-5c88776fd9-jmwwp 1/1 Running 0 5h5m 10.0.1.4 gke-xxxxxxxxxxxxx-node1-sta-c5dd251a-5w89 <none> <none>
pod/app-www-d6866cb56-6lz8j 1/1 Running 0 5h5m 10.0.2.2 gke-xxxxxxxxxxxxx-node1-sta-c5dd251a-vdrk <none> <none>
pod/app-www-d6866cb56-ld72l 1/1 Running 0 5h5m 10.0.1.6 gke-xxxxxxxxxxxxx-node1-sta-c5dd251a-5w89 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/app ClusterIP 10.1.216.25 <none> 3004/TCP,3000/TCP,3001/TCP,3002/TCP 460d app.kubernetes.io/instance=app,app.kubernetes.io/name=app
service/nginx-cache-buster ClusterIP 10.1.71.166 <none> 80/TCP 460d app.kubernetes.io/instance=app,app.kubernetes.io/name=app
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/app nginx staging.cms.oursite,staging.api.oursite,staging.media.oursite x.x.x.x 80, 443 460d
kubectl -n <appnamespace> describe ing <ingressname>
$ kubectl -n app describe ing
Name: app
Namespace: app
Address: x.x.x.x
Default backend: default-http-backend:80 (10.0.0.14:8080)
TLS:
tls-secret-letsencrypt-staging.cms.oursite terminates staging.cms.oursite
tls-secret-letsencrypt-staging.api.oursite terminates staging.api.oursite
tls-secret-letsencrypt-staging.media.oursite terminates staging.media.oursite
Rules:
Host Path Backends
---- ---- --------
staging.cms.oursite
/ app:3004 (10.0.1.10:8088,10.0.3.8:8088)
staging.api.oursite
/ app:3000 (10.0.0.4:3010,10.0.2.4:3010)
staging.media.oursite
/ app:3001 (10.0.1.6:80,10.0.2.2:80)
Annotations: cert-manager.io/cluster-issuer: letsencrypt
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: true
meta.helm.sh/release-name: app
meta.helm.sh/release-namespace: app
nginx.ingress.kubernetes.io/proxy-body-size: 50m
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 10m nginx-ingress-controller Scheduled for sync
Normal Sync 10m nginx-ingress-controller Scheduled for sync
Normal Sync 57s nginx-ingress-controller Scheduled for sync
Normal Sync 35s nginx-ingress-controller Scheduled for sync
Normal Sync 14s nginx-ingress-controller Scheduled for sync
- If applicable, then your complete and exact curl/grpcurl command (redacted if required) and the response to the curl/grpcurl command with the -v flag
- Others:
  - copy/paste of the snippet (if applicable) of kubectl describe ... of any custom configmap(s) created and in use
  - Any other related information that may help
These are the YAML files from GCP for both the current (3.41) and upgrade-attempt (4.3) versions of the four "resources": the ingress-controller pod, the two services, and the ingress app. They have been sanitised of potentially private data.
How to reproduce this issue:
Anything else we need to know:
I can provide all the other config we have, but it’s pretty basic: here are our pods, their ports, and their domain names.
It works perfectly with the old version, but we need to move to the new APIs because we want to upgrade our k8s cluster past 1.21.
I tried asking in the Slack channel, but no-one had any information.
I couldn’t find anything pertinent in the documentation about upgrading from Helm chart 3 to 4.
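One upgrade-path step that is easy to miss (it did not turn out to be the cause here, as the maintainer replies below show, but it is part of the chart 3 → 4 migration): moving the ingress from the deprecated annotation to the v1 spec.ingressClassName field. A minimal sketch, using the names from this cluster:
# Set the networking.k8s.io/v1 field...
$ kubectl -n app patch ingress app --type=merge \
    -p '{"spec":{"ingressClassName":"nginx"}}'
# ...and remove the deprecated annotation (the trailing "-" removes it):
$ kubectl -n app annotate ingress app kubernetes.io/ingress.class-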
About this issue
- State: closed
- Created 2 years ago
- Comments: 30 (11 by maintainers)
@bmv126 yes, I’m going to fix it.
@angelsk please share your application service, endpointslices, deployment and ingress.
The patch should fix your problem as well. According to the backends output from the controller that you already provided, you have two valid endpoints with the older controller, while with v1.4 there is a bunch of endpoints with a port equal to 0. Endpoints with a zero port were a bug; they will disappear.
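For anyone verifying this from the reporter's side: the zero-port entries live in the controller's backends JSON (see the /dbg dump suggested near the top), while the EndpointSlices themselves can be compared against it with something like the following, using the service/namespace names from the outputs above:
# Ports advertised by the EndpointSlices backing the "app" service:
$ kubectl -n app get endpointslices -l kubernetes.io/service-name=app \
    -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.ports}{"\n"}{end}'
# ...versus the controller's dynamic view, where the bug shows up as port "0":
$ kubectl -n ingress-nginx exec ingress-nginx-controller-5bf7cf4684-v5hg6 \
    -- /dbg backends all | grep -B2 '"port": "0"'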