ingress-nginx: Canary Never Receiving Any Traffic
Is this a request for help? Yep, it’s a request for help.
What keywords did you search in NGINX Ingress controller issues before filing this one?
Searched for canary, nginx.ingress.kubernetes.io/canary, matching backend, and various other terms across Google/GitHub. I went through all results but could not find hints that resolve this issue. The only significant hint I found was this (from https://medium.com/@domi.stoehr/canary-deployments-on-kubernetes-without-service-mesh-425b7e4cc862):
It is important that the canary ingress has the exact same name as the production ingress to ensure that the controller puts the configuration in the right section of nginx configuration. That means that you will need a second namespace as it is not possible to create a second resource with the same name in a single namespace
However, this seems to apply only to the ingress controller provided by NGINX Inc., not to kubernetes/ingress-nginx.
Something that also seems related: https://github.com/kubernetes/ingress-nginx/issues/3580. However, there the issue seems to be tied to 100% of traffic alternating between primary and canary, as well as changes only taking effect when the primary ingress is updated.
Is this a BUG REPORT or FEATURE REQUEST? (choose one): It might turn out to be a bug report, depending on the answers to this request.
NGINX Ingress controller version:
0.26.1 deployed via helm chart version nginx-ingress-1.24.3
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T23:41:55Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.7-eks-e9b1d0", GitCommit:"e9b1d0551216e1e8ace5ee4ca50161df34325ec2", GitTreeState:"clean", BuildDate:"2019-09-21T08:33:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Cloud provider or hardware configuration: AWS EKS running CentOS-based worker nodes. NGINX runs on m5.large instances.
- OS (e.g. from /etc/os-release): CentOS Linux 7 (Core)
- Kernel (e.g. uname -a): 3.10.0-1062.1.1.el7.x86_64
What happened: For reference, I run three NGINX ingress classes (via the kubernetes.io/ingress.class annotation). I mention this because I do not know whether having multiple controllers can unexpectedly affect each other:
- nginx-monitoring
- nginx-external
- nginx-internal
My issue occurs on the nginx-internal class.
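To help rule out cross-class interference, this lists which class each ingress is annotated with (a sketch, assuming cluster access via kubectl; the escaped dots are how kubectl's jsonpath syntax handles keys containing dots):
# Show namespace, name, and ingress.class annotation for every ingress
kubectl get ingress --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.metadata.annotations.kubernetes\.io/ingress\.class}{"\n"}{end}'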
I have one ingress listening on "internal.example.com" going to "api-green".
I create a canary ingress listening on "internal.example.com" going to "api-blue".
No traffic is being routed to "api-blue".
What you expected to happen: Once the canary ingress is enabled and parsed by the NGINX controller, I expect traffic to be routed to the canary backend according to the configured weight.
How to reproduce it (as minimally and precisely as possible):
- Assumption is that there are working backends "api-green" and "api-blue" in namespace "ana-api" (a minimal sketch of the assumed Services follows the primary ingress manifest below)
- Currently active primary ingress serving traffic to "api-green":
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx-internal
nginx.ingress.kubernetes.io/proxy-body-size: 1M
creationTimestamp: "2019-10-07T22:21:48Z"
generation: 1
labels:
app: api-primary-internal-ingress
app.kubernetes.io/instance: api-primary-internal-ingress
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: api-primary-internal-ingress
app.kubernetes.io/version: ""
helm.sh/chart: api-ingress-1.0.682
name: api-primary-internal-ingress
namespace: ana-api
resourceVersion: "17318889"
selfLink: /apis/extensions/v1beta1/namespaces/ana-api/ingresses/api-primary-internal-ingress
uid: da97b3a6-e950-11e9-88e9-0668887889bc
spec:
rules:
- host: internal.example.com
http:
paths:
- backend:
serviceName: api-green
servicePort: 9080
path: /
status:
loadBalancer:
ingress:
- {}
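For completeness, here is a minimal sketch of the two Services assumed above; the names, selectors, and ports are taken from the backends dump at the end of this report, the rest is illustrative:
# Apply both assumed backend Services (sketch, not my actual manifests)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: api-green
  namespace: ana-api
spec:
  type: ClusterIP
  selector:
    app: api-green
  ports:
  - name: api
    port: 9080
    targetPort: 9080
  - name: health
    port: 9081
    targetPort: 9081
---
apiVersion: v1
kind: Service
metadata:
  name: api-blue
  namespace: ana-api
spec:
  type: ClusterIP
  selector:
    app: api-blue
  ports:
  - name: api
    port: 9080
    targetPort: 9080
  - name: health
    port: 9081
    targetPort: 9081
EOF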
I install the following ingress to act as the canary:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx-internal
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-weight: "10"
nginx.ingress.kubernetes.io/proxy-body-size: 1M
creationTimestamp: "2019-10-29T15:33:18Z"
generation: 1
labels:
app: api-blue
app.kubernetes.io/instance: api-blue
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: api-blue
app.kubernetes.io/version: 4.5.11.305
helm.sh/chart: api-1.0.716
name: api-blue-canary
namespace: ana-api
resourceVersion: "17318891"
selfLink: /apis/extensions/v1beta1/namespaces/ana-api/ingresses/api-blue-canary
uid: 6e700be6-fa61-11e9-88e9-0668887889bc
spec:
rules:
- host: internal.example.com
http:
paths:
- backend:
serviceName: api-blue
servicePort: 9080
path: /
status:
loadBalancer:
ingress:
- {}
I would expect roughly 10% of traffic to be routed to api-blue. This does not happen: no traffic is routed to the canary at all.
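A rough way to measure the split from a client (a sketch; it assumes the two deployments can be told apart in the response, e.g. via a hypothetical X-Backend header, so adjust the extraction to whatever actually distinguishes blue from green):
# Fire 200 requests and tally which backend answered.
# X-Backend is a hypothetical response header; substitute whatever
# differs between api-blue and api-green in your responses.
for i in $(seq 1 200); do
  curl -s -o /dev/null -D - http://internal.example.com/ | grep -i '^x-backend:'
done | sort | uniq -c
With canary-weight: "10", roughly 20 of the 200 requests should land on api-blue; here, none do.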
Anything else we need to know:
I have multiple other ingresses:
- api-blue-external (external-blue.example.com) -- removed at some point for debugging
- api-blue-internal (internal-blue.example.com) -- removed at some point for debugging
- api-green-external (external-green.example.com)
- api-green-internal (internal-green.example.com)
- api-primary-external-ingress (external.example.com)
- api-primary-ingress (public.example.com)
- api-primary-internal-ingress (internal.example.com)
The dumped nginx.conf does not contain any references to the canary (can be provided if requested).
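As far as I understand, this is expected to a degree on 0.26.x: endpoints and canary weights are applied dynamically through the Lua balancer rather than rendered into nginx.conf, so the dynamic configuration endpoint (dumped at the end of this report) is the more telling artifact. A quick way to pull just the traffic-shaping state from one controller pod (a sketch; replace <controller-pod>, and jq runs locally because kubectl exec streams stdout):
# Show each backend's canary-related fields from the Lua configuration endpoint
kubectl -n ana-ingress exec <controller-pod> -- curl -s localhost:10246/configuration/backends | jq '.[] | {name, noServer, trafficShapingPolicy}'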
Controller log filtered for “api-blue”:
I1028 23:41:49.269838 6 controller.go:875] Obtaining ports information for Service "ana-api/api-blue"
I1028 23:41:49.269845 6 endpoints.go:74] Getting Endpoints for Service "ana-api/api-blue" and port &ServicePort{Name:api,Protocol:TCP,Port:9080,TargetPort:{0 9080 },NodePort:0,}
I1028 23:41:49.269853 6 endpoints.go:117] Endpoints found for Service "ana-api/api-blue": [{10.160.145.29 9080 &ObjectReference{Kind:Pod,Namespace:ana-api,Name:api-blue-5dd64d6bb4-sfsqg,UID:30152bf8-f9a9-11e9-88e9-0668887889bc,APIVersion:,ResourceVersion:16405922,FieldPath:,}} {10.160.146.9 9080 &ObjectReference{Kind:Pod,Namespace:ana-api,Name:api-blue-5dd64d6bb4-vvx7w,UID:328a5457-f9a9-11e9-88e9-0668887889bc,APIVersion:,ResourceVersion:16415296,FieldPath:,}}]
I1028 23:41:49.269900 6 controller.go:1002] Ingress ana-api/api-blue-canary is marked as Canary, ignoring
I1028 23:41:49.269920 6 controller.go:1112] Ingress "ana-api/api-blue-internal" does not contains a TLS section.
I1028 23:41:49.269928 6 controller.go:1073] Ingress ana-api/api-blue-canary is marked as Canary, ignoring
I1028 23:41:49.269949 6 controller.go:536] Replacing location "/" for server "internal-blue.example.com" with upstream "upstream-default-backend" to use upstream "ana-api-api-blue-9080" (Ingress "ana-api/api-blue-internal")
I1028 23:41:49.269958 6 controller.go:531] Location "/" already configured for server "internal.example.com" with upstream "ana-api-api-green-9080" (Ingress "ana-api/api-blue-canary")
I1028 23:41:49.269964 6 controller.go:1308] matching backend ana-api-api-green-9080 found for alternative backend ana-api-api-blue-9080
I1028 23:41:53.356731 6 controller.go:770] Creating upstream "ana-api-api-blue-9080"
I1028 23:41:53.356746 6 controller.go:875] Obtaining ports information for Service "ana-api/api-blue"
I1028 23:41:53.356758 6 endpoints.go:74] Getting Endpoints for Service "ana-api/api-blue" and port &ServicePort{Name:api,Protocol:TCP,Port:9080,TargetPort:{0 9080 },NodePort:0,}
I1028 23:41:53.356769 6 endpoints.go:117] Endpoints found for Service "ana-api/api-blue": [{10.160.145.29 9080 &ObjectReference{Kind:Pod,Namespace:ana-api,Name:api-blue-5dd64d6bb4-sfsqg,UID:30152bf8-f9a9-11e9-88e9-0668887889bc,APIVersion:,ResourceVersion:16405922,FieldPath:,}} {10.160.146.9 9080 &ObjectReference{Kind:Pod,Namespace:ana-api,Name:api-blue-5dd64d6bb4-vvx7w,UID:328a5457-f9a9-11e9-88e9-0668887889bc,APIVersion:,ResourceVersion:16415296,FieldPath:,}}]
I1028 23:41:53.356822 6 controller.go:1002] Ingress ana-api/api-blue-canary is marked as Canary, ignoring
I1028 23:41:53.356856 6 controller.go:1112] Ingress "ana-api/api-blue-internal" does not contains a TLS section.
I1028 23:41:53.356869 6 controller.go:1073] Ingress ana-api/api-blue-canary is marked as Canary, ignoring
I1028 23:41:53.356914 6 controller.go:536] Replacing location "/" for server "internal-blue.example.com" with upstream "upstream-default-backend" to use upstream "ana-api-api-blue-9080" (Ingress "ana-api/api-blue-internal")
I1028 23:41:53.356935 6 controller.go:531] Location "/" already configured for server "internal.example.com" with upstream "ana-api-api-green-9080" (Ingress "ana-api/api-blue-canary")
I1028 23:41:53.356946 6 controller.go:1308] matching backend ana-api-api-green-9080 found for alternative backend ana-api-api-blue-9080
I1028 23:42:07.915135 6 controller.go:770] Creating upstream "ana-api-api-blue-9080"
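For reference, a command along these lines reproduces the filtered view above (a sketch; the pod labels match the controller deployment shown next):
# Tail logs from all controller replicas and keep canary-related lines
kubectl -n ana-ingress logs -l app=nginx-ingress,component=controller,release=nginx-internal --tail=-1 | grep -E 'api-blue|[Cc]anary'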
Controller Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
creationTimestamp: "2019-10-07T21:49:56Z"
generation: 2
labels:
app: nginx-ingress
chart: nginx-ingress-1.24.0
component: controller
heritage: Tiller
release: nginx-internal
name: nginx-internal-nginx-ingress-controller
namespace: ana-ingress
resourceVersion: "17443877"
selfLink: /apis/extensions/v1beta1/namespaces/ana-ingress/deployments/nginx-internal-nginx-ingress-controller
uid: 6703bc22-e94c-11e9-88e9-0668887889bc
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: nginx-ingress
release: nginx-internal
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: nginx-ingress
component: controller
release: nginx-internal
spec:
containers:
- args:
- /nginx-ingress-controller
- --default-backend-service=ana-ingress/nginx-internal-nginx-ingress-default-backend
- --election-id=ingress-controller-leader
- --ingress-class=nginx-internal
- --configmap=ana-ingress/nginx-internal-nginx-ingress-controller
- --v=2
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: nginx-ingress-controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
- containerPort: 10254
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: 1700m
memory: 2560Mi
requests:
cpu: 1300m
memory: 500Mi
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 33
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: nginx-internal-nginx-ingress
serviceAccountName: nginx-internal-nginx-ingress
terminationGracePeriodSeconds: 60
status:
availableReplicas: 3
conditions:
- lastTransitionTime: "2019-10-07T22:02:00Z"
lastUpdateTime: "2019-10-29T15:32:08Z"
message: ReplicaSet "nginx-internal-nginx-ingress-controller-84697498f6" has successfully
progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2019-10-29T18:27:55Z"
lastUpdateTime: "2019-10-29T18:27:55Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 2
readyReplicas: 3
replicas: 3
updatedReplicas: 3
Dump of curl localhost:10246/configuration/backends:
[
{
"name": "ana-api-api-blue-9080",
"service": {
"metadata": { "creationTimestamp": null },
"spec": {
"ports": [
{
"name": "api",
"protocol": "TCP",
"port": 9080,
"targetPort": 9080
},
{
"name": "health",
"protocol": "TCP",
"port": 9081,
"targetPort": 9081
}
],
"selector": { "app": "api-blue" },
"clusterIP": "172.20.56.56",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": { "loadBalancer": {} }
},
"port": 9080,
"secureCACert": {
"secret": "",
"caFilename": "",
"caSha": "",
"crlFileName": "",
"crlSha": ""
},
"sslPassthrough": false,
"endpoints": [
{ "address": "10.160.145.45", "port": "9080" },
{ "address": "10.160.147.16", "port": "9080" }
],
"sessionAffinityConfig": {
"name": "",
"mode": "",
"cookieSessionAffinity": { "name": "" }
},
"upstreamHashByConfig": {
"upstream-hash-by": "$remote_user",
"upstream-hash-by-subset-size": 3
},
"noServer": true,
"trafficShapingPolicy": {
"weight": 0,
"header": "",
"headerValue": "",
"cookie": ""
}
},
{
"name": "ana-api-api-green-9080",
"service": {
"metadata": { "creationTimestamp": null },
"spec": {
"ports": [
{
"name": "api",
"protocol": "TCP",
"port": 9080,
"targetPort": 9080
},
{
"name": "health",
"protocol": "TCP",
"port": 9081,
"targetPort": 9081
}
],
"selector": { "app": "api-green" },
"clusterIP": "172.20.56.127",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": { "loadBalancer": {} }
},
"port": 9080,
"secureCACert": {
"secret": "",
"caFilename": "",
"caSha": "",
"crlFileName": "",
"crlSha": ""
},
"sslPassthrough": false,
"endpoints": [
{ "address": "10.160.144.104", "port": "9080" },
{ "address": "10.160.147.9", "port": "9080" }
],
"sessionAffinityConfig": {
"name": "",
"mode": "",
"cookieSessionAffinity": { "name": "" }
},
"upstreamHashByConfig": { "upstream-hash-by-subset-size": 3 },
"noServer": false,
"trafficShapingPolicy": {
"weight": 0,
"header": "",
"headerValue": "",
"cookie": ""
},
"alternativeBackends": ["ana-api-api-blue-9080"]
},
{
"name": "upstream-default-backend",
"service": {
"metadata": { "creationTimestamp": null },
"spec": {
"ports": [
{
"name": "http",
"protocol": "TCP",
"port": 80,
"targetPort": "http"
}
],
"selector": {
"app": "nginx-ingress",
"component": "default-backend",
"release": "nginx-internal"
},
"clusterIP": "172.20.218.76",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": { "loadBalancer": {} }
},
"port": 0,
"secureCACert": {
"secret": "",
"caFilename": "",
"caSha": "",
"crlFileName": "",
"crlSha": ""
},
"sslPassthrough": false,
"endpoints": [{ "address": "10.160.144.105", "port": "8080" }],
"sessionAffinityConfig": {
"name": "",
"mode": "",
"cookieSessionAffinity": { "name": "" }
},
"upstreamHashByConfig": {},
"noServer": false,
"trafficShapingPolicy": {
"weight": 0,
"header": "",
"headerValue": "",
"cookie": ""
}
}
]
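What stands out to me in this dump: the canary backend ana-api-api-blue-9080 has noServer: true (expected for a canary) but its trafficShapingPolicy.weight is 0, even though the canary ingress sets nginx.ingress.kubernetes.io/canary-weight: "10". If I read this correctly, a zero weight would explain why no traffic reaches the canary. A one-liner to watch just the weights (same endpoint as above; jq required):
# Print each backend's effective canary weight
curl -s localhost:10246/configuration/backends | jq -r '.[] | "\(.name): weight=\(.trafficShapingPolicy.weight)"'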
About this issue
- State: closed
- Created 5 years ago
- Reactions: 2
- Comments: 20 (5 by maintainers)
We are also having the same issue on controller version 0.32.0.