ingress-nginx: gRPC doesn't work
NGINX Version: nginx/1.19.9 (from nginx -v)
NGINX Installation: https://kubernetes.github.io/ingress-nginx/deploy/#azure
Platform: Azure AKS
Problem Description
Hi everyone, I tried to configure gRPC for multiple services in one Ingress resource. Locally everything works perfectly, but when I expose my services through the ingress I get this error:
# I use grpc.example.com:443 in my BloomRPC
{
"error": "14 UNAVAILABLE: Trying to connect an http1.x server"
}
This is my ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: default
  name: ingress-grpc
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - grpc.example.com
      secretName: grpc.example.com
  rules:
    - host: grpc.example.com
      http:
        paths:
          - path: /p3.protos.appservices.v1.AppService
            pathType: Prefix
            backend:
              service:
                name: app-service-svc
                port:
                  name: grpc
kubectl describe ingress ingress-grpc
Name:             ingress-grpc
Namespace:        default
Address:          20.86.*.*
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  grpc.example.com terminates grpc.example.com
Rules:
  Host              Path                                   Backends
  ----              ----                                   --------
  grpc.example.com
                    /p3.protos.appservices.v1.AppService   app-service-svc:grpc (10.0.8.149:50051,10.0.8.59:50051)
Annotations:        cert-manager.io/cluster-issuer: letsencrypt-prod
                    kubernetes.io/ingress.class: nginx
                    kubernetes.io/tls-acme: true
                    nginx.ingress.kubernetes.io/backend-protocol: GRPC
                    nginx.ingress.kubernetes.io/force-ssl-redirect: true
                    nginx.ingress.kubernetes.io/ssl-redirect: true
Events:
  Type    Reason             Age                   From                      Message
  ----    ------             ----                  ----                      -------
  Normal  CreateCertificate  84m                   cert-manager              Successfully created Certificate "grpc.example.com"
  Normal  Sync               54m (x55 over 5h47m)  nginx-ingress-controller  Scheduled for sync
And when I generate traffic in BloomRPC with the imported protos, the NGINX logs look something like this:
95.168.*.* - - [02/Nov/2021:14:13:32 +0000] "PRI * HTTP/2.0" 400 150 "-" "-" 0 0.084 [] [] - - - - a934d1ad915d451cc08e898c2160914e
95.168.*.* - - [02/Nov/2021:14:13:33 +0000] "PRI * HTTP/2.0" 400 150 "-" "-" 0 0.082 [] [] - - - - e1e80df41fe09295e9d2d21e9f581ac3
95.168.*.* - - [02/Nov/2021:14:13:33 +0000] "PRI * HTTP/2.0" 400 150 "-" "-" 0 0.082 [] [] - - - - 4eede56e8f9e43921f12d0385624f158
95.168.*.* - - [02/Nov/2021:14:13:33 +0000] "PRI * HTTP/2.0" 400 150 "-" "-" 0 0.082 [] [] - - - - 203d9672e2d6f04b1f91ae242d6bcd29
95.168.*.* - - [02/Nov/2021:14:13:34 +0000] "PRI * HTTP/2.0" 400 150 "-" "-" 0 0.092 [] [] - - - - a8c923716fda64c085e10ece2323f0e7
95.168.*.* - - [02/Nov/2021:14:13:34 +0000] "PRI * HTTP/2.0" 400 150 "-" "-" 0 0.342 [] [] - - - - 19ac57d82daabe11db0b664034a93719
95.168.*.* - - [02/Nov/2021:14:13:34 +0000] "PRI * HTTP/2.0" 400 150 "-" "-" 0 0.084 [] [] - - - - 0483af11f59e61996a00821cd85aa884
95.168.*.* - - [02/Nov/2021:14:13:35 +0000] "PRI * HTTP/2.0" 400 150 "-" "-" 0 0.352 [] [] - - - - 1e40a9bef77f0305d37e9f0a000dfd5c
My proto service: p3.protos.appservices.v1.AppService.LocationStream
Regards
/kind bug
About this issue
- State: closed
- Created 3 years ago
- Comments: 30 (14 by maintainers)
I have a similar issue where I just see this in the logs:
35.153.65.211 - - [17/Aug/2022:16:26:49 +0000] "PRI * HTTP/2.0" 400 150 "-" "-" 0 0.001 [] [] - - - - 1a706808c86a6c9c50e454459411b85
Did anyone find a solution?
Yes @SalahBellagnaoui. It was a simple thing: use-http2 was set to false in the controller ConfigMap. Changing the value to true resolved it for me.
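For reference, the fix above would look something like this in the controller's ConfigMap. The name and namespace below are the defaults from the official ingress-nginx manifests and may differ in your install:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # default name; check your deployment
  namespace: ingress-nginx
data:
  use-http2: "true"   # gRPC requires HTTP/2; setting this to "false" breaks gRPC through the ingress
```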
So I tried setting up ingress-nginx with an Ingress object like the one in this issue. I was using the grpc-go helloworld example; by default the client in that example doesn't use TLS, and I got logs very similar to the OP's description:
A little debugging in nginx revealed that TLS was the problem. So I added TLS to the grpc client like so:
And now the log looks like
which indicates nginx decoded the request properly on ingress. The returned status is still 503, since the plumbing to the upstream server wasn't correct in my case, but that's a different problem to solve.
I think we need to test non-reflection-API gRPC outside of the Kubernetes ingress, with just plain nginx as a reverse proxy. If it works with vanilla nginx (no ingress controller, no Kubernetes), then we can likely configure the ingress controller accordingly. @theunrealgeek for comments.
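A minimal vanilla-nginx config for such a test might look like the sketch below. The certificate paths and upstream address are placeholders taken from the endpoints shown earlier in this issue:

```nginx
server {
    listen 443 ssl http2;            # gRPC requires HTTP/2
    server_name grpc.example.com;

    ssl_certificate     /etc/nginx/tls/tls.crt;
    ssl_certificate_key /etc/nginx/tls/tls.key;

    location /p3.protos.appservices.v1.AppService {
        grpc_pass grpc://10.0.8.149:50051;   # plaintext gRPC to the upstream pod
    }
}
```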
Thanks, Long
On Fri, 19 Nov 2021, 1:37 PM, Yury Kustov wrote:
I followed this example, but I didn't get any results.