kubernetes: Kubernetes blocking UDP requests
issue:
The RTSP protocol uses random UDP ports to push data for every frame.
If I use a single-port TCP connection it works, but TCP has its own delay. Whenever I try to stream data over UDP inside a pod, k8s does not allow the UDP traffic through.
RTSP over UDP initially connects to TCP port 554 and is then assigned a UDP port somewhere in the ~18000-25000 range. This port changes every frame. However, I cannot get any data over UDP.
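For context, this is roughly what the RTSP port negotiation looks like on the wire (illustrative only; the track path and port numbers are placeholders): the client proposes its own UDP ports in the SETUP request and the server answers with the ports it will send from, which is exactly the exchange that breaks once a NAT sits between the pod and the camera.
SETUP rtsp://10.5.5.2:8554/stream1/trackID=0 RTSP/1.0
CSeq: 3
Transport: RTP/AVP;unicast;client_port=18000-18001

RTSP/1.0 200 OK
CSeq: 3
Transport: RTP/AVP;unicast;client_port=18000-18001;server_port=20000-20001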
reproduce: (edited for clarity)
The camera should be on the host network, e.g. 10.5.5.2. The pod is on the CNI network, e.g. 10.244.10.16. The node running the pod has host IP 10.5.5.12.
If you don't have access to an RTSP stream (if you have CCTV you probably have one), you can create one using this link -> rtsp with ffmpeg; make sure you change the transport fields from tcp to udp.
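As a minimal sketch, assuming an RTSP server such as mediamtx is already listening at 10.5.5.2:8554 and you have some local sample.mp4 (the file name and the /stream1 path are placeholders), a UDP publish command would look like:
ffmpeg -re -stream_loop -1 -i sample.mp4 -c copy -rtsp_transport udp -f rtsp rtsp://10.5.5.2:8554/stream1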
The RTSP stream you create must be on another computer on the host network. Confirm you can receive the stream with the ffplay command on the node where you will run the pod. For example, run
ffplay rtsp://10.5.5.2:8554/stream1
from the node 10.5.5.12
After you confirm it, try to start a basic ubuntu:18.04 or 20.04 pod with sleep, log in to the pod shell, install ffmpeg and try the ffplay command again. You can background it and probe with tcpdump or any other tool you like.
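A sketch of those steps, assuming a test pod named rtsp-test and the camera/stream address from above (in a headless pod ffplay has no display, so ffprobe or SDL_VIDEODRIVER=dummy may be needed instead):
kubectl run rtsp-test --image=ubuntu:20.04 --restart=Never --command -- sleep infinity
kubectl exec -it rtsp-test -- bash
# inside the pod:
apt-get update && apt-get install -y ffmpeg tcpdump
ffprobe -rtsp_transport udp -i rtsp://10.5.5.2:8554/stream1 &
tcpdump -ni eth0 'host 10.5.5.2'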
notes:
The stream will hang unless you use hostNetwork: true, which is undesirable in my case.
You can use tcpdump to see that the TCP communication with the RTSP server succeeds, but the UDP packets arrive with length 0.
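For reference, a capture filter along these lines (the interface name and port range are assumptions based on the description above) shows both sides at once, the RTSP control channel over TCP and the RTP data that should be arriving over UDP:
tcpdump -ni eth0 'host 10.5.5.2 and (tcp port 8554 or udp portrange 18000-25000)'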
About this issue
- State: closed
- Created 3 years ago
- Comments: 25 (14 by maintainers)
So the UDP communication is initiated from within a POD? If so, as @aojea pointed out, K8s is not involved. It is a CNI-plugin thing, so:
What CNI-plugin are you using? How is it configured? Some overlay such as vxlan?
Please use the issue template next time. It has important fields to be filled in, e.g. CNI-plugin.
I can’t think of anything in the path that can mangle UDP packets to have length=0, so I think the sender sends those packets. A possible reason would be MTU problems. When you run on the node or in a POD with hostNetwork:true you almost certainly have the default MTU=1500. But within a POD, depending on the CNI-plugin, you may have a smaller MTU, say 1460 (because of the network overlay). If the sender uses DF (don’t fragment) there may be a problem. Nothing obvious I can think of though.
A way to check the MTU-problem theory may be to disable network overlays in the CNI-plugin. The MTU inside the pod should become 1500 for eth0.
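A quick way to compare the two MTUs (a sketch; rtsp-test is the hypothetical test pod from the reproduce steps above, and interface names depend on your CNI-plugin):
kubectl exec rtsp-test -- cat /sys/class/net/eth0/mtu
cat /sys/class/net/eth0/mtu   # run on the node itself for comparison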
And practically I don’t see how it could. There are two ways to look at this: static and dynamic.
Static: If we allowed pod port ranges, you could ask kubelet to map N incoming ports on the node to N target ports on the pod. The node only has 64K UDP ports available, so scheduling multiple pods that want this would be difficult and wasteful (see the hostPort sketch after the next point for what static mapping looks like today).
Dynamic: We would need to detect a new connection and install/expand rules to forward additional ports on the fly. I’m not saying it’s impossible but I don’t know how to do it off the top of my head.
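For contrast, the static mapping that does exist today is a single hostPort per containerPort. Something like the purely illustrative spec below (pod and container names are placeholders) forwards exactly one UDP port, so covering an 18000-25000 range would mean thousands of such entries per pod:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: rtsp-client
spec:
  containers:
  - name: client
    image: ubuntu:20.04
    command: ["sleep", "infinity"]
    ports:
    - containerPort: 20000
      hostPort: 20000
      protocol: UDP
EOF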
https://speakerdeck.com/thockin/kubernetes-and-networks-why-is-this-so-dang-hard
If you are using a network mode which requires outgoing NAT to reach the larger network (“island mode”), you are “borrowing” shared resources (ports), and those resources are managed.
This is a case where “flat mode” works much better. If your pod could talk to your camera with no NAT you would literally get all of the traffic. It might cause other problems to use random UDP ports (conntrack) but we can cross that bridge when we get there.
As such - I don’t think this is a “bug” in kubernetes. I’m happy to keep the discussion going, but we should probably close this issue? The sig-net mailing list might be a better venue.
Disagree?