cilium: Cilium upstream master branch breaks nodeport connection from external client
Bug report
Cilium's upstream master branch breaks NodePort connections from an external client.
General Information
- Cilium version (run cilium version):
  Client: 1.10.90 c44ff1b37 2021-08-03T00:35:29+05:30 go version go1.16.5 linux/amd64
  Daemon: 1.10.90 c44ff1b37 2021-08-03T00:35:29+05:30 go version go1.16.5 linux/amd64
- Kernel version (run uname -a):
  5.8.1-050801-generic #202008111432 SMP Tue Aug 11 14:34:42 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
- Orchestration system version in use (e.g. kubectl version, …):
  Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:10:43Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
  Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.8", GitCommit:"5575935422cc1cf5169dfc8847cb587aa47bac5a", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:07Z", GoVersion:"go1.15.13", Compiler:"gc", Platform:"linux/amd64"}
- Link to relevant artifacts (policies, deployments scripts, …)
cat nginx_nodeport.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginxservice
  name: nginxservice
spec:
  ports:
  # The port that this service should serve on.
  - port: 80
    nodePort: 32506
  selector:
    app: nginx
  type: NodePort
- Generate and upload a system zip: can upload if needed
How to reproduce the issue
1. Build a Docker image from the upstream master branch.
2. Deploy Cilium with the attached Cilium YAML file.
3. Deploy the NodePort service above.
4. Access the NodePort service from an external client.
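The reproduction steps can be sketched as shell commands. The node IP is a placeholder, and the make targets are assumptions based on the thread (only docker-operator-generic-image is confirmed below); the Cilium YAML referenced is the attachment from the original report:

```shell
# Build the agent and operator images from the upstream master branch
# (target names assumed from the Cilium Makefile conventions).
make docker-cilium-image
make docker-operator-generic-image

# Deploy Cilium using the YAML file attached to the report.
kubectl apply -f cilium.yaml

# Deploy the NodePort service above.
kubectl apply -f nginx_nodeport.yaml

# From an external client, hit the NodePort on any node's IP.
curl http://<node-ip>:32506
```

With the broken build, the final curl from outside the cluster fails, while the same service remains reachable from inside it.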
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 39 (39 by maintainers)
@Weil0ng yes, make docker-operator-generic-image and running the operator image fixed the problem. Sorry, I did not realize the operator image needs to match the agent image build 😃

yes, the problem is still here
kubectl get crd ciliumendpoints.cilium.io -o yaml
I think so. The CRDs are registered by the operator; if it does not restart, I suspect the CRDs were never updated. So you eventually end up with an outdated CRD (which has status as a subresource) combined with the new agent code, which updates the CEP assuming it is using the new CRD (which has status as a plain field). That is why we see the object fully populated in the API request but not persisted in etcd.
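The mismatch described above can be illustrated with a minimal CRD sketch (this is not the actual CiliumEndpoint CRD, just the relevant fragment). When a CRD declares the status subresource, the API server silently strips any status changes sent through a regular update to the main resource; status can then only be written via the /status endpoint:

```yaml
# Hypothetical fragment of an outdated CRD that still declares
# the status subresource.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
  versions:
  - name: v2
    served: true
    storage: true
    subresources:
      status: {}   # main-resource updates drop the status field
```

If new agent code treats status as a plain field and writes it via a normal update against this outdated CRD, the request carries a fully populated object, but the API server discards the status portion before persisting, which matches the symptom of the object never reaching etcd intact.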