kubernetes: Adding an already deleted node port back to a service gives an "already allocated" error
Kubernetes version (use kubectl version): v1.2.6
Environment:
- Cloud provider or hardware configuration: Virtual machine with 4 GB physical memory and 2 CPU cores
- OS (e.g. from /etc/os-release): Ubuntu 14.04
- Kernel (e.g. uname -a): Linux ubuntu-m1 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
- Install tools: Manual (self-created Ansible scripts)
What happened: I created a service as shown below:
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "ultraesbsv-o7dytoto2w",
        "namespace": "default",
        "selfLink": "/api/v1/namespaces/default/services/ultraesbsv-o7dytoto2w",
        "uid": "804fb74a-7cee-11e6-ae64-080027daf78e",
        "resourceVersion": "30063",
        "creationTimestamp": "2016-09-17T15:50:47Z",
        "labels": {
            "app": "ultraesb",
            "cluster": "TestCluster"
        }
    },
    "spec": {
        "ports": [
            {
                "name": "8281",
                "protocol": "TCP",
                "port": 8281,
                "targetPort": 8281,
                "nodePort": 30001
            },
            {
                "name": "8285",
                "protocol": "TCP",
                "port": 8285,
                "targetPort": 8285,
                "nodePort": 30005
            }
        ],
        "selector": {
            "app": "ultraesb",
            "cluster": "TestCluster"
        },
        "clusterIP": "192.168.3.206",
        "type": "NodePort",
        "sessionAffinity": "None"
    },
    "status": {
        "loadBalancer": {}
    }
}
Then, using the REST API, I removed a port entry with a PATCH request:
http://192.168.1.119:8080/api/v1/namespaces/default/services/ultraesbsv-o7dytoto2w
Content-Type: application/json-patch+json
[
    {
        "op": "remove",
        "path": "/spec/ports/1"
    }
]
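For reference, the same patch can be sent with curl (a minimal sketch, assuming the API endpoint shown above; curl is not part of the original report):

# Sketch: JSON Patch that removes the second port entry (index 1) from the Service
curl -X PATCH \
     -H "Content-Type: application/json-patch+json" \
     --data '[{"op": "remove", "path": "/spec/ports/1"}]' \
     http://192.168.1.119:8080/api/v1/namespaces/default/services/ultraesbsv-o7dytoto2w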
After executing the above HTTP request, the updated service looks like this:
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "ultraesbsv-o7dytoto2w",
        "namespace": "default",
        "selfLink": "/api/v1/namespaces/default/services/ultraesbsv-o7dytoto2w",
        "uid": "804fb74a-7cee-11e6-ae64-080027daf78e",
        "resourceVersion": "30396",
        "creationTimestamp": "2016-09-17T15:50:47Z",
        "labels": {
            "app": "ultraesb",
            "cluster": "TestCluster"
        }
    },
    "spec": {
        "ports": [
            {
                "name": "8281",
                "protocol": "TCP",
                "port": 8281,
                "targetPort": 8281,
                "nodePort": 30001
            }
        ],
        "selector": {
            "app": "ultraesb",
            "cluster": "TestCluster"
        },
        "clusterIP": "192.168.3.206",
        "type": "NodePort",
        "sessionAffinity": "None"
    },
    "status": {
        "loadBalancer": {}
    }
}
As expected, the port entry for 8285 was removed. I checked the iptables rules for node port 30005 using 'sudo iptables-save | grep 30005' and saw that the rule had been removed. Within a few seconds, I tried to add the port back to the service.
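As a side note, that verification can be scripted; a minimal sketch, assuming kubectl is pointed at the same cluster and supports jsonpath output:

# Sketch: list the nodePorts still declared on the Service (should now show only 30001)
kubectl get svc ultraesbsv-o7dytoto2w -o jsonpath='{.spec.ports[*].nodePort}'

# Sketch: confirm kube-proxy has dropped the corresponding iptables rules on the node
sudo iptables-save | grep 30005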
http://192.168.1.119:8080/api/v1/namespaces/default/services/ultraesbsv-o7dytoto2w
Content-Type: application/json-patch+json
[
    {
        "op": "add",
        "path": "/spec/ports/1",
        "value": {
            "name": "8285",
            "nodePort": 30005,
            "port": 8285,
            "protocol": "TCP"
        }
    }
]
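Again, a minimal curl sketch of the same request, assuming the endpoint above:

# Sketch: JSON Patch that re-adds port 8285 with nodePort 30005
curl -X PATCH \
     -H "Content-Type: application/json-patch+json" \
     --data '[{"op": "add", "path": "/spec/ports/1", "value": {"name": "8285", "nodePort": 30005, "port": 8285, "protocol": "TCP"}}]' \
     http://192.168.1.119:8080/api/v1/namespaces/default/services/ultraesbsv-o7dytoto2w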
It then gave the following message:
{
    "kind": "Status",
    "apiVersion": "v1",
    "metadata": {},
    "status": "Failure",
    "message": "Service \"ultraesbsv-o7dytoto2w\" is invalid: spec.ports[1].nodePort: Invalid value: 30005: provided port is already allocated",
    "reason": "Invalid",
    "details": {
        "name": "ultraesbsv-o7dytoto2w",
        "kind": "Service",
        "causes": [
            {
                "reason": "FieldValueInvalid",
                "message": "Invalid value: 30005: provided port is already allocated",
                "field": "spec.ports[1].nodePort"
            }
        ]
    },
    "code": 422
}
After a few minutes, I tried the same HTTP request again and it was executed successfully. It seems like the Kubernetes API server (or proxy) is somehow caching the fact that node port 30005 is acquired for some grace period, even though it is not. Is this expected behaviour?
What you expected to happen: The very first HTTP request I sent to add port 8285 back into the service should have succeeded.
How to reproduce it (as minimally and precisely as possible):
Anything else do we need to know:
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Reactions: 21
- Comments: 30 (4 by maintainers)
Since no one is re-opening this issue, I re-raised it here: https://github.com/kubernetes/kubernetes/issues/73140
Still having this issue in 1.8.2. As mentioned above, you have to wait about 5 minutes.
Can this issue please be re-opened? And until this issue is fixed, is there a command I can run to check whether a given port has truly been deleted?
More info:
On K8s v1.8.4 I’m using kube-prometheus, which has:
On a cluster where I've run deploy, if I run teardown and then run deploy again immediately afterwards, the deploy sometimes fails with an error such as:
The Service "alertmanager-main" is invalid: spec.ports[0].nodePort: Invalid value: 30903: provided port is already allocated
When I want to re-deploy kube-prometheus, after running teardown and before running deploy I do the stuff below. But there are times when that kubectl get svc command says none of the NodePorts are still in use, yet when I run deploy I still get "provided port is already allocated", hence my question about a command that shows whether a given NodePort is truly/fully deleted. Until this issue is resolved, I could simply sleep (after teardown and before deploy) for a plenty long amount of time (e.g. 5 minutes), but I'd prefer to wait only for precisely as long as necessary.
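A minimal sketch of such a wait loop (assuming kubectl access and using 30903, the NodePort from the error above); note that it only checks the API-visible Service objects, not the API server's internal node-port allocator, which appears to be where the stale allocation lives:

# Sketch: block until no Service in any namespace still declares NodePort 30903
while kubectl get svc --all-namespaces \
        -o jsonpath='{.items[*].spec.ports[*].nodePort}' | grep -qw 30903; do
    echo "NodePort 30903 is still referenced by a Service, waiting..."
    sleep 5
done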
This issue occurs on AWS EKS 1.10.3 as well. I waited approximately 4-5 minutes and could then recreate the k8s Service.