moby: docker service rm does not properly clean up published ports
Description
When I remove a service, the ports it published are not properly removed from iptables.
Steps to reproduce the issue (see the combined sketch after this list):
- docker service create --name spark-proxy --network spark-net -p 8080:8080 -p 50070:50070 nyanloutre/spark-proxy (it is just nginx with a custom config file)
- curl http://localhost:8080 returns the page
- docker service rm spark-proxy
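For convenience, the steps above can be run as one sequence; this is only a minimal sketch, assuming the spark-net overlay network already exists and the image is pullable on the host:

# reproduce and check for leftover DNAT rules (sketch; service, network and image names as in the report)
docker service create --name spark-proxy --network spark-net -p 8080:8080 -p 50070:50070 nyanloutre/spark-proxy
curl -s http://localhost:8080 > /dev/null && echo "service reachable"
docker service rm spark-proxy
sleep 5   # give the daemon a moment to tear down the ingress rules
sudo iptables -t nat -L DOCKER-INGRESS -n   # on a healthy host only the final RETURN rule should remain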
Describe the results you received:
sudo iptables -t nat -L
...
Chain DOCKER-INGRESS (2 references)
target prot opt source destination
DNAT tcp -- anywhere anywhere tcp dpt:50070 to:172.18.0.2:50070
DNAT tcp -- anywhere anywhere tcp dpt:http-alt to:172.18.0.2:8080
RETURN all -- anywhere anywhere
Describe the results you expected:
sudo iptables -t nat -L
...
Chain DOCKER-INGRESS (2 references)
target prot opt source destination
RETURN all -- anywhere anywhere
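A possible manual workaround on an affected host, until the daemon removes the rules itself, is to delete the stale DNAT entries by their exact rule specification. This is only a sketch; the rule arguments below are illustrative and must match what iptables -t nat -S DOCKER-INGRESS actually prints on that host:

# print the stale rules in rule-spec form
sudo iptables -t nat -S DOCKER-INGRESS
# delete one stale rule by repeating its spec after -D (example spec, adjust to the real output)
sudo iptables -t nat -D DOCKER-INGRESS -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.18.0.2:8080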
Additional information you deem important (e.g. issue happens only occasionally): The issue occurs randomly on my 6 hosts; for example, the last time I tried, it happened on 3 of the 6.
Output of docker version:
Client:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built: Thu Aug 18 05:33:38 2016
OS/Arch: linux/amd64
Server:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built: Thu Aug 18 05:33:38 2016
OS/Arch: linux/amd64
Output of docker info:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 12
Server Version: 1.12.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 78
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: host overlay bridge null
Swarm: active
NodeID: f308skpi0pxi41dujetjsot64
Is Manager: true
ClusterID: a94pihw2derc4wnf6dzcphcaz
Managers: 3
Nodes: 6
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 130.79.128.186
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-36-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.58 GiB
Name: cds-stage-ms4
ID: WMUK:RVW6:IULI:5MWL:XIV5:3SDT:A62G:TQHK:7NK7:3RG3:LZKG:O7ES
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8
Additional environment details (AWS, VirtualBox, physical, etc.):
Docker is installed on 6 physical computers
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Reactions: 1
- Comments: 32 (9 by maintainers)
I can confirm this bug.
I am able to reproduce this every time I do a service create … service rm … service create chain. I filed a new issue, as I am not sure how this is related, and to add the details:
https://github.com/docker/docker/issues/26563
I’m seeing this issue as well. I am able to recreate the docker service on the same ports, but the problem is that if I try to start a process which requires that port on the host/VM, it fails, since dockerd is still listening on that port.
Steps to reproduce
docker service create --name nginx -p 80:80 -p 443:443 --mode global nginx
docker service rm nginx
netstat -ln --program
Creating the service again works:
docker service create --name nginx -p 80:80 -p 443:443 --mode global nginx
docker service ls
Remove the service again:
docker service rm nginx
Try to start a container outside of swarm and it fails:
docker run -d -p 80:80 -p 443:443 nginx
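Before starting the standalone container, it is worth confirming whether dockerd is still holding the published ports. A small sketch, assuming ss is available (it ships with iproute2 on Ubuntu 16.04); the port list is just the example from the steps above:

# check who is still listening on the previously published ports after docker service rm
sudo ss -lntp | grep -E ':80 |:443 '
# if dockerd still shows up here, the standalone run will fail with a port conflict
docker run -d -p 80:80 -p 443:443 nginx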