moby: Cannot start containers: port is already allocated
Description of problem:
I want to start a specific container with port 80 exposed and also with port 80 bound to the host's port 80. The container is part of an overlay network. This operation fails:
root@uatweb2:~# docker start phr-nginx
Error response from daemon: container cc39fe5306c1ba7633fa14b9bcddb66536153619bf57b84b30673e2328e9295a: endpoint create on GW Network failed: failed to create endpoint gateway_cc39fe5306c1 on network docker_gwbridge: Bind for 0.0.0.0:80 failed: port is already allocated
Error: failed to start containers: phr-nginx
I can confirm that nothing is listening on port 80 on the host (and if I simply start another container bound to port 80, then it works).
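The claim that nothing is listening can be double-checked from the host in two independent ways. A minimal sketch (assuming bash for the /dev/tcp probe and iproute2's ss; netstat -tulpn works similarly):

```shell
#!/usr/bin/env bash
# Sketch: two independent checks that a host port is really free before
# blaming Docker's port allocator. Port 80 mirrors the report above.

port_in_use() {
  # Connect probe via bash's /dev/tcp; succeeds only if something
  # actually accepts a TCP connection on the given port.
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

port=80

# 1) Kernel view of listeners (check the tcp6 output too; a stray
#    docker-proxy can hold a port under tcp6 only).
ss -ltnp 2>/dev/null | grep -F ":$port " || echo "ss reports no listener on :$port"

# 2) Connect probe, independent of what ss reports.
if port_in_use "$port"; then
  echo "something accepts connections on :$port"
else
  echo "nothing accepts connections on :$port"
fi
```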
docker version:
Client:
Version: 1.10.1
API version: 1.22
Go version: go1.5.3
Git commit: 9e83765
Built: Thu Feb 11 19:27:08 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.1
API version: 1.22
Go version: go1.5.3
Git commit: 9e83765
Built: Thu Feb 11 19:27:08 2016
OS/Arch: linux/amd64
docker info:
mvarga@uatweb2:~$ docker info
Containers: 13
Running: 4
Paused: 0
Stopped: 9
Images: 211
Server Version: 1.10.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 252
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge overlay null host
Kernel Version: 4.2.0-27-generic
Operating System: Ubuntu 14.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 7.792 GiB
Name: uatweb2
ID: 3D3C:HPUL:BPHZ:HKF4:BB5G:4WDU:TU5S:4RIU:CJG4:4MWB:B54Y:DUAL
Cluster store: consul://104.239.161.67:8500
Cluster advertise: 23.253.235.69:2376
uname -a:
Linux uatweb2 4.2.0-27-generic #32~14.04.1-Ubuntu SMP Fri Jan 22 15:32:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Environment details (AWS, VirtualBox, physical, etc.): Rackspace VM.
How reproducible: Can’t reproduce.
Actual Results: The container fails to start.
Expected Results: Container starts.
Additional info:
Tail of /var/log/upstart/docker.log (after restarting Docker):
INFO[0007] API listen on /var/run/docker.sock
INFO[0007] API listen on 23.253.235.69:2376
WARN[0026] Failed to allocate and map port 80-80: Bind for 0.0.0.0:80 failed: port is already allocated
WARN[0026] Could not rollback container connection to network pkb-phr
WARN[0026] failed to cleanup ipc mounts:
failed to umount /var/lib/docker/containers/cc39fe5306c1ba7633fa14b9bcddb66536153619bf57b84b30673e2328e9295a/shm: no such file or directory
ERRO[0026] Handler for POST /v1.22/containers/phr-nginx/start returned error: container cc39fe5306c1ba7633fa14b9bcddb66536153619bf57b84b30673e2328e9295a: endpoint create on GW Network failed: failed to create endpoint gateway_cc39fe5306c1 on network docker_gwbridge: Bind for 0.0.0.0:80 failed: port is already allocated
WARN[0847] exit status 1
ERRO[0848] error locating sandbox id a084032ed558ddd147ac2d3087635abbb4e0799c2bf22077a1924d8ea55aa3ea: sandbox a084032ed558ddd147ac2d3087635abbb4e0799c2bf22077a1924d8ea55aa3ea not found
WARN[0848] failed to cleanup ipc mounts:
failed to umount /var/lib/docker/containers/9960f5ad7ef902361e2e033a5d4eae41ba866be29901942ee24c5abec5f4422d/shm: invalid argument
ERRO[0848] Error unmounting container 9960f5ad7ef902361e2e033a5d4eae41ba866be29901942ee24c5abec5f4422d: not mounted
ERRO[0848] Handler for POST /v1.22/containers/9960f5ad7ef902361e2e033a5d4eae41ba866be29901942ee24c5abec5f4422d/start returned error: Container command not found or does not exist.
ERRO[0854] Handler for POST /v1.22/containers/create returned error: No such image: net:latest
WARN[0931] Failed to allocate and map port 80-80: Bind for 0.0.0.0:80 failed: port is already allocated
WARN[0931] Could not rollback container connection to network pkb-phr
WARN[0932] failed to cleanup ipc mounts:
failed to umount /var/lib/docker/containers/cd7d5c05a82332496dde3a2499fb7a2d4b49d645a106363ec1a02fd7c8f4e37f/shm: no such file or directory
ERRO[0932] Handler for POST /v1.21/containers/cd7d5c05a82332496dde3a2499fb7a2d4b49d645a106363ec1a02fd7c8f4e37f/start returned error: container cd7d5c05a82332496dde3a2499fb7a2d4b49d645a106363ec1a02fd7c8f4e37f: endpoint create on GW Network failed: failed to create endpoint gateway_cd7d5c05a823 on network docker_gwbridge: Bind for 0.0.0.0:80 failed: port is already allocated
WARN[4417] Failed to allocate and map port 80-80: Bind for 0.0.0.0:80 failed: port is already allocated
WARN[4417] Could not rollback container connection to network pkb-phr
WARN[4418] failed to cleanup ipc mounts:
failed to umount /var/lib/docker/containers/3b6aaa8e169cc736a453557c0bc0bb2843c336113f9194738c61ddde418c3d61/shm: no such file or directory
ERRO[4418] Handler for POST /v1.21/containers/3b6aaa8e169cc736a453557c0bc0bb2843c336113f9194738c61ddde418c3d61/start returned error: container 3b6aaa8e169cc736a453557c
About this issue
- Original URL
- State: open
- Created 8 years ago
- Reactions: 35
- Comments: 107 (18 by maintainers)
Commits related to this issue
- Fix possible ports leak after abnormal restarts. Close #20486 Signed-off-by: saiwl <saiwl@zhihu.com> — committed to saiwl/moby by deleted user 7 years ago
When you run docker ps, are there other containers running from other projects?

The only thing that works for me is to stop the Docker daemon, then delete /var/lib/docker/network/files/local-kv.db, and after that recreate all containers 😦.

I just confirmed @andreas4all's workaround works! And it definitely saves me time from having to docker load all of my images again.

I did this and got it fixed: just restart Docker. That solves the problem.
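Pulled together, the local-kv.db workaround looks roughly like this. It is destructive: the file holds libnetwork's local state, so containers (though not images) have to be recreated afterwards. A sketch that only prints the commands unless DRY_RUN=0 (the container name is an example from this report; use systemctl on systemd hosts):

```shell
#!/usr/bin/env bash
# Sketch of the workaround: stop the daemon, remove libnetwork's local
# key-value store, restart, then recreate containers. Prints each
# command instead of running it unless DRY_RUN=0.

run() {
  echo "+ $*"
  if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; fi
}

run sudo service docker stop     # systemctl stop docker on systemd hosts
run sudo rm /var/lib/docker/network/files/local-kv.db
run sudo service docker start
# Images survive, so only the containers need recreating, e.g.:
run docker run -d --name phr-nginx -p 80:80 nginx
```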
It looks like I have the same issue.
Stopped and removed containers do not free their ports.
I have a container with port 8880 mapped; the first time it works. Then I stop and recreate the container and the port is reported as in use (bind for 0.0.0.0:8880 failed: port is already allocated). If I check netstat, there is no listening port. Then I change the port to 8881 and start the container; this works, but the container listens on 8081.
It's the standard nginx image (:latest), nothing custom. I can't provide useful info regarding the containers as I had to solve this problem. When I ran the container, I explicitly EXPOSEd port 80, and also bound port 80 to the host's port 80. After I removed the EXPOSE part, everything started to work, although I don't know why.
The same thing happened to me with no containers running. In CoreOS restarting docker service fixes the problem:
sudo systemctl restart docker.service
+1 Bind for 0.0.0.0:3306 failed: port is already allocated
OSX runs with a native Apache service. When I turned that off, this resolved my problem:
sudo apachectl stop
I’m facing the same issue during spinning up a new container as:
Bind for 0.0.0.0:6379 failed: port is already allocated
While there is no running or exit docker container instance. Can anyone suggest, what should be the solution? This issue is intermittent which doesn’t happen all the time.
Thanks.
It sounds like you are trying to start a container binding to a port that is already bound. This is expected to error out. Please check the output of docker ps -a.

After service docker restart, things went back to normal.

I had the same problem. This occurs maybe because you have containers already using these ports; to stop all containers running so far, use the following command:
docker stop $(docker ps -a -q)
or, if sudo is required:
sudo docker stop $(sudo docker ps -a -q)
+1
Before running docker-compose up, all containers were removed, i.e. it started with a clean state.

@hopeseekr Did you stop Docker first before removing it? Please try the below and it should work:
sudo docker stop $(docker ps -aq)
sudo docker rm $(docker ps -aq)
Please attach the log if this doesn't work.
Thanks.
I'm able to reproduce this consistently: sudo reboot with the container in question set to restart=always. I'll see the container exit with code 1, and I'm unable to restart it because the port is in use by a docker-proxy process.
Yes, you read that right, it's empty.
A reboot almost always fixes the issue, at least in my case. Let me know if logs would be helpful here (and where to find them; Ubuntu 16).
Sometimes you only need to write in your terminal: docker restart container-name. That's it!
For those errors about "port is already allocated":
1. Run docker ps -a.
2. Copy the container_id of the container that is running on the port.
3. Run docker stop container_id.
Now run it again and it should work.
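Steps 1-3 can be scripted by grepping the PORTS column of docker ps for the host port. A sketch, demonstrated on a canned docker ps line so the parsing is testable without a daemon (container ID and name taken from this report):

```shell
#!/usr/bin/env bash
# Sketch: find which container publishes a given host port by parsing
# `docker ps` output, then stop it.

find_container_for_port() {
  # stdin: `docker ps` lines; $1: host port.
  # Matches mappings like 0.0.0.0:80->80/tcp in the PORTS column.
  awk -v port=":$1->" '$0 ~ port { print $1 }'
}

# Example against a canned `docker ps` line:
sample='cc39fe5306c1  nginx  "nginx -g ..."  2 days ago  Up 2 days  0.0.0.0:80->80/tcp  phr-nginx'
cid=$(printf '%s\n' "$sample" | find_container_for_port 80)
echo "$cid"    # cc39fe5306c1

# Real usage (requires a running daemon):
# docker ps | find_container_for_port 80 | xargs -r docker stop
```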
If anyone's on Mac and can't seem to find the file, there's a "long way around" method you can take.
First, hit:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
That should open a LinuxKit console. Hit cd .., then you can navigate to /var/lib/docker/network/files/ and delete local-kv.db. You may need to restart your computer afterwards.
I have to delete all my containers, all my images, and start from scratch. It is infuriating and time consuming.
Guys,
THERE IS a chance that nothing is running, yet you are still unable to start the service at some point if it was already started before. Repeating responses like "stop your containers" isn't helping here anymore; once was enough, but maybe this is a problem for somebody.
And I'm not talking about 80 or another popular port. My application was on a high port, like 4001. Nothing else was using that port. Docker had seemingly reserved that port for itself (although it wasn't visible as listening in the system) [?]. And after stopping the container I wasn't able to start my app again.
I twice-verified with netstat what's using my port (nothing).
I did not try to:
But what helped:
So please: don't repeat the same solutions over and over. It's not helping; it's more like flood/spam, making it harder to find useful information in this thread about where the problem in Docker itself lies.
I had the same problem trying to start my MySQL container on my Mac.
It said I couldn't use 0.0.0.0:3306 because something was using that port.
I tried resetting Docker, etc., and nothing worked.
Then I realized it's much simpler: I went to System Preferences and clicked the "MySQL" button that starts and stops the server. I stopped the server and created the container with no problems.
Cheers!
This is great stuff, but for stupid people like me: just check that you have turned off MySQL on your host machine. I still had MySQL running as a service on my Mac.
There's no need to change the Dockerfile; just mapping the port when running the image works.
Keep in mind that you should only publish a port if you want the port to be externally accessible. Publishing the port makes it accessible from “outside” the docker host.
If the port only has to be accessible to services/container connecting to the mysql server, just make sure they are connected to the same custom network. No need to “publish” a port then
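That advice can be sketched with the CLI (all names here, app-net, db, my-app, are made up; older daemons spell the flag --net). The sketch prints the commands instead of running them unless DRY_RUN=0:

```shell
#!/usr/bin/env bash
# Sketch: let a client container reach MySQL over a shared custom
# network instead of publishing 3306 on the host. Names are
# hypothetical. Prints the commands unless DRY_RUN=0.

run() {
  echo "+ $*"
  if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; fi
}

run docker network create app-net
# No -p/-P: host port 3306 stays free, so no "already allocated" clash.
run docker run -d --name db --network app-net -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
# The client resolves the server by its container name "db" on port 3306.
run docker run --rm --network app-net my-app
```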
Thanks @withinboredom for the valuable info and @andreas4all for confirming. Then it looks like the delayed save to store for the driver network endpoints is the culprit in causing this issue.
Will look at the code and see if anything can be done faster, or in a way to be more tolerant to ungraceful shutdowns.
@rutsky I only asked because I stumbled on this issue with a similar error and it was because I had another container on the machine from another project which docker ps revealed (docker-compose did not).
Realise this is an old issue - however is it not the same as what is described here: https://github.com/moby/libnetwork/issues/1790
That issue has 2 open PRs. Any chance the latter can be merged to address this issue? 🙏 https://github.com/moby/libnetwork/pull/1794 https://github.com/moby/libnetwork/pull/1805
Running into this and is a shame to have a fix worked on but not in place.
I was getting the same error. I tried removing all the Docker images and networks, and it still didn't work. Then I tried stopping Docker and starting it again, and that worked for me. You can do so with:
systemctl stop docker (stop the daemon)
systemctl status docker (check the daemon status)
systemctl start docker (start the daemon again)

My main resolution was to restart Docker and try again. I found that if I force it to kill all my containers using docker kill $(docker ps -q) (taken from here), this frees the desired port.

I quit Docker, then restarted Docker, and it worked again. Make sure you don't have another running container or app listening on that port.
I think number 4 is needed because the PID becomes a zombie 😄. Sadly I can't restart it since I'm using a shared VPS server.
@cirocosta docker/libnetwork#1504 fixes one bug which, yes, would cause the problem tracked by this issue, but in a specific case: when the external connectivity needs to be reprogrammed for the container (see the description for more info).
From the reports on this issue I was not able to confirm reporters had a container being connected to multiple bridge networks.
I'm also having trouble with this. So we don't have any solution other than restarting the Docker service?
How exactly did you solve this? I am also facing the same issue. @mdtomo
@edisonagamba This issue is about a specific situation where there is no container running on the port but docker-proxy keeps the port open. In that case a restart doesn't help, as there is no running container causing the issue!
We had the same issue on CentOS 7.7 with Docker 19.03.4. What we did: shut down the server -> boot the server -> Docker gets started automatically -> several containers "died" due to our RabbitMQ not yet being ready. For some reason they were unable to start after that because the port was in use. I "fixed" it by restarting Docker. I could see that docker-proxy was holding a certain port, but only on IPv6 ("tcp6" when doing netstat -tulpn).

This is also a problem in 18.03.1-ce on arm64v8 Linux.
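Finding which PID holds the port works the same way against netstat -tulpn, whose last column is PID/Program. A sketch on a canned tcp6 line (PID 1234 is made up); note that other commenters report that killing docker-proxy by PID does not release the binding, and only a daemon restart does:

```shell
#!/usr/bin/env bash
# Sketch: extract the PID of a docker-proxy process still holding a
# port from `netstat -tulpn` output (last column is "PID/Program").

proxy_pid_for_port() {
  # stdin: netstat -tulpn lines; $1: port.
  awk -v port=":$1" '$4 ~ port"$" && $NF ~ /docker-proxy/ {
    split($NF, f, "/"); print f[1]
  }'
}

# Canned tcp6 line, as seen in the reports above:
sample='tcp6       0      0 :::80        :::*        LISTEN      1234/docker-proxy'
pid=$(printf '%s\n' "$sample" | proxy_pid_for_port 80)
echo "$pid"   # 1234

# Real usage: sudo netstat -tulpn | proxy_pid_for_port 80
```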
Containers running as a service under docker swarm are continuously restarting because the port is unavailable:
The port is held by a rogue instance of docker-proxy:
@jks3462 That is exactly what I did in the end.
But this is a guaranteed downtime.
That’s an interesting work around but a fix would be better.
I’m on macOS 10.12.4, and have the same issue:
Error starting userland proxy: Bind for 0.0.0.0:5432 failed: port is already allocated
My Docker version is 17.03.1-ce, build c6d412e.
After restarting Docker from the menu, I got it to work. YMMV.
Same on Ubuntu. I used this script:
#!/bin/bash
docker rm $(docker ps -aq)
sudo rm /var/lib/docker/network/files/local-kv.db
and now while running my Docker container I am getting this error:
docker: Error response from daemon: failed to update store for object type *libnetwork.endpointCnt: Key not found in store.
I was having this issue with port 5432, upgrading to Version 1.12.3-beta29.3 (13640) fixed this for me.
I just updated to 1.12.1 without knowing about this issue. Well, I confirm this on Arch Linux. Docker build: 23cf638.
Has the same problem as @withinboredom. docker-proxy hangs on port 1935 and I can do nothing to free it. If I just kill docker-proxy by PID, it does nothing; I still see the message about "port is already allocated". Doing sudo service docker restart re-invokes docker-proxy on port 1935. I can't work in that case, and I'm afraid the next production deploy will cause the same problems. Maybe there are at least some ways to fix it manually?
UPD: I tried to temporarily change 1935 to 1936, and then it said that port 80 is allocated. It seems like it has allocated all the ports I need.
I had this problem when I had a classic disk in a RAID 5 array (one had failed; a temporary setup), but when I changed this disk to an SSD (now all disks in this RAID are SSDs), everything looks good.
ping @aboch please have a look