moby: container failed to start due to network issue
docker version
Client version: 1.7.0
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 0baf609
OS/Arch (client): linux/amd64
Server version: 1.7.0
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 0baf609
OS/Arch (server): linux/amd64
docker info
Containers: 5
Images: 71
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 81
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-32-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 2
Total Memory: 3.859 GiB
uname -a
Linux node001.d.nexttao.com 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
I am not sure how to reproduce this issue, because it does not occur on another Docker host. I use docker-py to perform operations such as running and removing containers.
I can start new containers as long as fewer than five are running. However, once five containers are running, any new container fails to start:
docker run -d --name nginx nginx
This fails with “Cannot start container xxxxx: no available ip addresses on network”.
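For context, the churn looks roughly like this (a minimal docker-py 1.x sketch; the image, names, and loop count are illustrative placeholders, not the actual workload):

from docker import Client  # docker-py 1.x, matching this Docker 1.7 host

cli = Client(base_url='unix://var/run/docker.sock')

# Repeated create/start/stop/remove churn. Each start allocates an IP
# on the docker0 bridge; the symptom suggests some allocations are
# never released when the container is removed.
for i in range(10):
    c = cli.create_container(image='nginx', name='nginx-%d' % i)
    cli.start(container=c['Id'])
    cli.stop(container=c['Id'])
    cli.remove_container(container=c['Id'])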
After service docker restart, new containers start successfully again. I found this in the syslog:
Jul 21 14:30:48 node001 kernel: [1346023.941627] device veth180e67d entered promiscuous mode
Jul 21 14:30:48 node001 kernel: [1346023.941702] docker0: port 6(veth180e67d) entered forwarding state
Jul 21 14:30:48 node001 kernel: [1346023.941709] docker0: port 6(veth180e67d) entered forwarding state
Jul 21 14:30:48 node001 kernel: [1346023.942526] docker0: port 6(veth180e67d) entered disabled state
Jul 21 14:30:48 node001 kernel: [1346023.944388] device veth180e67d left promiscuous mode
Jul 21 14:30:48 node001 kernel: [1346023.944397] docker0: port 6(veth180e67d) entered disabled state
It seems this issue was caused by the new veth interface failing to come up properly. What else should I check on this Docker host?
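One quick check is to dump the IP the daemon still records for each container (a docker-py sketch; this only shows container-side state, not the IP allocator's internal bookkeeping):

from docker import Client

cli = Client(base_url='unix://var/run/docker.sock')

# Print the recorded IP for every container, running or not. Stopped
# containers should not be holding addresses; on a small bridge
# subnet, a few stale entries would explain the "no available ip
# addresses on network" error.
for summary in cli.containers(all=True):
    info = cli.inspect_container(summary['Id'])
    ip = info['NetworkSettings'].get('IPAddress')
    print('%s %-20s %s' % (summary['Id'][:12], summary['Names'][0], ip or '-'))

Comparing that list against the docker0 subnet (ip addr show docker0) shows whether the bridge range is genuinely exhausted or the allocator is leaking.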
About this issue
- Original URL
- State: closed
- Created 9 years ago
- Reactions: 3
- Comments: 28 (15 by maintainers)
I’m seeing this on 1.11.2 and 1.13.0
root@myhost1# docker info
Containers: 11
 Running: 8
 Paused: 0
 Stopped: 3
Images: 34
Server Version: 1.11.2
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 36761
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null host bridge
Kernel Version: 3.19.0-79-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.67 GiB
Name: dctalcmsrv2.mdanderson.edu
ID: 6SW6:OYDJ:CVGL:HWG6:UYCG:OK6Z:J32P:Z7IA:WNHK:MM4G:BC4F:42X5
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
root@myhost2# docker info
Containers: 22
 Running: 12
 Paused: 0
 Stopped: 10
Images: 17
Server Version: 1.13.0
Storage Driver: btrfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 2f7393a47307a16f8cee44a37b262e8b81021e3e
init version: 949e6fa
Security Options: apparmor
Kernel Version: 3.19.0-80-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.67 GiB
Name: d1palscmserv2.mdanderson.edu
ID: JVNS:MFV5:ZAKY:WIUG:SWSI:GKTK:TFNH:YOGA:B2J5:4HY7:UMCL:TQ3Z
Docker Root Dir: /cs/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
It happens frequently during crash situations, but it also occurs at a background level, with no noticeable degradation, at a rate of about 6 events over a 5-minute period every few days. Then, when there is a serious event, such as docker ps hanging, there are thousands of them. Our environment is VMware running Rancher; we also see it on AWS running Rancher.
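To quantify those background events, something like the following can be run against the kernel log (a rough sketch; the syslog path and the forwarding-then-disabled-within-the-same-second heuristic, taken from the log lines above, are assumptions):

import re
from collections import defaultdict

LOG = '/var/log/syslog'  # assumption: default Ubuntu syslog location
FWD = re.compile(r'^(\w+ +\d+ [\d:]+) .*docker0: port \d+\((veth\w+)\) entered forwarding state')
DIS = re.compile(r'^(\w+ +\d+ [\d:]+) .*docker0: port \d+\((veth\w+)\) entered disabled state')

forwarding = {}              # veth name -> timestamp it entered forwarding
failures = defaultdict(int)  # timestamp -> suspected failed veth attaches

with open(LOG) as log:
    for line in log:
        m = FWD.match(line)
        if m:
            forwarding[m.group(2)] = m.group(1)
            continue
        m = DIS.match(line)
        # A veth that goes forwarding -> disabled within the same
        # second never carried traffic; treat it as a failed attach.
        if m and forwarding.pop(m.group(2), None) == m.group(1):
            failures[m.group(1)] += 1

for ts, n in sorted(failures.items()):
    print('%s  %d' % (ts, n))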