moby: External access from containers not working in docker 1.13 rc7 with swarm overlay network
Description
Only some containers, seemingly at random, get external connectivity (i.e. can reach the docker_gwbridge gateway) in a swarm mode cluster running 1.13.0-rc7.
Steps to reproduce the issue:
- docker network create --driver overlay --subnet=10.1.0.0/24 test_network
- docker service create -t --network test_network --name console busybox sleep 100d
- docker exec -ti $(docker ps | awk '/console/ { print $1}' | head -1) ping 172.18.0.1 -c3 -W3 -w5
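Since only some replicas fail, it helps to test every local replica at once. A minimal sketch of such a check (assuming the default docker_gwbridge gateway 172.18.0.1 and the service name console from the steps above):

# Ping the docker_gwbridge gateway from every running console task on this host
for c in $(docker ps --filter name=console -q); do
  echo "== $c =="
  docker exec "$c" ping 172.18.0.1 -c3 -W3
done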
Describe the results you received:
PING 172.18.0.1 (172.18.0.1): 56 data bytes
--- 172.18.0.1 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
Pinging other containers on the attached network (test_network) after a scale-out works fine.
Describe the results you expected: Containers should be able to reach their gateway on docker_gwbridge (and, through it, external networks).
Additional information you deem important (e.g. issue happens only occasionally): External DNS resolution and any other outbound communication fail as well.
It works randomly: we scaled the service to 6, and 4 of the 6 containers were able to ping their gateway on docker_gwbridge and reach external networks. Another container on the same host, deployed during the same scale-out, cannot reach external networks.
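Comparing the routing table of an affected task against a healthy one can narrow this down. A hedged sketch (the interface names and the 172.18.0.1 gateway are the usual defaults, not confirmed on this cluster):

docker exec <container_id> ip route
# A healthy task attached to an overlay network typically shows something like:
#   default via 172.18.0.1 dev eth1    <- docker_gwbridge side
#   10.1.0.0/24 dev eth0 ...           <- overlay (test_network) side
#   172.18.0.0/16 dev eth1 ...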
Output of docker version:
Client:
Version: 1.13.0-rc7
API version: 1.25
Go version: go1.7.3
Git commit: 48a9e53
Built: Fri Jan 13 21:41:57 2017
OS/Arch: linux/amd64
Server:
Version: 1.13.0-rc7
API version: 1.25 (minimum version 1.12)
Go version: go1.7.3
Git commit: 48a9e53
Built: Fri Jan 13 21:41:57 2017
OS/Arch: linux/amd64
Experimental: false
Output of docker info:
Containers: 3
Running: 3
Paused: 0
Stopped: 0
Images: 5
Server Version: 1.13.0-rc7
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: active
NodeID: uak4sulx0tsq2ljaq9gmztr32
Is Manager: true
ClusterID: lxu8sz1ehjdny2itlpssp0g45
Managers: 3
Nodes: 3
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 10.0.1.191
Manager Addresses:
10.0.1.156:2377
10.0.1.191:2377
10.0.1.228:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 2f7393a47307a16f8cee44a37b262e8b81021e3e
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.7.3-coreos-r2
Operating System: Container Linux by CoreOS 1235.6.0 (Ladybug)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.956 GiB
Name: ip-10-0-1-191.mydomain
ID: C6B4:YGDN:NVJC:SVP3:4AMM:QZDD:7MOT:235Q:5TAA:W74T:LGH7:MLSA
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
nexus.mydomain:5000
nexus.mydomain:5001
127.0.0.0/8
Live Restore Enabled: false
Additional environment details (AWS, VirtualBox, physical, etc.): I'm on AWS EC2 using the latest CoreOS stable (1235.6.0). My VPC IP range is 10.0.0.0/16 (which is why I specify the subnet during network creation).
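Given the custom VPC range, one thing worth ruling out is an address overlap between docker_gwbridge and the VPC. A quick check (a sketch, using docker network inspect's Go template to print the IPAM subnets):

docker network inspect docker_gwbridge --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'
# Should print a range (typically 172.18.0.0/16) that does not overlap 10.0.0.0/16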
About this issue
- State: closed
- Created 7 years ago
- Comments: 16 (9 by maintainers)
@ThinkBriK also, because things are starting to become tricky, can you post the output of the check-config.sh script? You can find a copy of it in the contrib folder: https://github.com/docker/docker/blob/master/contrib/check-config.sh
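One way to fetch and run it (a sketch, assuming curl is available on the node and the script still lives at the path above, served raw via raw.githubusercontent.com):

curl -fsSL https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh -o check-config.sh
bash check-config.sh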