moby: Ingress network MTU is not configurable
Output of docker version:
Client:
Version: 1.12.0-rc4
API version: 1.24
Go version: go1.6.2
Git commit: e4a0dbc
Built:
OS/Arch: linux/amd64
Server:
Version: 1.12.0-rc4
API version: 1.24
Go version: go1.6.2
Git commit: e4a0dbc
Built:
OS/Arch: linux/amd64
Output of docker info:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 5
Server Version: 1.12.0-rc4
Storage Driver: devicemapper
Pool Name: docker-253:1-176165418-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 987.5 MB
Data Space Total: 107.4 GB
Data Space Available: 40.39 GB
Metadata Space Used: 1.88 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.146 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2015-10-14)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: overlay bridge null host
Swarm: active
NodeID: bhhed9vizh7mjdxve52lup55v
IsManager: Yes
Managers: 1
Nodes: 3
CACertHash: sha256:bbc6df07cdf4131d1cf5153b2edc95244c4f7966fa59cb7b6ea11515731d15d7
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.2 (Maipo)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 5.671 GiB
Name: jf-a.localdomain
ID: FWPJ:7UNB:OLBN:23Y6:JFNN:ARRB:7MW6:RWIC:RG3P:I36U:RU47:3KE6
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 45
Goroutines: 139
System Time: 2016-07-21T15:19:10.688153846-04:00
EventsListeners: 1
Registry: https://index.docker.io/v1/
Insecure Registries:
127.0.0.0/8
Additional environment details (AWS, VirtualBox, physical, etc.): VMs on OpenStack
Steps to reproduce the issue:
- Spin up a 3-node swarm mode cluster, including specifying the --listen-addr param
- Create 1 instance of tomcat:
  docker service create --replicas 1 --publish 8080:8080 --name first_tomcat tomcat:8.0.36-jre8-alpine
- Create a second instance of tomcat, publishing a different port:
  docker service create --replicas 1 --publish 8081:8080 --name second_tomcat tomcat:8.0.36-jre8-alpine
- Exec into the first container and try to reach the second service:
  docker exec -it <container id of first_tomcat> sh
  wget -O - second_tomcat:8080
  (using wget, as alpine doesn't come with curl)
Describe the results you received:
Trying the service VIP:
/usr/local/tomcat # wget -O - second_tomcat:8080
Connecting to second_tomcat:8080 (10.255.0.7:8080)
<timeout>
Trying a task IP directly (via tasks.second_tomcat):
/usr/local/tomcat # wget -O - tasks.second_tomcat:8080
Connecting to tasks.second_tomcat:8080 (10.255.0.8:8080)
<timeout>
Describe the results you expected:
Should be able to reach the other service's instance.
Additional information you deem important (e.g. issue happens only occasionally):
Always happens, with both an in-house Tomcat container and the public Tomcat image.
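A hang like this is consistent with an MTU black hole (small packets pass, larger ones are silently dropped). One way to check for a mismatch is to compare the overlay-side MTU inside a task container with the host NIC and probe with non-fragmentable pings. A minimal sketch, with assumptions: "$CID" is a placeholder container ID, 1450 is the overlay MTU being tested, and `ping -M do` requires Linux iputils (the busybox ping in alpine lacks that flag):

```shell
# Docker-side checks (run on a swarm node; left as comments since they need
# a live cluster):
#
#   docker exec "$CID" cat /sys/class/net/eth0/mtu   # overlay-side MTU
#   cat /sys/class/net/eth0/mtu                      # host-side MTU
#   docker exec "$CID" ping -c1 -M do -s 1422 tasks.second_tomcat
#
# The largest ICMP payload that fits in a given MTU is MTU minus 28 bytes
# (20-byte IPv4 header + 8-byte ICMP header):
max_icmp_payload() { echo $(( $1 - 28 )); }
max_icmp_payload 1450    # prints 1422
```

If small pings get through but the size computed above does not, the problem is packet size rather than DNS or the VIP.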
Docker inspect of the second_tomcat container:
# docker inspect 2f539696a2e0
[
{
"Id": "2f539696a2e0a228558cd1b25286d6637c88c6f124b3bd72d2f1b8737574b878",
"Created": "2016-07-21T19:15:30.727331171Z",
"Path": "catalina.sh",
"Args": [
"run"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 17356,
"ExitCode": 0,
"Error": "",
"StartedAt": "2016-07-21T19:15:31.380976163Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:b0ca4b09f74556a997ab35150b7995cd722a3d5aa98255404684341b8a5cd6a9",
"ResolvConfPath": "/var/lib/docker/containers/2f539696a2e0a228558cd1b25286d6637c88c6f124b3bd72d2f1b8737574b878/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/2f539696a2e0a228558cd1b25286d6637c88c6f124b3bd72d2f1b8737574b878/hostname",
"HostsPath": "/var/lib/docker/containers/2f539696a2e0a228558cd1b25286d6637c88c6f124b3bd72d2f1b8737574b878/hosts",
"LogPath": "/var/lib/docker/containers/2f539696a2e0a228558cd1b25286d6637c88c6f124b3bd72d2f1b8737574b878/2f539696a2e0a228558cd1b25286d6637c88c6f124b3bd72d2f1b8737574b878-json.log",
"Name": "/second_tomcat.1.b5u1k8vbr9zs7rcouuscht0d6",
"RestartCount": 0,
"Driver": "devicemapper",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": [
"810f39444f184440e75361a2739178539a7d1c40410dbd1a2451dbad82660e4e"
],
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": null,
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": -1,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
},
"GraphDriver": {
"Name": "devicemapper",
"Data": {
"DeviceId": "37",
"DeviceName": "docker-253:1-192941280-3a2635d8b7239a777fe02df652422d11fa7bb403fe040871b0997660e6fa284b",
"DeviceSize": "10737418240"
}
},
"Mounts": [],
"Config": {
"Hostname": "2f539696a2e0",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"8080/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/tomcat/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin",
"LANG=C.UTF-8",
"JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk/jre",
"JAVA_VERSION=8u92",
"JAVA_ALPINE_VERSION=8.92.14-r1",
"CATALINA_HOME=/usr/local/tomcat",
"TOMCAT_NATIVE_LIBDIR=/usr/local/tomcat/native-jni-lib",
"LD_LIBRARY_PATH=/usr/local/tomcat/native-jni-lib",
"TOMCAT_MAJOR=8",
"TOMCAT_VERSION=8.0.36",
"TOMCAT_TGZ_URL=https://www.apache.org/dist/tomcat/tomcat-8/v8.0.36/bin/apache-tomcat-8.0.36.tar.gz"
],
"Cmd": [
"catalina.sh",
"run"
],
"Image": "tomcat:8.0.36-jre8-alpine",
"Volumes": null,
"WorkingDir": "/usr/local/tomcat",
"Entrypoint": null,
"OnBuild": null,
"Labels": {
"com.docker.swarm.node.id": "70vxyod2eboxcxd8a0wo0ds7n",
"com.docker.swarm.service.id": "5d01jg29r59l7d9rs7o0y2u25",
"com.docker.swarm.service.name": "second_tomcat",
"com.docker.swarm.task": "",
"com.docker.swarm.task.id": "b5u1k8vbr9zs7rcouuscht0d6",
"com.docker.swarm.task.name": "second_tomcat.1"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "6e37aebafcb111ca2e1fae9e377e0ff9f60857be123c45851e69719aee285d80",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"8080/tcp": null
},
"SandboxKey": "/var/run/docker/netns/6e37aebafcb1",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"ingress": {
"IPAMConfig": {
"IPv4Address": "10.255.0.8"
},
"Links": null,
"Aliases": [
"2f539696a2e0"
],
"NetworkID": "dzsgwnyjip8yqi4893da420md",
"EndpointID": "0146171a88c911c30d8fc54abf87e51672cb9f4c4ed4146ff2b701d3751c0913",
"Gateway": "",
"IPAddress": "10.255.0.8",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:0a:ff:00:08"
}
}
}
}
]
Docker service inspect output:
# docker service inspect second_tomcat
[
{
"ID": "5d01jg29r59l7d9rs7o0y2u25",
"Version": {
"Index": 1043
},
"CreatedAt": "2016-07-21T19:13:14.020746042Z",
"UpdatedAt": "2016-07-21T19:13:14.029082747Z",
"Spec": {
"Name": "second_tomcat",
"TaskTemplate": {
"ContainerSpec": {
"Image": "tomcat:8.0.36-jre8-alpine"
},
"Resources": {
"Limits": {},
"Reservations": {}
},
"RestartPolicy": {
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {}
},
"Mode": {
"Replicated": {
"Replicas": 1
}
},
"UpdateConfig": {},
"EndpointSpec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 8080,
"PublishedPort": 8081
}
]
}
},
"Endpoint": {
"Spec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 8080,
"PublishedPort": 8081
}
]
},
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 8080,
"PublishedPort": 8081
}
],
"VirtualIPs": [
{
"NetworkID": "dzsgwnyjip8yqi4893da420md",
"Addr": "10.255.0.7/16"
}
]
}
}
]
Anything else you need or want me to try, let me know!
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Comments: 42 (21 by maintainers)
Commits related to this issue
- Workaround docker swarm MTU bug See https://github.com/docker/docker/issues/24906#issuecomment-235894478 — committed to manics-archive/ome-infrastructure by manics 8 years ago
- Workaround docker swarm MTU bug See https://github.com/docker/docker/issues/24906#issuecomment-235894478 — committed to manics/deployment by manics 8 years ago
@aluzzardi we don't have any options today to configure the ingress network. That is in fact a problem not just for MTU but for all other network settings as well. I think we should allow the user to remove the automatically created ingress network and create it again, or have the ability to change the network configuration.
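For illustration, removing and recreating the ingress network with a custom MTU might look like the sketch below. This was not supported in 1.12; the --ingress flag on docker network create only appeared in later Docker releases, and the helper function here is purely hypothetical:

```shell
# Hypothetical helper: render (don't run) the command that would recreate the
# ingress network with a given MTU, so it can be reviewed first.
ingress_create_cmd() {
    echo "docker network create --driver overlay --ingress" \
         "-o com.docker.network.driver.mtu=$1 ingress"
}

# The sequence would then be:
#   docker network rm ingress
#   $(ingress_create_cmd 1400)
ingress_create_cmd 1400
```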
@sprohaska eth1 is the interface that is connected to the docker_gwbridge network. You can see that network when you do docker network ls; it is created automatically when an overlay network is used. When a service is exposed with a port, the container belongs to the routing-mesh overlay network (called ingress).
Since docker_gwbridge is created automatically, it uses the default MTU (not the value from --mtu). Fortunately, there is a workaround:
- docker service rm all the services published with -p
- docker network rm docker_gwbridge
- docker network create -o com.docker.network.driver.mtu=xxx docker_gwbridge
This should create a docker_gwbridge with the desired MTU, and from then on your container's eth1 MTU should be correct. Please give it a try.
Yikes. Yep, that did it. I actually only had to change it on the "source" end. Left the other end at 1450, and now I can transfer the large payload.
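The workaround described above can be collected into a short sequence. A sketch, assuming the service names from this issue and an example overlay MTU of 1400, chosen for a 1450-byte underlay because VXLAN encapsulation adds 50 bytes of overhead:

```shell
# Docker commands from the workaround (run on a swarm node; left as comments
# here since they are destructive):
#
#   docker service rm first_tomcat second_tomcat   # every service published with -p
#   docker network rm docker_gwbridge
#   docker network create -o com.docker.network.driver.mtu=1400 docker_gwbridge
#
# Rule of thumb for picking the value: subtract the 50-byte VXLAN overhead
# from the underlay MTU.
overlay_mtu() { echo $(( $1 - 50 )); }
overlay_mtu 1450    # prints 1400
```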