moby: Docker swarm init does not enable ipv6 networking even with ipv6 listening address

Output of docker version:

Client:
 Version:      1.12.0-rc3
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   91e29e8
 Built:        Sat Jul  2 00:38:44 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.0-rc3
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   91e29e8
 Built:        Sat Jul  2 00:38:44 2016
 OS/Arch:      linux/amd64

Output of docker info:

Containers: 1
 Running: 0
 Paused: 0
 Stopped: 1
Images: 58
Server Version: 1.12.0-rc3
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 96
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: host bridge null overlay
Swarm: active
 NodeID: avbpgt1p6xuv6re6hs6v4yoal
 IsManager: Yes
 Managers: 1
 Nodes: 1
 CACertHash: sha256:34bfe064e9150756e1e247f778b26fc94e6059b1a06999100bbcb6b3f54c6f1d
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-28-generic
Operating System: Ubuntu 16.04 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 15.67 GiB
Name: calvin-ncs-4
ID: 3VXD:YYFZ:N4T3:AQPK:HHAE:J2S4:KTT4:KXA2:WVHP:XYLC:OW3I:6BE7
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
 127.0.0.0/8

Additional environment details (AWS, VirtualBox, physical, etc.):

dockerd is started with

/usr/bin/dockerd -H fd:// --ipv6 --fixed-cidr-v6=2a00:1450::/56
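
For reference, the same options can also be set in the daemon configuration file instead of on the command line; a minimal sketch, assuming the default /etc/docker/daemon.json location:

{
  "ipv6": true,
  "fixed-cidr-v6": "2a00:1450::/56"
}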

Steps to reproduce the issue:

  1. Start the swarm with
docker swarm init --listen-addr [2003:8:8:3::8888]:2377
  2. Start a new service with
docker service create --name test -p 80:80 httpd
  3. Run wget against the published port

Using the IPv4 localhost address, it works.

$ wget localhost
--2016-07-06 11:50:35--  http://localhost/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 45 [text/html]
Saving to: ‘index.html’

index.html          100%[===================>]      45  --.-KB/s    in 0s      

2016-07-06 11:50:35 (6.95 MB/s) - ‘index.html’ saved [45/45]

But using the local IPv6 address, the request hangs:

$ wget http://[::1]:80/
--2016-07-06 11:52:02--  http://[::1]/
Connecting to [::1]:80... connected.
HTTP request sent, awaiting response... 

Describe the results you received: The connection to the published port over IPv6 hangs:

$wget http://[::1]:80/
--2016-07-06 11:52:02--  http://[::1]/
Connecting to [::1]:80... connected.
HTTP request sent, awaiting response... 

Describe the results you expected:

The same result as for wget over IPv4:

HTTP request sent, awaiting response... 200 OK
Length: 45 [text/html]
Saving to: ‘index.html’

index.html          100%[===================>]      45  --.-KB/s    in 0s      

2016-07-06 11:50:35 (6.95 MB/s) - ‘index.html’ saved [45/45]

Additional information you deem important (e.g. issue happens only occasionally): I do not see any IPv6 address endpoint created for the service.

$ docker service inspect test
[
    {
        "ID": "8fdc1f4c664t81tmawu5xd6xk",
        "Version": {
            "Index": 14
        },
        "CreatedAt": "2016-07-06T15:48:00.433823697Z",
        "UpdatedAt": "2016-07-06T15:48:00.500092187Z",
        "Spec": {
            "Name": "test",
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "httpd"
                },
                "Resources": {
                    "Limits": {},
                    "Reservations": {}
                },
                "RestartPolicy": {
                    "Condition": "any",
                    "MaxAttempts": 0
                },
                "Placement": {}
            },
            "Mode": {
                "Replicated": {
                    "Replicas": 1
                }
            },
            "UpdateConfig": {},
            "EndpointSpec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 80,
                        "PublishedPort": 80
                    }
                ]
            }
        },
        "Endpoint": {
            "Spec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 80,
                        "PublishedPort": 80
                    }
                ]
            },
            "Ports": [
                {
                    "Protocol": "tcp",
                    "TargetPort": 80,
                    "PublishedPort": 80
                }
            ],
            "VirtualIPs": [
                {
                    "NetworkID": "05wwhc7t3o5pu9q81vwt1x9vz",
                    "Addr": "10.255.0.4/16"
                }
            ]
        }
    }
]

About this issue

  • State: open
  • Created 8 years ago
  • Reactions: 63
  • Comments: 59 (5 by maintainers)

Most upvoted comments

Pardon the bump, but any news on this issue? It really is quite frustrating that I can access containers via IPv6 with docker run but can’t with docker service.

@MatthieuBarthel actually, you don’t even need socat for this. systemd can do it out of the box 😉

$ cat /etc/systemd/system/swarm-v6@.service
[Unit]
Description=IPv6 kludge-proxy for docker swarm (%i)
Before=docker.service
Requires=swarm-v6@%i.socket
After=swarm-v6@%i.socket

[Service]
PrivateTmp=yes
ExecStart=/lib/systemd/systemd-socket-proxyd 127.0.0.1:%i
Restart=on-failure

[Install]
WantedBy=multi-user.target
$ cat /etc/systemd/system/swarm-v6@.socket
[Socket]
ListenStream=%i
Accept=no
Service=swarm-v6@%i.service
BindIPv6Only=ipv6-only
ReusePort=true
#MaxConnectionsPerSource=15
FreeBind=true
[Install]
WantedBy=sockets.target
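
Since those are templated units, each published port gets its own instance; for example, assuming the unit files above are installed and the swarm publishes port 80:

# pick up the new unit files, then start the IPv6 front-end for port 80
systemctl daemon-reload
systemctl enable --now swarm-v6@80.socket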

Any updates on this? 😕

Hello,

Any news about this?

I still cannot connect to services running on Docker swarm via IPv6 on 17.04.0.

Swarm is a dead product; it is a better use of time to start migrating.

@thaJeztah @aluzzardi could you give us any information on what is going wrong here and what you are doing to address this?

It is very inconvenient that you can’t offer services hosted in Docker swarms over IPv6. We are going to move back to docker-compose, which does not have this problem. Could you at least add it to a milestone in the more or less near future?

I’ve just hit this myself, and as I run a primarily IPv6 network it’s a pain.

One additional thing: when a service has a published port, that port is accessible via IPv4, but a connection via IPv6 initially connects (Docker is listening on the port) and then hangs, because Docker cannot reach the underlying container, which is not listening on IPv6.
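
A quick way to confirm the listener half of this from a node is standard iproute2 tooling (a diagnostic sketch; the output will vary per host):

# the swarm-published port shows up as a dockerd listener, so an IPv6
# connection is accepted, but it then stalls as described above
ss -tlnp | grep ':80'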

I wanted to know if someone is working on this; if so, we would appreciate information on the progress made. Many thanks.

IMHO, accessing exposed service ports via IPv6 addresses of the hosts should even work without the --ipv6 flag. At least this is the behaviour for published ports of conventional containers, and not supporting this leads to strange behaviour like curling localhost not working on some systems (see #24847).

@dElogics

The ipv6nat solution won’t work due to this limitation:

Only networks with driver bridge are supported; this includes Docker’s default network (“bridge”), as well as user-defined bridge networks

Since swarm services only work with overlay networks, this is no solution.

I would also appreciate full IPv6 support in swarm.

Update: actually, IPv6 didn’t work. Sorry for the trouble and disappointment.

I created the same environment again, and this time I couldn’t curl cross-regionally over the overlay network. The last time, when I thought I had it working, was probably some sort of misunderstanding.

I tried curl -v and it showed 10.0.2.2:8080, so the swarm with an IPv6 advertise address was not using IPv6 to connect the containers together.

I’ll spend some more time figuring it out and tweaking things, because swarm is so easy to use and I love using it.

Anyways, thank you @bluikko, @sudo-bmitch and @benz0li for helping me!

What a workaround.

It boggles the mind that it is almost 2020 and Docker still does not have adequate IPv6 support, while proudly stating in https://docs.docker.com/config/daemon/ipv6/ that you “just” enable IPv6 and all will work. You’ll only find out the truth after wasting so much time and digging through tickets, as with so many other problems with Docker.

Not that anyone cares, but I’m done with this and the oh so numerous other bugs, documentation that seems to be written for/by programmers or kids, problematic design choices, inadequacies. K8s is where it’s at, the blue whaleboat is listing precariously.

Any updates on this??

Is this issue’s 9th position in https://github.com/moby/moby/projects/2#card-4820539 a representation of its priority? Does someone understand the code well enough to have an idea of the general area that would need work? There are a few broken links in the CONTRIBUTING.md file.

If you don’t need ingress (for example in a single-manager setup), use host mode when publishing ports. This works the same as with docker-compose, but you don’t lose the other swarm benefits. Compose file version 3.2+:

    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host
      - target: 443
        published: 443
        protocol: tcp
        mode: host
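
To illustrate, a minimal hypothetical stack file built around that snippet (the service name and image are placeholders; deploy mode global is one common pairing with host-mode ports so every node binds them locally):

version: "3.2"
services:
  web:
    image: httpd
    deploy:
      mode: global
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host
      - target: 443
        published: 443
        protocol: tcp
        mode: host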

The first paragraph on https://docs.docker.com/config/daemon/ipv6/ now implies Swarm will work with IPv6:

Before you can use IPv6 in Docker containers or swarm services, you need to enable IPv6 support in the Docker daemon. Afterward, you can choose to use either IPv4 or IPv6 (or both) with any container, service, or network.

Yes, it never actually worked out. So my solution was to do IPv6-to-IPv4 translation before sending the request to the swarm.

As a workaround, I use socat to forward IPv4/IPv6 requests to Traefik; this is quite simple (working on the latest Ubuntu/Debian at least):

  1. Install socat with your package manager
apt install socat
  2. Expose Traefik on different ports than the default 80 and 443 (here I am using 8080 and 8443) in the docker-compose.yml file:
...
    ports:
      - 8080:80
      - 8443:443
....
  3. Create two socat services to listen on ports 80 and 443 (here using systemd) and redirect requests to Traefik over IPv4:
pico /etc/systemd/system/socat-tcp-80.service
# Copy/paste this:
[Unit]
Description=Socat TCP:80
After=network.target

[Service]
Type=simple
User=root
ExecStart=/usr/bin/socat TCP6-LISTEN:80,su=nobody,fork,reuseaddr TCP4:127.0.0.1:8080
Restart=on-failure

[Install]
WantedBy=multi-user.target
pico /etc/systemd/system/socat-tcp-443.service
# Copy/paste this:
[Unit]
Description=Socat TCP:443
After=network.target

[Service]
Type=simple
User=root
ExecStart=/usr/bin/socat TCP6-LISTEN:443,su=nobody,fork,reuseaddr TCP4:127.0.0.1:8443
Restart=on-failure

[Install]
WantedBy=multi-user.target
  4. Start the services and enable them at boot:
systemctl start socat-tcp-80
systemctl start socat-tcp-443
systemctl enable socat-tcp-80
systemctl enable socat-tcp-443

You can look at the service logs using journalctl:

journalctl -feu socat-tcp-80
journalctl -feu socat-tcp-443

You can check that your website is working with both IPv4 and IPv6 here: https://ready.chair6.net

Going on almost three years with this bug. As @peter-mount commented, running an IPv6-only network is a pain with this issue. Disabling the routing mesh entirely is one way to overcome the problem (use --publish …,mode=host --mode global), but that somewhat defeats the whole purpose of a swarm.
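
For reference, a sketch of the CLI form of that workaround (service name and image are placeholders):

docker service create --name web --mode global \
  --publish mode=host,target=80,published=80 httpd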

let me try and get one more 😃 ping @sanimej

Nope, as you can see people have been asking for this for the last 3 years… I use the haproxy workaround (ports published with mode: host, then using the Docker embedded DNS server to resolve service names) to listen on IPv6 sockets (it also allows me to get the real client IP for backend services supporting the proxy protocol).

Config example:
frontend http-in
  bind *:80
  mode tcp
  no option http-server-close
  default_backend apache-http

frontend https-in
  bind *:443
  mode tcp
  no option http-server-close
  default_backend apache-https

backend apache-http
  mode tcp
  no option http-server-close
  server server tasks.apache2:80  send-proxy

backend apache-https
  mode tcp
  no option http-server-close
  server server tasks.apache2:443  send-proxy
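
A rough sketch of how the pieces in that setup could be created so that tasks.apache2 resolves from inside the haproxy container (names and the host path for haproxy.cfg are placeholders; the key points are the shared overlay network and the host-mode published ports):

docker network create --driver overlay web
docker service create --name apache2 --network web httpd
docker service create --name haproxy --network web \
  --mount type=bind,source=/etc/haproxy/haproxy.cfg,target=/usr/local/etc/haproxy/haproxy.cfg,readonly \
  --publish mode=host,target=80,published=80 \
  --publish mode=host,target=443,published=443 \
  haproxy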