moby: docker swarm mode - can't reach containers by link alias

Hi,

I have the following docker-compose file, deployed with docker stack deploy --compose-file to a swarm cluster using Docker version 1.13.0, build 49bf474:

version: '3'
services:
  ### SERVER LAYER ###
  nginx:
    hostname: "nginx"
    domainname: "docker"
    container_name: "mtc_nginx"
    image: radmas/mtc-plus-nginx
    restart: always
    networks:
      - server
    depends_on:
      - fpm
    links:
      - "fpm:fpm.docker"
    expose:
      - "80"
      - "443"
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"
    deploy:
      placement:
        constraints: [node.hostname == mtc-plus-prod-balancer]

  ### APPLICATION LAYER ###
  fpm:
    image: radmas/mtc-plus-fpm
    restart: always
    container_name: "mtc_fpm"
    hostname: "fpm"
    domainname: "docker"
    networks:
      - server
      - application
      - data
    expose:
      - "9000"
    deploy:
      placement:
        constraints: [node.hostname == mtc-plus-prod-applications]

networks:
  server:
    driver: overlay
  application:
    driver: overlay
  data:
    driver: overlay

From inside the NGINX container, I can reach the FPM container using fpm as the hostname, but fpm.docker does not resolve:

root@nginx:/# ping fpm.docker
ping: unknown host

root@nginx:/# ping fpm
PING fpm (10.0.0.4): 56 data bytes
64 bytes from 10.0.0.4: icmp_seq=0 ttl=64 time=0.055 ms
64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms
64 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=0.060 ms

Is this the expected behaviour?

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Comments: 15 (5 by maintainers)

Most upvoted comments

Hi @simdrouin @jmcalvar

I finally found a workaround for Docker swarm mode: use network-level aliases. Here is an example:

  elasticsearch:
    image: radmas/mtc-plus-elasticsearch
    volumes:
      - ./persistent-data/elasticsearch:/usr/share/elasticsearch/data
      - ./persistent-data/elasticsearch-backups:/backups
    command: elasticsearch --default.path.repo=/backups
    networks:
      data:
        aliases:
          - elasticsearch.docker
    logging:
      driver: syslog
      options:
        syslog-address: "udp://10.129.26.80:5514"
        tag: "docker[elasticsearch]"
    deploy:
      placement:
        constraints: [node.labels.purpose == extra-data]

With this setup, any container in the data network can reach the ElasticSearch container using its network alias “elasticsearch.docker”. Please note that “elasticsearch.docker” is a “service” DNS name, not a container name; if you scale the service, requests to elasticsearch.docker will be load-balanced across its replicas.
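Applied to the compose file from the original question, the same workaround would look roughly like this (a sketch only; the service, image, and network names are taken from the question's file, and the links: entry is dropped since swarm mode ignores it):

```yaml
  fpm:
    image: radmas/mtc-plus-fpm
    hostname: "fpm"
    domainname: "docker"
    networks:
      server:
        aliases:
          - fpm.docker   # resolvable by any container on the "server" network
      application:
      data:
    expose:
      - "9000"
    deploy:
      placement:
        constraints: [node.hostname == mtc-plus-prod-applications]
```

From inside the nginx container, fpm.docker should then resolve, to the service's virtual IP rather than to a single container.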

When you run docker stack deploy you should see

Ignoring unsupported options: links

Links (container-to-container aliases) are not supported in swarm mode.