moby: [1.12.0-rc2] Joining a remote node without --manager doesn't require acceptance

Output of docker version:

Client (laptop, OSX)

$> docker version
Client:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   906eacd
 Built:        Fri Jun 17 20:35:33 2016
 OS/Arch:      darwin/amd64
 Experimental: true

Server:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   a7119de
 Built:        Fri Jun 17 22:09:20 2016
 OS/Arch:      linux/amd64
 Experimental: true

Server (swarm node, Ubuntu):

Client:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   906eacd-unsupported
 Built:        Fri Jun 17 21:21:56 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   906eacd-unsupported
 Built:        Fri Jun 17 21:21:56 2016
 OS/Arch:      linux/amd64

Output of docker info:

Client (laptop):

Containers: 78
 Running: 0
 Paused: 0
 Stopped: 78
Images: 51
Server Version: 1.12.0-rc2
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 344
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: overlay bridge null host
Swarm: inactive
Runtimes: default
Default Runtime: default
Security Options: seccomp
Kernel Version: 4.4.13-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.954 GiB
Name: moby
ID: AX3N:B5KC:6TY6:V5UW:5MAH:JSD3:PXBN:HC3H:GF3L:GC7L:Z35R:KAIW
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 33
 Goroutines: 95
 System Time: 2016-06-20T22:09:37.819574454Z
 EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 127.0.0.0/8

Server (swarm cluster):

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.12.0-rc2
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 0
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: overlay host null bridge
Swarm: inactive
Runtimes: default
Default Runtime: default
Security Options: apparmor seccomp
Kernel Version: 4.4.0-24-generic
Operating System: Ubuntu 16.04 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 991.1 MiB
Name: docker1
ID: LOR7:TUJW:PXCF:SGKX:L5LU:GQCT:UTRN:G72M:JU32:I24J:JN3R:BN34
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
 127.0.0.0/8

Additional environment details (AWS, VirtualBox, physical, etc.):

Steps to reproduce the issue:

  1. Set up a remote swarm cluster - mine is 4 remote nodes, 2 managers
  2. On your laptop, run:
$> docker swarm join swarmIP:2377
This node joined a Swarm as a worker.
  3. On the leader, run:
$> docker node ls
ID                         NAME     MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
09y7cdu03xmuwhta0ykvd72gb  docker4  Accepted    Ready   Active        Reachable
1jkzrkblwbdig3l5u29997n00  docker1  Accepted    Ready   Active        Leader
41znz1ntskwvvvea8up52wu0m  docker2  Accepted    Ready   Active        
5bt1bcak4tuzm9a39a7ou9upz  docker3  Accepted    Ready   Active        
7ljlfo1zqcid0mc8cmir64i6g  moby     Accepted    Ready   Active

Describe the results you received:

My ‘rogue’ laptop was able to join the swarm cluster as a worker and shows up in the manager list as ready and active.

Describe the results you expected:

I expected the join command to require acceptance from a manager node, as it does when joining with --manager.

Additional information you deem important (e.g. issue happens only occasionally):

About this issue

  • Original URL
  • State: closed
  • Created 8 years ago
  • Comments: 21 (15 by maintainers)

Most upvoted comments

@diogomonica From an ease of use standpoint I agree with you. This totally makes things easy to use. I remember having this same debate at Puppet Labs back in the day. Puppet uses a similar method to handle membership and the decision was made to be secure by default and make auto-accept opt-in.

What Puppet Labs found back then is that some users were unaware of the dangers behind auto-accept until it was too late. Given how good the UX is around the initial setup, I think requiring a single flag to opt in to auto-accept only slightly diminishes the user experience and helps people do the right thing out of the box.

If it’s determined that auto-accept should be on by default, then I would suggest printing a warning that you are running in auto-accept mode for worker nodes, along with the commands and/or a link to documentation that people can use to lock things down and disable auto-accept.

I’d rather change the default, but if we don’t, then at least print a warning with a link to docs about securing the swarm. The current situation is worse than either of those two, IMHO.

Correct, this works as intended. The default policy has managers on manual approval and workers on auto-approval. This can be changed by passing parameters to docker swarm init, or by using docker swarm update after the fact.
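A sketch of the policy change described above. The --auto-accept flag and docker node accept subcommand are assumptions based on the 1.12.0-rc CLI and may differ between RCs; the final 1.12 release replaced acceptance policies entirely with join tokens.

```shell
# Sketch only: flag names assumed from the 1.12.0-rc CLI (later replaced by join tokens).

# Initialize a swarm that requires manual acceptance for all joining nodes:
$> docker swarm init --auto-accept none

# Or tighten the policy on an existing swarm after the fact:
$> docker swarm update --auto-accept none

# Pending nodes must then be accepted explicitly on a manager:
$> docker node accept <node-id>
```

With this policy, the rogue laptop in the repro above would sit in a pending state instead of immediately showing up as Ready/Active.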

This seems like bad behavior by default (even if intended). Unless you know what you’re doing and using a private / encrypted network for your entire swarm, I could just join as a worker and be assigned work/containers/etc.

It would be fairly trivial to port scan an IP block for the open swarm/manager ports and join up. IMO the default should always be manual approval unless the manager/init is set to ‘auto approve workers’.