moby: Scheduler limits the number of containers to ~40 per worker node (overlay network limit is 252 containers) | Swarm 1.12.1

Description

It looks like Swarm can only schedule 38 to 40 containers per worker node.

Steps to reproduce the issue: create services, then scale them …
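
For example, a minimal sequence along these lines reproduces the behaviour (the network/service names and the image are placeholders, not the exact ones I used):

# overlay network with the default subnet, no --subnet given
docker network create --driver overlay demo-net
docker service create --name demo --network demo-net --replicas 1 nginx:alpine
# push well past ~250 tasks
docker service scale demo=300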

At this point nothing is scheduled anymore:

6sccj9n4vbry  g99999003-h            200/300   devmtl/ngx-kyle_hp_g99999003:latest
ak0jrutnzc34  g99999005-h            7/7       devmtl/ngx-kyle_hp_g99999005:latest
elb3soar53el  g99999004-h            41/200    devmtl/ngx-kyle_hp_g99999004:latest

Describe the results you received:

node ls:

root@cloud-a:~/deploy-setup# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
0ao6e2z6w2u5ri4r9z1hya9v1    cloud-02  Ready   Active
0e024xl25q697lhgutvs5211n    cloud-03  Ready   Active
2x974kfq8jcqlyug6paepuhbw    cloud-01  Ready   Active
7iy1byo84ukvz9n691nyzaxh6    cloud-05  Ready   Active
7lnigzqa1v8d2h56kkvgnpuo2    cloud-04  Ready   Active
7tss7n27nnangdt9rehgq0mcw    cloud-06  Ready   Active
aoe97nq38369l7zf0w9wvjmkw    cloud-07  Ready   Active
c9xiessquo34aq0sgrfbtgkpm *  cloud-a   Ready   Active        Leader

docker service ps g99999003-h (extract of the results)

167kjjtg32bgwe8q4a8i03mkj  g99999003-h.195      devmtl/ngx-kyle_hp_g99999003:latest  cloud-06  Running        Running 4 minutes ago
1exro6x4uzq5zjsjihu5eoeba  g99999003-h.196      devmtl/ngx-kyle_hp_g99999003:latest  cloud-03  Running        Running 4 minutes ago
6qx32o052c8xi0e4vfizlc0fw  g99999003-h.197      devmtl/ngx-kyle_hp_g99999003:latest  cloud-05  Running        Running 4 minutes ago
5fkr4plclethhnmn6wxvet543  g99999003-h.198      devmtl/ngx-kyle_hp_g99999003:latest  cloud-01  Running        Running 5 minutes ago
3dzeyvo6xlu3r1w4j98ezh5p9  g99999003-h.199      devmtl/ngx-kyle_hp_g99999003:latest  cloud-06  Running        Running 5 minutes ago
cd9xeekrgpvd0job60y3tgkix  g99999003-h.200      devmtl/ngx-kyle_hp_g99999003:latest  cloud-06  Running        Running 4 minutes ago
00o1jwrhsrh5pc44qjb02o53x  g99999003-h.201      devmtl/ngx-kyle_hp_g99999003:latest            Running        New 3 minutes ago
5w7uhetxwiauxq9h847te0a32  g99999003-h.202      devmtl/ngx-kyle_hp_g99999003:latest            Running        New 3 minutes ago
8rgbqvbhirxzbta6c3eb7kyn0  g99999003-h.203      devmtl/ngx-kyle_hp_g99999003:latest            Running        New 3 minutes ago
5emzvj8rglg290cznx75ulhd4  g99999003-h.204      devmtl/ngx-kyle_hp_g99999003:latest            Running        New 3 minutes ago
cp7fybsl540rlfh5a1nw6s6t3  g99999003-h.205      devmtl/ngx-kyle_hp_g99999003:latest            Running        New 3 minutes ago
93g12degpgzafi14z4flk1fkk  g99999003-h.206      devmtl/ngx-kyle_hp_g99999003:latest            Running        New 3 minutes ago
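
To confirm the per-node ceiling, the running tasks can be counted per node from the same output (node and service names taken from above):

# count how many tasks of the service landed on each worker
for n in cloud-01 cloud-02 cloud-03 cloud-04 cloud-05 cloud-06 cloud-07; do
  printf '%s: ' "$n"
  docker service ps g99999003-h | grep -c " $n "
done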

Describe the results you expected:

I expected each node to max out its memory or CPU limits. My workers have 2 GB of RAM; they can handle many more containers.
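
As far as I know, the Swarm scheduler only takes memory/CPU into account when reservations are set on the service; a minimal sketch, with placeholder names and values:

# reservations make the scheduler account for memory/CPU when placing tasks
docker service create --name demo \
  --reserve-memory 64MB --limit-memory 128MB \
  --reserve-cpu 0.1 \
  --network demo-net \
  nginx:alpine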

Additional information you deem important (e.g. issue happens only occasionally):

Output of docker version:

Client:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 05:33:38 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 05:33:38 2016
 OS/Arch:      linux/amd64

Output of docker info:

Containers: 2
 Running: 2
 Paused: 0
 Stopped: 0
Images: 2
Server Version: 1.12.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: host bridge overlay null
Swarm: active
 NodeID: c9xiessquo34aq0sgrfbtgkpm
 Is Manager: true
 ClusterID: 5m2c8uzz9v75xyythzvb6vgh1
 Managers: 1
 Nodes: 8
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 10.2.33.137
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.5.7-docker-4
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.954 GiB
Name: cloud-a
ID: WZDW:GVT3:DIOD:B7JJ:56IH:QNB3:SC7T:XBPW:WW36:B4ES:6G62:JLZH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
 provider=scaleway
Insecure Registries:
 127.0.0.0/8

Additional environment details (AWS, VirtualBox, physical, etc.):

Physical; each VPS was created from the official ‘docker 1.12.1’ image from Scaleway. No firewall while testing.

Cheers! Pascal

About this issue

  • State: closed
  • Created 8 years ago
  • Comments: 27 (13 by maintainers)

Most upvoted comments

[screenshot, 2016-09-22 2:17 PM]

Service small is attached to a network called test-small, created like this:

docker network create --driver overlay test-small

Service big is attached to a network called test-big, created like this:

docker network create --driver overlay --subnet=10.1.0.0/16 test-big
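
The difference between the two can be checked by inspecting each network's IPAM configuration on a manager (network names as above; as far as I understand, a default overlay network gets a /24 subnet, which leaves only roughly 252 addresses for tasks, the gateway and service VIPs):

# show the subnet each network was created with
docker network inspect --format '{{json .IPAM.Config}}' test-small
docker network inspect --format '{{json .IPAM.Config}}' test-big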

@pascalandy Creating a network with a larger subnet is simple. Just create one like this: docker network create --subnet=10.1.0.0/16 -d overlay foo
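
For completeness, a sketch of moving a service onto such a network (on 1.12 a service's networks cannot be changed in place as far as I know, so the service has to be recreated; the names, replica count and image below are taken from the report and are only illustrative):

# overlay network with a /16 subnet: ~65k addresses instead of ~252
docker network create --driver overlay --subnet=10.1.0.0/16 foo
# recreate the service attached to the larger network, then scale as needed
docker service rm g99999003-h
docker service create --name g99999003-h --network foo --replicas 300 devmtl/ngx-kyle_hp_g99999003:latest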