moby: docker service ls | Error response from daemon: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5097134 vs. 4194304)
Description

`docker service ls` fails with

```
Error response from daemon: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5097134 vs. 4194304)
```

after upgrading from Docker 18.04 to 18.06.1-ce.
Output of `docker version`:

```
Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:24:56 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:23:21 2018
  OS/Arch:          linux/amd64
  Experimental:     false
```
Output of `docker info`:

```
Containers: 102
 Running: 102
 Paused: 0
 Stopped: 0
Images: 2636
Server Version: 18.06.1-ce
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 1422
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: bcmt7u1pea2jsixbgik4rsro3
 Is Manager: true
 ClusterID: f95tt2krxxhhmxli0mnj1umlx
 Managers: 9
 Nodes: 9
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 192.168.51.107
 Manager Addresses:
  192.168.51.107:2377
  192.168.51.108:2377
  192.168.51.109:2377
  192.168.51.110:2377
  192.168.51.121:2377
  192.168.51.122:2377
  192.168.51.123:2377
  192.168.51.124:2377
  192.168.51.125:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-127-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 12
Total Memory: 62.81GiB
Name: pf-us1-sm2
ID: OZK4:WPBM:L64X:6WXK:QDB7:UBZF:YTKA:MC3X:GNDK:DWQM:KGJ4:M24E
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 192.168.51.107:5000
 127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
```
Logs

```
Oct 09 08:57:00 dockerd[30784]: time="2018-10-09T08:57:00.134884681-07:00" level=error msg="Error getting tasks: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5097134 vs. 4194304)"
Oct 09 08:57:00 dockerd[30784]: time="2018-10-09T08:57:00.134971152-07:00" level=error msg="Handler for GET /tasks returned error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5097134 vs. 4194304)"
```
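The two sizes in the error line up with gRPC's default receive limit: 4194304 bytes is exactly 4 MiB, which the 5097134-byte tasks response exceeds. A quick check of that arithmetic:

```shell
# gRPC's default maximum receive message size is 4 MiB.
echo $((4 * 1024 * 1024))   # prints 4194304, the limit shown in the error

# The tasks response reported in the error is larger than that limit:
[ 5097134 -gt $((4 * 1024 * 1024)) ] && echo "exceeds limit"
```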
About this issue
- Original URL
- State: open
- Created 6 years ago
- Comments: 18 (8 by maintainers)
Commits related to this issue
- Add streaming API for tasks As documented in moby/moby#37997, if there are too many tasks, then attempting to list them all may cause the size of the response to exceed the gRPC max message size. To... — committed to dperny/swarmkit-1 by dperny 6 years ago
- Use swarmkit ListTasksStream API Uses a new swarmkit ListTasksStream gRPC endpoint to avoid a bug where having too many tasks could cause the gRPC message size to exceed the max and prevent the task ... — committed to dperny/docker by dperny 6 years ago
- Bump SwarmKit to 8d8689d5a94ac42406883a4cef89b3a5eaec3d11 Changes included; - docker/swarmkit#2735 Assign secrets individually to each task - docker/swarmkit#2759 Adding a new `Deallocator` componen... — committed to thaJeztah/docker by thaJeztah 6 years ago
- Bump SwarmKit to 8d8689d5a94ac42406883a4cef89b3a5eaec3d11 Changes included; - docker/swarmkit#2735 Assign secrets individually to each task - docker/swarmkit#2759 Adding a new `Deallocator` componen... — committed to docker/docker-ce by thaJeztah 6 years ago
- Bump SwarmKit to 8d8689d5a94ac42406883a4cef89b3a5eaec3d11 Changes included; - docker/swarmkit#2735 Assign secrets individually to each task - docker/swarmkit#2759 Adding a new `Deallocator` componen... — committed to adhulipa/docker by thaJeztah 6 years ago
@fabiopedrosa I’m not a Docker maintainer, so I can’t say anything about when or how this will be fixed. I’m just trying to ask the right questions to gather as much useful information here as possible, and to suggest workarounds as far as I understand this.
OK. As a workaround, you can probably use `--filter` (https://docs.docker.com/engine/reference/commandline/service_ls/#filtering), or downgrade to 18.03.1.

Looks like this regression was possibly caused by a change in gRPC that lowered the message size limit to 4 MB from a very high limit (math.MaxInt32). See the referenced PR above.
That said, such huge message sizes seem like another thing that needs to be resolved.
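As an illustration of the `--filter` workaround mentioned above, narrowing the service list keeps the task response small. The service name and label here are hypothetical; substitute names from your own stack:

```shell
# List only services whose name starts with "web" (hypothetical prefix).
docker service ls --filter name=web

# Filtering by label also works:
docker service ls --filter label=env=production
```

Note this only reduces how much data each call asks for; it does not change the daemon's gRPC limit, so a single filter that still matches many tasks can hit the same error.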
@olljanat No, that’s not the case for us. We had no containers stop or fail recently, and we even made sure to run `docker prune container -a -y` on all nodes before creating this ticket.