compose: Getting UnixHTTPConnectionPool read timeout
I get this error message intermittently:
ERROR: for testdb-data UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
- Docker: 1.13.1
- Compose: 1.10.1
We run around 20 testing jobs that execute docker-compose up on around 14 Jenkins agents. There is a weak correlation between running many jobs at the same time and hitting this error.
I have the output from docker-compose --verbose up but cannot pick apart the relevant parts of it yet. Some potentially useful excerpts (a note on how the full log was captured follows them):
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=rosetta', u'com.docker.compose.service=testdb-data', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
...
compose.project._get_convergence_plans: other-container has upstream changes (testdb-data, some-container)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=some-project', u'com.docker.compose.service=other-container', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
...
(more similar lines)
...
compose.parallel.feed_queue: Starting producer thread for <Service: testdb-data>
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=rosetta', u'com.docker.compose.service=testdb-data', u'com.docker.compose.oneoff=False']})
...
compose.service.create_container: Creating testdb-data
compose.cli.verbose_proxy.proxy_callable: docker create_container <- (name='testdb-data', image='docker-registry.bln.int.planetromeo.com:5000/pr-testdb:master_master', labels={u'com.docker.compose.service': u'testdb-data', u'com.docker.compose.project': u'rosetta', u'com.docker.compose.config-hash': 'c336deb9e460cd8f979029d54975bc936fee5ff573d9698c65ca479c6a7ed507', u'com.docker.compose.version': u'1.10.1', u'com.docker.compose.oneoff': u'False', u'com.docker.compose.container-number': '1'}, host_config={'NetworkMode': u'rosetta_default', 'Links': [], u'Isolation': None, 'PortBindings': {}, 'Binds': [], 'LogConfig': {'Type': u'', 'Config': {}}, 'VolumesFrom': []}, environment=[], entrypoint=['tail', '-f', '/dev/null'], volumes={u'/var/lib/mysql': {}, u'/var/www/dynamic/caches': {}, u'/var/www/pics': {}, u'/data/elastic-profilesearch': {}, u'/var/www/files/lib/_test': {}, u'/data/elastic-activitystream': {}, u'/var/www/dynamic/world': {}}, detach=True, networking_config={u'EndpointsConfig': {u'rosetta_default': {u'IPAMConfig': {}, u'Aliases': ['testdb-data']}}})
...
compose.parallel.parallel_execute_iter: Failed: <Service: testdb-data>
compose.parallel.feed_queue: Pending: set([<Service: service1>, <Service: service2>, <Service: service3>, ...)
compose.parallel.feed_queue: <Service: service1> has upstream errors - not processing
compose.parallel.feed_queue: <Service: service2> has upstream errors - not processing
compose.parallel.feed_queue: <Service: service3> has upstream errors - not processing
...
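For reference, the full log that the excerpts above come from was captured like this (a sketch; the output file name is arbitrary):

# re-run the failing job with debug output and keep a copy of everything
docker-compose --verbose up 2>&1 | tee compose-verbose.log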
- What kind of HTTP connection is this about? Is it Docker internals, bad lines in our Dockerfiles / docker-compose files, or something likely caused by code particular to our application?
- Do you think this timeout is indeed caused by our servers being overloaded?
- Or did we run into a bug?

Can I help by providing more details?
About this issue
- State: closed
- Created 7 years ago
- Reactions: 12
- Comments: 26
This issue, and the same error while pulling official images from Docker Hub, has practically made Docker for MacOS unusable. For me, even restarting Docker doesn’t work.
I get a similar error when running docker-compose up. There is very little load on the docker daemon.
- Docker: 17.03.0-ce
- Compose: 1.11.2, build dfed245
@Carracer66 COMPOSE_HTTP_TIMEOUT is an environment variable, not a command-line parameter.

Docker also has its own limits: under Preferences > Advanced I had to raise Memory from 2GB to 6GB, which solved my issue running 1 hub and 6 nodes. Some of us are getting timeouts because the hub or a node is dying, and increasing timeouts doesn’t help there. Others are probably hitting the same issue in situations where it can be fixed with the environment variable.
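To make the environment-variable point concrete, a minimal sketch (the 120-second value is just an example):

# either export it for the whole shell session:
export COMPOSE_HTTP_TIMEOUT=120
docker-compose up

# or set it inline for a single invocation:
COMPOSE_HTTP_TIMEOUT=120 docker-compose up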
We are seeing this error as well, but we also get it along with this warning:

WARNING: Connection pool is full, discarding connection: localhost

In our case, we’re trying to use the --scale option. What I believe is happening is that Python’s HTTP client (urllib3) has a default connection pool size of 10, which matches what we see when we try to launch more than 10 containers: only the first 10 actually get work done. See https://stackoverflow.com/a/55253192

I also found a workaround here, but it’s less than ideal, and it seems to me that one ought to be able to set the connection pool size from docker-compose somehow.
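If the pool-of-10 theory is right, one crude stopgap (not the workaround linked above; just a sketch, with a hypothetical service named worker) is to scale up in steps of ten so that no single compose invocation has to create more than 10 containers at once:

# scale the hypothetical "worker" service up in batches of 10
for n in 10 20 30; do
  docker-compose up -d --scale worker=$n
  sleep 5   # arbitrary pause between batches
done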
Currently, I am using docker-compose with large timeouts (10 minutes) and it is working for me. Below is the command I am using:
DOCKER_CLIENT_TIMEOUT=600 COMPOSE_HTTP_TIMEOUT=600 docker-compose up
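If you’d rather not prefix every command, compose should also read these from a .env file next to docker-compose.yml (my understanding of how compose loads environment defaults; worth verifying on your version, especially for DOCKER_CLIENT_TIMEOUT):

# .env in the project directory
COMPOSE_HTTP_TIMEOUT=600
DOCKER_CLIENT_TIMEOUT=600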
+1, I am also facing this issue, and neither the timeout increase nor restarting Docker helped. Is a final fix available for this?
I’m seeing this as well. I’ve tried increasing COMPOSE_HTTP_TIMEOUT to 120 both when building images and when running containers. Where is that parameter actually used? That is, where does it need to be specified: at build time or at runtime? What should I try next?
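For reference, this is roughly what I have been running (a sketch of my own attempts):

# at build time:
COMPOSE_HTTP_TIMEOUT=120 docker-compose build
# and at run time:
COMPOSE_HTTP_TIMEOUT=120 docker-compose up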
Is anyone looking at this? It seems numerous people are seeing it and the suggested workarounds are nominally effective (at best).
- Docker version 17.03.1-ce, build c6d412e
- docker-compose version 1.11.2, build dfed245