compose: Getting UnixHTTPConnectionPool read timeout

I get this error message intermittently:

ERROR: for testdb-data  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
  • Docker: 1.13.1
  • Compose: 1.10.1

We run around 20 testing jobs that execute docker-compose up across roughly 14 Jenkins agents. There is a weak correlation between running many jobs at the same time and getting this error.

I have the output from docker-compose --verbose up but cannot yet pick out the relevant parts. Some potentially useful excerpts:

compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=rosetta', u'com.docker.compose.service=testdb-data', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
...
compose.project._get_convergence_plans: other-container has upstream changes (testdb-data, some-container)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=some-project', u'com.docker.compose.service=other-container', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
...
(more similar lines)
...
compose.parallel.feed_queue: Starting producer thread for <Service: testdb-data>
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=rosetta', u'com.docker.compose.service=testdb-data', u'com.docker.compose.oneoff=False']})
...
compose.service.create_container: Creating testdb-data
compose.cli.verbose_proxy.proxy_callable: docker create_container <- (name='testdb-data', image='docker-registry.bln.int.planetromeo.com:5000/pr-testdb:master_master', labels={u'com.docker.compose.service': u'testdb-data', u'com.docker.compose.project': u'rosetta', u'com.docker.compose.config-hash': 'c336deb9e460cd8f979029d54975bc936fee5ff573d9698c65ca479c6a7ed507', u'com.docker.compose.version': u'1.10.1', u'com.docker.compose.oneoff': u'False', u'com.docker.compose.container-number': '1'}, host_config={'NetworkMode': u'rosetta_default', 'Links': [], u'Isolation': None, 'PortBindings': {}, 'Binds': [], 'LogConfig': {'Type': u'', 'Config': {}}, 'VolumesFrom': []}, environment=[], entrypoint=['tail', '-f', '/dev/null'], volumes={u'/var/lib/mysql': {}, u'/var/www/dynamic/caches': {}, u'/var/www/pics': {}, u'/data/elastic-profilesearch': {}, u'/var/www/files/lib/_test': {}, u'/data/elastic-activitystream': {}, u'/var/www/dynamic/world': {}}, detach=True, networking_config={u'EndpointsConfig': {u'rosetta_default': {u'IPAMConfig': {}, u'Aliases': ['testdb-data']}}})
...
compose.parallel.parallel_execute_iter: Failed: <Service: testdb-data>
compose.parallel.feed_queue: Pending: set([<Service: service1>, <Service: service2>, <Service: service3>, ...)
compose.parallel.feed_queue: <Service: service1> has upstream errors - not processing
compose.parallel.feed_queue: <Service: service2> has upstream errors - not processing
compose.parallel.feed_queue: <Service: service3> has upstream errors - not processing
...
  • What kind of HTTP connection does this error refer to? Is it Docker internals, bad lines in our Dockerfiles / docker-compose files, or something particular to our application code?
  • Do you think this is indeed caused by our servers being overloaded, hence the timeout?
  • Or did we run into a bug?

Can I help by providing more details?
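
For reference, my understanding is that the connection in question is compose's docker-py client talking to the Docker daemon over the local Unix socket, which is why the error shows UnixHTTPConnectionPool with host='localhost' and no port. A minimal sketch of the same client and its timeout knob, assuming the default socket path (the 600-second value is purely illustrative):

import docker

# Compose drives dockerd through docker-py over /var/run/docker.sock;
# COMPOSE_HTTP_TIMEOUT becomes the HTTP read timeout on that connection.
client = docker.APIClient(base_url='unix://var/run/docker.sock', timeout=600)

# Any API call here is the same kind of request that is timing out above.
print(client.version()['Version'])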

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 12
  • Comments: 26

Most upvoted comments

This issue, and the same error while pulling official images from Docker Hub, has practically made Docker for macOS unusable. For me, even restarting Docker doesn't help.

I get a similar error when running docker-compose up:

Exception in thread Thread-5:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/threading.py", line 804, in __bootstrap_inner
    self.run()
  File "/usr/lib64/python2.7/threading.py", line 757, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/home/vernica/.local/lib/python2.7/site-packages/compose/cli/log_printer.py", line 197, in watch_events
    for event in event_stream:
  File "/home/vernica/.local/lib/python2.7/site-packages/compose/project.py", line 356, in events
    decode=True
  File "/home/vernica/.local/lib/python2.7/site-packages/docker/api/client.py", line 290, in _stream_helper
    for chunk in json_stream(self._stream_helper(response, False)):
  File "/home/vernica/.local/lib/python2.7/site-packages/docker/utils/json_stream.py", line 66, in split_buffer
    for data in stream_as_text(stream):
  File "/home/vernica/.local/lib/python2.7/site-packages/docker/utils/json_stream.py", line 22, in stream_as_text
    for data in stream:
  File "/home/vernica/.local/lib/python2.7/site-packages/docker/api/client.py", line 296, in _stream_helper
    data = reader.read(1)
  File "/home/vernica/.local/lib/python2.7/site-packages/requests/packages/urllib3/response.py", line 324, in read
    flush_decoder = True
  File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
    self.gen.throw(type, value, traceback)
  File "/home/vernica/.local/lib/python2.7/site-packages/requests/packages/urllib3/response.py", line 237, in _error_catcher
    raise ReadTimeoutError(self._pool, None, 'Read timed out.')
ReadTimeoutError: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out.

There is very little load on the docker daemon.


  • Docker Engine: 17.03.0-ce
  • Docker Compose: 1.11.2, build dfed245

@Carracer66 COMPOSE_HTTP_TIMEOUT is an environment variable, not a command-line parameter, e.g.

COMPOSE_HTTP_TIMEOUT=120 docker-compose up

Docker has its own limits. In Preferences > Advanced I had to raise Memory from 2 GB to 6 GB, which solved my issue running 1 hub and 6 nodes. Some of us are getting timeouts because the hub or a node is dying, and increasing timeouts won't help there. Others are probably hitting situations where the same error really can be fixed with an environment variable.

We are seeing this error as well, along with this warning: WARNING: Connection pool is full, discarding connection: localhost.

In our case, we're trying to use the --scale option. I believe the problem is that Python's urllib3 has a default connection pool size of 10, which matches what we see when we try to launch more than 10 containers: only the first 10 actually get work done. See https://stackoverflow.com/a/55253192

I also found a workaround here, but it’s less than ideal, and it seems to me that one ought to be able to set the connection pool size from docker-compose somehow.
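
To make that limit concrete with the underlying libraries (this is plain requests/urllib3, not anything docker-compose exposes; a hedged sketch):

import requests
from requests.adapters import HTTPAdapter

# urllib3 connection pools hold at most 10 connections per host by default;
# concurrent request number 11 triggers "Connection pool is full, discarding
# connection" as the extra sockets get thrown away on return to the pool.
session = requests.Session()

# Mounting an adapter with a larger pool_maxsize raises that ceiling.
session.mount('http://', HTTPAdapter(pool_connections=10, pool_maxsize=50))

Compose's client is built on these libraries but, as far as I can tell, doesn't expose the pool size.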

Currently I'm using the docker-compose command with large timeouts (10 minutes) and it is working for me. Below is the command I'm using:

DOCKER_CLIENT_TIMEOUT=600 COMPOSE_HTTP_TIMEOUT=600 docker-compose up

+1, I'm also facing this issue, and neither the timeout increase nor restarting Docker helped. Is a final fix available for this?

I'm seeing this as well. I've tried increasing COMPOSE_HTTP_TIMEOUT to 120 when building images and running containers. Where is that parameter being used? That is, where does it need to be specified: at build time or at runtime? What should I try next?

Is anyone looking at this? Numerous people seem to be seeing it, and the suggested workarounds are only nominally effective at best.

  • Docker: 17.03.1-ce, build c6d412e
  • Docker Compose: 1.11.2, build dfed245