testcontainers-java: Can't connect to Ryuk container when using remote Docker host

Version 1.6.0. Not sure if this comes from using multiple threads, but I have never seen it with a single thread. Many tests use several types of containers.

java.lang.ExceptionInInitializerError
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:58)
	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:49)
	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69)
	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:48)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
	at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
	at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
	at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:105)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
	at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
	at org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:355)
	at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Can not connect to Ryuk
	at org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:158)
	at org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:116)
	at org.testcontainers.containers.GenericContainer.<init>(GenericContainer.java:126)
	at org.testcontainers.containers.JdbcDatabaseContainer.<init>(JdbcDatabaseContainer.java:35)
	at org.testcontainers.containers.PostgreSQLContainer.<init>(PostgreSQLContainer.java:28)
	at com.tests.container.DbContainer.<init>()
	at com.tests.MyTest.<clinit>()
	... 27 more

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 7
  • Comments: 43 (13 by maintainers)

Most upvoted comments

Maybe I’m wrong, but this could be a known Docker bug; I have run into several issues like this before.

Long story short: everyone can reach a docker-proxied port except containers on the same host. Basically, two containers with IPs 172.17.0.2 and 172.17.0.3 and exposed ports cannot reach each other on those ports via the Docker host IP 172.17.0.1:(exposed port). You get ‘No route to host’.

It is a docker/firewall/iptables issue.

You have to allow it in the iptables filter/INPUT chain, either for an exact IP or for the whole Docker network. FirewallD example:

firewall-cmd --direct --add-rule ipv4 filter INPUT_direct 0 -s 172.17.0.0/24 -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT_direct 0 -s 172.17.0.0/24 -j ACCEPT

Or the plain iptables equivalent:

iptables -t filter -I INPUT 1 -s 172.17.0.0/24 -j ACCEPT

Here 172.17.0.0/24 is the default Docker network out of the box.

This will INSERT the rule as the FIRST one in the filter/INPUT chain. To delete it: iptables -t filter -D INPUT 1

To test it, just run two containers, e.g. centos (install telnet there) and nginx with an exposed port, and try to connect via telnet from the centos container to the nginx server through the Docker host IP and the exposed nginx port (see the sketch below). Keep in mind that ping always works, so don’t rely on it.
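A minimal sketch of that check, assuming the default bridge network; the container names, image choices and port 8081 are placeholders, not anything from this thread:

$ docker run -d --name web -p 8081:80 nginx
$ docker run --rm -it centos bash
# inside the centos container:
# yum install -y telnet
# telnet 172.17.0.1 8081   # 'No route to host' here points at the iptables/firewalld problem described above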

Just FYI, my docker-compose.yml, config.toml and gitlab-ci.yml are in the attached zip file.

1.zip

Please see #1274. It is very likely that the local UNIX socket is not opened on your Docker host. Adding -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 when starting the Docker daemon should solve your issue.
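For illustration, one way to pass both listeners on a systemd-based host is a drop-in override; the file path below is just an example, and exposing an unauthenticated tcp://0.0.0.0:2375 socket is only acceptable on a trusted network:

# /etc/systemd/system/docker.service.d/override.conf (sketch)
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

$ sudo systemctl daemon-reload && sudo systemctl restart docker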

I’m experiencing a similar issue when running on Jenkins, with a DOCKER_HOST pointing to some other host:

  2018-04-03 17:59:19.719  WARN --- [containers-ryuk] o.testcontainers.utility.ResourceReaper  : Can not connect to Ryuk at 10.17.48.167:32800
  java.net.ConnectException: Connection refused
  	at java.net.PlainSocketImpl.socketConnect(Native Method)
  	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
  	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
  	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
  	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
  	at java.net.Socket.connect(Socket.java:589)
  	at java.net.Socket.connect(Socket.java:538)
  	at java.net.Socket.<init>(Socket.java:434)
  	at java.net.Socket.<init>(Socket.java:211)
  	at org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:111)
  	at org.testcontainers.utility.ResourceReaper$$Lambda$41/1921545475.run(Unknown Source)
  	at java.lang.Thread.run(Thread.java:745)

Downgrading to 1.5.1 fixes the issue.

@skyline75489 It is not just Ryuk, but every container we start will be listening on a random port.

@Wosin no, sorry. As I said - it is not just Ryuk, but any container we start.

@kiview I tested with a single job on a fairly up-to-date Jenkins install.

We did not see any regressions when running a single build. I also ran multiple builds in parallel (not sure this matters) and didn’t see any regressions in that case either.

We are tentatively upgrading from 1.7.1 to 1.9.1. If I see regressions, should I continue to report them to this ticket?

@bearrito Would be great if you can test with 1.9.1.

We are seeing a similar issue on our Jenkins agents running CentOS with SELinux.

14:22:41.936 [testcontainers-ryuk] WARN  org.testcontainers.utility.ResourceReaper - Can not connect to Ryuk at localhost:32797
java.net.SocketException: Connection reset
	at java.net.SocketInputStream.read(SocketInputStream.java:210) ~[na:1.8.0_131]
	at java.net.SocketInputStream.read(SocketInputStream.java:141) ~[na:1.8.0_131]
	at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) ~[na:1.8.0_131]
	at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) ~[na:1.8.0_131]
	at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) ~[na:1.8.0_131]
	at java.io.InputStreamReader.read(InputStreamReader.java:184) ~[na:1.8.0_131]
	at java.io.BufferedReader.fill(BufferedReader.java:161) ~[na:1.8.0_131]
	at java.io.BufferedReader.readLine(BufferedReader.java:324) ~[na:1.8.0_131]
	at java.io.BufferedReader.readLine(BufferedReader.java:389) ~[na:1.8.0_131]
	at org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:139) ~[testcontainers-1.6.0.jar:na]
	at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_131]

When starting the ryuk container manually, we see a socket permission problem.

$ docker run -p8080:8080 -v /var/run/docker.sock:/var/run/docker.sock --name ryuk -P -d quay.io/testcontainers/ryuk:0.2.2
$ docker logs ryuk
panic: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/_ping: dial unix /var/run/docker.sock: connect: permission denied

The access is blocked by SELinux for security reasons, see https://danwalsh.livejournal.com/74095.html. We could allow this globally on the agents, but given the security issues outlined there, this is probably not a good idea.
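For the manual check above, one narrower workaround (a sketch, assuming SELinux label enforcement on the mounted socket is what blocks the access) is to disable label separation for that single container rather than system-wide; note that Testcontainers 1.6.0 itself does not pass this flag when it starts Ryuk, so this only confirms the diagnosis:

$ docker run -d --name ryuk -P \
    --security-opt label=disable \
    -v /var/run/docker.sock:/var/run/docker.sock \
    quay.io/testcontainers/ryuk:0.2.2
$ docker logs ryuk   # should no longer panic with 'permission denied'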

@alvarosanchez With version 1.6.0 we’ve introduced a new implementation of the ResourceReaper component, which now runs outside of the JVM in its own Docker container (the container is called Ryuk, and the error appears when trying to connect to it). Since we don’t have an automated CI build for this scenario (remote DOCKER_HOST), I’m not sure whether this implementation broke compatibility with some remote setups.
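For anyone hitting this with a remote DOCKER_HOST, a rough manual reproduction of the connection Testcontainers attempts (REMOTE_HOST, the container name and the use of nc are illustrative, not part of the library; Ryuk listens on 8080 inside the container):

$ export DOCKER_HOST=tcp://REMOTE_HOST:2375
$ docker run -d --name ryuk-check -P -v /var/run/docker.sock:/var/run/docker.sock quay.io/testcontainers/ryuk:0.2.2
$ docker port ryuk-check 8080/tcp   # e.g. 0.0.0.0:32800
$ nc -vz REMOTE_HOST 32800          # run from the machine executing the tests; 'Connection refused' matches the error above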

I’ve changed the title of the issue to better reflect our current state of knowledge; I hope that’s okay for everyone involved.

@alexcase52 Were you able to find out if this issue is related to multiple Gradle threads?