testcontainers-java: Can't connect to Ryuk container when using remote Docker host
Version: 1.6.0. I'm not sure whether this comes from running tests on multiple threads, but I have never seen it with a single thread. Many tests use several types of containers.
java.lang.ExceptionInInitializerError
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:58)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:49)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:355)
at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Can not connect to Ryuk
at org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:158)
at org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:116)
at org.testcontainers.containers.GenericContainer.<init>(GenericContainer.java:126)
at org.testcontainers.containers.JdbcDatabaseContainer.<init>(JdbcDatabaseContainer.java:35)
at org.testcontainers.containers.PostgreSQLContainer.<init>(PostgreSQLContainer.java:28)
at com.tests.container.DbContainer.<init>()
at com.tests.MyTest.<clinit>()
... 27 more
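For reference, a minimal sketch of the kind of test setup that hits this path; the class and field names are illustrative stand-ins for the project-specific DbContainer and MyTest named in the trace. Because the container is created in a static initializer, the Ryuk connection failure surfaces as an ExceptionInInitializerError when the test class loads.

```java
import org.junit.ClassRule;
import org.junit.Test;
import org.testcontainers.containers.PostgreSQLContainer;

// Illustrative sketch only; the real project wraps PostgreSQLContainer in DbContainer.
public class MyTest {

    // Constructed while the class is loaded (MyTest.<clinit>), which is where
    // DockerClientFactory starts Ryuk and fails with "Can not connect to Ryuk".
    @ClassRule
    public static PostgreSQLContainer db = new PostgreSQLContainer();

    @Test
    public void usesTheDatabase() {
        System.out.println(db.getJdbcUrl());
    }
}
```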
About this issue
- State: closed
- Created 6 years ago
- Reactions: 7
- Comments: 43 (13 by maintainers)
Maybe I'm wrong, but this could be a 'known Docker bug'; I have run into several issues like this before.
Long story short: everyone can reach a docker-proxied port except containers on the same host. Basically, two containers with IPs 172.17.0.2 and 172.17.0.3 and exposed ports cannot connect to each other on those ports via the docker host IP 172.17.0.1:(exposed port); you get 'No route to host'.
It is a docker/firewall/iptables issue.
You have to allow it in the iptables filter/INPUT chain for the exact IP or the Docker network. FirewallD example:
firewall-cmd --direct --add-rule ipv4 filter INPUT_direct 0 -s 172.17.0.0/24 -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT_direct 0 -s 172.17.0.0/24 -j ACCEPT
Or an iptables example:
iptables -t filter -I INPUT 1 -s 172.17.0.0/24 -j ACCEPT
where 172.17.0.0/24 is the default Docker network out of the box.
This will insert the rule into the filter/INPUT chain as the first one. To delete it:
iptables -t filter -D INPUT 1
To test it, just run two containers, e.g. centos (install telnet there) and nginx with an exposed port, then try to connect via telnet to the nginx server through the docker host IP and the exposed nginx port. Keep in mind that ping always works, so don't rely on it.
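If telnet is not handy, a small Java probe (the host and port below are placeholders for your docker host IP and the published port) performs the same connectivity check:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Reachability probe: replace host/port with your docker host IP
// (e.g. 172.17.0.1) and the port published by the container.
public class PortProbe {
    public static void main(String[] args) throws IOException {
        String host = "172.17.0.1"; // placeholder: docker bridge gateway
        int port = 8080;            // placeholder: the published port
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 3000);
            System.out.println("Connected to " + host + ":" + port);
        }
        // A "No route to host" here points at the firewall/iptables problem described above.
    }
}
```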
Just FYI, my docker-compose.yml, config.toml and gitlab-ci.yml are in the attached zip file.
1.zip
Please see #1274. It is very likely that the local UNIX socket is not opened on your docker host. Adding
-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
when running the Docker daemon should solve your issue.
I'm experiencing a similar issue when running on Jenkins, with a DOCKER_HOST pointing to some other host. Downgrading to 1.5.1 fixes the issue.
@skyline75489 It is not just Ryuk, but every container we start will be listening on a random port.
@Wosin no, sorry. As I said - it is not just Ryuk, but any container we start.
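To illustrate that point, here is a hedged sketch (the image, port, and class names are just examples): the test JVM reaches every container it starts the same way it reaches Ryuk, through the docker host address and a randomly mapped port.

```java
import org.testcontainers.containers.GenericContainer;

// Illustrative only: every container started by Testcontainers is reached
// through the docker host address and a randomly mapped port, just like Ryuk.
public class MappedPortExample {
    public static void main(String[] args) {
        GenericContainer redis = new GenericContainer("redis:3.2").withExposedPorts(6379);
        redis.start();
        try {
            String host = redis.getContainerIpAddress(); // resolves to the (possibly remote) docker host
            Integer port = redis.getMappedPort(6379);    // random host-side port
            System.out.println("Redis should be reachable at " + host + ":" + port);
        } finally {
            redis.stop();
        }
    }
}
```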
@kiview I tested with a single job in a fairly up to date jenkins install.
We did not see any regressions when running a single build; I also ran multiple builds in parallel (not sure this matters) and didn't see any regressions in that case either.
We are tentatively upgrading from 1.7.1 to 1.9.1. If I see regressions, should I continue to report them to this ticket?
@bearrito Would be great if you can test with 1.9.1.
We are seeing a similar issue on our Jenkins agents running CentOS with SELinux.
When starting the ryuk container manually, we see a socket permission problem.
The access is blocked by SELinux for security reasons (see https://danwalsh.livejournal.com/74095.html). We could allow this globally on the agents, but given the security issues outlined there, this is probably not a good idea.
@alvarosanchez With version 1.6.0 we've introduced a new implementation of the ResourceReaper component, which now runs outside of the JVM in its own Docker container (the container is called Ryuk, and the error appears when trying to connect to this container). Since we don't have an automated CI build for this scenario (remote DOCKER_HOST), I'm not sure whether this implementation broke compatibility with some remote setups.
I've changed the title of the issue to better reflect the current state of our knowledge; I hope that's okay for everyone involved.
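To make the failure mode clearer, here is a rough, simplified approximation of what that startup check amounts to; this is not the actual ResourceReaper code, and the host and port values are placeholders. The test JVM starts the Ryuk container and then has to open a TCP connection to Ryuk's randomly mapped port on the docker host, giving up after a timeout, so anything that blocks that connection (a firewall on the remote host, SELinux, etc.) surfaces as "Can not connect to Ryuk".

```java
import java.net.Socket;

// Simplified approximation, not the real ResourceReaper implementation:
// retry a plain TCP connection to Ryuk's mapped port until a deadline.
public class RyukConnectSketch {
    public static void main(String[] args) throws InterruptedException {
        String dockerHost = "10.0.0.5"; // placeholder: the remote DOCKER_HOST address
        int mappedPort = 32768;         // placeholder: the port Docker mapped for Ryuk
        long deadline = System.currentTimeMillis() + 10_000;
        while (System.currentTimeMillis() < deadline) {
            try (Socket socket = new Socket(dockerHost, mappedPort)) {
                System.out.println("Connected to Ryuk");
                return;
            } catch (Exception e) {
                Thread.sleep(500); // connection refused / no route: retry until the deadline
            }
        }
        throw new IllegalStateException("Can not connect to Ryuk");
    }
}
```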
@alexcase52 Were you able to find out if this issue is related to multiple Gradle threads?