mockserver: Access to mockserver by container name hangs mockserver

Hello, in my scenario I am trying to put mockserver behind nginx and proxy requests to it, but this issue is not about nginx at all. I encountered an interesting issue when I try to access mockserver using its container name. Here is an example of how to reproduce it. First, I create a network:

docker network create mynetwork

Then I start mockserver in that network and install curl inside the mockserver container:

docker run -d --net=mynetwork --name mockserver jamesdbloom/mockserver:mockserver-5.3.0
docker exec -ti mockserver apk add --no-cache curl

Then I add a watcher in a second SSH session to make sure the mockserver container is responsive:

watch docker exec -ti mockserver curl -v localhost:1080

Output:

Every 2.0s: docker exec -ti mockserver curl -v localhost:1080                                                                                                                                                       Tue Mar 13 14:46:33 2018

* Rebuilt URL to: localhost:1080/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 1080 (#0)
> GET / HTTP/1.1
> Host: localhost:1080
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< connection: keep-alive
< content-length: 0
<
* Connection #0 to host localhost left intact

I also see that the time (14:46:33) changes every 2 seconds, as expected. Then I start another container in the same network, for example alpine:

docker run -ti --net=mynetwork alpine:3.6
apk add --no-cache curl
curl -v mockserver:1080

Immediately after the request to mockserver:1080, the time in the watcher stops changing, meaning that request is hanging. The request to mockserver:1080 itself also hangs for about 2 minutes. The output is:

* Rebuilt URL to: mockserver:1080/
*   Trying 172.20.0.2...
* TCP_NODELAY set
* Connected to mockserver (172.20.0.2) port 1080 (#0)
> GET / HTTP/1.1
> Host: mockserver:1080
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< connection: keep-alive
< content-length: 202
<
* Connection #0 to host mockserver left intact

This is what I see in the logs:

2018-03-13 14:53:33,177 ERROR o.m.m.HttpStateHandler Exception processing {
  "method" : "GET",
  "path" : "/",
  "headers" : {
    "User-Agent" : [ "curl/7.58.0" ],
    "Accept" : [ "*/*" ],
    "host" : [ "mockserver:1080" ],
    "accept-encoding" : [ "gzip,deflate" ],
    "content-length" : [ "0" ],
    "connection" : [ "keep-alive" ]
  },
  "keepAlive" : true,
  "secure" : false
}
org.mockserver.client.netty.SocketCommunicationException: Response was not received after 120000 milliseconds, to make the proxy wait longer please use "mockserver.maxSocketTimeout" system property or ConfigurationProperties.maxSocketTimeout(long milliseconds)
        at org.mockserver.client.netty.NettyHttpClient.sendRequest(NettyHttpClient.java:89) ~[mockserver-netty-jar-with-dependencies.jar:na]
        at org.mockserver.mock.action.ActionHandler.processAction(ActionHandler.java:111) ~[mockserver-netty-jar-with-dependencies.jar:na]
        at org.mockserver.mockserver.MockServerHandler.channelRead0(MockServerHandler.java:107) [mockserver-netty-jar-with-dependencies.jar:na]
        at org.mockserver.mockserver.MockServerHandler.channelRead0(MockServerHandler.java:37) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:108) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [mockserver-netty-jar-with-dependencies.jar:na]
        at org.mockserver.unification.PortUnificationHandler.switchToHttp(PortUnificationHandler.java:188) [mockserver-netty-jar-with-dependencies.jar:na]
        at org.mockserver.unification.PortUnificationHandler.channelRead0(PortUnificationHandler.java:91) [mockserver-netty-jar-with-dependencies.jar:na]
        at org.mockserver.unification.PortUnificationHandler.channelRead0(PortUnificationHandler.java:37) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [mockserver-netty-jar-with-dependencies.jar:na]
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) [mockserver-netty-jar-with-dependencies.jar:na]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]

And my watcher remains unresponsive, so at this point any request from any container (including the mockserver container itself) hangs. Additional information: Docker version 18.02.0-ce, build fc4de44

docker network inspect mynetwork
[
    {
        "Name": "mynetwork",
        "Id": "43d1571ce8d484507a852b3df6b250f36df9a0058bbf17ee187632375eca7fa4",
        "Created": "2018-03-13T14:50:46.942848193Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "44c9276d690bf968991b46fcd3c6266a2b18411c175ae2baa3ffa85b0d8e8e57": {
                "Name": "mockserver",
                "EndpointID": "51532c23bf06c21d8cf7d4e08bc53d5da3c4e8c3731511bfdf2faf488754aaa4",
                "MacAddress": "02:42:ac:14:00:02",
                "IPv4Address": "172.20.0.2/16",
                "IPv6Address": ""
            },
            "5276a0cf89297a67186907779c4aa410d773e007b3a2ac1308e02592a5487514": {
                "Name": "sleepy_lewin",
                "EndpointID": "b7c0f62d816e003cb89e264d64c6e982e6c56078ffd391636d8fa55fcdbeb54e",
                "MacAddress": "02:42:ac:14:00:03",
                "IPv4Address": "172.20.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 13
  • Comments: 16

Most upvoted comments

So I made it work by using an older version, mockserver-5.2.3.

I think we may be seeing the same or a very similar issue. We recently upgraded to mockserver 5.3.0. We also run the mockserver in a Docker container. We run on Macs and use docker-machine to initialize a virtual machine running on top of VirtualBox. (Note, we have experimented with using Docker 4 Mac, but it still has a host of issues and is not reliable enough).

We see the same stack trace when running some of our automated system tests. After much searching, I think I have tracked the issue down. Certain tests ultimately end up trying to match against expectations that don’t exist. The code in the 5.3.0 mockserver for handling non-matched expectations will check to see if the request appears to be a proxy request that should be forwarded on to some other host.

What I think is happening is that the code is getting confused and trying to forward requests on to a proxy when it should be returning not-found errors instead (which is what happened in the 4.x mockserver code we were previously using). The problem is that the Docker container has an external IP address (usually 192.168.99.100) and a different internal IP address (something in the 172.18.x.x range). The proxy check compares the local IP addresses of the running mockserver instance to the destination address in the request and, if there is no match, assumes that the mockserver is configured as a proxy and tries to forward the request on to the proxy address. The mockserver does not know that it is running in a Docker container and therefore does not realize that the 192.168.99.100 address in the request actually refers to the locally running mockserver instance. It forwards the request on (although it isn't clear how it calculates the actual remote address to send to, since no proxy is actually configured). Since there is nothing listening for the request, the request eventually times out. The 120000 milliseconds appears to be a default 2-minute socket timeout. And, since the processing loop is single-threaded, any requests that come in while the mockserver is waiting for the proxy request to return get queued up behind the proxy request and appear to hang until the 2-minute timeout completes.

I may have some of the details wrong, but that appears to be what is happening to us right now with the 5.3.0 mockserver code. Unfortunately, I have not found a suitable workaround for this yet. The best thing to do would be to somehow get the mockserver code to recognize the external IP address as a local address. I haven’t figured out a way to do that yet (I may experiment with putting the IP address in the container’s /etc/hosts file to see if that works).

If anyone knows a workaround/configuration to fix this, please let me know as I am somewhat stuck at the moment.
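Not a fix, but a possible way to shorten the pain while this is open: the exception in the original report names the knob behind the 120000 ms wait, the mockserver.maxSocketTimeout system property / ConfigurationProperties.maxSocketTimeout. A minimal sketch, assuming MockServer is started programmatically from a JVM test rather than from the Docker image used above:

import org.mockserver.configuration.ConfigurationProperties;
import org.mockserver.integration.ClientAndServer;

public class StartMockServerWithShortTimeout {
    public static void main(String[] args) {
        // Shorten the socket timeout named in the exception so a misdetected
        // "proxy" request fails fast instead of blocking the event loop for 2 minutes.
        ConfigurationProperties.maxSocketTimeout(5_000);
        ClientAndServer mockServer = ClientAndServer.startClientAndServer(1080);
    }
}

For the Docker image, the same exception message points at the -Dmockserver.maxSocketTimeout=<milliseconds> system property; I have not verified how to pass JVM options into the 5.3.0 image.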

So I looked at the source code and it looks like this issue has already been resolved. The problem I was having was that I was running off the default image jamesdbloom/mockserver:latest.

But it looks like latest is a development version, not the latest stable version.

Changing to jamesdbloom/mockserver:5.4.1 (the latest stable at the time of this comment) resolved the issue.

On a separate note, the documentation on the website says that both serverPort and proxyPort are set by default; however, only serverPort is set and exposed by default.

We just spent a whole day debugging exactly this.

All our existing tests started to break: earlier, when no expectation was matched, we would get a 404, but now mockserver forwards the request and times out. This is a backwards-incompatible change that is going to catch lots of people off guard.

I tried a bunch of things to stop mockserver from forwarding requests, like explicitly setting a different proxy port or setting '-DproxySet=true', but nothing worked. The only workaround that works is the suggestion by @mangatmodi to downgrade to 5.2.3, but that's a bad workaround because you lose the new features added since the 5.2 release.

@jamesbloomnektan this is currently the highest rated defect on this project.

This has caught me too.

I am running mockserver as a pod in a Kubernetes cluster to support tests. The difference between the address of the pod in the cluster and the address the service under test uses to access it is triggering mockserver to attempt to proxy. When the tests are passing and all expectations are correct, this is not a problem. But when an expectation is not found, mockserver attempts to proxy and locks up for 2 minutes, causing the test to time out and subsequent tests to fail to set up their expectations correctly.

Code inspection shows that line 244 of ActionHandler in method processAction() is choosing whether to proxy the request using the following test:

if (proxyThisRequest || (!StringUtils.isEmpty(request.getFirstHeader(HOST.toString())) && !localAddresses.contains(request.getFirstHeader(HOST.toString()))))

and when called from MockServerServlet the variables are set as follows:

proxyThisRequest = false;
localAddresses = ImmutableSet.of(
                        httpServletRequest.getLocalAddr() + portExtension,
                        "localhost" + portExtension,
                        "127.0.0.1" + portExtension
                    );

When running under Kubernetes, the Host header is set, but not to any of the addresses in localAddresses, hence the problem.
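To make that concrete, here is an illustration of the comparison with invented values for a pod; the addresses and service name below are examples, not taken from a real cluster:

// Example values only - they show why the containment check above fails in a pod.
Set<String> localAddresses = ImmutableSet.of(
        "10.244.1.17:1080",   // httpServletRequest.getLocalAddr() + portExtension (the pod IP)
        "localhost:1080",
        "127.0.0.1:1080"
);
String hostHeader = "mockserver.test.svc.cluster.local:1080"; // what the service under test sends

// hostHeader is non-empty and not contained in localAddresses, so the condition
// on line 244 treats this as a proxy request and tries to forward it.
boolean treatedAsProxyRequest = !hostHeader.isEmpty() && !localAddresses.contains(hostHeader);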

To fix it, my first thought was to change the condition to proxyThisRequest && (...), but I guess this might have an effect on other uses of mockserver.

A second option, useful for me at least, would be to have a configuration parameter that prevents mockserver ever proxying requests. That would be simple to implement and simple to use.
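A rough sketch of what that second option could look like, with the decision pulled into a helper for readability. The property name used here is invented purely for illustration and does not exist in MockServer 5.3.0:

import java.util.Set;

class ProxyDecision {
    // Sketch only: "mockserver.attemptToProxyWhenNoMatchingExpectation" is a
    // hypothetical property illustrating the proposed opt-out, not a real setting.
    static boolean shouldProxy(boolean proxyThisRequest, String hostHeader, Set<String> localAddresses) {
        boolean proxyingAllowed = Boolean.parseBoolean(
                System.getProperty("mockserver.attemptToProxyWhenNoMatchingExpectation", "true"));
        if (!proxyingAllowed) {
            return false; // unmatched requests always fall through to a 404 Not Found
        }
        // Same heuristic as the line 244 condition quoted above:
        // a non-empty Host header that is not a local address is treated as a proxy request.
        return proxyThisRequest
                || (hostHeader != null && !hostHeader.isEmpty() && !localAddresses.contains(hostHeader));
    }
}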