istio: Elasticsearch not working with Istio

Hello, we are trying to deploy Elasticsearch with Istio 1.1.7 (manual sidecar injection) on a Kubernetes 1.14 cluster. We followed this link for the Elasticsearch files to deploy in Kubernetes. When we deploy Elasticsearch without Istio it works well, but with the sidecar injected the Elasticsearch pod gives the following error:

starting ...
[2019-06-10T11:43:38,735][INFO ][o.e.t.TransportService   ] [0ecd769b-025b-495b-bdd2-bf8f00c37121] publish_address {xxx.xx.x.xxx:9300}, bound_addresses {xxx.xx.x.xxx:9300}
[2019-06-10T11:43:38,758][INFO ][o.e.b.BootstrapChecks    ] [0ecd769b-025b-495b-bdd2-bf8f00c37121] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2019-06-10T11:43:39,166][WARN ][o.e.t.n.Netty4Transport  ] [0ecd769b-025b-495b-bdd2-bf8f00c37121] exception caught on transport layer [[id: 0x7e3cbeb6, L:/xxx.xx.x.xxx:39088 - R:elasticsearch/10.3.0.252:9300]], closing connection
io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (48,54,54,50)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-codec-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-codec-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.13.Final.jar:4.1.13.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.io.StreamCorruptedException: invalid internal transport message format, got (48,54,54,50)
	at org.elasticsearch.transport.TcpTransport.validateMessageHeader(TcpTransport.java:1271) ~[elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.transport.netty4.Netty4SizeHeaderFrameDecoder.decode(Netty4SizeHeaderFrameDecoder.java:36) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[?:?]
	... 15 more
[2019-06-10T11:43:39,186][WARN ][o.e.t.n.Netty4Transport  ] [0ecd769b-025b-495b-bdd2-bf8f00c37121] exception caught on transport layer [[id: 0x7e3cbeb6, L:/xxx.xx.x.xxx:39088 ! R:elasticsearch/10.3.0.252:9300]], closing connection
io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (48,54,54,50)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-codec-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:392) ~[netty-codec-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:359) ~[netty-codec-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:342) ~[netty-codec-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1329) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:908) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannel$AbstractUnsafe$7.run(AbstractChannel.java:744) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-common-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403) [netty-common-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:462) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.13.Final.jar:4.1.13.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.io.StreamCorruptedException: invalid internal transport message format, got (48,54,54,50)
	at org.elasticsearch.transport.TcpTransport.validateMessageHeader(TcpTransport.java:1271) ~[elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.transport.netty4.Netty4SizeHeaderFrameDecoder.decode(Netty4SizeHeaderFrameDecoder.java:36) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[?:?]
	... 16 more
[the same DecoderException / StreamCorruptedException pair repeats at 11:43:39 and 11:43:40 for two further inbound connections, local ports 39098 and 39112]
[2019-06-10T11:43:41,864][INFO ][o.e.c.s.ClusterService   ] [0ecd769b-025b-495b-bdd2-bf8f00c37121] new_master {0ecd769b-025b-495b-bdd2-bf8f00c37121}{JR7v0wgbTC-9euNrX0dEhQ}{9_kKJjxxQCWpfPo0j2fxUw}{xxx.xx.x.xxx}{xxx.xx.x.xxx:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2019-06-10T11:43:41,918][INFO ][o.e.h.n.Netty4HttpServerTransport] [0ecd769b-025b-495b-bdd2-bf8f00c37121] publish_address {xxx.xx.x.xxx:9200}, bound_addresses {xxx.xx.x.xxx:9200}
[2019-06-10T11:43:41,918][INFO ][o.e.n.Node               ] [0ecd769b-025b-495b-bdd2-bf8f00c37121] started
[2019-06-10T11:43:42,202][INFO ][o.e.g.GatewayService     ] [0ecd769b-025b-495b-bdd2-bf8f00c37121] recovered [0] indices into cluster_state

And the istio-proxy container of the Elasticsearch pod gives the following logs:

2019-06-10T11:42:56.547611Z	info	FLAG: --applicationPorts="[9200,9300]"
2019-06-10T11:42:56.547706Z	info	FLAG: --binaryPath="/usr/local/bin/envoy"
2019-06-10T11:42:56.547722Z	info	FLAG: --concurrency="2"
2019-06-10T11:42:56.547734Z	info	FLAG: --configPath="/etc/istio/proxy"
2019-06-10T11:42:56.547750Z	info	FLAG: --connectTimeout="10s"
2019-06-10T11:42:56.547762Z	info	FLAG: --controlPlaneAuthPolicy="NONE"
2019-06-10T11:42:56.547776Z	info	FLAG: --controlPlaneBootstrap="true"
2019-06-10T11:42:56.547789Z	info	FLAG: --customConfigFile=""
2019-06-10T11:42:56.547799Z	info	FLAG: --datadogAgentAddress=""
2019-06-10T11:42:56.547810Z	info	FLAG: --disableInternalTelemetry="false"
2019-06-10T11:42:56.547822Z	info	FLAG: --discoveryAddress="istio-pilot.istio-system:15010"
2019-06-10T11:42:56.547836Z	info	FLAG: --domain="ns-beta.svc.cluster.local"
2019-06-10T11:42:56.547886Z	info	FLAG: --drainDuration="45s"
2019-06-10T11:42:56.547898Z	info	FLAG: --envoyMetricsServiceAddress=""
2019-06-10T11:42:56.547909Z	info	FLAG: --help="false"
2019-06-10T11:42:56.547918Z	info	FLAG: --id=""
2019-06-10T11:42:56.547949Z	info	FLAG: --ip=""
2019-06-10T11:42:56.547961Z	info	FLAG: --lightstepAccessToken=""
2019-06-10T11:42:56.547971Z	info	FLAG: --lightstepAddress=""
2019-06-10T11:42:56.547982Z	info	FLAG: --lightstepCacertPath=""
2019-06-10T11:42:56.548011Z	info	FLAG: --lightstepSecure="false"
2019-06-10T11:42:56.548022Z	info	FLAG: --log_as_json="false"
2019-06-10T11:42:56.548048Z	info	FLAG: --log_caller=""
2019-06-10T11:42:56.548060Z	info	FLAG: --log_output_level="default:info"
2019-06-10T11:42:56.548070Z	info	FLAG: --log_rotate=""
2019-06-10T11:42:56.548081Z	info	FLAG: --log_rotate_max_age="30"
2019-06-10T11:42:56.548113Z	info	FLAG: --log_rotate_max_backups="1000"
2019-06-10T11:42:56.548125Z	info	FLAG: --log_rotate_max_size="104857600"
2019-06-10T11:42:56.548137Z	info	FLAG: --log_stacktrace_level="default:none"
2019-06-10T11:42:56.548195Z	info	FLAG: --log_target="[stdout]"
2019-06-10T11:42:56.548213Z	info	FLAG: --parentShutdownDuration="1m0s"
2019-06-10T11:42:56.548229Z	info	FLAG: --proxyAdminPort="15000"
2019-06-10T11:42:56.548240Z	info	FLAG: --proxyLogLevel="warning"
2019-06-10T11:42:56.548250Z	info	FLAG: --serviceCluster="pod-data-es.ns-beta"
2019-06-10T11:42:56.548261Z	info	FLAG: --serviceregistry="Kubernetes"
2019-06-10T11:42:56.548271Z	info	FLAG: --statsdUdpAddress=""
2019-06-10T11:42:56.548281Z	info	FLAG: --statusPort="15020"
2019-06-10T11:42:56.548292Z	info	FLAG: --templateFile=""
2019-06-10T11:42:56.548302Z	info	FLAG: --trust-domain=""
2019-06-10T11:42:56.548312Z	info	FLAG: --zipkinAddress="zipkin.istio-system:9411"
2019-06-10T11:42:56.548365Z	info	Version root@341b3bf0-76ac-11e9-b644-0a580a2c0404-docker.io/istio-1.1.7-eec7a74473deee98cad0a996f41a32a47dd453c2-Clean
2019-06-10T11:42:56.548739Z	info	Obtained private IP [xxx.xx.x.xxx]
2019-06-10T11:42:56.548835Z	info	Proxy role: &model.Proxy{ClusterID:"", Type:"sidecar", IPAddresses:[]string{"xxx.xx.x.xxx", "xxx.xx.x.xxx"}, ID:"pod-data-es-0.ns-beta", Locality:(*core.Locality)(nil), DNSDomain:"ns-beta.svc.cluster.local", ConfigNamespace:"", TrustDomain:"cluster.local", Metadata:map[string]string{}, SidecarScope:(*model.SidecarScope)(nil), ServiceInstances:[]*model.ServiceInstance(nil), WorkloadLabels:model.LabelsCollection(nil)}
2019-06-10T11:42:56.548862Z	info	PilotSAN []string(nil)
2019-06-10T11:42:56.549731Z	info	Effective config: binaryPath: /usr/local/bin/envoy
concurrency: 2
configPath: /etc/istio/proxy
connectTimeout: 10s
discoveryAddress: istio-pilot.istio-system:15010
drainDuration: 45s
parentShutdownDuration: 60s
proxyAdminPort: 15000
serviceCluster: pod-data-es.ns-beta
statNameLength: 189
tracing:
  zipkin:
    address: zipkin.istio-system:9411

2019-06-10T11:42:56.549767Z	info	Monitored certs: []string{"/etc/certs/cert-chain.pem", "/etc/certs/key.pem", "/etc/certs/root-cert.pem"}
2019-06-10T11:42:56.549805Z	info	PilotSAN []string(nil)
2019-06-10T11:42:56.550269Z	info	Opening status port 15020

2019-06-10T11:42:56.550545Z	info	Starting proxy agent
2019-06-10T11:42:56.551016Z	info	Received new config, resetting budget
2019-06-10T11:42:56.551112Z	info	watching /etc/certs/cert-chain.pem for changes
2019-06-10T11:42:56.551287Z	info	Reconciling retry (budget 10)
2019-06-10T11:42:56.551316Z	info	watching /etc/certs/key.pem for changes
2019-06-10T11:42:56.551362Z	info	watching /etc/certs/root-cert.pem for changes
2019-06-10T11:42:56.551465Z	info	Epoch 0 starting
2019-06-10T11:42:56.553535Z	info	Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster pod-data-es.ns-beta --service-node sidecar~xxx.xx.x.xxx~pod-data-es-0.ns-beta~ns-beta.svc.cluster.local --max-obj-name-len 189 --allow-unknown-fields -l warning --concurrency 2]
[2019-06-10 11:42:56.584][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2019-06-10 11:42:56.584][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2019-06-10 11:42:56.584][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2019-06-10 11:42:56.592][19][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 14, no healthy upstream
[2019-06-10 11:42:56.592][19][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:49] Unable to establish new stream
[2019-06-10 11:42:57.759][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Listener.use_original_dst' from file lds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
2019-06-10T11:42:59.435143Z	info	Envoy proxy is ready
[2019-06-10 11:42:59.691][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Listener.use_original_dst' from file lds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2019-06-10 11:43:07.328][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Listener.use_original_dst' from file lds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2019-06-10 11:43:11.599][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Listener.use_original_dst' from file lds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.

The following is the deployment file for Elasticsearch:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: ns-test
  labels:
    component: elasticsearch
    name: elasticsearch
    role: svc-es
spec:
  selector:
    instance: pod-es
  ports:
  - name: http-es-ui
    port: 9200
    protocol: TCP
  - name: http-es-transport
    port: 9300
    protocol: TCP
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: pod-data-es
  namespace: ns-test
spec:  
  serviceName: elasticsearch
  replicas: 1
  template:
    metadata:
      labels:
        role: pod-es
        instance: pod-es
    spec:
      serviceAccount: elasticsearch
      initContainers:
      - name: init-sysctl
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      volumes:
      - name: esdatavol
        hostPath:
          path: /var/data/es
      containers:
      - name: es
        securityContext:
          capabilities:
            add:
              - IPC_LOCK
        image: "quay.io/pires/docker-elasticsearch-kubernetes:5.6.2"
        ports:
        - containerPort: 9200
        - containerPort: 9300
        env:
        - name: KUBERNETES_CA_CERTIFICATE_FILE
          value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        - name: "CLUSTER_NAME"
          value: "TESTES"
        - name: "DISCOVERY_SERVICE"
          value: "elasticsearch"
        - name: NODE_MASTER
          value: "true"
        - name: NODE_DATA
          value: "true"
        - name: HTTP_ENABLE
          value: "true"
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
          - name: esdatavol
            mountPath: "/data"

We deployed the file with the following command:

kubectl create -f <(istioctl kube-inject -f deployment.yaml )

After googling we came across this issue, but we didn't find any solution there either.

So what are we missing here? Is our configuration wrong, or is there an issue in Istio with Elasticsearch?

thanks in advance 😃

Pavan Makadia

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 7
  • Comments: 23 (16 by maintainers)

Most upvoted comments

Maybe all of the “well known” apps that don’t work out-of-the-box should have a FAQ entry. That would be a good place to say that it works but just needs “some special config …”

If network.host is set to 0.0.0.0 (which is the default), everything should work out of the box, meaning there is no need to explicitly set network.bind_host and network.publish_host: Elasticsearch picks localhost as the bind address and finds the appropriate address (the pod address) to use as the publish address.
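
For reference, that default-configuration case would amount to something like the following in the container spec (a sketch only, assuming an image that maps environment variables to Elasticsearch settings, as the official image used below does):

        env:
          # hypothetical: rely on network.host alone, no bind/publish overrides
          - name: network.host
            value: "0.0.0.0"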

And this is the full working YAML using the docker.elastic.co/elasticsearch/elasticsearch:7.2.0 image:

# kubectl create namespace elastic
# kubectl label namespace elastic istio-injection=enabled

kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: elastic
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: elastic
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
        resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        env:
          - name: cluster.name
            value: elastic
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: network.bind_host
            value: 127.0.0.1
          - name: network.publish_host
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          - name: discovery.seed_hosts
            value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
          - name: cluster.initial_master_nodes
            value: "es-cluster-0,es-cluster-1,es-cluster-2"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"
        volumeMounts:
          - name: data
            mountPath: "/data"
      initContainers:
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      volumes:
      - name: data
        hostPath:
          path: /var/data/es
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: do-block-storage
      resources:
        requests:
          storage: 1Gi

I still need to test a bit more, but it seems like the bind_host and publish_host trick is indeed working, even without adding any ServiceEntry manually.

However, it only started working for me once I added all three pairs of them:

  • network.bind_host / network.publish_host (on the Pod IP)
  • http.bind_host / http.publish_host (on the Pod IP)
  • transport.bind_host / transport.publish_host (on the Pod IP)

I’m assuming the problem is that both the http and transport configuration fall back to network.host if left unconfigured, which is the piece that breaks it.

This is at least with Elasticsearch 7.4.2 on Istio 1.3.4.
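
A sketch of what setting all three pairs could look like in the env section, following the 127.0.0.1-bind / pod-IP-publish pattern from the other comments in this thread (this commenter may have bound to the pod IP instead; the exact values are not shown):

          - name: network.bind_host
            value: 127.0.0.1
          - name: network.publish_host
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          - name: http.bind_host
            value: 127.0.0.1
          - name: http.publish_host
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          - name: transport.bind_host
            value: 127.0.0.1
          - name: transport.publish_host
            valueFrom:
              fieldRef:
                fieldPath: status.podIP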

Elasticsearch requires the use of StatefulSets. It needs to listen on localhost:port for it to work with Istio. In particular, there are two config params that need to be set appropriately: network.bind_host and network.publish_host. If these are not set individually, they are both set to network.host by default, which we do not want. In order to have ES working with Istio, network.bind_host (where ES listens for incoming requests) must be set to 127.0.0.1, and network.publish_host (the address that is advertised to others for communication) must be set to the pod IP.

Here are the config parameters added to the standard YAML used for ES:

          - name: network.bind_host
            value: 127.0.0.1
          - name: network.publish_host
            valueFrom:
              fieldRef:
                fieldPath: status.podIP

This pilot_conflict_outbound_listener_tcp_over_current_http is saying that some service in the mesh defined port 9300 as HTTP while another service defined port 9300 as TCP in the Istio config. @clyang82, could you find the other service(s) listening on port 9300, and their names? The snippet below is the one that makes Istio consider port 9300 in the mesh to be HTTP: the http- prefix in the port name is what Istio's named-port protocol detection keys on. See Named service ports. The limitation is on the same page: a pod must belong to at least one Kubernetes service even if the pod does NOT expose any port, and if a pod belongs to multiple Kubernetes services, the services cannot use the same port number for different protocols, for instance HTTP and TCP.

  - name: http-es-transport
    port: 9300
    protocol: TCP
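
Given that, a plausible fix is simply to rename the port so that Istio's name-based protocol detection agrees with the declared protocol (a sketch; tcp- is one of the prefixes Istio recognizes):

  - name: tcp-es-transport    # renamed from http-es-transport so Istio treats 9300 as plain TCP
    port: 9300
    protocol: TCP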

My current work is on the inbound listener and traffic interception, allowing less user configuration and exploring pod IP access. It might enable some app configuration combinations, e.g. a listener on the pod IP. I am pretty sure my work doesn't involve allowing TCP and HTTP traffic on the same port.

@crazyxy is integrating the protocol sniffer into Istio, and that should relax the explicit port protocol declaration in Istio config.

I met the same issue; pilot_conflict_outbound_listener_tcp_over_current_http is reported:

2019-07-18T16:19:26.353714683Z     "ProxyStatus": {
2019-07-18T16:19:26.353720293Z         "pilot_conflict_outbound_listener_tcp_over_current_http": {
2019-07-18T16:19:26.353725672Z             "0.0.0.0:9300": {
2019-07-18T16:19:26.353731024Z                 "proxy": "key-management-api-5568447486-29lp5.kube-system",
2019-07-18T16:19:26.35373652Z                 "message": "Listener=0.0.0.0:9300 AcceptedTCP=elasticsearch-data.kube-system.svc.cluster.local RejectedHTTP=elasticsearch-transport.kube-system.svc.cluster.local TCPServices=1"
2019-07-18T16:19:26.353742666Z             }
2019-07-18T16:19:26.353747552Z         }
2019-07-18T16:19:26.353752391Z     },

And Envoy is listening on port 9300: https://github.com/envoyproxy/envoy/blob/bdd6788f1e01787d015eabd9902f4b565e5dea98/configs/envoy_double_proxy_v2.template.yaml#L91

As @hzxuzhonghu mentioned, @silentdai is working on that. Will my reported issue be fixed as well? Thanks.

Searching for “java.io.StreamCorruptedException: invalid internal transport message format” on the web led me to these issues/discussions.

It is clear that the Elasticsearch API port is 9200, while the port for Elasticsearch's inter-node communication is 9300.

These logs:

[2019-06-10T11:43:39,166][WARN ][o.e.t.n.Netty4Transport  ] [0ecd769b-025b-495b-bdd2-bf8f00c37121] exception caught on transport layer [[id: 0x7e3cbeb6, L:/xxx.xx.x.xxx:39088 - R:elasticsearch/10.3.0.252:9300]], closing connection
io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (48,54,54,50)

are indicating that Elasticsearch is closing a connection from client port 39088, which is sending an unrecognized/invalid internal transport message. The hex values 48,54,54,50 are ASCII for 'H','T','T','P', which seems to corroborate what is being answered here.

Now, from what I infer, Elasticsearch is merely closing the client connection from remote address:39088. This does not necessarily mean that Elasticsearch is not working, or that port 9300 is not in LISTEN state thereafter.

@PavanMakadia Are you no longer seeing that port 9300 is in LISTEN state?

From the logs below (which follow the java.io exceptions), it is not apparent to me that Elasticsearch has failed or is not working.

[2019-06-10T11:43:41,864][INFO ][o.e.c.s.ClusterService   ] [0ecd769b-025b-495b-bdd2-bf8f00c37121] new_master {0ecd769b-025b-495b-bdd2-bf8f00c37121}{JR7v0wgbTC-9euNrX0dEhQ}{9_kKJjxxQCWpfPo0j2fxUw}{xxx.xx.x.xxx}{xxx.xx.x.xxx:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2019-06-10T11:43:41,918][INFO ][o.e.h.n.Netty4HttpServerTransport] [0ecd769b-025b-495b-bdd2-bf8f00c37121] publish_address {xxx.xx.x.xxx:9200}, bound_addresses {xxx.xx.x.xxx:9200}
[2019-06-10T11:43:41,918][INFO ][o.e.n.Node               ] [0ecd769b-025b-495b-bdd2-bf8f00c37121] started
[2019-06-10T11:43:42,202][INFO ][o.e.g.GatewayService     ] [0ecd769b-025b-495b-bdd2-bf8f00c37121] recovered [0] indices into cluster_state

Can you please tell us what exactly is not working?

Also, you have referenced this issue, which IMHO is a completely different one. You would presumably run into those logs only after the initial set of Elasticsearch logs (the ones you have posted here).

Istio doesn't work with most StatefulSet pods: https://github.com/istio/istio/issues/10659

I am diagnosing a similar problem with CockroachDB which also uses StatefulSets.

Hello @hzxuzhonghu, we don't know exactly what the issue is, but I guess the TCP connection on port 9300 fails when the sidecar is attached.