kroki: Kubernetes automatically sets KROKI_PORT to tcp://ip:port causing a ClassCastException

Hi,

I tried to start Kroki (0.10 from Docker) on our OpenShift 3.11 (Kubernetes 1.11) instance.

The container raises a ClassCastException on startup. In a local Docker environment, everything works fine.

java.lang.ClassCastException: class java.lang.String cannot be cast to class java.lang.Number (java.lang.String and java.lang.Number are in module java.base of loader 'bootstrap')
	at io.vertx.core.json.JsonObject.getInteger(JsonObject.java:365)
	at io.kroki.server.Server.start(Server.java:127)
	at io.kroki.server.Server.lambda$start$1(Server.java:52)
	at io.vertx.config.impl.ConfigRetrieverImpl.lambda$getConfig$2(ConfigRetrieverImpl.java:182)
	at io.vertx.config.impl.ConfigRetrieverImpl.lambda$compute$9(ConfigRetrieverImpl.java:296)
	at io.vertx.core.impl.FutureImpl.dispatch(FutureImpl.java:105)
	at io.vertx.core.impl.FutureImpl.onComplete(FutureImpl.java:83)
	at io.vertx.core.impl.CompositeFutureImpl.onComplete(CompositeFutureImpl.java:131)
	at io.vertx.core.impl.CompositeFutureImpl.onComplete(CompositeFutureImpl.java:25)
	at io.vertx.core.Future.setHandler(Future.java:126)
	at io.vertx.core.CompositeFuture.setHandler(CompositeFuture.java:184)
	at io.vertx.config.impl.ConfigRetrieverImpl.compute(ConfigRetrieverImpl.java:275)
	at io.vertx.config.impl.ConfigRetrieverImpl.getConfig(ConfigRetrieverImpl.java:175)
	at io.kroki.server.Server.start(Server.java:48)
	at io.vertx.core.impl.DeploymentManager.lambda$doDeploy$9(DeploymentManager.java:556)
	at io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:366)
	at io.vertx.core.impl.EventLoopContext.lambda$executeAsync$0(EventLoopContext.java:38)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Unknown Source)
{"timestamp":"1612175857778","level":"ERROR","thread":"vert.x-eventloop-thread-1","logger":"io.vertx.core.impl.launcher.commands.VertxIsolatedDeployer","message":"Failed in deploying verticle","context":"default","exception":"java.lang.ClassCastException: class java.lang.String cannot be cast to class java.lang.Number (java.lang.String and java.lang.Number are in module java.base of loader 'bootstrap')\n\tat io.vertx.core.json.JsonObject.getInteger(JsonObject.java:365)\n\tat io.kroki.server.Server.start(Server.java:127)\n\tat io.kroki.server.Server.lambda$start$1(Server.java:52)\n\tat io.vertx.config.impl.ConfigRetrieverImpl.lambda$getConfig$2(ConfigRetrieverImpl.java:182)\n\tat io.vertx.config.impl.ConfigRetrieverImpl.lambda$compute$9(ConfigRetrieverImpl.java:296)\n\tat io.vertx.core.impl.FutureImpl.dispatch(FutureImpl.java:105)\n\tat io.vertx.core.impl.FutureImpl.onComplete(FutureImpl.java:83)\n\tat io.vertx.core.impl.CompositeFutureImpl.onComplete(CompositeFutureImpl.java:131)\n\tat io.vertx.core.impl.CompositeFutureImpl.onComplete(CompositeFutureImpl.java:25)\n\tat io.vertx.core.Future.setHandler(Future.java:126)\n\tat io.vertx.core.CompositeFuture.setHandler(CompositeFuture.java:184)\n\tat io.vertx.config.impl.ConfigRetrieverImpl.compute(ConfigRetrieverImpl.java:275)\n\tat io.vertx.config.impl.ConfigRetrieverImpl.getConfig(ConfigRetrieverImpl.java:175)\n\tat io.kroki.server.Server.start(Server.java:48)\n\tat io.vertx.core.impl.DeploymentManager.lambda$doDeploy$9(DeploymentManager.java:556)\n\tat io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:366)\n\tat io.vertx.core.impl.EventLoopContext.lambda$executeAsync$0(EventLoopContext.java:38)\n\tat io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)\n\tat io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)\n\tat 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)\n\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n"}

Cause:

Kubernetes (and, in some situations, Docker too) sets the env var KROKI_PORT to tcp://172.30.18.91:8000 if there is a service called kroki inside the same namespace:

/ $ echo $KROKI_PORT
tcp://172.30.18.91:8000

KROKI_PORT is also used by kroki itself to define the listening port.
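A minimal sketch of the collision (this is not kroki's actual code; in kroki the failure surfaces as a ClassCastException from Vert.x's JsonObject.getInteger, because the env value is stored as a String): the server expects an integer in KROKI_PORT, but Kubernetes injects a Docker-links-style URL.

```java
// Sketch: what happens when a port option receives the
// Kubernetes-injected service link value instead of a number.
public class PortConfig {
    // Returns the configured port, or throws NumberFormatException
    // when the value is not a plain integer.
    public static int parsePort(String value) {
        // "8000"                    -> 8000
        // "tcp://172.30.18.91:8000" -> NumberFormatException
        return Integer.parseInt(value);
    }
}
```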

I would recommend changing the name to avoid conflicts in the future, e.g. KROKI_SERVER_PORT or KROKI_LISTEN_PORT.

If you don't, it would be great to document this somewhere.

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Reactions: 1
  • Comments: 15 (13 by maintainers)

Most upvoted comments

@Mogztter

Sure, based on the service kroki-blockdiag, Kubernetes exposes environment variables like KROKI_BLOCKDIAG_PORT, too.

It's described here (https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables) for the service redis-master.

There are two ways to resolve that issue in general:

  • Ignore the environment variable if it does not contain an integer (maybe dangerous)
  • Do not use _PORT environment variables at all.

@obitech Your workaround would be something like defining KROKI_BLOCKDIAG_PORT=8001 in your deployment, too.

@jkroepke thanks for your quick reply! Setting the ports explicitly via the env vars KROKI_BLOCKDIAG_PORT and KROKI_MERMAID_PORT fixed the issue 🙏

Explicitly set KROKI_PORT=8000 (not sure that’s working, I guess Kubernetes/Docker will override it)

I can confirm it works in Kubernetes. It looks like user-provided environment variables take precedence. I used this workaround to run kroki on our Kubernetes cluster.

update the environment variable name to avoid conflicts

What about KROKI_LISTEN? Like KROKI_LISTEN=0.0.0.0:8000 or KROKI_LISTEN=:8000. It may also help people running kroki outside a container; e.g. binding services to localhost and running them behind a reverse proxy is an old pattern on virtual machines.

Since Kubernetes uses predefined suffixes, this name would no longer conflict with Kubernetes.
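A hypothetical sketch of how the proposed KROKI_LISTEN value could be parsed ("host:port", with an empty host meaning all interfaces). KROKI_LISTEN is a suggestion from this thread, not an existing kroki option, and this parsing logic is an assumption, not kroki code.

```java
// Hypothetical parser for a KROKI_LISTEN-style "host:port" value.
public class ListenAddress {
    public final String host;
    public final int port;

    public ListenAddress(String host, int port) {
        this.host = host;
        this.port = port;
    }

    public static ListenAddress parse(String value) {
        int idx = value.lastIndexOf(':');
        if (idx < 0) {
            throw new IllegalArgumentException("expected host:port, got: " + value);
        }
        // ":8000" -> bind to all interfaces
        String host = (idx == 0) ? "0.0.0.0" : value.substring(0, idx);
        int port = Integer.parseInt(value.substring(idx + 1));
        return new ListenAddress(host, port);
    }
}
```

A value like `127.0.0.1:8000` would then cover the reverse-proxy-on-a-VM case mentioned above.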

I don’t think that Docker will automatically define KROKI_PORT as an environment variable.

https://docs.docker.com/network/links/#environment-variables

It's a little-known feature. In the early days, Kubernetes wanted to be Docker compatible. Since Docker links are deprecated, it might not happen on Docker anymore.

https://docs.kroki.io/kroki/setup/configuration/ I saw the docs; that was the hint as to why it could break on Kubernetes. What I meant by documentation was to document this specific case, e.g.: if you are running on Kubernetes, you have to override the environment variable.

Another idea:

kroki could catch the exception here, add a log line like Could not parse KROKI_PORT, ignoring ... and start the server on the default port 8000.

This change is still non-breaking, but it provides a "works out of the box" setup on platforms like Kubernetes.
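The lenient behaviour proposed above could look something like this (a sketch, not kroki's implementation): if KROKI_PORT cannot be parsed as an integer (e.g. Kubernetes set it to tcp://ip:port), log a warning and fall back to the default port 8000 instead of failing startup.

```java
// Sketch of a lenient KROKI_PORT lookup with a default fallback.
public class LenientPort {
    static final int DEFAULT_PORT = 8000;

    public static int portOrDefault(String value) {
        if (value == null) {
            return DEFAULT_PORT; // variable not set at all
        }
        try {
            return Integer.parseInt(value);
        } catch (NumberFormatException e) {
            // Keep starting up instead of crashing with a ClassCastException.
            System.err.println("Could not parse KROKI_PORT, ignoring: " + value);
            return DEFAULT_PORT;
        }
    }
}
```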