snowflake-kafka-connector: SNOW-989387 Connectors errored out after updating to v2.1.2

Updated to v2.1.2 from v2.1.0, and the connectors are erroring out with the following error:

com.snowflake.kafka.connector.internal.SnowflakeKafkaConnectorException: [SF_KAFKA_CONNECTOR] Exception: Failure in Streaming Channel Offset Migration Response
Error Code: 5023
Detail: Streaming Channel Offset Migration from Source to Destination Channel has no/invalid response, please contact Snowflake Support
Message: Migrating OffsetToken for a SourceChannel:MyEvent_MyEvent_0 in table:MY_DB.MY_SCHEMA.MY_TABLE failed due to exceptionMessage:JDBC driver internal error: exception creating result java.lang.NoClassDefFoundError: Could not initialize class net.snowflake.client.jdbc.internal.apache.arrow.memory.RootAllocator at net.snowflake.client.jdbc.SnowflakeResultSetSerializableV1.create(SnowflakeResultSetSerializableV1.java:586). and stackTrace:[net.snowflake.client.jdbc.SnowflakeStatementV1.executeQueryInternal(SnowflakeStatementV1.java:268), net.snowflake.client.jdbc.SnowflakePreparedStatementV1.executeQuery(SnowflakePreparedStatementV1.java:117), com.snowflake.kafka.connector.internal.SnowflakeConnectionServiceV1.migrateStreamingChannelOffsetToken(SnowflakeConnectionServiceV1.java:1035), com.snowflake.kafka.connector.internal.streaming.TopicPartitionChannel.<init>(TopicPartitionChannel.java:287), com.snowflake.kafka.connector.internal.streaming.SnowflakeSinkServiceV2.createStreamingChannelForTopicPartition(SnowflakeSinkServiceV2.java:254), com.snowflake.kafka.connector.internal.streaming.SnowflakeSinkServiceV2.lambda$startPartitions$1(SnowflakeSinkServiceV2.java:222), java.base/java.lang.Iterable.forEach(Iterable.java:75), com.snowflake.kafka.connector.internal.streaming.SnowflakeSinkServiceV2.startPartitions(SnowflakeSinkServiceV2.java:217), com.snowflake.kafka.connector.SnowflakeSinkTask.open(SnowflakeSinkTask.java:259), org.apache.kafka.connect.runtime.WorkerSinkTask.openPartitions(WorkerSinkTask.java:644), org.apache.kafka.connect.runtime.WorkerSinkTask.access$1200(WorkerSinkTask.java:73), org.apache.kafka.connect.runtime.WorkerSinkTask$HandleRebalance.onPartitionsAssigned(WorkerSinkTask.java:741), org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.invokePartitionsAssigned(ConsumerCoordinator.java:324), org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:473), 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:478), org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:389), org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:559), org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1288), org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1247), org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1227), org.apache.kafka.connect.runtime.WorkerSinkTask.pollConsumer(WorkerSinkTask.java:479), org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:331), org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:237), org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:206), org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204), org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259), org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181), java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539), java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264), java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136), java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635), java.base/java.lang.Thread.run(Thread.java:833)]
	at com.snowflake.kafka.connector.internal.SnowflakeErrors.getException(SnowflakeErrors.java:367)
	at com.snowflake.kafka.connector.internal.SnowflakeConnectionServiceV1.migrateStreamingChannelOffsetToken(SnowflakeConnectionServiceV1.java:1077)
	at com.snowflake.kafka.connector.internal.streaming.TopicPartitionChannel.<init>(TopicPartitionChannel.java:287)
	at com.snowflake.kafka.connector.internal.streaming.SnowflakeSinkServiceV2.createStreamingChannelForTopicPartition(SnowflakeSinkServiceV2.java:254)
	at com.snowflake.kafka.connector.internal.streaming.SnowflakeSinkServiceV2.lambda$startPartitions$1(SnowflakeSinkServiceV2.java:222)
	at java.base/java.lang.Iterable.forEach(Iterable.java:75)
	at com.snowflake.kafka.connector.internal.streaming.SnowflakeSinkServiceV2.startPartitions(SnowflakeSinkServiceV2.java:217)
	at com.snowflake.kafka.connector.SnowflakeSinkTask.open(SnowflakeSinkTask.java:259)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.openPartitions(WorkerSinkTask.java:644)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.access$1200(WorkerSinkTask.java:73)
	at org.apache.kafka.connect.runtime.WorkerSinkTask$HandleRebalance.onPartitionsAssigned(WorkerSinkTask.java:741)
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.invokePartitionsAssigned(ConsumerCoordinator.java:324)
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:473)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:478)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:389)
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:559)
	at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1288)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1247)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1227)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.pollConsumer(WorkerSinkTask.java:479)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:331)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:237)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:206)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
	at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)

About this issue

  • Original URL
  • State: open
  • Created 7 months ago
  • Reactions: 1
  • Comments: 17

Most upvoted comments

@sfc-gh-wfateem, thank you for the support. For now, the issue appears to be permanently resolved. However, we are in the process of migrating our connectors to Snowpipe Streaming, so we will monitor them to see if the issue reappears. If it does, we will open a support ticket.

Hey @cchandurkar, Glad that worked out for you. I’m just going to reopen the case so I can see if there’s a bit more we can do to handle this gracefully. I imagine others are going to run into this problem.

@sfc-gh-wfateem

Thank you again for the help with this. Our issue is resolved (including the 400 errors) without involving Snowflake support. I pointed Confluent’s CS folks at this thread and your replies. Here is their response (after I asked what the fix was):

Thanks for the confirmation and your patience on this.

We overrode the config (enable.streaming.client.optimization=false) as a temporary workaround for the issue, due to the code freeze for the holidays.

We will deploy a fix for this after the code freeze. Please let me know if you have further questions.

Appreciate the help!
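For readers hitting the same 5023 error: the temporary workaround quoted above is a single connector property override. A minimal sketch of where it goes in a sink connector configuration is below; only the last property comes from the thread, and the connector name, topic, and other settings are illustrative placeholders, not a recommended configuration.

```properties
# Illustrative Snowflake sink connector config (standalone/properties form).
# Placeholder values -- adapt name, topics, and credentials to your setup.
name=my-snowflake-sink
connector.class=com.snowflake.kafka.connector.SnowflakeSinkConnector
topics=MyEvent
snowflake.ingestion.method=SNOWPIPE_STREAMING

# Workaround quoted in this thread: disable the streaming client
# optimization until a fixed connector/driver version is deployed.
enable.streaming.client.optimization=false
```

In distributed mode the same key/value pair goes into the JSON `config` object submitted to the Kafka Connect REST API. Note that the thread describes this as a temporary workaround pending a proper fix, not a permanent setting.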