hudi: [SUPPORT] Hudi not working with Spark 3.0.0

Describe the problem you faced

Trying to run Hudi with Spark 3.0.0 and getting a `NoSuchMethodError` (see the stacktrace below).

To Reproduce

Steps to reproduce the behavior:

Expected behavior

Environment Description

  • Hudi version : 0.5.3

  • Spark version : 3.0.0

  • Hive version : 2.3.7

  • Hadoop version : 3.2.0

  • Storage (HDFS/S3/GCS…) :

  • Running on Docker? (yes/no) : yes

Additional context

Add any other context about the problem here.

Stacktrace

```
Caused by: java.lang.NoSuchMethodError: 'java.lang.Object org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.fromRow(org.apache.spark.sql.catalyst.InternalRow)'
at org.apache.hudi.AvroConversionUtils$.$anonfun$createRdd$1(AvroConversionUtils.scala:42)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
at scala.collection.Iterator$SliceIterator.next(Iterator.scala:271)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:315)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313)
at scala.collection.AbstractIterator.to(Iterator.scala:1429)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1429)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1429)
at org.apache.spark.rdd.RDD.$anonfun$take$2(RDD.scala:1423)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2133)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
```
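
For context on the failure: `ExpressionEncoder.fromRow` (and its counterpart `toRow`) were removed in Spark 3.0 in favor of `createDeserializer()`/`createSerializer()`, so Hudi 0.5.3, which was compiled against Spark 2.x, hits a `NoSuchMethodError` at runtime. A minimal sketch of the API difference (the `Record` case class is purely illustrative):

```scala
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

// Illustrative case class; any product type works with ExpressionEncoder.
case class Record(id: Long, name: String)

val encoder = ExpressionEncoder[Record]()

// Spark 2.x API (what Hudi 0.5.3 calls from AvroConversionUtils.scala:42,
// per the stacktrace above):
//   val rec: Record = encoder.fromRow(internalRow)

// Spark 3.0 replacement (fromRow no longer exists, hence the error):
//   val deserializer = encoder.createDeserializer()
//   val rec: Record = deserializer(internalRow)
```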

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 5
  • Comments: 23 (14 by maintainers)

Most upvoted comments

LOL. I never thought I would get help from my own comment that I left here.

For anyone who gets here because of the AWS Glue 3.0 + Hudi Connector issue:

Status as of 2021-11-13: AWS Glue 3.0 fails to load the image from ECR for the Hudi Connector dependencies. If you don't need Hudi schema evolution, go with AWS Glue 2.0. If you do need Hudi schema evolution, you have to use AWS Glue 3.0; otherwise you will hit the Glue Catalog issue caused by an outdated EMRFS.

You can still use AWS Glue 3.0 + Hudi by adding the Hudi JAR dependencies yourself instead of letting the Glue Connector do it for you.

You need four dependencies.

I hope this helps someone. I was stuck on this for a day.
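
Outside of the Glue console, the same idea with a plain SparkSession looks roughly like the sketch below; the JAR paths are hypothetical placeholders, not the actual four dependencies:

```scala
import org.apache.spark.sql.SparkSession

// Hedged sketch: supplying the Hudi JARs yourself via spark.jars instead
// of relying on a connector to inject them. The bucket and JAR names
// below are placeholders; substitute the dependency JARs you actually need.
val spark = SparkSession.builder()
  .appName("hudi-example")
  .config("spark.jars", "s3://my-bucket/jars/hudi-spark3-bundle.jar,s3://my-bucket/jars/spark-avro.jar")
  // Hudi requires the Kryo serializer.
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .getOrCreate()
```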

@dude0001

.option("hoodie.datasource.hive_sync.use_jdbc", "false")
.option("hoodie.datasource.hive_sync.mode": "hms"),

did the trick for me
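
For context, a minimal sketch of where those two options sit in a full Hudi write, given an existing DataFrame `df`; the table name, record key/precombine fields, and path are hypothetical placeholders:

```scala
import org.apache.spark.sql.SaveMode

// Hedged sketch of a Hudi write with Hive sync going through the
// Hive metastore (mode "hms") instead of JDBC. All names and paths
// are placeholders.
df.write.format("hudi")
  .option("hoodie.table.name", "my_table")
  .option("hoodie.datasource.write.recordkey.field", "id")
  .option("hoodie.datasource.write.precombine.field", "ts")
  .option("hoodie.datasource.hive_sync.enable", "true")
  .option("hoodie.datasource.hive_sync.use_jdbc", "false")
  .option("hoodie.datasource.hive_sync.mode", "hms")
  .mode(SaveMode.Append)
  .save("s3://my-bucket/my_table")
```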

Spark 3.0 is a brand-new release (it only came out on 2020-06-18). It is quite different from 2.x, so it is not surprising that there are some conflicts. At present, using Hudi with Spark 3.0 is not recommended.

I did not run into this when testing #1760 myself; I think it might be because we have internal changes for Hive 3.

I just checked, and it looks like we have Calcite added and shaded in hudi-hive-bundle:

```xml
<dependency>
  <groupId>org.apache.calcite</groupId>
  <artifactId>calcite-core</artifactId>
  <version>1.16.0</version>
</dependency>
```

I think you might also need to add and shade libfb303 for hive-sync:

```xml
<dependency>
  <groupId>org.apache.thrift</groupId>
  <artifactId>libfb303</artifactId>
  <version>0.9.3</version>
</dependency>
```

We made these dependency changes for Hive 3 compatibility; however, we didn't realize they were also needed for Spark 3. I will try to update #1760 with what is needed.

Let's wait for @lyogev to chime in… I think @n3nash did explicitly test Spark 3 and confirmed it working as of 0.5.1/0.5.2.