Add the following dependency to your Maven project to enable Hive support in Spark:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-hive_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
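With spark-hive on the classpath, Hive support can be enabled when building the SparkSession. A minimal sketch (the class name and `local[*]` master are illustrative; without the dependency above, `enableHiveSupport()` fails because the Hive classes are not found):

```java
import org.apache.spark.sql.SparkSession;

public class HiveExample {
    public static void main(String[] args) {
        // enableHiveSupport() requires spark-hive on the classpath.
        SparkSession spark = SparkSession.builder()
                .appName("HiveExample")
                .master("local[*]")   // local mode, for illustration only
                .enableHiveSupport()
                .getOrCreate();

        // Queries now go through the Hive metastore.
        spark.sql("SHOW TABLES").show();

        spark.stop();
    }
}
```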