This should work in Spark 1.6 or later. To expand every field of the struct column data into its own top-level column, use a star expression on the struct:

df.select(df.col("data.*"))

or, to pick specific fields explicitly:

df.select(df.col("data.id"), df.col("data.keyNote"), df.col("data.details"))