As of Spark 2.1, this use case is supported natively through the to_json function (see #15354).
import org.apache.spark.sql.functions.{to_json, struct}
// The $"..." column syntax requires: import spark.implicits._
df.select(to_json(struct($"c1", $"c2", $"c3")))
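Expanded into a self-contained sketch, assuming a local SparkSession and some sample data in place of the original df (column names and values here are illustrative only):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{to_json, struct}

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("to-json-demo")
  .getOrCreate()
import spark.implicits._

// Hypothetical stand-in for the question's df
val df = Seq((1, "a", true), (2, "b", false)).toDF("c1", "c2", "c3")

// struct packs the columns into a single struct column;
// to_json then serializes each row's struct to a JSON string
val json = df.select(to_json(struct($"c1", $"c2", $"c3")).as("value"))

json.show(truncate = false)
// each row becomes a string of the form {"c1":1,"c2":"a","c3":true}
```

To serialize every column without listing them, struct("*") can be used in place of the explicit column list.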