Use `apply`:
```scala
import org.apache.spark.sql.functions.col

// Prepend the id column to one column per array index (here 0 to 2),
// extracting each element with apply and giving it an alias.
df.select(
  col("id") +: (0 until 3).map(i => col("DataArray")(i).alias(s"col$i")): _*
)
```