No need to use a UDF; you can do it with a SQL expression:
val newDF = df.withColumn("new_date", expr("date_add(current_date, days)"))
Here `current_date` is the Spark SQL function returning today's date, and `days` is assumed to be an integer column of `df` holding the number of days to add.
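A minimal runnable sketch of the same idea, assuming a local Spark session and an illustrative DataFrame with a date column and an integer `days` column (the column names here are made up for the example):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.expr

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("date-add-expr")
  .getOrCreate()
import spark.implicits._

// Sample data: a start date and the number of days to add.
val df = Seq(
  ("2021-01-01", 10),
  ("2021-06-15", -3)
).toDF("date", "days")
  .withColumn("date", expr("to_date(date)"))

// Inside expr(), date_add can take a *column* as the day count,
// which the pre-3.0 functions.date_add(col, int) overload cannot.
val newDF = df.withColumn("new_date", expr("date_add(date, days)"))
newDF.show()
```

Note that from Spark 3.0 onward, `org.apache.spark.sql.functions.date_add` also accepts a `Column` for the day count, so `date_add(col("date"), col("days"))` is an alternative on recent versions; the `expr` form works across older versions as well.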