Here it is using the Scala DataFrame functions `from_unixtime` and `to_date`:
import org.apache.spark.sql.functions.{from_unixtime, to_date}

// NOTE: from_unixtime expects seconds, so divide by 1000 if the epoch is in milliseconds
// e.g. 1446846655609 -> 2015-11-06 21:50:55 -> 2015-11-06
mr.select(to_date(from_unixtime($"ts" / 1000)))
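A minimal end-to-end sketch of the same conversion, assuming a local `SparkSession` and a DataFrame with a millisecond-epoch column named `ts` (the object name `EpochToDate` and the sample data are illustrative only). The session timezone is pinned to UTC so the result matches the example above; with a different session timezone the derived date can shift by a day near midnight:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{from_unixtime, to_date}

object EpochToDate {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("epoch-to-date")
      // pin the session timezone so from_unixtime is deterministic
      .config("spark.sql.session.timeZone", "UTC")
      .getOrCreate()
    import spark.implicits._

    // ts holds Unix epoch values in milliseconds
    val mr = Seq(1446846655609L).toDF("ts")

    // from_unixtime expects seconds, so divide by 1000 first;
    // to_date then truncates the resulting timestamp to a date
    mr.select(to_date(from_unixtime($"ts" / 1000)).as("date")).show()
    // +----------+
    // |      date|
    // +----------+
    // |2015-11-06|
    // +----------+

    spark.stop()
  }
}
```

The division produces a double-typed column, which `from_unixtime` accepts; if you prefer to keep the intermediate timestamp, select `from_unixtime($"ts" / 1000)` as its own column alongside the date.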