Since Spark 1.5.0 there is a `date_format` function that accepts a format pattern as an argument. With the pattern `'EEEE'` it returns the name of the weekday for a timestamp:

select date_format(my_timestamp, 'EEEE') from ...

Result: e.g. 'Tuesday'