You have an extra upper-case letter; Scala is case-sensitive, so the method name must be all lower case. Try this:
rddUnion.foreach(println)
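For context, a minimal self-contained sketch of how this line fits into a Spark program (the RDD names and sample data are illustrative, not from the original question):

```scala
import org.apache.spark.sql.SparkSession

object UnionExample {
  def main(args: Array[String]): Unit = {
    // Local session just for the sketch; any existing SparkContext works the same way.
    val spark = SparkSession.builder()
      .appName("union-example")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Illustrative inputs.
    val rdd1 = sc.parallelize(Seq("a", "b"))
    val rdd2 = sc.parallelize(Seq("c", "d"))

    val rddUnion = rdd1.union(rdd2)

    // `println` must be lower case; `Println` does not compile.
    // Note: on a cluster, foreach(println) prints on the executors, not the driver.
    rddUnion.foreach(println)

    spark.stop()
  }
}
```

On a real cluster the output lands in the executors' stdout; to print on the driver, collect first (e.g. `rddUnion.collect().foreach(println)`).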