Case class equality in Apache Spark

This is a known issue with the Spark REPL; you can find more details in SPARK-2620. The root cause is that the REPL wraps every definition in a synthetic class, so instances of a case class defined there no longer compare equal once they have been serialized to the executors and back. It affects multiple operations in the Spark REPL, including most transformations on pair RDDs. For example:

case class Foo(x: Int)

val foos = Seq(Foo(1), Foo(1), Foo(2), Foo(2))

// A local collection deduplicates as expected:
foos.distinct.size
// Int = 2

// The same data in an RDD does not; equal values survive distinct:
val foosRdd = sc.parallelize(foos, 4)
foosRdd.distinct.count
// Long = 4

// Keyed aggregations fail to merge equal keys:
foosRdd.map((_, 1)).reduceByKey(_ + _).collect
// Array[(Foo, Int)] = Array((Foo(1),1), (Foo(1),1), (Foo(2),1), (Foo(2),1))

// Equality between a value from the RDD and a local one fails...
foosRdd.first == foos.head
// Boolean = false

// ...even though the extracted field values are equal:
Foo.unapply(foosRdd.first) == Foo.unapply(foos.head)
// Boolean = true

What makes it even worse is that the results depend on how the data is distributed across partitions:

sc.parallelize(foos, 1).distinct.count
// Long = 2

sc.parallelize(foos, 1).map((_, 1)).reduceByKey(_ + _).collect
// Array[(Foo, Int)] = Array((Foo(2),2), (Foo(1),2))
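
Plain values and tuples are unaffected by the problem, so when the fields themselves identify a record you can sidestep it by keying on them directly. A minimal sketch of that workaround, reusing foosRdd from above:

// Key on the field values instead of the REPL-defined case class itself;
// Int keys hash and compare correctly across partitions.
foosRdd.map(foo => (foo.x, 1)).reduceByKey(_ + _).collect
// expected: Array((1,2), (2,2)), order may vary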

The simplest thing you can do is to define and package the required case classes outside the REPL. Any application submitted directly with spark-submit should work as well.
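
For illustration, here is a minimal sketch of such a standalone application (file and object names are hypothetical). Because Foo is a top-level class in a regular package, equality and hashing behave the same on the driver and the executors:

// FooApp.scala, a hypothetical minimal spark-submit application
package foo

import org.apache.spark.{SparkConf, SparkContext}

case class Foo(x: Int)

object FooApp {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("FooApp"))
    val foos = sc.parallelize(Seq(Foo(1), Foo(1), Foo(2), Foo(2)), 4)
    println(foos.distinct.count()) // expected: 2
    sc.stop()
  }
}

Packaged into a jar, it would be launched with something like spark-submit --class foo.FooApp app.jar.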

In Scala 2.11+ you can create a package directly in the REPL with :paste -raw:

scala> :paste -raw
// Entering paste mode (ctrl-D to finish)

package bar

case class Bar(x: Int)

// Exiting paste mode, now interpreting.

scala> import bar.Bar
import bar.Bar

scala> sc.parallelize(Seq(Bar(1), Bar(1), Bar(2), Bar(2))).distinct.collect
res1: Array[bar.Bar] = Array(Bar(1), Bar(2))
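
The keyed aggregations that failed earlier should now merge equal keys as well. A quick check:

sc.parallelize(Seq(Bar(1), Bar(1), Bar(2), Bar(2))).map((_, 1)).reduceByKey(_ + _).collect
// expected: Array((Bar(1),2), (Bar(2),2)), order may vary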
