It’s a bit late, but here’s the fundamental reason: `count` does not act the same on an `RDD` as on a `DataFrame`.
With `DataFrame`s there’s an optimization: in some cases you don’t need to load the data at all to know how many rows it has (especially in a case like yours, where no data shuffling is involved). Hence the `DataFrame` materialized when `count` is called will not load any data and will never reach the code that throws your exception. You can easily run the experiment by defining your own `DefaultSource` and `Relation` and observing that calling `count` on a `DataFrame` always ends up in the `buildScan` method with an empty `requiredColumns`, no matter how many columns you selected (cf. `org.apache.spark.sql.sources.interfaces` to understand more). It’s actually a very efficient optimization 😉
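As a minimal sketch of that experiment (the package and class names here are mine, not from the original answer), a `RelationProvider` with a `PrunedScan` relation lets you log exactly which columns Spark asks for — on a `count`, the array comes in empty:

```scala
package example.datasource

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, PrunedScan, RelationProvider}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Spark looks this class up by name, e.g.
// sqlContext.read.format("example.datasource").load()
class DefaultSource extends RelationProvider {
  override def createRelation(
      sqlContext: SQLContext,
      parameters: Map[String, String]): BaseRelation =
    new DummyRelation(sqlContext)
}

class DummyRelation(val sqlContext: SQLContext)
    extends BaseRelation with PrunedScan {

  override def schema: StructType =
    StructType(StructField("a", StringType) :: StructField("b", StringType) :: Nil)

  override def buildScan(requiredColumns: Array[String]): RDD[Row] = {
    // For df.count(), column pruning removes every column:
    // requiredColumns is an empty array here.
    println(s"buildScan called with columns: [${requiredColumns.mkString(", ")}]")
    sqlContext.sparkContext.parallelize(Seq(Row("x", "y")))
  }
}
```

Selecting one column, two columns, or none before calling `count` all funnel into the same empty-`requiredColumns` scan, which is the optimization at work.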
With `RDD`s, though, there is no such optimization (which is why one should always prefer `DataFrame`s when possible). Hence `count` on an `RDD` executes the whole lineage and returns the sum of the sizes of the iterators composing each partition.
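Conceptually (this is a sketch of the behaviour, not Spark’s actual implementation of `count`), that amounts to:

```scala
import org.apache.spark.rdd.RDD

// Conceptual equivalent of rdd.count(): materialize every partition's
// iterator and sum their sizes. Consuming the iterators is precisely
// what forces the whole lineage to execute.
def conceptualCount[T](rdd: RDD[T]): Long =
  rdd.mapPartitions(it => Iterator(it.size.toLong)).reduce(_ + _)
```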
Calling `dataframe.count` falls under the first explanation, but calling `dataframe.rdd.count` falls under the second, since you did build an `RDD` out of your `DataFrame`. Note that calling `dataframe.cache().count` forces the `DataFrame` to be materialized, because you asked Spark to cache the results (so it needs to load all the data and transform it). But that does have the side effect of caching your data…