What should be the optimal value for spark.sql.shuffle.partitions, or how do we increase partitions when using Spark SQL?

If you’re running out of memory on the shuffle, try setting spark.sql.shuffle.partitions to 2001.
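For reference, here is a minimal sketch of applying that setting; the session builder, local master, and app name below are my own illustration, not from the original answer.

import org.apache.spark.sql.SparkSession

// A minimal sketch: build (or reuse) a session with the setting applied.
val spark = SparkSession.builder()
  .appName("shuffle-partitions-sketch")
  .master("local[*]")  // local master only for trying this out
  .config("spark.sql.shuffle.partitions", "2001")
  .getOrCreate()

// The value can also be changed on a live session between queries:
spark.conf.set("spark.sql.shuffle.partitions", "2001")
println(spark.conf.get("spark.sql.shuffle.partitions"))  // 2001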

Spark uses a different data structure for shuffle bookkeeping when the number of partitions is greater than 2000; the MapStatus factory in org.apache.spark.scheduler picks the representation based on that threshold:

private[spark] object MapStatus {
  def apply(loc: BlockManagerId, uncompressedSizes: Array[Long]): MapStatus = {
    if (uncompressedSizes.length > 2000) {
      // More than 2000 shuffle partitions: track empty blocks and an average
      // size instead of one size per block, to keep the status object small.
      HighlyCompressedMapStatus(loc, uncompressedSizes)
    } else {
      // 2000 or fewer partitions: keep one compressed size per block.
      new CompressedMapStatus(loc, uncompressedSizes)
    }
  }
  // ...
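To see which branch you land in, here is a hedged sketch reusing the spark session from the snippet above; the DataFrame, column names, and row count are illustrative only.

import spark.implicits._

// Build a small local DataFrame and force a wide transformation.
val df = (1 to 1000000).toDF("id")

// groupBy introduces a shuffle, which now writes 2001 partitions, so
// uncompressedSizes.length is 2001 (> 2000) and each map task's status is a
// HighlyCompressedMapStatus rather than a CompressedMapStatus.
val counts = df.groupBy($"id" % 100).count()

// Prints 2001 (if your Spark version has adaptive query execution, disable
// spark.sql.adaptive.enabled to observe the raw partition count).
println(counts.rdd.getNumPartitions)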

I really wish they would let you configure this threshold independently.

By the way, I found this information in a Cloudera slide deck.
