How do you control the size of the output file?

Spark cannot directly control the size of Parquet output files, because the DataFrame in memory has to be encoded and compressed before it is written to disk, and until that process finishes there is no way to know the actual file size on disk.

So my solution is as follows (a consolidated sketch of all the steps appears after the list):

  • Write the DataFrame to HDFS: df.write.parquet(path)
  • Get the directory size and calculate the number of files

    import org.apache.hadoop.fs.{FileSystem, Path}

    val fs = FileSystem.get(sc.hadoopConfiguration)
    val dirSize = fs.getContentSummary(new Path(path)).getLength
    val fileNum = math.max(1L, dirSize / (512L * 1024 * 1024)).toInt  // aim for ~512 MB per file
    
  • Read the directory back and re-write it to HDFS

    val newDf = sqlContext.read.parquet(path)  // read back from disk, NOT the original df
    newDf.coalesce(fileNum).write.parquet(newPath)
    

    Do NOT reuse the original df here; otherwise the job that produced it will run twice.

  • Delete the old directory and rename the new directory back

    fs.delete(new Path(path), true)
    fs.rename(new Path(newPath), new Path(path))
    
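Putting the steps together, here is a minimal end-to-end sketch of the compaction pass as one helper function, to be called after the first df.write.parquet(path). It assumes Spark 1.x with sc and sqlContext in scope, as in the snippets above; the compactParquet name, the "_tmp" suffix for the temporary directory, and the 512 MB target are my own illustrative choices, not part of any Spark API.

    import org.apache.hadoop.fs.{FileSystem, Path}
    import org.apache.spark.SparkContext
    import org.apache.spark.sql.SQLContext

    // Rewrite an already-written Parquet directory into roughly 512 MB files.
    def compactParquet(sc: SparkContext, sqlContext: SQLContext, path: String): Unit = {
      val fs = FileSystem.get(sc.hadoopConfiguration)
      val tmpPath = path + "_tmp"  // temporary output directory

      // Measure what is already on disk and derive the target file count.
      val dirSize = fs.getContentSummary(new Path(path)).getLength
      val fileNum = math.max(1L, dirSize / (512L * 1024 * 1024)).toInt

      // Read the data back from disk (not the original DataFrame!) and
      // rewrite it with fewer partitions.
      val newDf = sqlContext.read.parquet(path)
      newDf.coalesce(fileNum).write.parquet(tmpPath)

      // Replace the original directory with the compacted copy.
      fs.delete(new Path(path), true)
      fs.rename(new Path(tmpPath), new Path(path))
    }

With this in place, the whole flow is just df.write.parquet(path) followed by compactParquet(sc, sqlContext, path).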

The drawback of this approach is that the data is written twice, which doubles the disk I/O, but for now this is the only solution I have found.
