How do you control the size of the output file?
Spark cannot directly control the size of Parquet files, because the DataFrame in memory has to be encoded and compressed before it is written to disk; until that process finishes, there is no way to estimate the final file size on disk. So my solution is:

1. Write the DataFrame to HDFS: df.write.parquet(path)
2. Get the directory …
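A minimal PySpark sketch of the steps as far as the excerpt describes them, assuming a SparkSession named `spark`, a placeholder DataFrame, and a hypothetical output path and target file size. The directory size is read through the Hadoop FileSystem API; the final repartition-and-rewrite comment is an assumption about how the truncated answer continues, not something stated above.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

path = "/tmp/output_parquet"          # hypothetical output directory
target_file_size = 128 * 1024 * 1024  # hypothetical target: ~128 MB per file

# Placeholder DataFrame for illustration only.
df = spark.range(10_000_000).withColumnRenamed("id", "value")

# Step 1: write the DataFrame to HDFS as Parquet.
df.write.mode("overwrite").parquet(path)

# Step 2: measure the on-disk size of the output directory
# via the Hadoop FileSystem API exposed through the JVM gateway.
jvm = spark._jvm
hadoop_conf = spark._jsc.hadoopConfiguration()
fs = jvm.org.apache.hadoop.fs.FileSystem.get(hadoop_conf)
dir_size = fs.getContentSummary(jvm.org.apache.hadoop.fs.Path(path)).getLength()

# (Assumed continuation) derive a file count from the measured size and
# rewrite with that many partitions, e.g. df.repartition(n).write.parquet(...).
num_files = max(1, round(dir_size / target_file_size))
print(f"directory size: {dir_size} bytes -> rewrite with ~{num_files} files")
```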