Setting the property on the underlying JVM Hadoop configuration should work:

```python
sc._jsc.hadoopConfiguration().set('my.mapreduce.setting', 'someVal')
```