Change the spark-submit command line by adding three options:
--files <location_to_your_app.conf>
--conf 'spark.executor.extraJavaOptions=-Dconfig.resource=app'
--conf 'spark.driver.extraJavaOptions=-Dconfig.resource=app'
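Put together, the invocation might look like the sketch below. The application class `com.example.Main`, the jar name `my-app.jar`, and the config path are hypothetical placeholders; `--files` ships the file to each executor's working directory, and the `-Dconfig.resource=app` system property tells Typesafe Config which resource to load:

```shell
# Hypothetical spark-submit call: class, jar, and config path are
# placeholders for your own application.
spark-submit \
  --class com.example.Main \
  --files /path/to/app.conf \
  --conf 'spark.executor.extraJavaOptions=-Dconfig.resource=app' \
  --conf 'spark.driver.extraJavaOptions=-Dconfig.resource=app' \
  my-app.jar
```

Inside the application, Typesafe Config's `ConfigFactory.load()` honors the `config.resource` system property, so the shipped file is picked up without hard-coding a path.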