From the command line, you can use

spark-shell -i file.scala

to run the code written in file.scala.
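For example, a minimal file.scala might look like the sketch below (the data and column names are just placeholders for illustration). Note that spark-shell predefines the SparkSession as spark, and that -i runs the file and then drops into the interactive prompt, so a System.exit(0) at the end makes the shell quit once the script finishes.

// file.scala -- executed line by line by: spark-shell -i file.scala
// The `spark` SparkSession is already provided by the shell.
val data = Seq(("a", 1), ("b", 2), ("c", 3))
val df = spark.createDataFrame(data).toDF("key", "value")
df.show()
// Without this, spark-shell stays open at the interactive prompt.
System.exit(0)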