`local[*]`

From the doc:
./bin/spark-shell --master local[2]
The `--master` option specifies the master URL for a distributed cluster, or `local` to run locally with one thread, or `local[N]` to run locally with N threads. You should start by using `local` for testing.
And from here:
`local[*]`: Run Spark locally with as many worker threads as logical cores on your machine.
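As a quick illustration, here is a minimal sketch (the object and app names are placeholders; it assumes Spark's standard `SparkSession` builder API) that sets the master to `local[*]` programmatically instead of passing `--master` on the command line:

```scala
import org.apache.spark.sql.SparkSession

object LocalStarExample {
  def main(args: Array[String]): Unit = {
    // Use all logical cores on this machine; "local" or "local[2]" would also work here.
    val spark = SparkSession.builder()
      .appName("local-star-example")
      .master("local[*]")
      .getOrCreate()

    // Check how many cores the local master actually exposes.
    println(s"Default parallelism: ${spark.sparkContext.defaultParallelism}")

    spark.stop()
  }
}
```

Note that a master set on the command line (or in `spark-submit`) takes precedence over one hard-coded in the application, so keeping `local[*]` out of production code and passing it via `--master` is usually the more flexible choice.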