Instead of:
df2 = df1.filter("Status=2" || "Status =3")
This does not compile: `||` is applied to two `String` literals, while `filter` expects either a single SQL expression string or a `Column`. Try:
df2 = df1.filter($"Status" === 2 || $"Status" === 3)
or pass the whole condition as one SQL expression string:
df2 = df1.filter("Status = 2 OR Status = 3")
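As a minimal self-contained sketch (assuming a local SparkSession and a hypothetical `df1` with an integer `Status` column; names other than `Status` are illustrative):

```scala
import org.apache.spark.sql.SparkSession

object StatusFilterExample {
  def main(args: Array[String]): Unit = {
    // Local-mode session, for illustration only.
    val spark = SparkSession.builder()
      .appName("status-filter")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._ // enables the $"colName" syntax

    // Hypothetical sample data standing in for df1.
    val df1 = Seq((1, 1), (2, 2), (3, 3), (4, 4)).toDF("id", "Status")

    // Column-expression form: === builds a Column predicate,
    // and || combines two Column predicates.
    val df2 = df1.filter($"Status" === 2 || $"Status" === 3)

    // Equivalent SQL-expression-string form.
    val df3 = df1.filter("Status = 2 OR Status = 3")

    df2.show() // keeps only the rows with Status 2 or 3
    spark.stop()
  }
}
```

Both forms produce the same plan; the `Column` form fails at compile time on typos in operators, while the SQL string defers errors to analysis time.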