You can do it the same way `SQLContext.createDataFrame` does internally, by deriving the schema from the case class with `ScalaReflection`:
import org.apache.spark.sql.catalyst.ScalaReflection
import org.apache.spark.sql.types.StructType

val schema = ScalaReflection.schemaFor[TestCase].dataType.asInstanceOf[StructType]
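For instance, with a hypothetical case class (the original question doesn't show `TestCase`, so the fields below are assumptions for illustration), a minimal sketch looks like this:

```scala
import org.apache.spark.sql.catalyst.ScalaReflection
import org.apache.spark.sql.types.StructType

// Hypothetical case class; substitute your own definition.
case class TestCase(id: Long, name: String)

// schemaFor derives a Catalyst DataType from the case class fields;
// for a case class (a Product type) the result is a StructType, hence the cast.
val schema = ScalaReflection.schemaFor[TestCase].dataType.asInstanceOf[StructType]

// Primitive fields such as Long come out non-nullable, while object
// fields such as String are nullable; schema.printTreeString() makes
// this easy to inspect.
```

Note that `ScalaReflection` lives in the `catalyst` package, which Spark treats as internal API; for user code, defining the `StructType` explicitly via `org.apache.spark.sql.types` is the fully supported alternative.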