How to build a SparkSession in Spark and Scala
admin
#scala #spark #bigdata

In Spark 2.x and later, the entry point to a Spark application is the SparkSession instance. In PySpark we create one as follows:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('abc').getOrCreate()

Once the session exists, we can use it to read data, for example a CSV file whose first row contains the column headers:

df = spark.read.csv('filename.csv', header=True)

We can also create a SparkSession in Scala.
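A minimal Scala sketch of the same setup (the app name 'abc' and the file name 'filename.csv' mirror the Python example; the local master and the object wrapper are assumptions for running it standalone, and Spark must be on the classpath):

```scala
import org.apache.spark.sql.SparkSession

object SparkSessionExample {
  def main(args: Array[String]): Unit = {
    // Build (or reuse) the session; getOrCreate returns an existing session if one is active
    val spark = SparkSession.builder()
      .appName("abc")
      .master("local[*]")  // assumption: local mode for illustration; spark-submit sets this on a cluster
      .getOrCreate()

    // Read a CSV file, treating the first row as column headers
    val df = spark.read.option("header", "true").csv("filename.csv")

    spark.stop()
  }
}
```

Note that `.master("local[*]")` is only for local runs; when the job is submitted to a cluster, the master is supplied externally and the call should be dropped.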