Experienced / Expert level questions & answers
Ques 1. Explain the concept of lazy evaluation in Apache Spark.
Lazy evaluation means that transformations (such as filter or map) are not executed when they are declared; Spark only records them in a lineage graph and defers execution until an action (such as count or collect) needs a result. This lets Spark optimize the whole execution plan, for example by pipelining transformations into a single pass over the data.
Example:
val filteredRDD = inputRDD.filter(x => x > 0) // transformation: nothing runs yet
filteredRDD.count() // action: Spark now builds and executes the plan
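A fuller, self-contained sketch of the same idea (the local-mode setup and sample data are illustrative assumptions, not part of the original example):
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setMaster("local[*]").setAppName("LazyEvalDemo")
val sc = new SparkContext(conf)

val inputRDD = sc.parallelize(Seq(-2, -1, 0, 1, 2, 3))

// A transformation: Spark only records it in the lineage graph; no job runs yet
val filteredRDD = inputRDD.filter(x => x > 0)

// An action: only now does Spark plan and execute the job
println(filteredRDD.count()) // prints 3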
Ques 2. How does Spark handle fault tolerance in RDDs?
Spark achieves fault tolerance through lineage: each RDD records the DAG of transformations that produced it. If a partition is lost, for example when an executor fails, Spark replays that lineage from the source data to recompute just the missing partition, instead of relying on data replication.
Example:
val resilientRDD = originalRDD.filter(x => x > 0) // lineage records this step for recovery
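To see the lineage Spark would replay after a failure, toDebugString prints the recorded DAG. A short sketch, assuming the SparkContext sc and sample data conventions from the earlier sketch:
val originalRDD = sc.parallelize(Seq(-1, 0, 1, 2))
val resilientRDD = originalRDD.filter(x => x > 0).map(x => x * 10)

// Prints the chain of transformations used to recompute any lost partition
println(resilientRDD.toDebugString)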
Ques 3. What is the significance of the Spark Shuffle operation?
The shuffle redistributes data across partitions during wide transformations such as groupByKey or reduceByKey. It is costly because it involves serializing records, writing shuffle files to disk, and exchanging data across the network, so reducing shuffle volume is a key performance lever.
Example:
val groupedRDD = inputRDD.groupByKey() // wide transformation: triggers a full shuffle
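A sketch contrasting the two operators named above, with assumed sample data and the sc from the earlier sketch; reduceByKey pre-aggregates on the map side, so it shuffles less data:
val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3), ("a", 4)))

// groupByKey ships every individual value across the network
val grouped = pairs.groupByKey()

// reduceByKey combines values within each partition before the shuffle
val summed = pairs.reduceByKey(_ + _)
println(summed.collect().toMap) // Map(a -> 8, b -> 2)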
Ques 4. What are the advantages of using Spark over Hadoop MapReduce?
Spark offers in-memory processing, higher-level abstractions such as DataFrames, and efficient iterative processing, whereas MapReduce persists intermediate results to disk between every stage. This makes Spark typically much faster, especially for iterative and interactive workloads.
Example:
val sc = new SparkContext("local", "SparkExample")
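A minimal DataFrame sketch illustrating the higher-level API (the local-mode session, app name, and column name are assumptions for illustration):
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("DataFrameDemo")
  .getOrCreate()

// A DataFrame query passes through Spark's optimizer before it runs,
// unlike a hand-written MapReduce job
val df = spark.range(1, 6).toDF("n")
df.filter("n > 2").show()

spark.stop()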
Ques 5. How does Spark handle data skewness in transformations like groupByKey?
Data skew occurs when a few keys carry far more records than the rest, so the tasks handling those keys become stragglers. Common mitigations include pre-partitioning the data, preferring operators with map-side aggregation (reduceByKey or aggregateByKey instead of groupByKey), and salting hot keys to spread them across partitions.
Example:
val skewedData = inputRDD.groupByKey(numPartitions = 64) // an explicit partition count spreads hot keys
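Key salting, mentioned above, is sketched below; the salt factor of 8, the sample pair RDD, and the reuse of sc from the earlier sketch are illustrative assumptions:
import scala.util.Random

// Suppose the key "hot" dominates the data
val pairs = sc.parallelize(Seq(("hot", 1), ("hot", 1), ("hot", 1), ("cold", 1)))

// 1. Salt: append a random suffix so one hot key becomes up to 8 lighter keys
val salted = pairs.map { case (k, v) => ((k, Random.nextInt(8)), v) }

// 2. Aggregate the salted keys; the hot key's work is spread across partitions
val partial = salted.reduceByKey(_ + _)

// 3. Strip the salt and combine the (at most 8) partial results per key
val totals = partial.map { case ((k, _), v) => (k, v) }.reduceByKey(_ + _)
println(totals.collect().toMap) // Map(hot -> 3, cold -> 1)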
Ques 6. How does Spark handle data locality optimization?
Spark's scheduler tries to launch each task on a node that already holds that task's data (process-local, then node-local, then rack-local), minimizing network transfer. The spark.locality.wait settings control how long it waits for a better-locality slot before falling back to the next level.
Example:
sparkConf.set("spark.locality.wait", "2s")
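A slightly fuller configuration sketch; spark.locality.wait and its per-level variants are real Spark options, while the specific values and app name are assumptions to illustrate tuning:
import org.apache.spark.SparkConf

val sparkConf = new SparkConf()
  .setAppName("LocalityDemo")
  // How long to wait for a data-local slot before falling back
  // to a less local one (node-local -> rack-local -> any)
  .set("spark.locality.wait", "2s")
  // The wait can also be tuned per locality level
  .set("spark.locality.wait.node", "1s")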