[Notes] RDD Execution, Part 2

An RDD is computed partition by partition; each partition is executed independently and in parallel. The execution path is:

Executor.launchTask() ->
    Task.runTask() ->
        RDD.iterator() ->
            RDD.compute(), or read from a checkpoint
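The last step of that path can be illustrated with a toy sketch (not Spark's real classes; `ToyRDD`, `checkpointed`, and `RangeRDD` are made-up names): `iterator()` is the public entry point, and it only falls back to `compute()` when no checkpointed data is available.

```scala
// Toy model of the iterator()-then-compute() pattern.
abstract class ToyRDD[T] {
  // simulated checkpoint storage; None means "not checkpointed yet"
  protected var checkpointed: Option[Seq[T]] = None

  // subclasses implement this: compute one partition from scratch
  def compute(partition: Int): Iterator[T]

  // callers use this: prefer checkpointed data, otherwise recompute
  final def iterator(partition: Int): Iterator[T] =
    checkpointed.map(_.iterator).getOrElse(compute(partition))

  def checkpoint(partition: Int): Unit =
    checkpointed = Some(compute(partition).toSeq)
}

class RangeRDD(n: Int) extends ToyRDD[Int] {
  var computeCalls = 0
  def compute(partition: Int): Iterator[Int] = {
    computeCalls += 1
    (0 until n).iterator
  }
}

object IteratorDemo {
  def main(args: Array[String]): Unit = {
    val rdd = new RangeRDD(3)
    println(rdd.iterator(0).toList) // computed: List(0, 1, 2)
    rdd.checkpoint(0)
    println(rdd.iterator(0).toList) // served from the checkpoint, no recompute
    println(rdd.computeCalls)       // 2: once for the first read, once for checkpoint()
  }
}
```

This is why checkpointing truncates recomputation: once data is materialized, `iterator()` never calls `compute()` again for that partition.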

There are two types of Task: ShuffleMapTask and ResultTask. They correspond to shuffle tasks and action tasks respectively (the latter produces the final result, which can be returned to the driver).

ShuffleMapTask:

    override def runTask(context: TaskContext): MapStatus = {
      writer = manager.getWriter[Any, Any](dep.shuffleHandle, partitionId, context)
      // each partition is computed independently
      writer.write(rdd.iterator(partition, context).asInstanceOf[Iterator[_ <: Product2[Any, Any]]])
    }
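What the shuffle writer does with that per-partition iterator can be sketched like this (toy code, not Spark's `ShuffleWriter` API; the `write` helper and bucket layout are made up): it consumes the records and groups each one into a reducer-side bucket by the hash of its key.

```scala
object ShuffleWriteDemo {
  // Bucket (key, value) records into numReducers groups by key hash,
  // mimicking what a shuffle writer does with the records it pulls
  // from rdd.iterator(partition, context).
  def write[K, V](records: Iterator[(K, V)], numReducers: Int): Map[Int, Seq[(K, V)]] =
    records.toSeq.groupBy { case (k, _) => math.abs(k.hashCode) % numReducers }

  def main(args: Array[String]): Unit = {
    val records = Iterator("a" -> 1, "b" -> 2, "a" -> 3)
    // all records with the same key land in the same bucket,
    // so one reducer can see every value for that key
    println(write(records, 2))
  }
}
```

The point of the sketch: a ShuffleMapTask does not return data to the driver; it only reorganizes its partition's records so that reducers in the next stage can fetch them, and its `MapStatus` return value just describes where the output lives.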


ResultTask:

    override def runTask(context: TaskContext): U = {
      val ser = SparkEnv.get.closureSerializer.newInstance()
      // deserialize the RDD metadata and the function (operator) to run
      val (rdd, func) = ser.deserialize[(RDD[T], (TaskContext, Iterator[T]) => U)](
        ByteBuffer.wrap(taskBinary.value), Thread.currentThread.getContextClassLoader)
      _executorDeserializeTime = System.currentTimeMillis() - deserializeStartTime

      // run the function on this partition; the result is sent back to the driver over the network
      func(context, rdd.iterator(partition, context))
    }
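The `func` in that excerpt is the per-partition form of an action. A toy sketch (illustrative names, not Spark's real API; the `TaskContext` parameter is omitted for brevity) of how `count()` fits this shape: each ResultTask computes the size of its own partition, and the driver sums the per-partition results.

```scala
object ResultTaskDemo {
  // the per-partition function shape used by a ResultTask,
  // with the TaskContext argument left out for brevity
  type TaskFunc[T, U] = Iterator[T] => U

  // simulate running one ResultTask per partition and collecting
  // the results on the driver (in Spark these run on executors, in parallel)
  def runJob[T, U](partitions: Seq[Seq[T]], func: TaskFunc[T, U]): Seq[U] =
    partitions.map(p => func(p.iterator))

  def main(args: Array[String]): Unit = {
    val partitions = Seq(Seq(1, 2, 3), Seq(4, 5), Seq(6))
    // count() expressed per partition: "size of this partition's iterator"
    val counts = runJob(partitions, (it: Iterator[Int]) => it.size)
    println(counts)     // one result per partition
    println(counts.sum) // 6, combined on the driver
  }
}
```

This is the key difference from a ShuffleMapTask: a ResultTask's return value is the actual data the action asked for, which is why it is the task type that ends a job.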
Original post: https://www.cnblogs.com/ivanny/p/spark_rdd_2.html