Spark Data Locality

1. File-system locality

  On the first run the data is not yet in memory and has to be fetched from HDFS, so each task is best scheduled on the node that holds its data block.
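
  A minimal driver-side sketch of how this surfaces in the API (the app name and HDFS path below are placeholders): Spark derives each partition's preferred hosts from the HDFS block locations, and RDD.preferredLocations exposes them.

import org.apache.spark.{SparkConf, SparkContext}

object PreferredLocationsDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("PreferredLocationsDemo"))

    // Each partition of an HDFS-backed RDD maps to an HDFS block; its
    // preferred locations are the datanodes holding replicas of that block.
    val rdd = sc.textFile("hdfs:///data/input.txt")

    // The scheduler uses these hosts to aim for NODE_LOCAL task placement.
    rdd.partitions.foreach { p =>
      println(s"partition ${p.index} -> " + rdd.preferredLocations(p).mkString(", "))
    }

    sc.stop()
  }
}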

2. Memory locality

  On the second run the data is already cached in memory, so each task is best scheduled on the node whose memory holds that data.
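
  A sketch of this in practice (sc is a SparkContext as in the sketch under item 1; the path is a placeholder): caching an RDD and running an action twice shows the shift from HDFS reads to in-memory reads.

val rdd = sc.textFile("hdfs:///data/input.txt").cache()  // cache() == MEMORY_ONLY

rdd.count()  // 1st action: tasks read from HDFS (NODE_LOCAL) and fill the memory store
rdd.count()  // 2nd action: tasks read cached blocks, scheduled PROCESS_LOCAL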

3. LRU eviction

  If the data is cached only in memory and not on disk, then once a block is evicted from memory it must be re-read from HDFS.

  If the data is cached both in memory and on disk, then once a block is evicted from memory it can be read back directly from local disk.
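
  The two eviction outcomes map directly onto storage levels; a sketch (sc as above, both paths placeholders):

import org.apache.spark.storage.StorageLevel

// Memory only: an evicted partition is simply dropped, so the next access
// recomputes it from the lineage, i.e. reads from HDFS again.
val memOnly = sc.textFile("hdfs:///data/a.txt").persist(StorageLevel.MEMORY_ONLY)

// Memory and disk: a partition evicted from memory is first written to
// local disk, so the next access reads the local spill instead of HDFS.
val memAndDisk = sc.textFile("hdfs:///data/b.txt").persist(StorageLevel.MEMORY_AND_DISK)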

BlockManager.scala

putBlockInfo.synchronized {
  var marked = false
  try {
    if (level.useMemory) {
      // Save it just to memory first, even if it also has useDisk set to true; we will
      // drop it to disk later if the memory store can't hold it.
      val res = data match {
        ...
      }
      size = res.size
      res.data match {
        case Right(newBytes) => bytesAfterPut = newBytes
        case Left(newIterator) => valuesAfterPut = newIterator
      }
      // Keep track of which blocks are dropped from memory
      res.droppedBlocks.foreach { block => updatedBlocks += block }
    }
    ......

Note: as long as the storage level includes memory, the block is written to memory first even when the level also includes disk; it is not placed on disk up front, and is only dropped to disk when the memory store cannot hold it.
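
One way to observe this memory-first behavior (sc as above, path a placeholder; getRDDStorageInfo is a developer API and its exact fields vary by Spark version) is to cache a block with MEMORY_AND_DISK and inspect where it actually landed:

import org.apache.spark.storage.StorageLevel

val rdd = sc.textFile("hdfs:///data/input.txt").persist(StorageLevel.MEMORY_AND_DISK)
rdd.count()  // materialize the cache

// While memory suffices, everything lives in the memory store:
// memSize > 0 and diskSize == 0. diskSize grows only after eviction.
sc.getRDDStorageInfo.foreach { info =>
  println(s"${info.name}: mem=${info.memSize} B, disk=${info.diskSize} B")
}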

For details, see: http://download.csdn.net/detail/u013424982/7191967

Original post: https://www.cnblogs.com/luogankun/p/3886079.html