Spark Resource Configuration (Cores and Memory)

Reposted from: http://blog.csdn.net/zrc199021/article/details/54020692

How do you check the number of cores on a node?

======================================================================

# Total physical cores = number of physical CPUs x cores per physical CPU

# Total logical CPUs = number of physical CPUs x cores per physical CPU x hyper-threads per core

# Number of physical CPUs
cat /proc/cpuinfo | grep "physical id" | sort | uniq | wc -l

# Number of cores per physical CPU
cat /proc/cpuinfo | grep "cpu cores" | uniq

# Number of logical CPUs
cat /proc/cpuinfo | grep "processor" | wc -l

======================================================================
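Putting the three checks together, here is a minimal sketch (assuming a Linux node, since it reads /proc/cpuinfo) that prints all the totals at once:

  #!/usr/bin/env bash
  # Minimal sketch: derive the core totals from /proc/cpuinfo.
  physical=$(grep "physical id" /proc/cpuinfo | sort -u | wc -l)        # physical CPU packages
  cores=$(grep "cpu cores" /proc/cpuinfo | sort -u | awk '{print $NF}') # cores per package
  logical=$(grep -c "^processor" /proc/cpuinfo)                         # logical CPUs (threads)
  echo "total physical cores: $((physical * cores))"
  echo "total logical CPUs:   $logical"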

 

Spark resources boil down to two things: cores and memory.

 

Spark's main functionality falls into three parts: Spark RDD, Spark SQL, and spark-shell. If all three are used, each takes its own share of resources.

 

Of these, Spark RDD jobs and spark-shell allocate and occupy resources dynamically, while Spark SQL is allocated resources statically (once started, it holds its allocated resources the whole time it runs).
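For the dynamically allocated side, an interactive session can (and usually should) be capped when it is launched. A sketch against the standalone master configured below; the flags are standard spark-shell options, and the sizes here are arbitrary examples:

  # Cap what a spark-shell session takes from the standalone cluster;
  # without --total-executor-cores it would grab every free core for as
  # long as the shell stays open.
  spark-shell \
    --master spark://vmax47:7077 \
    --total-executor-cores 4 \
    --executor-memory 2g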

 

Where can you see the total resources allocated to Spark?

  cat /home/mr/spark/conf/spark-env.sh

  JAVA_HOME=/usr/java/jdk
  SPARK_HOME=/home/mr/spark
  SPARK_PID_DIR=/home/mr/spark/pids
  SPARK_LOCAL_DIRS=/data2/zdh/spark/tmp,/data3/zdh/spark/tmp,/data4/zdh/spark/tmp
  SPARK_WORKER_DIR=/data2/zdh/spark/work
  SPARK_LOG_DIR=/data1/zdh/spark/logs
  SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18088 -Dspark.history.retainedApplications=500"
  SPARK_MASTER_WEBUI_PORT=18080
  SPARK_WORKER_WEBUI_PORT=18081
  SPARK_WORKER_CORES=25
  SPARK_WORKER_MEMORY=150g
  SPARK_DAEMON_MEMORY=2g
  SPARK_LOCAL_HOSTNAME=`hostname`
  YARN_CONF_DIR=/home/mr/yarn/etc/hadoop
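Here SPARK_WORKER_CORES and SPARK_WORKER_MEMORY cap what each worker can hand out to executors (25 cores and 150g per worker), while SPARK_DAEMON_MEMORY only sizes the master/worker daemon JVMs themselves. Edits to spark-env.sh take effect only after the worker is restarted; a sketch using the standard sbin scripts (named stop-slave.sh / start-slave.sh in older Spark releases):

  # Restart the worker so new SPARK_WORKER_CORES / SPARK_WORKER_MEMORY apply;
  # paths assume SPARK_HOME=/home/mr/spark as configured above.
  /home/mr/spark/sbin/stop-worker.sh
  /home/mr/spark/sbin/start-worker.sh spark://vmax47:7077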

Where can you see the total resources for Spark SQL?

  [root@vmax47 conf]# cat sparksql-defaults.conf
  spark.serializer=org.apache.spark.serializer.KryoSerializer
  spark.driver.extraJavaOptions=-Xss32m -XX:PermSize=128M -XX:MaxPermSize=512m
  spark.driver.extraClassPath=/home/mr/spark/libext/*
  spark.executor.extraClassPath=/home/mr/spark/libext/*
  spark.executor.memory=10g
  spark.eventLog.enabled=true
  spark.eventLog.dir=/data1/zdh/spark/logs/eventLog
  spark.history.fs.logDirectory=/data1/zdh/spark/logs/eventLog
  spark.worker.cleanup.enabled=true
  spark.shuffle.consolidateFiles=true
  spark.ui.retainedJobs=200
  spark.ui.retainedStages=200
  spark.deploy.retainedApplications=100
  spark.deploy.retainedDrivers=100
  spark.speculation=true
  spark.speculation.interval=1000
  spark.speculation.multiplier=4
  spark.speculation.quantile=0.85
  spark.shuffle.service.enabled=false
  spark.dynamicAllocation.enabled=false
  spark.dynamicAllocation.minExecutors=0
  spark.dynamicAllocation.maxExecutors=2147483647
  spark.sql.broadcastTimeout=600
  spark.yarn.queue=mr
  spark.master=spark://vmax47:7077,SPARK49:7077
  spark.deploy.recoveryMode=ZOOKEEPER
  spark.deploy.zookeeper.url=SPARK49:2181,HADOOP50:2181,vmax47:2181
  spark.ui.port=4100
  spark.driver.memory=40G
  spark.cores.max=30
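These file-level values are what the long-running SparkSQL service pins statically: spark.cores.max=30 holds 30 cores cluster-wide and each executor holds a 10g heap (spark.executor.memory) for as long as the service runs. An individual job submitted against the same cluster can override them per application. A sketch; the jar and class names are hypothetical placeholders, while the --conf keys are the same ones set in the file above:

  # Override the static defaults for one submission.
  spark-submit \
    --master spark://vmax47:7077,SPARK49:7077 \
    --conf spark.cores.max=10 \
    --conf spark.executor.memory=4g \
    --class com.example.MyJob \
    myjob.jar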

Overall Spark resource usage can also be viewed from the Master web UI on port 18080.
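The web UI page (total vs. used cores and memory per worker, plus per-application usage) is also served as JSON by the standalone master, which is handy for scripting. A sketch, assuming the master's /json endpoint and the SPARK_MASTER_WEBUI_PORT=18080 set above:

  # Fetch cluster status (workers, cores, memory, running apps) as JSON.
  curl http://vmax47:18080/json/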

Original post: https://www.cnblogs.com/yangcx666/p/8723703.html