Spark configuration

1. slaves

mv slaves.template slaves

Add the worker hostnames to slaves, one per line:

hadoop.slave01
hadoop.slave02
hadoop.slave03
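The step above can be exercised locally before touching the cluster. This sketch uses a scratch directory standing in for `$SPARK_HOME/conf` (an assumption; the real path depends on where Spark is installed):

```shell
# Scratch directory standing in for $SPARK_HOME/conf (assumption:
# your Spark install keeps its config there).
CONF_DIR=$(mktemp -d)
touch "$CONF_DIR/slaves.template"          # stand-in for the shipped template

# Step 1: rename the template, then append one worker hostname per line.
mv "$CONF_DIR/slaves.template" "$CONF_DIR/slaves"
cat >> "$CONF_DIR/slaves" <<'EOF'
hadoop.slave01
hadoop.slave02
hadoop.slave03
EOF

cat "$CONF_DIR/slaves"
```

Each hostname listed here gets a Worker process when the cluster is started.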


2. spark-env.sh

cp spark-env.sh.template spark-env.sh

Add to spark-env.sh:
SPARK_MASTER_HOST=hadoop.slave01
SPARK_MASTER_PORT=7077
export JAVA_HOME=/usr/java/jdk1.8.0_201
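A quick way to sanity-check this fragment is to source it and print the master URL that the workers (and spark-submit) will use. A local sketch; the real file is `$SPARK_HOME/conf/spark-env.sh`, and the JAVA_HOME path should match your own JDK install:

```shell
# Write the fragment to a temp file and source it (assumption: the
# JAVA_HOME below matches your JDK; adjust as needed).
ENV_FILE=$(mktemp)
cat > "$ENV_FILE" <<'EOF'
SPARK_MASTER_HOST=hadoop.slave01
SPARK_MASTER_PORT=7077
export JAVA_HOME=/usr/java/jdk1.8.0_201
EOF
. "$ENV_FILE"

# The URL used to reach the master:
echo "spark://${SPARK_MASTER_HOST}:${SPARK_MASTER_PORT}"
```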


3. JobHistoryServer

Edit spark-defaults.conf:

spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://hadoop.slave01:9000/directory

and also add:

spark.yarn.historyServer.address=hadoop.slave01:18080
spark.history.ui.port=18080


Then edit spark-env.sh:

export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18080 \
-Dspark.history.retainedApplications=30 \
-Dspark.history.fs.logDirectory=hdfs://hadoop.slave01:9000/directory"
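The trailing backslashes matter: all three -D options have to end up in a single SPARK_HISTORY_OPTS string. This can be checked locally, without a cluster:

```shell
# Same export as in spark-env.sh; backslash-newline inside the double
# quotes joins the three options into one line.
export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18080 \
-Dspark.history.retainedApplications=30 \
-Dspark.history.fs.logDirectory=hdfs://hadoop.slave01:9000/directory"

echo "$SPARK_HISTORY_OPTS"
```

Note that spark.history.retainedApplications caps how many completed applications the history server keeps cached in memory, not how many event logs are stored on HDFS.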


4. Create the event-log directory on HDFS

hadoop fs -mkdir /directory
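With the directory in place, a hedged sketch of bringing everything up and producing a first history entry; the sbin/bin paths and the examples jar follow a standard Spark layout, but the exact jar name depends on your Spark version and is an assumption here:

```shell
# Run from the master node (hadoop.slave01); assumes $SPARK_HOME is set.
$SPARK_HOME/sbin/start-all.sh                # master + workers from the slaves file
$SPARK_HOME/sbin/start-history-server.sh     # picks up SPARK_HISTORY_OPTS

# Any finished job now leaves an event log in hdfs://hadoop.slave01:9000/directory;
# the bundled SparkPi example makes a convenient smoke test.
$SPARK_HOME/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://hadoop.slave01:7077 \
  $SPARK_HOME/examples/jars/spark-examples_*.jar 100
```

The completed application should then show up in the history UI at http://hadoop.slave01:18080.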


Original post: https://www.cnblogs.com/Jomini/p/11609805.html