Spark run modes: standalone & YARN

Standalone mode requires running spark/sbin/start-all.sh on the Spark master node; jobs can then be submitted from either the master or the worker nodes (a start-and-verify sketch follows).
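A minimal start-and-verify sequence, assuming Spark lives at /opt/spark-1.6.0 (the path used in the configuration section below) and the master can SSH to the workers without a password:

cd /opt/spark-1.6.0
./sbin/start-all.sh     # launches a Master here plus a Worker on each host listed in conf/slaves
jps                     # the master should now show a Master process; each worker shows Worker
# the Master web UI is then reachable at http://bigdatastorm:8080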
standalone client
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://bigdatastorm:7077 --driver-memory 512m --executor-memory 512m --total-executor-cores 1 ./lib/spark-examples-1.6.0-hadoop2.6.0.jar 100
standalone cluster
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://bigdatastorm:7077 --deploy-mode cluster --driver-memory 512m --executor-memory 512m --total-executor-cores 1 ./lib/spark-examples-1.6.0-hadoop2.6.0.jar 100
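In cluster mode the driver runs on one of the workers, so nothing is printed to the local console. A hedged sketch for checking on or killing the driver afterwards; the driver ID below is a placeholder (the real one is printed by spark-submit and shown in the Master web UI), and depending on the setup the master URL may need the REST submission port 6066 instead of 7077:

./bin/spark-submit --master spark://bigdatastorm:7077 --status driver-20190101120000-0000
./bin/spark-submit --master spark://bigdatastorm:7077 --kill driver-20190101120000-0000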
YARN mode requires YARN to be running; jobs can again be submitted from either the master or the worker nodes (see the startup sketch below).
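A minimal sketch for bringing YARN up, assuming HADOOP_HOME is set and HDFS is already running:

$HADOOP_HOME/sbin/start-yarn.sh   # starts the ResourceManager here and NodeManagers on the slave hosts
jps                               # should now list ResourceManager (master) / NodeManager (workers)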
yarn client
(note: --total-executor-cores applies only to standalone and Mesos; on YARN use --num-executors and --executor-cores instead)
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode client --driver-memory 512m --executor-memory 512m --num-executors 1 --executor-cores 1 ./lib/spark-examples-1.6.0-hadoop2.6.0.jar 100
yarn cluster
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --driver-memory 512m --executor-memory 512m --num-executors 1 --executor-cores 1 ./lib/spark-examples-1.6.0-hadoop2.6.0.jar 100
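In yarn cluster mode the driver runs inside the ApplicationMaster, so the SparkPi result only shows up in the YARN logs. A sketch for pulling them afterwards; the application ID is a placeholder (spark-submit prints the real one), and yarn logs only works with log aggregation enabled (yarn.log-aggregation-enable=true):

yarn application -list -appStates FINISHED                               # look up the application ID
yarn logs -applicationId application_1550000000000_0001 | grep roughly   # SparkPi prints "Pi is roughly ..."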

================================================
Standalone mode requires configuring slaves and spark-env.sh under /opt/spark-1.6.0/conf; the slaves file lists the worker nodes.
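A sketch of conf/slaves; the worker hostnames here are placeholders, since the note does not name the actual worker machines:

# /opt/spark-1.6.0/conf/slaves: one worker hostname per line
worker1.example.com
worker2.example.com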
spark-env.sh configuration:
export SPARK_MASTER_IP=bigdatastorm                # host the Master binds to
export SPARK_MASTER_PORT=7077                      # port that spark:// submissions use
export SPARK_WORKER_CORES=1                        # cores each Worker offers to executors
export SPARK_WORKER_INSTANCES=1                    # number of Worker processes per host
export SPARK_WORKER_MEMORY=512m                    # memory each Worker offers to executors
export SPARK_LOCAL_DIRS=/data/spark/dataSparkDir   # scratch space for shuffle and spill files
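After editing, conf/ must be identical on every node. A minimal sketch for syncing and restarting, reusing the placeholder worker hostname from the slaves example above and assuming passwordless SSH:

scp -r /opt/spark-1.6.0/conf worker1.example.com:/opt/spark-1.6.0/
/opt/spark-1.6.0/sbin/stop-all.sh
/opt/spark-1.6.0/sbin/start-all.sh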

YARN mode additionally requires the following, so that spark-submit can read the Hadoop/YARN config (yarn-site.xml etc.) and locate the ResourceManager:
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

More importantly, Scala must be installed and its environment variables set, for example:
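A sketch of the usual profile entries; the install path is an assumption, and note that the prebuilt Spark 1.6 binaries are built against Scala 2.10:

export SCALA_HOME=/opt/scala-2.10.5   # install path is an assumption
export PATH=$PATH:$SCALA_HOME/bin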




Original article: https://www.cnblogs.com/TendToBigData/p/10501386.html