Spark Distributed Installation

Three servers: n0, n2, n3

CentOS 6.4 x64

JDK

Scala 2.10 (Spark 0.9.1 is built against Scala 2.10, not 2.11)

Hadoop 2.2.0

spark-0.9.1-bin-hadoop2.tgz

Notes:

1. Install Scala on every machine.

2. Install Spark on every machine; configure it once on the master, then copy the tree to the remaining nodes with scp.
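If the tarball is not on hand yet, it can be fetched from the Apache archive (the URL below is an assumption; pick whichever mirror suits you):

$ wget https://archive.apache.org/dist/spark/spark-0.9.1/spark-0.9.1-bin-hadoop2.tgz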

======================

Configure the Scala environment on every machine (# is a root prompt):

#vim /etc/profile

export SCALA_HOME=/usr/local/scala

export PATH=$SCALA_HOME/bin:$PATH

#source /etc/profile
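After reloading the profile, confirm that Scala is on the PATH:

#scala -version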

===========================

Unpack and configure spark-0.9.1

[hm@n0 ~]$ tar -zxvf spark-0.9.1-bin-hadoop2.tgz

[hm@n0 ~]$ ln -s spark-0.9.1-bin-hadoop2  spark

$cd spark/conf

$cp spark-env.sh.template spark-env.sh
$vim spark-env.sh

export SCALA_HOME=/usr/local/scala     # Scala install directory
export JAVA_HOME=/usr/local/java       # JDK install directory
export SPARK_MASTER_IP=n0              # host the standalone master binds to
export SPARK_WORKER_MEMORY=1000m       # total memory each worker may give to executors
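Other per-worker knobs can be set in the same file; the values below are illustrative, not from the original:

export SPARK_WORKER_CORES=2        # CPU cores each worker offers to applications
export SPARK_WORKER_INSTANCES=1    # number of worker processes per machine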

$vim slaves

n2

n3
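start-all.sh logs into every host listed in slaves over SSH, so passwordless SSH from the master to each worker must already work (it normally does on a Hadoop 2.2.0 cluster); a minimal sketch in case it does not:

$ ssh-keygen -t rsa
$ ssh-copy-id hm@n2
$ ssh-copy-id hm@n3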

$ scp -r   spark-0.9.1-bin-hadoop2   n2:/home/hm

$ scp -r   spark-0.9.1-bin-hadoop2   n3:/home/hm
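The spark symlink created earlier exists only on n0; to keep the same path on the workers, recreate it there as well (hypothetical commands matching the layout above):

$ ssh n2 "ln -s spark-0.9.1-bin-hadoop2 spark"
$ ssh n3 "ln -s spark-0.9.1-bin-hadoop2 spark"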

$ cd spark

$ sbin/start-all.sh

[hm@n0 ~]$ jps
3766 NameNode
4613 HMaster
4123 ResourceManager
21996 Master
4413 QuorumPeerMain
24045 Jps
3958 SecondaryNameNode
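Only the Spark Master runs on n0, alongside the existing Hadoop, HBase and ZooKeeper daemons, so no Worker appears here. To confirm the workers started, run jps on n2 and n3 and look for a Worker process (assuming jps is on the workers' PATH), or open the master's web UI, which listens on port 8080 by default:

$ ssh n2 jps
$ ssh n3 jps

http://n0:8080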

==================

Run the examples

$ cd spark

Run in cluster mode (the master URL must exactly match what the master registered with, i.e. SPARK_MASTER_IP, so use spark://n0:7077 rather than the raw IP):

>bin/run-example org.apache.spark.examples.SparkPi   spark://n0:7077

Run in local mode

>bin/run-example org.apache.spark.examples.SparkPi   local
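local runs Spark in a single thread on the driver machine; local[k] (for example local[4]) uses k worker threads. For an interactive check against the cluster, the 0.9.x spark-shell reads the MASTER environment variable (a sketch, not part of the original post):

>MASTER=spark://n0:7077 bin/spark-shell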

Original article: https://www.cnblogs.com/GrantYu/p/3686018.html