Installing and Configuring a Spark Cluster

First, prepare three machines or virtual machines, named Master, Worker1, and Worker2, and install the operating system on each (this article uses CentOS 7).

1. Configure the cluster. Perform the following steps on the Master machine.

  1.1. Disable the firewall: systemctl stop firewalld.service (to keep it off across reboots, also run systemctl disable firewalld.service)

  1.2. Give the machine a static IP

    1.2.1. Edit the network configuration (the ifcfg- file is named after the network interface, so the exact name may differ on your machine)

cd /etc/sysconfig/network-scripts/
vim ifcfg-eno16777736

Change the following settings:
BOOTPROTO=static
# set a static IP, gateway, and subnet mask
IPADDR=192.168.232.133
NETMASK=255.255.255.0
GATEWAY=192.168.232.2
# take the interface out of NetworkManager's control
NM_CONTROLLED=no
ONBOOT=yes

    1.2.2. Restart the network service: systemctl restart network.service

  1.3. Set the hostname: hostnamectl set-hostname Master

  1.4. Add the cluster hosts to /etc/hosts

192.168.232.133    Master
192.168.232.134    Worker1
192.168.232.135    Worker2

  1.5. Repeat steps 1.1 through 1.4 on Worker1 and Worker2, substituting each machine's own IP address and hostname

  1.6. Check that the machines can ping each other by hostname, e.g. from Master: ping Worker1

2. Configure passwordless SSH login

  2.1. Let Master log in to all Workers without a password

    2.1.1. Generate a key pair on the Master node by running:
      ssh-keygen -t rsa -P ''
      This produces the key pair id_rsa and id_rsa.pub, stored under /root/.ssh by default.

    2.1.2. On the Master node, append id_rsa.pub to the authorized keys file:
      cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

    2.1.3. In the ssh configuration file /etc/ssh/sshd_config, make sure the following lines are set:

RSAAuthentication yes # enable RSA authentication
PubkeyAuthentication yes # enable public/private key pair authentication
AuthorizedKeysFile .ssh/authorized_keys # path to the authorized public keys file (the one appended to above)

    2.1.4. Restart the ssh service so the changes take effect: service sshd restart

    2.1.5. Verify that passwordless login to the local machine works: ssh Master

    2.1.6. Copy the public key to every Worker machine using scp:

scp /root/.ssh/id_rsa.pub root@Worker1:/root/
scp /root/.ssh/id_rsa.pub root@Worker2:/root/
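As an aside, ssh-copy-id (part of the openssh-clients package on CentOS 7) performs the copy-and-append in one step per host, so it can stand in for step 2.1.6 and most of section 2.2; it is a shortcut, not the procedure this article follows:

ssh-copy-id root@Worker1   # appends the key to the remote authorized_keys
ssh-copy-id root@Worker2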

  2.2. Configure the Worker1 machine
    2.2.1. Create the .ssh directory under /root/ if it does not already exist:
      mkdir /root/.ssh

    2.2.2. Append the Master's public key to Worker1's authorized_keys file:
      cat /root/id_rsa.pub >> /root/.ssh/authorized_keys

    2.2.3. Edit /etc/ssh/sshd_config and restart sshd, as described in steps 2.1.3 and 2.1.4 of the Master setup.

    2.2.4. From Master, log in to Worker1 over ssh without a password:
      ssh Worker1

    2.2.5. Delete the id_rsa.pub file from /root/:
      rm /root/id_rsa.pub

    2.2.6. Repeat the five steps above to configure Worker2 the same way.

  2.3. Let all Workers log in to Master without a password

    2.3.1. On the Worker1 node, generate a key pair and append its own public key to authorized_keys:
      ssh-keygen -t rsa -P ''
      cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

    2.3.2. Copy Worker1's public key id_rsa.pub to the /root/ directory on the Master node:
      scp /root/.ssh/id_rsa.pub root@Master:/root/

    2.3.3. On the Master node, append Worker1's public key to Master's authorized_keys file:
      cat ~/id_rsa.pub >> ~/.ssh/authorized_keys

    2.3.4. Delete the id_rsa.pub file on the Master node:
      rm /root/id_rsa.pub

    2.3.5. Test passwordless login from Worker1 to Master: ssh Master

  2.4. Follow the same steps to set up passwordless login between Worker2 and Master. At this point Master can log in to every Worker without a password, and every Worker can log in to Master.
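A quick way to verify the Master-to-Worker half of the mesh (a sketch; run the mirror check, ssh Master, from each Worker):

for h in Worker1 Worker2; do ssh $h hostname; done   # each should print its hostname with no password prompt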

3. Install Java and Scala on Master; it is enough to extract the downloaded archives: tar -xzvf ...
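For example (a sketch, assuming the archives were downloaded into /usr/etc, the install prefix used throughout this article; the archive filenames are examples, so adjust them to match your downloads):

cd /usr/etc
tar -xzvf jdk-8u161-linux-x64.tar.gz   # unpacks to jdk1.8.0_161
tar -xzvf scala-2.12.4.tgz             # unpacks to scala-2.12.4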

4. Install and configure Hadoop on Master
  4.1. Configure hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>Master:50090</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/etc/hadoop-2.7.5/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/etc/hadoop-2.7.5/hdfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>/usr/etc/hadoop-2.7.5/hdfs/namesecondary</value>
    </property>
</configuration>
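The three dfs.*.dir paths above live on the local filesystem of each node. Hadoop normally creates them during format and startup, but pre-creating them (optional) surfaces permission problems early:

mkdir -p /usr/etc/hadoop-2.7.5/hdfs/{name,data,namesecondary}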

  4.2. Configure yarn-site.xml

<configuration>
   <property>
         <name>yarn.resourcemanager.hostname</name>
         <value>Master</value>
   </property>
   <property>
         <name>yarn.nodemanager.aux-services</name>
         <value>mapreduce_shuffle</value>
   </property>
   <property>
         <name>yarn.resourcemanager.address</name>
         <value>Master:8032</value>
   </property>
   <property>
         <name>yarn.resourcemanager.scheduler.address</name>
         <value>Master:8030</value>
   </property>
   <property>
         <name>yarn.resourcemanager.resource-tracker.address</name>
         <value>Master:8031</value>
   </property>
   <property>
         <name>yarn.resourcemanager.admin.address</name>
         <value>Master:8033</value>
   </property>
   <property>
         <name>yarn.resourcemanager.webapp.address</name>
         <value>Master:8088</value>
   </property>
</configuration>

  4.3. Configure mapred-site.xml

<configuration>
   <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
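Note that the Hadoop 2.7.5 distribution ships only a template for this file; if mapred-site.xml does not exist yet, copy the template into place first:

cd /usr/etc/hadoop-2.7.5/etc/hadoop
cp mapred-site.xml.template mapred-site.xml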

  4.4. Configure hadoop-env.sh

export JAVA_HOME=/usr/etc/jdk1.8.0_161
export HADOOP_HOME=/usr/etc/hadoop-2.7.5
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

  4.5. Configure core-site.xml

<configuration>
   <property>
      <name>fs.defaultFS</name>
          <value>hdfs://Master:9000</value>
   </property>
    <property>
       <name>hadoop.tmp.dir</name>
       <value>/usr/etc/hadoop-2.7.5/tmp</value>
    </property>
   <property>
       <name>hadoop.native.lib</name>
       <value>true</value>
    </property>
</configuration>

  4.6. Configure the slaves file ($HADOOP_HOME/etc/hadoop/slaves), one worker hostname per line:

Worker1
Worker2

5. Install and configure Spark on Master
  5.1. Configure spark-env.sh

export JAVA_HOME=/usr/etc/jdk1.8.0_161
export SCALA_HOME=/usr/etc/scala-2.12.4
export HADOOP_HOME=/usr/etc/hadoop-2.7.5
export HADOOP_CONF_DIR=/usr/etc/hadoop-2.7.5/etc/hadoop
export SPARK_MASTER_IP=Master
export SPARK_WORKER_MEMORY=1g
export SPARK_EXECUTOR_MEMORY=1g
export SPARK_DRIVER_MEMORY=500m
export SPARK_WORKER_CORES=2
export SPARK_HOME=/usr/etc/spark-2.3.0-bin-hadoop2.7
export SPARK_DIST_CLASSPATH=$(/usr/etc/hadoop-2.7.5/bin/hadoop classpath)
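One caveat: Spark 2.x renamed SPARK_MASTER_IP to SPARK_MASTER_HOST. Spark 2.3.0 still accepts the old name, so the line above works, but the newer spelling avoids a deprecation warning from the start scripts:

export SPARK_MASTER_HOST=Master   # replaces SPARK_MASTER_IP in Spark 2.x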

  5.2. Configure spark-defaults.conf

spark.eventLog.enabled true
spark.eventLog.dir hdfs://Master:9000/historyserverforSpark
spark.yarn.historyServer.address Master:18080
spark.history.fs.logDirectory hdfs://Master:9000/historyserverforSpark
spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
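Spark does not create the event-log directory itself, and both spark.eventLog.dir and spark.history.fs.logDirectory point into HDFS, so once HDFS is running (step 9) create it before starting any application:

hdfs dfs -mkdir -p /historyserverforSpark   # on hdfs://Master:9000, matching spark.eventLog.dir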

  5.3. Configure the slaves file ($SPARK_HOME/conf/slaves), one Worker hostname per line:

Worker1
Worker2

6. Configure the environment variables in /etc/profile on Master, then run source /etc/profile to apply them

export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL
export JAVA_HOME=/usr/etc/jdk1.8.0_161
export JRE_HOME=/usr/etc/jdk1.8.0_161/jre
export SCALA_HOME=/usr/etc/scala-2.12.4

export HADOOP_HOME=/usr/etc/hadoop-2.7.5
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib"

export SPARK_HOME=/usr/etc/spark-2.3.0-bin-hadoop2.7

export HIVE_HOME=/usr/etc/apache-hive-2.1.1-bin

export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$SCALA_HOME/lib:$HADOOP_HOME/lib

PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin:$SPARK_HOME/sbin:$HIVE_HOME/bin:$SCALA_HOME/bin:$JAVA_HOME/bin:$PATH

export JAVA_HOME PATH
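After sourcing the profile, a quick sanity check that the tools resolve from the PATH:

source /etc/profile
java -version     # should report 1.8.0_161
hadoop version    # should report 2.7.5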

7. From Master, use scp to copy Java, Scala, Hadoop, Spark, and /etc/profile to the Worker1 and Worker2 machines
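A sketch, assuming the same /usr/etc layout on every node (remember to run source /etc/profile on each Worker afterwards):

for h in Worker1 Worker2; do
  scp -r /usr/etc/jdk1.8.0_161 /usr/etc/scala-2.12.4 root@$h:/usr/etc/
  scp -r /usr/etc/hadoop-2.7.5 /usr/etc/spark-2.3.0-bin-hadoop2.7 root@$h:/usr/etc/
  scp /etc/profile root@$h:/etc/profile
done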

8. On the Master machine, run hadoop namenode -format to format the HDFS NameNode (this initializes the HDFS metadata directory, not the physical disk; hdfs namenode -format is the preferred spelling in Hadoop 2.x)

9. On the Master machine, run start-dfs.sh to start HDFS; the web UI is then available at Master:50070

10. On the Master machine, go to Spark's sbin directory (not bin) and run ./start-all.sh to start Spark; the web UI is then available at Master:8080. The leading ./ matters, because Hadoop's sbin, which is also on the PATH, ships a script with the same name.

11. On the Master machine, run Spark's sbin/start-history-server.sh to start the Spark history server; it is then available at Master:18080
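At this point jps (shipped with the JDK) is a quick way to confirm the daemons are up:

jps
# on Master, expect roughly: NameNode, SecondaryNameNode, Master, HistoryServer
# on each Worker, expect: DataNode, Worker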

12. Test running an application on the cluster

  12.1. Submit an application with spark-submit (run from $SPARK_HOME/bin, as the relative paths below assume):

./spark-submit --class org.apache.spark.examples.SparkPi --master spark://Master:7077 ../examples/jars/spark-examples_2.11-2.3.0.jar 100000
--class: the fully qualified class name (package plus class name); --master: the URL of the cluster's Spark master; the .jar path: the location of the application jar; 100000: the application's own argument, here the number of slices SparkPi splits the computation into

  12.2. Start spark-shell and run a wordcount program:

sc.textFile("/README.md").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).map(pair=>(pair._2,pair._1)).sortByKey(false,1).map(pair=>(pair._2,pair._1)).saveAsTextFile("/resdir/wordcount")
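Since fs.defaultFS points at HDFS, /README.md here is an HDFS path; assuming you use the README bundled with Spark, upload it first:

hdfs dfs -put /usr/etc/spark-2.3.0-bin-hadoop2.7/README.md /   # copy Spark's README.md to the HDFS root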
Original (Chinese) source: https://www.cnblogs.com/nswdxpg/p/8526920.html