Hadoop Cluster Setup: Fully Distributed Mode (Part 3)

Environment: Hadoop 2.8.5, CentOS 7, JDK 1.8

I. Steps

1) Prepare four CentOS virtual machines

2) Change the Hadoop configuration to fully distributed

3) Start the fully distributed cluster

4) Run the wordcount test program on the fully distributed cluster

II. Configuring the Four CentOS Virtual Machines

Four VMs: node-001, node-002, node-003, node-004

Clone the four VMs → generate a new MAC address for each → change the hostnames → set the IP address (starting with node-001) → delete the 70-persistent-net.rules file → reboot each VM for the changes to take effect. A sketch of these steps is shown below.
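
As a minimal sketch of those steps on CentOS 7 (the interface name ens33 and the 192.168.1.x addresses are assumptions for illustration, not from the original post):

hostnamectl set-hostname node-001               # repeat on each node with its own name
vi /etc/sysconfig/network-scripts/ifcfg-ens33   # set BOOTPROTO=static plus IPADDR/NETMASK/GATEWAY
cat >> /etc/hosts <<EOF
192.168.1.101 node-001
192.168.1.102 node-002
192.168.1.103 node-003
192.168.1.104 node-004
EOF
rm -f /etc/udev/rules.d/70-persistent-net.rules  # drop the cached MAC-to-interface mapping
reboot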

III. Changing the Hadoop Configuration to Fully Distributed

Edit the following configuration files under $HADOOP_HOME/etc/hadoop: hadoop-env.sh, core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, and slaves.

Configure the Hadoop environment variables (typically appended to ~/.bashrc or /etc/profile):

export HADOOP_PREFIX=/home/lims/bd/hadoop-2.8.5
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
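
After saving, reload the profile and confirm that the hadoop command resolves (a quick sanity check, not a step from the original post):

source ~/.bashrc
hadoop version    # should report Hadoop 2.8.5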

1. Enter the $HADOOP_HOME/etc/hadoop directory and set JAVA_HOME in hadoop-env.sh

vi hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0

2. Edit core-site.xml

vi core-site.xml
<configuration>
<!-- URI of the default file system (the HDFS namenode) -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node-001:9000</value>
  </property>

<!-- I/O buffer size, in bytes, for HDFS operations -->
  <property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
  </property>

<!-- Base directory for temporary data -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/lims/bd/tmp</value>
  </property>
</configuration>
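
Since hadoop.tmp.dir points at /home/lims/bd/tmp, it is safest to create that directory on every node before formatting (a precaution added here, assuming the path does not exist yet):

mkdir -p /home/lims/bd/tmp    # run on node-001 through node-004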

3. Edit hdfs-site.xml

[lims@node-001 hadoop]$ vi hdfs-site.xml
<configuration>
<!-- Replication factor of 3; it must not exceed the number of datanodes -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
<!-- Run the secondary namenode on node-002 -->
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node-002:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file://${hadoop.tmp.dir}/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file://${hadoop.tmp.dir}/dfs/data</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>
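
Once the daemons are running, these values can be double-checked with hdfs getconf (an optional verification, not part of the original walkthrough):

hdfs getconf -confKey dfs.replication                      # expect 3
hdfs getconf -confKey dfs.namenode.secondary.http-address  # expect node-002:50090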

4. Edit yarn-site.xml


[lims@node-001 hadoop]$ vi yarn-site.xml

<configuration>
<!-- Site specific YARN configuration properties -->
<!-- Added the yarn.resourcemanager.hostname property -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>node-001</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
<!-- Added the handler class for the mapreduce_shuffle auxiliary service -->
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

5. Edit mapred-site.xml
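
Hadoop 2.8.5 ships this file only as a template, so create it first if it is missing:

cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml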

<configuration>

<!-- MR YARN Application properties -->

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
  <description>The runtime framework for executing MapReduce jobs.
  Can be one of local, classic or yarn.
  </description>
</property>

<!-- jobhistory properties -->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>node-002:10020</value>
  <description>MapReduce JobHistory Server IPC host:port</description>
</property>

<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>node-002:19888</value>
  <description>MapReduce JobHistory Server Web UI host:port</description>
</property>

</configuration>
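
Note that the JobHistory server is not started by start-dfs.sh or start-yarn.sh; for the addresses above to respond, start the daemon on node-002 with the stock script:

[lims@node-002 ~]$ mr-jobhistory-daemon.sh start historyserver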

6. Edit the slaves file

node-002
node-003
node-004

7. Distribute the configuration under hadoop/ to every node, along with the hosts configuration (see the sketch after the scp commands below)

scp hadoop/* lims@node-002:/home/lims/bd/hadoop-2.8.5/etc/hadoop
scp hadoop/* lims@node-003:/home/lims/bd/hadoop-2.8.5/etc/hadoop
scp hadoop/* lims@node-004:/home/lims/bd/hadoop-2.8.5/etc/hadoop
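
These scp commands, and the start-dfs.sh/start-yarn.sh scripts later, assume passwordless SSH from node-001 to the other nodes. A sketch of the one-time setup (assuming no key exists yet), plus distributing /etc/hosts, which needs root on the targets:

ssh-keygen -t rsa
ssh-copy-id lims@node-002
ssh-copy-id lims@node-003
ssh-copy-id lims@node-004

scp /etc/hosts root@node-002:/etc/hosts
scp /etc/hosts root@node-003:/etc/hosts
scp /etc/hosts root@node-004:/etc/hosts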

IV. Starting the Fully Distributed Cluster

1) Format the namenode on node-001

hdfs namenode -format

2) Start HDFS on node-001

start-dfs.sh

3) Start YARN on node-001

start-yarn.sh
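
Besides jps, the stock web UIs confirm the daemons are up (50070 and 8088 are the Hadoop 2.x defaults; the curl probes are just one way to check):

curl -s -o /dev/null http://node-001:50070 && echo "NameNode UI up"
curl -s -o /dev/null http://node-001:8088 && echo "ResourceManager UI up"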

4) Check the running processes on each node with jps

[lims@node-001 hadoop]$ jps
11602 ResourceManager
14499 Jps
11325 NameNode
[lims@node-002 ~]$ jps
2449 NodeManager
2377 SecondaryNameNode
2316 DataNode
5564 Jps
[lims@node-003 ~]$ jps
4112 Jps
2425 NodeManager
2316 DataNode
[lims@node-004 ~]$ jps
2433 NodeManager
2324 DataNode
4009 Jps

V. Running wordcount on the Fully Distributed Cluster

1) On node-001, enter the $HADOOP_HOME/share/hadoop/mapreduce/ directory

2) Upload test.txt to the target HDFS directory

hadoop fs -put test.txt /user/lims/
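
If the put fails because the HDFS home directory does not exist yet, create it first (an extra step assumed here for a first run):

hadoop fs -mkdir -p /user/lims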

3) Run the wordcount example, writing its output to /output

hadoop jar hadoop-mapreduce-examples-2.8.5.jar wordcount /user/lims/test.txt /output
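
MapReduce refuses to write into an existing output directory, so clear /output before rerunning the job:

hadoop fs -rm -r /output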

4) View the MapReduce results (hadoop dfs is deprecated in 2.x; use hadoop fs)

hadoop fs -text /output/part-*
hadoop fs -cat /output/part-*
[lims@node-001 hadoop]$ hadoop fs -cat /output/part-*
a    2
aa    2
bb    2
cc    1
dd    1
file    2
is    2
test    2
this    2
tmp    1
Original article: https://www.cnblogs.com/limaosheng/p/10567618.html