6. Hadoop Environment Configuration

Installing and configuring Hadoop

Unless otherwise noted, all steps are performed on the master node! Video walkthrough: https://www.bilibili.com/video/BV1sr4y1y7C8/

1. Create the installation directory

mkdir /hadoop

2. Extract the archive into the Hadoop installation directory

tar -xzvf hadoop-2.7.3.tar.gz -C /hadoop
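A quick check that the archive landed where the rest of the guide expects it:

ls /hadoop
# should list hadoop-2.7.3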

3. Configure environment variables

vi /etc/profile

#hadoop environment

export HADOOP_HOME=/hadoop/hadoop-2.7.3

export CLASSPATH=$CLASSPATH:$HADOOP_HOME/lib

export PATH=$PATH:$HADOOP_HOME/bin

source /etc/profile
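To confirm the variables took effect, the hadoop command should now be on PATH:

hadoop version
# the first line of output should read: Hadoop 2.7.3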

4. Edit hadoop-env.sh

cd /hadoop/hadoop-2.7.3/etc/hadoop/

vi hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_171

export HADOOP_CONF_DIR=/hadoop/hadoop-2.7.3/etc/hadoop
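The JAVA_HOME value above must match the JDK actually installed in the earlier JDK step; /usr/java/jdk1.8.0_171 is the path used in this guide, so verify it before moving on:

ls /usr/java/
# the directory name printed here (e.g. jdk1.8.0_171) is what JAVA_HOME should point to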

5. Edit core-site.xml

vi core-site.xml

Add the following entries inside the file's <configuration> element (the same applies to the XML edits in steps 6-8):

<property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/hadoop-2.7.3/hdfs/tmp</value>
    <description>A base for other temporary directories.</description>
</property>
<property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
</property>
<property>
    <name>fs.checkpoint.period</name>
    <value>60</value>
</property>
<property>
    <name>fs.checkpoint.size</name>
    <value>67108864</value>
</property>
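For reference, the finished file has this overall shape; the XML declaration and stylesheet line are already present in the core-site.xml that ships with Hadoop, so only the <property> blocks above actually need to be added:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- the five <property> blocks listed above go here -->
</configuration>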


6. Edit yarn-site.xml

vi yarn-site.xml

<property>
    <name>yarn.resourcemanager.address</name>
    <value>master:18040</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:18030</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:18088</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:18025</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:18141</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
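Once the cluster is running (step 13), these non-default ports can be checked from the master; yarn node -list is the standard YARN CLI listing of NodeManagers, and 18088 is the web UI port set via yarn.resourcemanager.webapp.address above:

yarn node -list
# should report slave1 and slave2 as registered NodeManagers
curl http://master:18088/cluster
# returns the ResourceManager web UI page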

 

7. Edit hdfs-site.xml

vi hdfs-site.xml

<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/hadoop/hadoop-2.7.3/hdfs/name</value>
    <final>true</final>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/hadoop/hadoop-2.7.3/hdfs/data</value>
    <final>true</final>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
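The name, data, and tmp directories referenced above and in core-site.xml are normally created automatically when HDFS is formatted and started; pre-creating them on the master is an optional sanity check that the paths are writable (paths taken verbatim from the values above):

mkdir -p /hadoop/hadoop-2.7.3/hdfs/tmp
mkdir -p /hadoop/hadoop-2.7.3/hdfs/name
mkdir -p /hadoop/hadoop-2.7.3/hdfs/data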

 

8. Edit mapred-site.xml

cp mapred-site.xml.template mapred-site.xml

vi mapred-site.xml

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

9. Edit the slaves file

vi slaves

slave1

slave2

10. Create the master file

vi master

master

11. Copy to slave1 and slave2

scp -r /hadoop root@slave1:/

scp -r /hadoop root@slave2:/

Update the environment variables on slave1 and slave2 and make them take effect, as in step 3 (a sketch of one way to do this follows).
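A sketch of replicating the step-3 variables without logging in to each slave interactively; it assumes the same root SSH access that the scp commands above rely on:

ssh root@slave1 "cat >> /etc/profile" <<'EOF'
#hadoop environment
export HADOOP_HOME=/hadoop/hadoop-2.7.3
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/lib
export PATH=$PATH:$HADOOP_HOME/bin
EOF
ssh root@slave2 "cat >> /etc/profile" <<'EOF'
#hadoop environment
export HADOOP_HOME=/hadoop/hadoop-2.7.3
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/lib
export PATH=$PATH:$HADOOP_HOME/bin
EOF
# /etc/profile is read again at the next login; alternatively run source /etc/profile in a shell on each slave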

12. Format Hadoop

Format before starting Hadoop, and format only once.

hadoop namenode -format
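hadoop namenode -format still works in 2.7.3 but is the deprecated spelling; the current equivalent, plus what a successful format prints (the log wording below is approximate):

hdfs namenode -format
# look for a line like:
#   Storage directory /hadoop/hadoop-2.7.3/hdfs/name has been successfully formatted.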

13. Start Hadoop

Synchronize the clocks: ntpd service

Start ZooKeeper: on every machine

Start Hadoop: only on the master node (example commands for these steps, and a post-start check, are sketched after the start command below)

/hadoop/hadoop-2.7.3/sbin/start-all.sh
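A sketch of the startup sequence above plus a quick verification. zkServer.sh is ZooKeeper's own control script and is assumed to be on PATH; the daemon names are the standard Hadoop 2.x ones, and the example jar path is the one bundled with the 2.7.3 binary release:

# 1) time sync, on every node (systemctl start ntpd on systemd-based systems)
service ntpd start
# 2) ZooKeeper, on every node
zkServer.sh start
zkServer.sh status
# 3) Hadoop, master only: the start-all.sh command above
# verify the daemons on each node:
jps
# master: NameNode, SecondaryNameNode, ResourceManager (plus QuorumPeerMain from ZooKeeper)
# slave1/slave2: DataNode, NodeManager (plus QuorumPeerMain)
# optional smoke test with the bundled example job:
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 2 10
# web UIs: http://master:50070 (HDFS, default port) and http://master:18088 (YARN, as configured in yarn-site.xml)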

To stop:

/hadoop/hadoop-2.7.3/sbin/stop-all.sh

Stop ZooKeeper

Shut down the Linux systems (a sketch of both commands follows)
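A sketch of the last two shutdown steps, run on every node after stop-all.sh has finished on the master; shutdown -h now assumes the machines really are to be powered off:

zkServer.sh stop
shutdown -h now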

 
Original article: https://www.cnblogs.com/thx2199/p/15430607.html