Hadoop 1.2.1 three-node full-cluster installation (RPM install)

1. Install the JDK RPM on all three nodes
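
For example (a minimal sketch; the exact RPM filename depends on the JDK build you downloaded, so treat jdk-6u45-linux-amd64.rpm as a placeholder):

[root@server-914 ~]# rpm -ivh jdk-6u45-linux-amd64.rpm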

2. Install the Hadoop 1.2.1 RPM on all three nodes
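
For example (a sketch; adjust the filename if your RPM is named differently):

[root@server-914 ~]# rpm -ivh hadoop-1.2.1-1.x86_64.rpm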

Installing from RPM produces a somewhat different directory layout than unpacking the tar.gz. After installation there is no need to set the HADOOP_HOME environment variable.

[root@server-914 usr]# whereis hadoop
hadoop: /usr/bin/hadoop /etc/hadoop /usr/etc/hadoop /usr/include/hadoop /usr/share/hadoop

The executable is /usr/bin/hadoop; the configuration files that used to sit in the conf directory are now under /etc/hadoop; /usr/etc/hadoop is a symlink to /etc/hadoop; and the cluster-management scripts, including start-all.sh, start-dfs.sh, start-mapred.sh, etc., live in /usr/sbin.

2.1. Make the Hadoop-related shell scripts under /usr/sbin executable: chmod u+x *.sh
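
Spelled out (the glob assumes your working directory is /usr/sbin):

[root@server-914 ~]# cd /usr/sbin
[root@server-914 sbin]# chmod u+x *.sh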

3. On the first node, set the JAVA_HOME and PATH environment variables in .bashrc
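
For example, append lines like the following to ~/.bashrc (a sketch; the JDK path /usr/java/jdk1.6.0_45 is an assumption and must match wherever your JDK RPM actually installed):

export JAVA_HOME=/usr/java/jdk1.6.0_45
export PATH=$JAVA_HOME/bin:$PATH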

4. Edit the following four files in /etc/hadoop (each <property> block below belongs inside the file's top-level <configuration> element):

slaves

192.168.32.65
192.168.32.69
192.168.32.71

core-site.xml

<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.32.65:9000</value>
  <description>The name of the default file system. A URI whose
  scheme and authority determine the FileSystem implementation.</description>
</property>

mapred-site.xml

<property>
  <name>mapred.job.tracker</name>
  <value>master:9001</value>
  <description>The host and port that the MapReduce job tracker runs at.</description>
</property>
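
Note that mapred-site.xml refers to the JobTracker by the hostname master, while core-site.xml uses a raw IP. For the hostname to work, every node must be able to resolve master, e.g. via an /etc/hosts entry (assuming, as the rest of the config suggests, that 192.168.32.65 is the master node):

192.168.32.65 master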

hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>3</value>
  <description>The actual number of replications can be specified when the file is created.</description>
</property>

5. Sync the key configuration files to the slave nodes: .bashrc, core-site.xml, hdfs-site.xml, and mapred-site.xml
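
A minimal sketch using scp (assumes root SSH access to both slave nodes):

for ip in 192.168.32.69 192.168.32.71; do
    scp ~/.bashrc root@$ip:~/
    scp /etc/hadoop/core-site.xml /etc/hadoop/hdfs-site.xml /etc/hadoop/mapred-site.xml root@$ip:/etc/hadoop/
done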

Original article: https://www.cnblogs.com/littlesuccess/p/3644438.html