Separating the NameNode and SecondaryNameNode onto Different Hosts

Continuing from the previous post~

1. Stop Hadoop
stop-all.sh 
 
2. Edit the masters file (vim masters)
In fact, the masters file does not decide which node is the NameNode; it decides the SecondaryNameNode (the key setting that decides the NameNode is the fs.default.name parameter in core-site.xml). So just write the IP or hostname of the node that will act as the SecondaryNameNode here (it can be any DataNode in the cluster), one per line (multiple SecondaryNameNodes can be configured).
 
I am putting it on the slave3 node (slave3 is the DataNode that was added dynamically in the previous post):
The original masters file contained "master", which is the master node's hostname as configured in the hosts file;

Replace the previous "master" with "slave3", as sketched below.
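
A minimal sketch of the edit, assuming the conf directory is /opt/hadoop/conf (the startup logs below suggest /opt/hadoop is the install path; adjust to your layout):

[root@master conf]# cd /opt/hadoop/conf
[root@master conf]# vim masters    # change the single line "master" to "slave3"
[root@master conf]# more masters
slave3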
 
3. Edit the hdfs-site.xml file; one parameter in this configuration file needs to be changed (add it if it is not already present):
<property> 
  <name>dfs.http.address</name> 
  <value>master:50070</value> 
</property>
 
"master" here indicates the host on which the NameNode process is started; the NameNode still serves its web UI there (my hostname is master). In Hadoop 1.x, dfs.http.address is the NameNode's HTTP address, which the SecondaryNameNode uses to fetch the fsimage and edits during checkpointing, so once the two daemons live on different hosts this value must point explicitly at the NameNode.
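
For context, the property sits inside the usual <configuration> element; a minimal sketch of hdfs-site.xml (properties you already have, such as dfs.replication, stay untouched):

<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.http.address</name>
    <!-- the NameNode host and web UI port, not the SecondaryNameNode -->
    <value>master:50070</value>
  </property>
  <!-- your existing properties (dfs.replication, dfs.data.dir, etc.) remain here -->
</configuration>

If the conf directory is not shared across nodes, copy the updated hdfs-site.xml to slave3 as well, since the SecondaryNameNode reads this value on its own host.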
 
 
4. With the changes done, just start up normally:
The startup messages show that the secondarynamenode is now started on the slave3 node; you can also check it with jps, as sketched below.
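
A quick way to verify, using jps from the JDK (the expected process lists are my assumption under this setup, not captured output):

[root@master conf]# jps            # master should show NameNode and JobTracker, but no SecondaryNameNode
[root@master conf]# ssh slave3 jps # slave3 should show DataNode, TaskTracker and SecondaryNameNode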
 
Note: if more than one entry is configured in the masters file, multiple secondarynamenodes will be started:

[root@master conf]# more masters
slave3
slave1

---------------------

[root@master conf]# start-all.sh
Warning: $HADOOP_HOME is deprecated.

starting namenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-namenode-master.out
slave2: starting datanode, logging to /opt/hadoop/libexec/../logs/hadoop-root-datanode-slave2.out
slave1: starting datanode, logging to /opt/hadoop/libexec/../logs/hadoop-root-datanode-slave1.out
slave3: starting datanode, logging to /opt/hadoop/libexec/../logs/hadoop-root-datanode-slave3.out
slave3: starting secondarynamenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-secondarynamenode-slave3.out
slave1: starting secondarynamenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-secondarynamenode-slave1.out
starting jobtracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-jobtracker-master.out
slave2: starting tasktracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-tasktracker-slave2.out
slave1: starting tasktracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-tasktracker-slave1.out
slave3: starting tasktracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-tasktracker-slave3.out
[root@master conf]#
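
To confirm that checkpointing actually happens on slave3, look at the SecondaryNameNode's checkpoint directory. In Hadoop 1.x this is fs.checkpoint.dir, which defaults to ${hadoop.tmp.dir}/dfs/namesecondary, i.e. /tmp/hadoop-root/dfs/namesecondary when running as root (adjust if you have overridden hadoop.tmp.dir):

[root@master conf]# ssh slave3 "ls /tmp/hadoop-root/dfs/namesecondary"    # assumes default hadoop.tmp.dir

Checkpoint data should appear there after the checkpoint interval elapses (fs.checkpoint.period, 3600 seconds by default).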

Original post: https://www.cnblogs.com/cesar2008/p/SecondNamenode.html