An explanation of HA in Hadoop 3.x

1. Official docs: https://hadoop.apache.org/docs/r3.1.2/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html

2. Hadoop 3.x supports running three or more NameNodes (the official docs recommend 3, and no more than 5); the number of JournalNodes should be odd (at least 3).

hdfs-site.xml
# Logical nameservice name:

<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
# NameNode IDs: declare the three NameNodes in hdfs-site.xml
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2,nn3</value>
</property>
# RPC address of each NameNode:

<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>machine1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>machine2.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn3</name>
  <value>machine3.example.com:8020</value>
</property>
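Taken together, the nameservice name, the NameNode IDs, and the per-ID RPC addresses let a client expand the logical URI hdfs://mycluster into a list of candidate NameNodes to try. A minimal sketch of that resolution, in plain Python with a hand-written dict standing in for the parsed hdfs-site.xml (the helper name is hypothetical, not Hadoop code):

```python
# Hypothetical sketch: resolve an HA nameservice into candidate RPC addresses,
# mirroring how an HDFS client expands hdfs://mycluster via hdfs-site.xml keys.
conf = {
    "dfs.nameservices": "mycluster",
    "dfs.ha.namenodes.mycluster": "nn1,nn2,nn3",
    "dfs.namenode.rpc-address.mycluster.nn1": "machine1.example.com:8020",
    "dfs.namenode.rpc-address.mycluster.nn2": "machine2.example.com:8020",
    "dfs.namenode.rpc-address.mycluster.nn3": "machine3.example.com:8020",
}

def resolve_nameservice(conf, ns):
    """Return {namenode_id: rpc_address} for a logical nameservice."""
    ids = [i.strip() for i in conf[f"dfs.ha.namenodes.{ns}"].split(",")]
    return {i: conf[f"dfs.namenode.rpc-address.{ns}.{i}"] for i in ids}

print(resolve_nameservice(conf, "mycluster"))
```

The failover proxy provider configured further below is what iterates over this candidate list to find the currently Active NameNode.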
# HTTP address of each NameNode (the default port was 50070 in 2.x; in 3.x it is 9870)
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>machine1.example.com:9870</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>machine2.example.com:9870</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn3</name>
  <value>machine3.example.com:9870</value>
</property>
# Shared edits URI pointing at the JournalNode quorum
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://node1.example.com:8485;node2.example.com:8485;node3.example.com:8485/mycluster</value>
</property>
# Java class clients use to contact the NameNodes and determine which one is currently Active: ConfiguredFailoverProxyProvider or RequestHedgingProxyProvider
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
# Fencing methods used during failover: shell and sshfence (sshfence logs in via passwordless SSH and kills the old Active's process)
    <!-- Configure the fencing mechanism -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <!-- sshfence requires passwordless SSH login -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/dip/.ssh/id_rsa</value>
    </property>
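dfs.ha.fencing.methods is a list of methods tried in order; failover proceeds only once some method has reported success in fencing the old Active. A toy sketch of that "first success wins" loop, in plain Python (the two method functions are stand-ins with hard-coded results, not Hadoop code):

```python
# Toy model of the fencer loop: try each configured method in order and
# stop at the first one that succeeds. These functions are stand-ins.
def sshfence(target):
    return False  # pretend SSH to the old Active was unreachable

def shell_fence(target):
    return True   # pretend the fallback shell script succeeded

def fence(target, methods):
    for m in methods:
        if m(target):
            return m.__name__  # fencing succeeded; failover may proceed
    raise RuntimeError("all fencing methods failed; refusing to fail over")

print(fence("machine1.example.com", [sshfence, shell_fence]))
```

This is why configuring a fallback (e.g. `shell(/bin/true)`) after sshfence is a common practice: if the old Active's host is down entirely, SSH can never succeed, and without a fallback the failover would stall.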
<property>
  <!-- Storage directory for the JournalNodes' edit files -->
  <name>dfs.journalnode.edits.dir</name>
  <value>/opt/cslc/hadoop-2.7.7/journalnode/data</value>
</property>

Automatic failover (also requires the ZooKeeper quorum to be configured in core-site.xml):

<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>

core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>

ZooKeeper quorum for automatic failover:

<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>
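With automatic failover enabled, each NameNode runs a ZKFailoverController (ZKFC) that competes to create an ephemeral lock znode in ZooKeeper; the winner's NameNode becomes Active, and when that ZKFC's session expires (e.g. the host crashes) the ephemeral znode vanishes and another ZKFC wins the lock. A toy in-memory model of that election, in plain Python with no real ZooKeeper involved (the FakeZooKeeper class is purely illustrative):

```python
# Toy model of the ZKFC election: whoever creates the ephemeral lock znode
# first is Active; session expiry deletes the znode so another node can win.
class FakeZooKeeper:
    def __init__(self):
        self.znodes = {}

    def create_ephemeral(self, path, owner):
        if path in self.znodes:
            return False           # lock already held by someone else
        self.znodes[path] = owner
        return True

    def session_expired(self, owner):
        # Ephemeral znodes disappear when their owner's session dies.
        self.znodes = {p: o for p, o in self.znodes.items() if o != owner}

zk = FakeZooKeeper()
LOCK = "/hadoop-ha/mycluster/ActiveStandbyElectorLock"
print(zk.create_ephemeral(LOCK, "nn1"))  # nn1 wins the lock, becomes Active
print(zk.create_ephemeral(LOCK, "nn2"))  # nn2 loses, stays Standby
zk.session_expired("nn1")                # nn1 crashes; its znode vanishes
print(zk.create_ephemeral(LOCK, "nn2"))  # nn2 now wins, becomes Active
```

Before promoting the new Active, the real ZKFC also runs the fencing methods configured above against the old one.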

Start the JournalNodes (run on each JournalNode host):

hdfs --daemon start journalnode

Format one of the NameNodes:

hdfs namenode -format

On each of the remaining NameNodes, sync the metadata from the formatted one:

hdfs namenode -bootstrapStandby

Automatic failover: initialize the HA state in ZooKeeper (run on one NameNode):

hdfs zkfc -formatZK

Original source: https://www.cnblogs.com/students/p/12016125.html