Hadoop Cluster Setup: HDFS HA Mode

Hadoop 2.x supports only a single standby NameNode.

Procedure:

<1> Set up the ZooKeeper cluster

		node02:
			cd into the ZooKeeper conf directory
			cp zoo_sample.cfg  zoo.cfg
			vi zoo.cfg
				dataDir=/var/bigdata/hadoop/zk
				server.1=node02:2888:3888
				server.2=node03:2888:3888
				server.3=node04:2888:3888
			mkdir /var/bigdata/hadoop/zk
			echo 1 >  /var/bigdata/hadoop/zk/myid 
			vi /etc/profile
				export ZOOKEEPER_HOME=/opt/bigdata/zookeeper-3.4.6
				export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin
			. /etc/profile
			cd /opt/bigdata
			scp -r ./zookeeper-3.4.6  node03:`pwd`
			scp -r ./zookeeper-3.4.6  node04:`pwd`
		node03:
			mkdir /var/bigdata/hadoop/zk
			echo 2 >  /var/bigdata/hadoop/zk/myid
			*set the environment variables (same as on node02)
			. /etc/profile
		node04:
			mkdir /var/bigdata/hadoop/zk
			echo 3 >  /var/bigdata/hadoop/zk/myid
			*set the environment variables (same as on node02)
			. /etc/profile

		node02~node04:
			zkServer.sh start
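
		Once all three servers are started, it is worth verifying that the ensemble actually elected a leader. A minimal check, assuming the default client port 2181 and that nc is installed:

		node02~node04:
			# expect "Mode: leader" on exactly one node and
			# "Mode: follower" on the other two
			zkServer.sh status
			# the "ruok" four-letter command answers "imok" when the server is healthy
			echo ruok | nc node02 2181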

<2> Modify the Hadoop configuration files

Relevant configuration:

	core-site.xml
		<property>
		  <name>fs.defaultFS</name>
		  <value>hdfs://mycluster</value>
		</property>

		 <property>
		   <name>ha.zookeeper.quorum</name>
		   <value>node02:2181,node03:2181,node04:2181</value>
		 </property>
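
	With fs.defaultFS set to the logical nameservice instead of a fixed host:port, clients never need to know which NameNode is currently active. A quick sanity check once the cluster is up (the paths are just examples):

		hdfs dfs -ls hdfs://mycluster/
		# equivalently, since mycluster is the default FS:
		hdfs dfs -ls /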

	hdfs-site.xml
		# The following is the one-to-many mapping from the logical nameservice to physical nodes
		<property>
		  <name>dfs.nameservices</name>
		  <value>mycluster</value>
		</property>
		<property>
		  <name>dfs.ha.namenodes.mycluster</name>
		  <value>nn1,nn2</value>
		</property>
		<property>
		  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
		  <value>node01:8020</value>
		</property>
		<property>
		  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
		  <value>node02:8020</value>
		</property>
		<property>
		  <name>dfs.namenode.http-address.mycluster.nn1</name>
		  <value>node01:50070</value>
		</property>
		<property>
		  <name>dfs.namenode.http-address.mycluster.nn2</name>
		  <value>node02:50070</value>
		</property>

		# The following specifies where the JournalNodes run and which disk their data is stored on
		<property>
		  <name>dfs.namenode.shared.edits.dir</name>
		  <value>qjournal://node01:8485;node02:8485;node03:8485/mycluster</value>
		</property>
		<property>
		  <name>dfs.journalnode.edits.dir</name>
		  <value>/var/bigdata/hadoop/ha/dfs/jn</value>
		</property>
		
		# Proxy class and fencing method for HA role switching; we use passwordless SSH
		<property>
		  <name>dfs.client.failover.proxy.provider.mycluster</name>
		  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
		</property>
		<property>
		  <name>dfs.ha.fencing.methods</name>
		  <value>sshfence</value>
		</property>
		<property>
		  <name>dfs.ha.fencing.ssh.private-key-files</name>
		  <value>/root/.ssh/id_dsa</value>
		</property>
		
		# Enable automatic failover: start ZKFC
		 <property>
		   <name>dfs.ha.automatic-failover.enabled</name>
		   <value>true</value>
		 </property>
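
Before starting anything, two prerequisites are worth spelling out: sshfence needs the private key configured above to actually exist and be authorized on the peer NameNode host, and every node must carry the same configuration files. A rough sketch, assuming root and a standard $HADOOP_HOME/etc/hadoop layout (adjust paths to your install; newer OpenSSH versions may reject DSA keys, in which case generate an RSA key and update dfs.ha.fencing.ssh.private-key-files accordingly):

		node01 (repeat the key steps on node02):
			# generate a key matching dfs.ha.fencing.ssh.private-key-files
			ssh-keygen -t dsa -P '' -f /root/.ssh/id_dsa
			# authorize it on the other NameNode host
			ssh-copy-id -i /root/.ssh/id_dsa.pub root@node02
			# push the edited config to the rest of the cluster
			cd $HADOOP_HOME/etc/hadoop
			scp core-site.xml hdfs-site.xml node02:`pwd`
			scp core-site.xml hdfs-site.xml node03:`pwd`
			scp core-site.xml hdfs-site.xml node04:`pwd`
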
<3> Startup order

		1) Start the JournalNodes first:
			hadoop-daemon.sh start journalnode
		2) Pick one NameNode and format it (only on the very first setup, never again):
			hdfs namenode -format
		3) Start that freshly formatted NameNode so the other one can sync from it:
			hadoop-daemon.sh start namenode
		4) On the other NameNode machine, pull over the formatted metadata:
			hdfs namenode -bootstrapStandby
		5) Format ZK (only on the very first setup, never again; don't forget to start the ZK cluster beforehand!!!):
			hdfs zkfc -formatZK
		   If ZKFC dies later, restart the process manually:
			hadoop-daemon.sh start zkfc
		6) Start HDFS:
			start-dfs.sh
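
		After start-dfs.sh, a quick way to confirm that HA came up healthy, using the nn1/nn2 IDs from hdfs-site.xml:

			# one should report "active", the other "standby"
			hdfs haadmin -getServiceState nn1
			hdfs haadmin -getServiceState nn2
			# jps on node01/node02 should list NameNode and DFSZKFailoverController
			jps
			# to exercise failover, stop the NameNode on the active node and re-check:
			hadoop-daemon.sh stop namenode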

 
Original source: https://www.cnblogs.com/rabbit624/p/14454259.html