Hadoop2.4.1 64-Bit QJM HA and YARN HA + Zookeeper-3.4.6 + Hbase-0.98.8-hadoop2-bin HA Install
(Hadoop 2.4.1 cluster with QJM-based HDFS NameNode HA and YARN ResourceManager HA + Zookeeper 3.4.6 cluster + Hbase 0.98.8 cluster with Master HA)



HostName    IP              Software                  Processes
h1          192.168.1.31    Hadoop,Hbase              NameNode(Active),DFSZKFailoverController,HMaster(Active)
h2          192.168.1.32    Hadoop,Hbase              NameNode(Standby),DFSZKFailoverController,HMaster(Backup)
h3          192.168.1.33    Hadoop,Hbase              ResourceManager(Active),HRegionServer
h4          192.168.1.34    Hadoop,Hbase              ResourceManager(Standby),HRegionServer
h5          192.168.1.35    Hadoop,Zookeeper,Hbase    QuorumPeerMain(Follower),JournalNode,DataNode,NodeManager,HRegionServer
h6          192.168.1.36    Hadoop,Zookeeper,Hbase    QuorumPeerMain(Leader),JournalNode,DataNode,NodeManager,HRegionServer
h7          192.168.1.37    Hadoop,Zookeeper,Hbase    QuorumPeerMain(Follower),JournalNode,DataNode,NodeManager,HRegionServer


##Change the hostname on h1,h2,h3,h4,h5,h6,h7; repeat the following steps on every machine
vi /etc/sysconfig/network
HOSTNAME=h1        ###use each machine's own name (h2 on h2, h3 on h3, and so on)
Press Esc, then Shift+ZZ to save and exit
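The change above takes effect on the next reboot; to apply the new hostname immediately as well (this assumes CentOS 6, which the service/chkconfig commands below also imply), run on each machine:
hostname h1        ###again, substitute each machine's own name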


##Update the host mapping file on h1,h2,h3,h4,h5,h6,h7; repeat the following steps on every machine
vi /etc/hosts
192.168.1.31 h1
192.168.1.32 h2
192.168.1.33 h3
192.168.1.34 h4
192.168.1.35 h5
192.168.1.36 h6
192.168.1.37 h7
Press Esc, then Shift+ZZ to save and exit


##Disable the firewall on h1,h2,h3,h4,h5,h6,h7; repeat the following steps on every machine
service iptables stop
chkconfig iptables off
chkconfig --list | grep iptables
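After chkconfig iptables off, the last check should report every runlevel disabled, roughly like this (the exact output format is from memory of CentOS 6, so treat it as a sketch):
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off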


##Install the 64-bit JDK 6u45 on h1
cd /usr/local/
chmod u+x jdk-6u45-linux-x64.bin
./jdk-6u45-linux-x64.bin
mv jdk1.6.0_45/ jdk


##Configure JAVA_HOME on h1
vi /etc/profile
export JAVA_HOME=/usr/local/jdk
export PATH=.:$JAVA_HOME/bin:$PATH
Press Esc, then Shift+ZZ to save and exit
source /etc/profile


##Verify the JDK installation on h1
java -version
java version "1.6.0_45"
Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.45-b01, mixed mode)


##Copy the configured JDK from h1 to h2,h3,h4,h5,h6,h7; also copy /etc/profile from h1 to h2,h3,h4,h5,h6,h7, then run source /etc/profile on each of h2,h3,h4,h5,h6,h7
[root@h1 local]# scp -r jdk root@h2:/usr/local/
[root@h1 local]# scp -r jdk root@h3:/usr/local/
[root@h1 local]# scp -r jdk root@h4:/usr/local/
[root@h1 local]# scp -r jdk root@h5:/usr/local/
[root@h1 local]# scp -r jdk root@h6:/usr/local/
[root@h1 local]# scp -r jdk root@h7:/usr/local/
[root@h1 local]# scp /etc/profile root@h2:/etc/
[root@h1 local]# scp /etc/profile root@h3:/etc/
[root@h1 local]# scp /etc/profile root@h4:/etc/
[root@h1 local]# scp /etc/profile root@h5:/etc/
[root@h1 local]# scp /etc/profile root@h6:/etc/
[root@h1 local]# scp /etc/profile root@h7:/etc/
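The twelve scp commands above can also be collapsed into one loop (a sketch; it assumes the same paths on every node and will prompt for each password until key-based login is configured below):
for i in h2 h3 h4 h5 h6 h7; do scp -r jdk root@$i:/usr/local/; scp /etc/profile root@$i:/etc/; done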


##Set up passwordless SSH login with RSA key pairs (each node's public key is collected into h1's authorized_keys, which is then distributed back to every node)
[root@h1 .ssh]# ssh-keygen -t rsa        ###press Enter three times
[root@h2 .ssh]# ssh-keygen -t rsa        ###press Enter three times
[root@h3 .ssh]# ssh-keygen -t rsa        ###press Enter three times
[root@h4 .ssh]# ssh-keygen -t rsa        ###press Enter three times
[root@h5 .ssh]# ssh-keygen -t rsa        ###press Enter three times
[root@h6 .ssh]# ssh-keygen -t rsa        ###press Enter three times
[root@h7 .ssh]# ssh-keygen -t rsa        ###press Enter three times
[root@h1 .ssh]# ssh-copy-id root@h1        ###type yes, then enter the password
[root@h2 .ssh]# ssh-copy-id root@h1        ###type yes, then enter the password
[root@h3 .ssh]# ssh-copy-id root@h1        ###type yes, then enter the password
[root@h4 .ssh]# ssh-copy-id root@h1        ###type yes, then enter the password
[root@h5 .ssh]# ssh-copy-id root@h1        ###type yes, then enter the password
[root@h6 .ssh]# ssh-copy-id root@h1        ###type yes, then enter the password
[root@h7 .ssh]# ssh-copy-id root@h1        ###type yes, then enter the password
[root@h1 .ssh]# scp authorized_keys root@h2:/root/.ssh/        ###type yes, then enter the password
[root@h1 .ssh]# scp authorized_keys root@h3:/root/.ssh/        ###type yes, then enter the password
[root@h1 .ssh]# scp authorized_keys root@h4:/root/.ssh/        ###type yes, then enter the password
[root@h1 .ssh]# scp authorized_keys root@h5:/root/.ssh/        ###type yes, then enter the password
[root@h1 .ssh]# scp authorized_keys root@h6:/root/.ssh/        ###type yes, then enter the password
[root@h1 .ssh]# scp authorized_keys root@h7:/root/.ssh/        ###type yes, then enter the password


##Verify passwordless SSH login
Log in from each machine to every other machine; the first connection to a new host asks you to confirm its key (type yes)
ssh h1
ssh h2 ...
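A quick check from any one node (a sketch; hostname simply proves which machine the login landed on, so each name should echo back without a password prompt):
for i in h1 h2 h3 h4 h5 h6 h7; do ssh $i hostname; done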


##Install the pre-built 64-bit Hadoop 2.4.1 on h1; once configured it will be copied to h2,h3,h4,h5,h6,h7
cd /usr/local/
tar -zxvf hadoop-2.4.1-x64.tar.gz
mv hadoop-2.4.1 hadoop


##Edit the Hadoop configuration files (hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, slaves)
1:hadoop-env.sh
    export JAVA_HOME=/usr/local/jdk
    
2:core-site.xml
    <configuration>
        <!-- Set the HDFS nameservice name to ns1 -->
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://ns1</value>
        </property>
        <!-- Hadoop temp directory -->
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/usr/local/hadoop/tmp</value>
        </property>
        <!-- Zookeeper quorum addresses -->
        <property>
            <name>ha.zookeeper.quorum</name>
            <value>h5:2181,h6:2181,h7:2181</value>
        </property>
    </configuration>
    
3:hdfs-site.xml (for QJM-based HDFS HA see http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html)
    <configuration>
        <!-- HDFS nameservice name; must match fs.defaultFS in core-site.xml -->
        <property>
            <name>dfs.nameservices</name>
            <value>ns1</value>
        </property>
        <!-- ns1 has two NameNodes: nn1 and nn2 -->
        <property>
            <name>dfs.ha.namenodes.ns1</name>
            <value>nn1,nn2</value>
        </property>
        <!-- RPC address of nn1 -->
        <property>
            <name>dfs.namenode.rpc-address.ns1.nn1</name>
            <value>h1:9000</value>
        </property>
        <!-- HTTP address of nn1 -->
        <property>
            <name>dfs.namenode.http-address.ns1.nn1</name>
            <value>h1:50070</value>
        </property>
        <!-- RPC address of nn2 -->
        <property>
            <name>dfs.namenode.rpc-address.ns1.nn2</name>
            <value>h2:9000</value>
        </property>
        <!-- HTTP address of nn2 -->
        <property>
            <name>dfs.namenode.http-address.ns1.nn2</name>
            <value>h2:50070</value>
        </property>
        <!-- Where the NameNodes' shared edit log is stored on the JournalNodes -->
        <property>
            <name>dfs.namenode.shared.edits.dir</name>
            <value>qjournal://h5:8485;h6:8485;h7:8485/ns1</value>
        </property>
        <!-- Local directory where each JournalNode stores its data -->
        <property>
            <name>dfs.journalnode.edits.dir</name>
            <value>/usr/local/hadoop/journal</value>
        </property>
        <!-- Enable automatic NameNode failover -->
        <property>
            <name>dfs.ha.automatic-failover.enabled</name>
            <value>true</value>
        </property>
        <!-- Failover proxy provider used by clients -->
        <property>
            <name>dfs.client.failover.proxy.provider.ns1</name>
            <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <!-- Fencing methods; list multiple methods on separate lines, one method per line -->
        <property>
            <name>dfs.ha.fencing.methods</name>
            <value>
                sshfence
                shell(/bin/true)
            </value>
        </property>
        <!-- sshfence requires passwordless SSH -->
        <property>
            <name>dfs.ha.fencing.ssh.private-key-files</name>
            <value>/root/.ssh/id_rsa</value>
        </property>
        <!-- Connection timeout for sshfence -->
        <property>
            <name>dfs.ha.fencing.ssh.connect-timeout</name>
            <value>30000</value>
        </property>
    </configuration>

4:mapred-site.xml(mv mapred-site.xml.template mapred-site.xml)
    <configuration>
        <!-- Run MapReduce on the YARN framework -->
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
    </configuration>
    
5:yarn-site.xml (for YARN ResourceManager HA see http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html)
    <configuration>
        <!-- Enable ResourceManager HA -->
        <property>
           <name>yarn.resourcemanager.ha.enabled</name>
           <value>true</value>
        </property>
        <!-- Cluster id of the RM pair -->
        <property>
           <name>yarn.resourcemanager.cluster-id</name>
           <value>yrc</value>
        </property>
        <!-- Logical ids of the two RMs -->
        <property>
           <name>yarn.resourcemanager.ha.rm-ids</name>
           <value>rm1,rm2</value>
        </property>
        <!-- Hostname of each RM -->
        <property>
           <name>yarn.resourcemanager.hostname.rm1</name>
           <value>h3</value>
        </property>
        <property>
           <name>yarn.resourcemanager.hostname.rm2</name>
           <value>h4</value>
        </property>
        <!-- Zookeeper quorum addresses -->
        <property>
           <name>yarn.resourcemanager.zk-address</name>
           <value>h5:2181,h6:2181,h7:2181</value>
        </property>
        <property>
           <name>yarn.nodemanager.aux-services</name>
           <value>mapreduce_shuffle</value>
        </property>
    </configuration>
    
6:slaves
    h5
    h6
    h7

    

##Configure HADOOP_HOME on h1
vi /etc/profile
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
Press Esc, then Shift+ZZ to save and exit
source /etc/profile

    
##Copy the Hadoop configured above from h1 to h2,h3,h4,h5,h6,h7; also copy /etc/profile from h1 to h2,h3,h4, then run source /etc/profile on each of h2,h3,h4
cd /usr/local/
[root@h1 local]# scp -r hadoop root@h2:/usr/local/
[root@h1 local]# scp -r hadoop root@h3:/usr/local/
[root@h1 local]# scp -r hadoop root@h4:/usr/local/
[root@h1 local]# scp -r hadoop root@h5:/usr/local/
[root@h1 local]# scp -r hadoop root@h6:/usr/local/
[root@h1 local]# scp -r hadoop root@h7:/usr/local/
[root@h1 local]# scp /etc/profile root@h2:/etc/
[root@h1 local]# scp /etc/profile root@h3:/etc/
[root@h1 local]# scp /etc/profile root@h4:/etc/


##Install Zookeeper on h5
cd /usr/local/
tar -zxvf zookeeper-3.4.6.tar.gz
mv zookeeper-3.4.6 zk


##Edit the Zookeeper configuration file (rename zoo_sample.cfg to zoo.cfg)
cd /usr/local/zk/conf
mv zoo_sample.cfg zoo.cfg
1:zoo.cfg
    dataDir=/usr/local/zk/data
    server.1=h5:2888:3888
    server.2=h6:2888:3888
    server.3=h7:2888:3888
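zoo.cfg points dataDir at /usr/local/zk/data, and the copy step below expects /usr/local/zk/data/myid to hold 2 on h6 and 3 on h7, so before the first start h5 needs that directory and a myid of 1; a minimal sketch:
mkdir -p /usr/local/zk/data
echo 1 > /usr/local/zk/data/myid        ###matches server.1=h5 in zoo.cfg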


##Configure ZOOKEEPER_HOME on h5
vi /etc/profile
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export ZOOKEEPER_HOME=/usr/local/zk
export PATH=.:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
Press Esc, then Shift+ZZ to save and exit
source /etc/profile


##Copy the configured Zookeeper from h5 to h6 and h7; also copy /etc/profile from h5 to h6 and h7, then run source /etc/profile on each of h6 and h7
[root@h5 local]# scp -r zk root@h6:/usr/local/        ###then change /usr/local/zk/data/myid on h6 to 2
[root@h5 local]# scp -r zk root@h7:/usr/local/        ###then change /usr/local/zk/data/myid on h7 to 3
[root@h5 local]# scp /etc/profile root@h6:/etc/
[root@h5 local]# scp /etc/profile root@h7:/etc/


##Start Zookeeper on h5, h6 and h7, then verify that it started successfully
[root@h5 ~]# zkServer.sh start
[root@h6 ~]# zkServer.sh start
[root@h7 ~]# zkServer.sh start
[root@h5 ~]# jps    ###a QuorumPeerMain process is now running
[root@h6 ~]# jps    ###a QuorumPeerMain process is now running
[root@h7 ~]# jps    ###a QuorumPeerMain process is now running
[root@h5 ~]# zkServer.sh status        ###the election made h5 a follower
[root@h6 ~]# zkServer.sh status        ###the election made h6 the leader
[root@h7 ~]# zkServer.sh status        ###the election made h7 a follower
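For reference, zkServer.sh status reports the role on a Mode line, so the output on each node should look roughly like this (which node wins the election can vary):
JMX enabled by default
Using config: /usr/local/zk/bin/../conf/zoo.cfg
Mode: follower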


**Preparation for starting the Hadoop cluster; follow the steps below strictly in order


##Why the Zookeeper cluster must be started first
In Hadoop 2, HDFS (NameNode) HA depends on the Zookeeper cluster
In Hadoop 2, YARN (ResourceManager) HA depends on the Zookeeper cluster


##Start the Hadoop JournalNodes from h1 (hadoop-daemons.sh, with the trailing s, runs the daemon on every host listed in slaves, hence the h5/h6/h7 output below)
cd /usr/local/hadoop/sbin
[root@h1 sbin]# hadoop-daemons.sh start journalnode
h7: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-root-journalnode-h7.out
h5: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-root-journalnode-h5.out
h6: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-root-journalnode-h6.out
[root@h1 sbin]# for i in h1 h2 h3 h4 h5 h6 h7; do echo $i; ssh $i `which jps`; done        ###h5, h6 and h7 each show an extra JournalNode process


##Format HDFS on h1
[root@h1 bin]# hdfs namenode -format    ###in Hadoop 1.x this was hadoop namenode -format; in Hadoop 2.x it is hdfs namenode -format. Output like the following means the format succeeded
15/01/08 03:24:29 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = h1/192.168.1.31
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.4.1
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:... (full classpath omitted)
STARTUP_MSG:   build = Unknown -r Unknown; compiled by 'root' on 2014-07-31T07:06Z
STARTUP_MSG:   java = 1.6.0_45
************************************************************/
15/01/08 03:24:29 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/01/08 03:24:29 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-df4917d3-0436-446c-ad34-1ed7842d0ad5
15/01/08 03:24:31 INFO namenode.FSNamesystem: fsLock is fair:true
15/01/08 03:24:31 INFO namenode.HostFileManager: read includes:
HostSet(
)
15/01/08 03:24:31 INFO namenode.HostFileManager: read excludes:
HostSet(
)
15/01/08 03:24:31 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/01/08 03:24:31 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/01/08 03:24:31 INFO util.GSet: Computing capacity for map BlocksMap
15/01/08 03:24:31 INFO util.GSet: VM type       = 64-bit
15/01/08 03:24:31 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
15/01/08 03:24:31 INFO util.GSet: capacity      = 2^21 = 2097152 entries
15/01/08 03:24:31 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/01/08 03:24:31 INFO blockmanagement.BlockManager: defaultReplication         = 3
15/01/08 03:24:31 INFO blockmanagement.BlockManager: maxReplication             = 512
15/01/08 03:24:31 INFO blockmanagement.BlockManager: minReplication             = 1
15/01/08 03:24:31 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
15/01/08 03:24:31 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
15/01/08 03:24:31 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/01/08 03:24:31 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
15/01/08 03:24:31 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
15/01/08 03:24:31 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
15/01/08 03:24:31 INFO namenode.FSNamesystem: supergroup          = supergroup
15/01/08 03:24:31 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/01/08 03:24:31 INFO namenode.FSNamesystem: Determined nameservice ID: ns1
15/01/08 03:24:31 INFO namenode.FSNamesystem: HA Enabled: true
15/01/08 03:24:31 INFO namenode.FSNamesystem: Append Enabled: true
15/01/08 03:24:32 INFO util.GSet: Computing capacity for map INodeMap
15/01/08 03:24:32 INFO util.GSet: VM type       = 64-bit
15/01/08 03:24:32 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
15/01/08 03:24:32 INFO util.GSet: capacity      = 2^20 = 1048576 entries
15/01/08 03:24:32 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/01/08 03:24:32 INFO util.GSet: Computing capacity for map cachedBlocks
15/01/08 03:24:32 INFO util.GSet: VM type       = 64-bit
15/01/08 03:24:32 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
15/01/08 03:24:32 INFO util.GSet: capacity      = 2^18 = 262144 entries
15/01/08 03:24:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/01/08 03:24:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/01/08 03:24:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
15/01/08 03:24:32 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/01/08 03:24:32 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/01/08 03:24:32 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/01/08 03:24:32 INFO util.GSet: VM type       = 64-bit
15/01/08 03:24:32 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
15/01/08 03:24:32 INFO util.GSet: capacity      = 2^15 = 32768 entries
15/01/08 03:24:32 INFO namenode.AclConfigFlag: ACLs enabled? false
15/01/08 03:24:34 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1222079376-192.168.1.31-1420658674703
15/01/08 03:24:34 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.
15/01/08 03:24:35 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/01/08 03:24:35 INFO util.ExitUtil: Exiting with status 0
15/01/08 03:24:35 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at h1/192.168.1.31
************************************************************/
    
    
##Copy the /usr/local/hadoop/tmp directory generated by the format from h1 to h2; HDFS HA requires the metadata on the Active and Standby NameNodes to be identical
[root@h1 hadoop]# scp -r tmp/ root@h2:/usr/local/hadoop/
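Copying tmp/ works here because hadoop.tmp.dir holds the name directory; Hadoop 2.x also ships a dedicated command for initializing the standby from the active's metadata, which should be interchangeable with the scp above (run on h2 while the JournalNodes are up):
[root@h2 ~]# hdfs namenode -bootstrapStandby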
    
    
##Format ZKFC (initialize the HA state in Zookeeper) on h1
[root@h1 hadoop]# hdfs zkfc -formatZK
15/01/08 03:27:21 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at h1/192.168.1.31:9000
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Client environment:host.name=h1
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_45
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/local/jdk/jre
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/usr/local/hadoop/etc/hadoop:... (full classpath omitted)
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/hadoop/lib/native
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-431.el6.x86_64
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Client environment:user.name=root
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Client environment:user.dir=/usr/local/hadoop
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=h5:2181,h6:2181,h7:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@2a0ab444
15/01/08 03:27:21 INFO zookeeper.ClientCnxn: Opening socket connection to server h5/192.168.1.35:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
15/01/08 03:27:21 INFO zookeeper.ClientCnxn: Socket connection established to h5/192.168.1.35:2181, initiating session
15/01/08 03:27:21 INFO zookeeper.ClientCnxn: Session establishment complete on server h5/192.168.1.35:2181, sessionid = 0x14ac5c556220000, negotiated timeout = 5000
15/01/08 03:27:21 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/ns1 in ZK.
15/01/08 03:27:21 INFO zookeeper.ZooKeeper: Session: 0x14ac5c556220000 closed
15/01/08 03:27:21 WARN ha.ActiveStandbyElector: Ignoring stale result from old client with sessionId 0x14ac5c556220000
15/01/08 03:27:21 INFO zookeeper.ClientCnxn: EventThread shut down
    
    
##Start HDFS from h1
[root@h1 sbin]# start-dfs.sh
Starting namenodes on [h1 h2]
h2: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-h2.out
h1: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-h1.out
h7: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-h7.out
h6: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-h6.out
h5: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-h5.out
Starting journal nodes [h5 h6 h7]
h6: journalnode running as process 2045. Stop it first.
h7: journalnode running as process 23674. Stop it first.
h5: journalnode running as process 2054. Stop it first.
Starting ZK Failover Controllers on NN hosts [h1 h2]
h1: starting zkfc, logging to /usr/local/hadoop/logs/hadoop-root-zkfc-h1.out
h2: starting zkfc, logging to /usr/local/hadoop/logs/hadoop-root-zkfc-h2.out
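The NameNode HA state can also be checked from the command line (nn1 and nn2 are the ids defined in hdfs-site.xml):
[root@h1 sbin]# hdfs haadmin -getServiceState nn1        ###should print active
[root@h1 sbin]# hdfs haadmin -getServiceState nn2        ###should print standby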


##Start YARN on h3 (Active)
[root@h3 sbin]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-h3.out
h7: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-h7.out
h6: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-h6.out
h5: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-h5.out
    
    
##Start YARN on h4 (Standby)
cd /usr/local/hadoop/sbin/
[root@h4 sbin]# yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-h4.out
    
    
##Verify on h1; output like the following means the cluster is configured correctly and has started successfully
[root@h1 sbin]# for i in h1 h2 h3 h4 h5 h6 h7; do echo $i; ssh $i `which jps`; done
h1
24329 DFSZKFailoverController
24052 NameNode
24576 Jps
h2
23733 NameNode
23830 DFSZKFailoverController
23996 Jps
h3
24139 Jps
24027 ResourceManager
h4
24002 Jps
23710 ResourceManager
h5
23823 DataNode
24001 NodeManager
24127 Jps
1901 QuorumPeerMain
2054 JournalNode
h6
23991 NodeManager
1886 QuorumPeerMain
23813 DataNode
2045 JournalNode
24117 Jps
h7
23674 JournalNode
24107 Jps
1882 QuorumPeerMain
23803 DataNode
23981 NodeManager
    
    
##Verify from a browser on Windows; the following results mean the cluster is configured correctly and has started successfully
Edit the hosts file in C:\Windows\System32\drivers\etc to map the Hadoop cluster hostnames (if it cannot be saved, right-click the file, open Security, and grant your user Full control over it)
192.168.1.31 h1
192.168.1.32 h2
192.168.1.33 h3
192.168.1.34 h4
192.168.1.35 h5
192.168.1.36 h6
192.168.1.37 h7
Open h1:50070 in the browser        ###the first line under the navigation bar shows Overview 'h1:9000' (active); HDFS HA is working
Open h2:50070 in the browser        ###the first line under the navigation bar shows Overview 'h2:9000' (standby); HDFS HA is working
Open h3:8088 in the browser         ###the Active Nodes field in the cluster metrics table shows 3; clicking the 3 lists the NodeManagers as follows
/default-rack     RUNNING     h6:42552     h6:8042     8-Jan-2015 04:41:02         0     0 B     8 GB     2.4.1
/default-rack     RUNNING     h7:35883     h7:8042     8-Jan-2015 04:41:03         0     0 B     8 GB     2.4.1
/default-rack     RUNNING     h5:37064     h5:8042     8-Jan-2015 04:41:02         0     0 B     8 GB     2.4.1
Open h4:8088 in the browser         ###shows "This is standby RM. Redirecting to the current active RM: http://h3:8088/cluster/nodes" and then redirects to h3:8088 after a moment; YARN HA is working

On the command line, run: yarn rmadmin -getServiceState rm1 (query an RM's state)    |   yarn rmadmin -transitionToStandby rm1 (manually switch an RM's state)
Once all the checks above pass, the Hadoop cluster is configured correctly and has started successfully
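An optional end-to-end failover test, not part of the original steps (the pid below is a placeholder; use whatever jps reports for NameNode on h1):
[root@h1 ~]# jps                                        ###note the NameNode pid
[root@h1 ~]# kill -9 <NameNode pid>                     ###simulate a crash of the active NameNode
[root@h1 ~]# hdfs haadmin -getServiceState nn2          ###should now report active; restart the killed NameNode with hadoop-daemon.sh start namenode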
    
    
##Install hbase-0.98.8-hadoop2-bin.tar.gz on h1 (the Zookeeper cluster must already be set up)
cd /usr/local/
tar -zxvf hbase-0.98.8-hadoop2-bin.tar.gz
mv hbase-0.98.8-hadoop2 hbase


##Configure HBASE_HOME on h1
vi /etc/profile
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HBASE_HOME=/usr/local/hbase
export PATH=.:$HBASE_HOME/bin:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
Press Esc, then Shift+ZZ to save and exit
source /etc/profile


##Edit the Hbase configuration files (hbase-env.sh, hbase-site.xml, regionservers; Hadoop's hdfs-site.xml and core-site.xml must also be placed in hbase/conf, see the copy commands after the regionservers list)
cd /usr/local/hbase/conf
1:hbase-env.sh
    export JAVA_HOME=/usr/local/jdk        ###the JDK Hbase depends on
    export HBASE_MANAGES_ZK=false        ###tell Hbase to use the external Zookeeper cluster instead of managing its own

2:hbase-site.xml
    <configuration>
        <!-- Where Hbase stores its data on HDFS (ns1 is the HDFS nameservice) -->
        <property>
                <name>hbase.rootdir</name>
                <value>hdfs://ns1/hbase</value>
        </property>
        <!-- Run Hbase in fully distributed mode -->
        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>
        <!-- Zookeeper quorum; separate multiple hosts with commas -->
        <property>
                <name>hbase.zookeeper.quorum</name>
                <value>h5:2181,h6:2181,h7:2181</value>
        </property>
    </configuration>
    
3:regionservers
    h3
    h4
    h5
    h6
    h7
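The copy mentioned in the header above, so that Hbase can resolve the ns1 nameservice in hbase.rootdir (paths assume the layout used throughout this guide):
cp /usr/local/hadoop/etc/hadoop/hdfs-site.xml /usr/local/hadoop/etc/hadoop/core-site.xml /usr/local/hbase/conf/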
    
    
##Copy the configured Hbase from h1 to h2,h3,h4,h5,h6,h7; also copy /etc/profile from h1 to h2, then run source /etc/profile on h2
[root@h1 local]# scp -r hbase root@h2:/usr/local/
[root@h1 local]# scp -r hbase root@h3:/usr/local/
[root@h1 local]# scp -r hbase root@h4:/usr/local/
[root@h1 local]# scp -r hbase root@h5:/usr/local/
[root@h1 local]# scp -r hbase root@h6:/usr/local/
[root@h1 local]# scp -r hbase root@h7:/usr/local/
[root@h1 local]# scp /etc/profile root@h2:/etc/


##Synchronize the time on h1,h2,h3,h4,h5,h6,h7 (the Hbase master rejects regionservers whose clock skew is too large)
for i in h1 h2 h3 h4 h5 h6 h7; do echo $i; ssh $i "date -s '2015-01-01 11:11:11'"; done
for i in h1 h2 h3 h4 h5 h6 h7; do echo $i; ssh $i date; done    
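Setting a hard-coded time like this drifts quickly; if the nodes can reach an NTP server, a one-shot sync is more accurate (a sketch assuming the ntpdate package is installed and pool.ntp.org is reachable; substitute any reachable server):
for i in h1 h2 h3 h4 h5 h6 h7; do echo $i; ssh $i ntpdate pool.ntp.org; done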
    
    
##Start the Hbase cluster from h1
[root@h1 bin]# start-hbase.sh        ###starts the Hbase cluster; h1 runs the active HMaster process


##Start the Hbase backup master on h2
[root@h2 bin]# hbase-daemon.sh start master        ###starts a backup HMaster process; h2 is the backup master
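The backup master can also be confirmed from a browser: open h1:60010 (60010 is the default master info port in Hbase 0.98; adjust if hbase.master.info.port was changed) and h2 should be listed under Backup Masters.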
    
    
##Output like the following means the Hadoop, Zookeeper and Hbase clusters have all started normally
[root@h1 sbin]# for i in h1 h2 h3 h4 h5 h6 h7; do echo $i; ssh $i `which jps`; done
h1
2075 NameNode
3188 Jps
2689 HMaster
2332 DFSZKFailoverController
h2
1973 NameNode
2049 DFSZKFailoverController
2264 HMaster
2604 Jps
h3
1972 ResourceManager
2596 Jps
2326 HRegionServer
h4
1990 ResourceManager
2380 Jps
2104 HRegionServer
h5
2200 NodeManager
2417 HRegionServer
2715 Jps
1890 QuorumPeerMain
1983 DataNode
2046 JournalNode
h6
2420 HRegionServer
1986 DataNode
2203 NodeManager
1887 QuorumPeerMain
2052 JournalNode
2729 Jps
h7
1987 DataNode
2421 HRegionServer
2204 NodeManager
2053 JournalNode
2723 Jps
1894 QuorumPeerMain    
    
    
    
##At this point the Hadoop 2.4.1 cluster (QJM-based HDFS NameNode HA, YARN ResourceManager HA) + Zookeeper 3.4.6 cluster + Hbase 0.98.8 cluster (Master HA) is configured correctly and running.


Original article: https://www.cnblogs.com/mengyao/p/4214180.html