Installing and Configuring Hadoop 2.1.0 on Ubuntu

Before installing Hadoop 2.1.0 on Ubuntu, the following software must be in place:

|- JDK 1.6 or later

|- SSH (Secure Shell)

  Why these two are required:

  1. Hadoop is written in Java, so the JDK is needed both to build Hadoop and to run MapReduce.

  2. Hadoop uses SSH to start the daemons on every host listed in its slaves file, so SSH must be installed even for a pseudo-distributed setup (Hadoop does not distinguish between a full cluster and pseudo-distributed mode). In pseudo-distributed mode Hadoop follows exactly the same procedure as on a cluster, starting the processes on the hosts recorded in conf/slaves one after another; the only difference is that the slave is localhost (the machine itself), so SSH is still required.
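For reference, in a pseudo-distributed setup that file contains nothing but localhost. A minimal sketch (in the Hadoop 1.x layout the file is conf/slaves, in the 2.x layout it is etc/hadoop/slaves):

$ cat etc/hadoop/slaves
localhost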

|- Maven 3.0 or later

Installation via the command line

1. sudo apt-get -y install maven build-essential autoconf automake libtool cmake zlib1g-dev pkg-config libssl-dev

   (On Linux these tools are needed in order to build the native libraries.)

2. Verify the installation:

    mvn -version

  What Maven does and how it works:

    As a project grows large and complex, it is best to use a build tool to automate the build. A typical Java project, for example, goes through the same steps on every build: compiling the Java sources, packaging the class files into a .jar, generating Javadoc, and so on. A build tool can automate all of these steps. The best-known build tool is make, but make is tied to the underlying operating system. The Java world instead chose Ant, a cross-platform tool that replaces the awkward Makefile syntax with XML.
    Maven, a build tool from the Apache Software Foundation, is an even more likely choice: it not only offers an out-of-the-box way to handle build-related tasks uniformly, it also provides reporting features that help a development team track the progress of a project.
    As a build tool, Maven, like Ant, drives compilation, packaging, testing and other operations from a build configuration file. Everything Maven ships with can be used once the corresponding configuration is in place, and adapting an existing template is a good way to start a new project. With Ant, unless you write custom tasks, reusing targets is a recurring problem.
    Maven improves on this: the project configuration is written as an XML file, a great deal of functionality comes built in, and any Ant task can still be invoked from a Maven project.
    Maven's built-in "goals" can, among other things (see the command sketch after this list):
     compile the source code
    generate Javadoc
    run unit tests
    analyze the source code
    produce detailed reports of violations of the team's coding standards
    produce a report of the latest CVS commits
    produce reports of the most frequently changed files and of the most active committers in CVS
    produce cross-referenced HTML versions of the source code, and so on.
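As a rough illustration of the goals above, the everyday Maven invocations look like this (a generic sketch, not specific to Hadoop):

$ mvn compile          # compile the source code
$ mvn test             # run the unit tests
$ mvn javadoc:javadoc  # generate the Javadoc
$ mvn package          # package the compiled classes into a .jar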

|- ProtocolBuffer 2.5.0

  Protocol Buffers are a way of encoding structured data in an efficient yet extensible format. Google uses Protocol Buffers for almost all of its internal RPC protocols and file formats.

      The latest Hadoop code already uses Protocol Buffers (PB for short, http://code.google.com/p/protobuf/) as the default RPC implementation; the old WritableRpcEngine has been retired. As Aaron T. Myers of Cloudera put it on the mailing list, "since PB can provide support for evolving protocols in a compatible fashion."

     First, what is PB? PB is a lightweight, efficient structured-data format open-sourced by Google. It serializes and deserializes structured data and is well suited as a storage or RPC wire format: a language-neutral, platform-neutral, extensible way to encode structured data for communication protocols, data storage and similar uses. APIs are currently provided for C++, Java and Python. Put simply, one process hands structured data to another process over the network (the classic case being RPC), or a process persists structured data to disk (somewhat like the BSON format used in MongoDB). For the storage case, the drawback of PB compared with XML or JSON is that the data on disk is unreadable until it is deserialized with PB, which makes it resemble an IDL. The advantages are fast serialization/deserialization and less data moved over the network or through disk I/O, which matters a great deal in data-intensive scalable computing.
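To make the serialization idea concrete, here is a minimal sketch of defining a message and generating Java bindings with protoc (the file and message names are made up for illustration, and protoc is assumed to be installed already; installation is covered below):

$ cat > person.proto <<'EOF'
option java_outer_classname = "PersonProtos";
message Person {
  required string name = 1;
  optional int32 id = 2;
}
EOF
$ protoc --java_out=. person.proto   # emits PersonProtos.java, which can serialize/deserialize Person messages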

   Another reason Hadoop adopted PB for its RPC implementation is PB's language and platform neutrality. On the mailing list the community has raised this consideration: today every MapReduce task runs inside a JVM (even in Streaming mode, the task's data still passes through the JVM when exchanging RPC with the NameNode or DataNode), and the JVM's most serious problem is memory, e.g. OOM. Some in the community argue that with a PB-based RPC implementation, each MR task could talk RPC to the NameNode or DataNode directly, which would make it possible to implement individual MR tasks in C/C++. Baidu's HCE (https://issues.apache.org/jira/browse/MAPREDUCE-1270) follows a similar line of thought, but because Hadoop RPC at the time was still implemented with WritableRpcEngine, MR tasks could not escape going through a local JVM proxy to talk to the NameNode or DataNode: the Child JVM process still existed, and it still set up the runtime environment and handled the RPC exchange.

For PB's design and implementation, see http://code.google.com/p/protobuf/ or http://www.ibm.com/developerworks/cn/linux/l-cn-gpb/?ca=drs-tp4608; they are not covered further here.

  • Installing the JDK:
sudo mkdir -p /usr/local/java
sudo mv /home/john/Downloads/jdk-6u26-linux-x64.bin /usr/local/java
cd /usr/local/java
sudo chmod 700 jdk-6u26-linux-x64.bin
sudo ./jdk-6u26-linux-x64.bin
sudo rm jdk-6u26-linux-x64.bin
sudo ln -s jdk1.6.0_26 /usr/local/java/latest
  • Setting the environment variables

Run:

sudo gedit /etc/environment

Enter your password to open the environment file.

Add the following at the bottom of the file:

JAVA_HOME="/usr/local/java/latest"
JRE_HOME="/usr/local/java/latest/jre"
PATH="/usr/local/java/latest/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
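/etc/environment is only read at login, so either log out and back in or export the same variables in the current shell before continuing (a quick sketch):

$ export JAVA_HOME=/usr/local/java/latest
$ export JRE_HOME=$JAVA_HOME/jre
$ export PATH=$JAVA_HOME/bin:$PATH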

Verify that the JDK is installed correctly:

java -version

Expected output (the exact version string will match the JDK you installed):

java version "1.6.0_14"
Java(TM) SE Runtime Environment (build 1.6.0_14-b08)
Java HotSpot(TM) Server VM (build 14.0-b16, mixed mode)
  • Installing SSH and configuring passwordless login:

Again using Ubuntu as the example, assume the user name is u.

1) Make sure the machine is connected to the Internet, then run:

sudo apt-get install ssh

A quick word on sudo and apt. sudo lets an ordinary user run some or all commands that require root privileges; it keeps a detailed log of what each user did with it, and it offers flexible administration, allowing the commands a user may run to be restricted. sudo is configured in /etc/sudoers.

apt, the Advanced Package Tool, is part of the Debian project and is Ubuntu's package manager. Installing software with apt does not require you to work out dependencies yourself: apt downloads whatever packages the requested software depends on and installs them in order. Ubuntu also ships synaptic, a graphical front end to apt, which you can use instead if you prefer. (See the Debian project documentation for more background.) A generic usage sketch follows.
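For readers new to apt, the typical workflow looks roughly like this (a generic sketch, not a required step in this tutorial):

$ sudo apt-get update                  # refresh the package index
$ apt-cache search openssh             # search for a package
$ sudo apt-get install openssh-server  # install it along with its dependencies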

2) Configure passwordless login to the local machine.

First check whether the .ssh directory exists under user u's home directory (note the leading dot: it is a hidden directory):

ls -a /home/u

Installing SSH normally creates this hidden directory for the current user automatically; if it is missing, create it by hand.

Next, run:

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

To explain: ssh-keygen generates a key pair; -t (case sensitive) specifies the type of key to generate, dsa meaning DSA key authentication; -P supplies the passphrase; -f specifies the file to write the key to. (Keys and passphrases touch on more SSH background than can be covered here; consult other references if you are interested.)

On Ubuntu, ~ stands for the current user's home directory, here /home/u.

The command creates two files in the .ssh directory, id_dsa and id_dsa.pub: an SSH private/public key pair, which work like a key and its lock. Next, append id_dsa.pub (the public key) to the authorized keys.

Run:

cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

This appends the public key to the file of keys authorized for authentication; authorized_keys is that file.

Passwordless login to the local machine is now configured.
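If ssh localhost still prompts for a password after this, it is usually because sshd refuses keys whose files are too widely readable; tightening the permissions (an extra step not in the original) normally fixes it:

$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys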

3) Verify that SSH is installed and that passwordless login to the local machine works.

Run:

ssh -V

which prints something like:

OpenSSH_6.2p2 Ubuntu-6ubuntu0.1, OpenSSL 1.0.1e 11 Feb 2013

This shows that SSH is installed.

Now run:

ssh localhost

The output looks like this:

The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 8b:c3:51:a5:2a:31:b7:74:06:9d:62:04:4f:84:f8:77.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Linux master 2.6.31-14-generic #48-Ubuntu SMP Fri Oct 16 14:04:26 UTC 2009 i686
To access official Ubuntu documentation, please visit:
http://help.ubuntu.com/
Last login: Mon Oct 18 17:12:40 2010 from master
admin@Hadoop:~$

This confirms the installation: on the first login you are asked whether to continue connecting; type yes to proceed.

Strictly speaking, passwordless login is not required to install Hadoop, but without it every start of Hadoop requires typing a password to log in to each machine's DataNode. Since Hadoop clusters commonly run to hundreds or thousands of machines, passwordless SSH is configured as a matter of course.

  • Installing ProtocolBuffer 2.5.0

Installing protobuf

    Download: http://code.google.com/p/protobuf/downloads/detail?name=protobuf-2.4.1.tar.gz&can=2&q= (the commands below unpack the 2.4.1 tarball from that link; the steps are identical for protobuf-2.5.0, which is the version this Hadoop build expects). Installation:

tar zxvf protobuf-2.4.1.tar.gz

cd protobuf-2.4.1

./configure

make

make check

sudo make install

Verify the installation: protoc --version
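protobuf installs its libraries under /usr/local/lib by default, so if protoc fails with a missing libprotobuf shared-library error, refreshing the dynamic linker cache usually resolves it (a common extra step, not part of the original instructions):

$ sudo ldconfig
$ protoc --version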

Compiling the Hadoop source

Downloading the source:

Check out from the Subversion repository:
[zhouhh@Hadoop48 hsrc]$ svn co http://svn.apache.org/repos/asf/hadoop/common/trunk

[zhouhh@Hadoop48 hsrc]$ cd trunk/
[zhouhh@Hadoop48 trunk]$ ls
BUILDING.txt  hadoop-assemblies  hadoop-common-project  hadoop-hdfs-project       hadoop-minicluster  hadoop-project-dist  hadoop-yarn-project
dev-support   hadoop-client      hadoop-dist            hadoop-mapreduce-project  hadoop-project      hadoop-tools         pom.xml

hadoop (Main Hadoop project)
– hadoop-project (Parent POM for all Hadoop Maven modules. )
(All plugins & dependencies versions are defined here.)
– hadoop-project-dist (Parent POM for modules that generate distributions.)
– hadoop-annotations (Generates the Hadoop doclet used to generate the Javadocs)
– hadoop-assemblies (Maven assemblies used by the different modules)
– hadoop-common-project (Hadoop Common)
– hadoop-hdfs-project (Hadoop HDFS)
– hadoop-mapreduce-project (Hadoop MapReduce)
– hadoop-tools (Hadoop tools like Streaming, Distcp, etc.)
– hadoop-dist (Hadoop distribution assembler)

Building the source:

[zhouhh@Hadoop48 trunk]$ mvn install -DskipTests -Pdist

[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ………………………….. SUCCESS [0.605s]
[INFO] Apache Hadoop Project POM ……………………. SUCCESS [0.558s]
[INFO] Apache Hadoop Annotations ……………………. SUCCESS [0.288s]
[INFO] Apache Hadoop Project Dist POM ……………….. SUCCESS [0.094s]
[INFO] Apache Hadoop Assemblies …………………….. SUCCESS [0.088s]
[INFO] Apache Hadoop Auth ………………………….. SUCCESS [0.152s]
[INFO] Apache Hadoop Auth Examples ………………….. SUCCESS [0.093s]
[INFO] Apache Hadoop Common ………………………… SUCCESS [5.188s]
[INFO] Apache Hadoop Common Project …………………. SUCCESS [0.049s]
[INFO] Apache Hadoop HDFS ………………………….. SUCCESS [12.065s]
[INFO] Apache Hadoop HttpFS ………………………… SUCCESS [0.194s]
[INFO] Apache Hadoop HDFS BookKeeper Journal …………. SUCCESS [0.616s]
[INFO] Apache Hadoop HDFS Project …………………… SUCCESS [0.029s]
[INFO] hadoop-yarn ………………………………… SUCCESS [0.157s]
[INFO] hadoop-yarn-api …………………………….. SUCCESS [2.951s]
[INFO] hadoop-yarn-common ………………………….. SUCCESS [0.752s]
[INFO] hadoop-yarn-server ………………………….. SUCCESS [0.124s]
[INFO] hadoop-yarn-server-common ……………………. SUCCESS [0.736s]
[INFO] hadoop-yarn-server-nodemanager ……………….. SUCCESS [0.592s]
[INFO] hadoop-yarn-server-web-proxy …………………. SUCCESS [0.123s]
[INFO] hadoop-yarn-server-resourcemanager ……………. SUCCESS [0.200s]
[INFO] hadoop-yarn-server-tests …………………….. SUCCESS [0.149s]
[INFO] hadoop-yarn-client ………………………….. SUCCESS [0.119s]
[INFO] hadoop-yarn-applications …………………….. SUCCESS [0.090s]
[INFO] hadoop-yarn-applications-distributedshell ……… SUCCESS [0.167s]
[INFO] hadoop-mapreduce-client ……………………… SUCCESS [0.049s]
[INFO] hadoop-mapreduce-client-core …………………. SUCCESS [1.103s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher …. SUCCESS [0.142s]
[INFO] hadoop-yarn-site ……………………………. SUCCESS [0.082s]
[INFO] hadoop-yarn-project …………………………. SUCCESS [0.075s]
[INFO] hadoop-mapreduce-client-common ……………….. SUCCESS [1.202s]
[INFO] hadoop-mapreduce-client-shuffle ………………. SUCCESS [0.066s]
[INFO] hadoop-mapreduce-client-app ………………….. SUCCESS [0.109s]
[INFO] hadoop-mapreduce-client-hs …………………… SUCCESS [0.123s]
[INFO] hadoop-mapreduce-client-jobclient …………….. SUCCESS [0.114s]
[INFO] hadoop-mapreduce-client-hs-plugins ……………. SUCCESS [0.084s]
[INFO] Apache Hadoop MapReduce Examples ……………… SUCCESS [0.130s]
[INFO] hadoop-mapreduce ……………………………. SUCCESS [0.060s]
[INFO] Apache Hadoop MapReduce Streaming …………….. SUCCESS [0.071s]
[INFO] Apache Hadoop Distributed Copy ……………….. SUCCESS [0.069s]
[INFO] Apache Hadoop Archives ………………………. SUCCESS [0.061s]
[INFO] Apache Hadoop Rumen …………………………. SUCCESS [0.135s]
[INFO] Apache Hadoop Gridmix ……………………….. SUCCESS [0.082s]
[INFO] Apache Hadoop Data Join ……………………… SUCCESS [0.070s]
[INFO] Apache Hadoop Extras ………………………… SUCCESS [0.192s]
[INFO] Apache Hadoop Pipes …………………………. SUCCESS [0.019s]
[INFO] Apache Hadoop Tools Dist …………………….. SUCCESS [0.057s]
[INFO] Apache Hadoop Tools …………………………. SUCCESS [0.018s]
[INFO] Apache Hadoop Distribution …………………… SUCCESS [0.047s]
[INFO] Apache Hadoop Client ………………………… SUCCESS [0.047s]
[INFO] Apache Hadoop Mini-Cluster …………………… SUCCESS [0.053s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 32.093s
[INFO] Finished at: Wed Dec 26 11:00:10 CST 2012

[INFO] Final Memory: 60M/76

Two errors came up during the build; the patches that fix them:

diff --git hadoop-project/pom.xml hadoop-project/pom.xml
index 3938532..31ee469 100644
--- hadoop-project/pom.xml
+++ hadoop-project/pom.xml
@@ -600,7 +600,7 @@
       <dependency>
         <groupId>com.google.protobuf</groupId>
         <artifactId>protobuf-java</artifactId>
-        <version>2.4.0a</version>
+        <version>2.5.0</version>
       </dependency>
       <dependency>
         <groupId>commons-daemon</groupId>

Index: hadoop-common-project/hadoop-auth/pom.xml
===================================================================
--- hadoop-common-project/hadoop-auth/pom.xml	(revision 1543124)
+++ hadoop-common-project/hadoop-auth/pom.xml	(working copy)
@@ -54,6 +54,11 @@
     </dependency>
     <dependency>
       <groupId>org.mortbay.jetty</groupId>
+      <artifactId>jetty-util</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.mortbay.jetty</groupId>
       <artifactId>jetty</artifactId>
       <scope>test</scope>
     </dependency>
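With the two patches applied, the build completes. To go further and produce a deployable distribution tarball that includes the native libraries (rather than only installing artifacts into the local Maven repository as above), BUILDING.txt describes a command along these lines (a sketch; exact profile names can vary between versions):

$ mvn package -Pdist,native -DskipTests -Dtar
(the resulting tarball is written under hadoop-dist/target/)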

 Configuring Hadoop

Environment variables (adjust the paths below to wherever your Hadoop distribution actually lives; this example uses a hadoop-2.0.1-alpha directory):

$ export HADOOP_HOME=$HOME/yarn/hadoop-2.0.1-alpha
$ export HADOOP_MAPRED_HOME=$HOME/yarn/hadoop-2.0.1-alpha
$ export HADOOP_COMMON_HOME=$HOME/yarn/hadoop-2.0.1-alpha
$ export HADOOP_HDFS_HOME=$HOME/yarn/hadoop-2.0.1-alpha
$ export YARN_HOME=$HOME/yarn/hadoop-2.0.1-alpha
$ export HADOOP_CONF_DIR=$HOME/yarn/hadoop-2.0.1-alpha/etc/hadoop

This is very important: if you miss any one variable or set a value incorrectly, the error will be very difficult to detect and the job will fail.

Also, add these to your ~/.bashrc or other shell start-up script so that you don’t need to set them every time.
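A quick way to make them permanent is to append the exports to ~/.bashrc (a sketch, assuming the same install path as above):

$ cat >> ~/.bashrc <<'EOF'
export HADOOP_HOME=$HOME/yarn/hadoop-2.0.1-alpha
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
EOF
$ source ~/.bashrc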

Configuration files:

1.core-site.xml


<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>

  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>

  <property>
    <name>io.native.lib.available</name>
    <value>true</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/gao/yarn/yarn_data/hdfs/namenode/</value>
  </property>

</configuration>
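The directory named in hadoop.tmp.dir must exist and be writable by the user running Hadoop, so create it before formatting the NameNode (path taken from the configuration above):

$ mkdir -p /home/gao/yarn/yarn_data/hdfs/namenode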

2.hdfs-site.xml


<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

 <property>
    <name>dfs.namenode.http-address</name>
    <value>0.0.0.0:50070</value>
  </property>

  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>0.0.0.0:50090</value>
  </property>

  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:50010</value>
  </property>

  <property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:50075</value>
  </property>

  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:50020</value>
  </property>

</configuration>
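To double-check that these values are actually being picked up, Hadoop 2.x ships a getconf tool (a quick sketch, run from $HADOOP_HOME):

$ bin/hdfs getconf -confKey dfs.replication
1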

3. Format the NameNode

This step is needed only for the first time. Doing it every time will result in loss of content on HDFS.

$ bin/hadoop namenode -format

4. Start HDFS processes

Name node:

$ sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /home/hduser/yarn/hadoop-2.0.1-alpha/logs/hadoop-hduser-namenode-pc3-laptop.out
$ jps
18509 Jps
17107 NameNode

Data node:

$ sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /home/hduser/yarn/hadoop-2.0.1-alpha/logs/hadoop-hduser-datanode-pc3-laptop.out
$ jps
18509 Jps
17107 NameNode
17170 DataNode

5. Start Hadoop Map-Reduce Processes

Resource Manager:

$ sbin/yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /home/hduser/yarn/hadoop-2.0.1-alpha/logs/yarn-hduser-resourcemanager-pc3-laptop.out
$ jps
18509 Jps
17107 NameNode
17170 DataNode
17252 ResourceManager

Node Manager:

$ sbin/yarn-daemon.sh start nodemanager
starting nodemanager, logging to /home/hduser/yarn/hadoop-2.0.1-alpha/logs/yarn-hduser-nodemanager-pc3-laptop.out
$jps
18509 Jps
17107 NameNode
17170 DataNode
17252 ResourceManager
17309 NodeManager

Job History Server:

$ sbin/mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/hduser/yarn/hadoop-2.0.1-alpha/logs/yarn-hduser-historyserver-pc3-laptop.out
$jps
18509 Jps
17107 NameNode
17170 DataNode
17252 ResourceManager
17309 NodeManager
17626 JobHistoryServer
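Instead of starting each daemon by hand, the sbin directory also provides convenience scripts that start everything listed in the slaves file; on a single-node setup they are equivalent to the steps above (the history server still has to be started separately):

$ sbin/start-dfs.sh    # NameNode, DataNode(s) and SecondaryNameNode
$ sbin/start-yarn.sh   # ResourceManager and NodeManager(s)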

6. Running the famous wordcount example to verify installation

$ mkdir in
$ cat > in/file
This is one line
This is another one

Add this directory to HDFS:

$ bin/hadoop dfs -copyFromLocal in /in

Run wordcount example provided:

$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.*-alpha.jar wordcount /in /out

Check the output:

$ bin/hadoop dfs -cat /out/*
This 2
another 1
is 2
line 1
one 2
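Note that MapReduce refuses to run if the output directory already exists, so remove /out before re-running the job (a common gotcha, not mentioned in the original):

$ bin/hadoop dfs -rm -r /out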

7. Web interface

Browse HDFS and check its health at http://localhost:50070 in the browser.

You can check the status of the applications running using the following URL:

http://localhost:8088

8. Stop the processes

$ sbin/hadoop-daemon.sh stop namenode
$ sbin/hadoop-daemon.sh stop datanode
$ sbin/yarn-daemon.sh stop resourcemanager
$ sbin/yarn-daemon.sh stop nodemanager
$ sbin/mr-jobhistory-daemon.sh stop historyserver
Original post: https://www.cnblogs.com/gaodong/p/3343678.html