RedHat Hadoop 2.7.2 Installation Notes

This installation was done in a RedHat virtual machine on Windows 7. The required software is as follows:

VirtualBox-5.0.16-105871-Win.exe

rhel-server-5.4-x86_64-dvd.iso

First install the virtualization software, then install RedHat inside it. During the RedHat installation, remember to turn off the firewall and disable the other unneeded services.

Next, create a shared folder on Windows 7 and place the following software into it:

jdk-7u71-linux-x64.tar.gz

hadoop-2.7.2.tar (the original Hadoop 2.7.2 download is a .gz file; unpacking it locally yields this tar file)

After installing RedHat, install the Guest Additions; once that is done you can see the Windows shared folder. Copy the JDK and Hadoop files into /home. Before going further, set up a passwordless SSH environment (a web search turns up many guides).
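The passwordless-SSH step can be sketched as follows; this is a minimal single-node setup, assuming OpenSSH is installed and run as the user that will start Hadoop:

```shell
# Generate an RSA key pair with an empty passphrase (skipped if one already exists),
# then authorize that key for logins to this same machine.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa -q
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys   # sshd refuses keys in files with loose permissions
```

Afterwards, `ssh localhost` should log in without a password prompt; the Hadoop start scripts depend on that.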

After installing the JDK, edit /etc/profile and append the following at the end:

export JAVA_HOME=/home/jdk1.7
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

Run source /etc/profile, then verify that the JDK is correctly installed (for example with java -version).
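While editing /etc/profile, it is also convenient (my addition, not in the original post) to put Hadoop's bin and sbin directories on the PATH, using the /home/hadoop2 install root this guide assumes, so hadoop commands and the start scripts can be run from any directory:

```shell
# Optional additions to /etc/profile (assumes Hadoop was unpacked to /home/hadoop2):
export HADOOP_HOME=/home/hadoop2
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
```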

Unpack the Hadoop archive; the directory layout under /home should then match the paths used below (/home/jdk1.7 and /home/hadoop2).


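The unpacking step can be sketched as follows; the rename targets are my assumption, chosen to match the /home/jdk1.7 and /home/hadoop2 paths used throughout this guide:

```shell
cd /home
# Unpack the JDK; the 7u71 archive extracts to jdk1.7.0_71, renamed here to jdk1.7.
if [ -f jdk-7u71-linux-x64.tar.gz ]; then
    tar -xzf jdk-7u71-linux-x64.tar.gz && mv jdk1.7.0_71 jdk1.7
fi
# Unpack Hadoop; the archive extracts to hadoop-2.7.2, renamed here to hadoop2.
if [ -f hadoop-2.7.2.tar ]; then
    tar -xf hadoop-2.7.2.tar && mv hadoop-2.7.2 hadoop2
fi
```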
Create the data directories under /home/hadoop2: tmp, hdfs, hdfs/data, and hdfs/name.
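Creating those directories is a one-liner (paths as used in the config files below):

```shell
# Create the tmp directory plus the HDFS name/data directories under the install root.
mkdir -p /home/hadoop2/tmp /home/hadoop2/hdfs/name /home/hadoop2/hdfs/data
```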

Edit hadoop-env.sh, changing the following:

export JAVA_HOME=/home/jdk1.7
export HADOOP_CONF_DIR=/home/hadoop2/etc/hadoop

Edit yarn-env.sh, changing the following:

export JAVA_HOME=/home/jdk1.7

Edit the core-site.xml file:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://127.0.0.1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop2/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
</configuration>

Edit the yarn-site.xml file (the properties below are yarn.* settings, which belong in yarn-site.xml rather than mapred-site.xml):

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>127.0.0.1:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>127.0.0.1:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>127.0.0.1:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>127.0.0.1:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>127.0.0.1:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>768</value>
    </property>
</configuration>

Edit the hdfs-site.xml file:

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop2/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop2/hdfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>127.0.0.1:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

After the first installation completes, you need to run the following command once to format the NameNode:

hdfs namenode -format

Then start Hadoop:

[root@localhost sbin]# ./start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/03/29 00:56:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost.localdomain]
localhost.localdomain: starting namenode, logging to /home/hadoop2/logs/hadoop-root-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /home/hadoop2/logs/hadoop-root-datanode-localhost.localdomain.out
Starting secondary namenodes [localhost.localdomain]
localhost.localdomain: starting secondarynamenode, logging to /home/hadoop2/logs/hadoop-root-secondarynamenode-localhost.localdomain.out
16/03/29 00:56:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /home/hadoop2/logs/yarn-root-resourcemanager-localhost.localdomain.out
localhost: starting nodemanager, logging to /home/hadoop2/logs/yarn-root-nodemanager-localhost.localdomain.out

Check the processes with jps; if all six of the following are present, the installation succeeded:

[root@localhost sbin]# jps
4571 NameNode
3065 DataNode
3479 NodeManager
3373 ResourceManager
3221 SecondaryNameNode
3774 Jps

You can also access the web UIs from a browser (for example, the YARN ResourceManager at http://127.0.0.1:8088, as configured above).

Common problems:

1. The shared folder cannot be found: generally you need to update gcc, the kernel packages, etc., so that the Guest Additions can build, then reinstall them.

2. The "Unable to load native-hadoop library for your platform" message: as the startup log above shows, this is only a warning; Hadoop falls back to its built-in Java classes and runs fine.

Original post: https://www.cnblogs.com/yxysuanfa/p/7210874.html