Hadoop Single-Node Installation

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/SingleCluster.html

I. Installing HDFS

1. Edit hadoop-env.sh and set JAVA_HOME.
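For example, the JAVA_HOME line in etc/hadoop/hadoop-env.sh might look like the following (the JDK path is an assumption for illustration; substitute the path of your own JDK install):

```shell
# etc/hadoop/hadoop-env.sh
# Point Hadoop at the JDK explicitly instead of relying on the
# environment; the path below is a placeholder example.
export JAVA_HOME=/usr/java/jdk1.7.0_79
```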

2. Add the following to core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://spark00:8020</value>
    </property>
	  <property>
        <name>hadoop.tmp.dir</name>
        <value>/root/soft/hadoop-2.6.0-cdh5.4.0/data/tmp</value>
    </property>
</configuration>

Notes:
1. Use a hostname mapping (the hostname spark00 must resolve to this machine).
2. The port used is 8020.
3. Temp directory: create the data/tmp directory under the Hadoop install directory:
/root/soft/hadoop-2.6.0-cdh5.4.0/data/tmp
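The hostname mapping from note 1 is just an entry in /etc/hosts on the node; the IP address below is a placeholder for illustration:

```shell
# /etc/hosts -- map the hostname used in fs.defaultFS to this node's IP.
# 192.168.1.100 is an example address; use the machine's real one.
192.168.1.100   spark00
```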

3. In hdfs-site.xml, set the replication factor. Since this is a single node, a value of 1 is enough:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

4. Configure the datanode list in the slaves file:

vi slaves

Set the datanode entry as follows:

spark01

5. Format the NameNode:

cd /root/soft/hadoop-2.6.0-cdh5.4.0

bin/hdfs namenode -format

----------------------------

Start the NameNode: sbin/hadoop-daemon.sh start namenode
Start the DataNode: sbin/hadoop-daemon.sh start datanode
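After starting both daemons, a quick sanity check is useful: jps should list the NameNode and DataNode processes, and a simple HDFS round trip confirms the filesystem is usable (the paths below are illustrative, not from the original post):

```shell
# List running JVM processes; NameNode and DataNode should appear.
jps

# Smoke-test HDFS: create a directory, upload a file, list it back.
bin/hdfs dfs -mkdir -p /tmp/test
bin/hdfs dfs -put etc/hadoop/core-site.xml /tmp/test
bin/hdfs dfs -ls /tmp/test
```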


Original article: https://www.cnblogs.com/xiaoxiao5ya/p/7cdd6d16387d78ca3cd7cfc2eaae7fe5.html