HDFS in pseudo-distributed mode
1. Set up HDFS
2. Edit hadoop-env.sh with vi
![image.png](https://upload-images.jianshu.io/upload_images/18296616-78a5653f84db45a9.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
![image.png](https://upload-images.jianshu.io/upload_images/18296616-b8ba3f42877eba26.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
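As the screenshots show, the edit boils down to pointing JAVA_HOME at an explicit JDK path, because Hadoop's start-up scripts do not reliably inherit it from the login shell. A minimal sketch (the JDK path below is an assumption based on a typical /opt/module layout; substitute your own install location):
```sh
# etc/hadoop/hadoop-env.sh
# point JAVA_HOME at the JDK install; the path is an assumption, use yours
export JAVA_HOME=/opt/module/jdk1.8.0_144
```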
3. Configure the NameNode address and the directory for generated files
[shaozhiqi@hadoop101 hadoop]$ vi core-site.xml
Specify the address of the HDFS NameNode, plus the directory for the temporary files Hadoop generates at runtime:
```xml
<configuration>
    <!-- the address of the NameNode in HDFS -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop101:9000</value>
    </property>
    <!-- the directory where temporary files generated by Hadoop at runtime are stored -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-3.1.2/data/tmp</value>
    </property>
</configuration>
```
4. Specify the HDFS replication factor
[shaozhiqi@hadoop101 hadoop]$ vi hdfs-site.xml
```xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
```
With only a single node, each block can be stored just once anyway; setting the replication factor to 3 would have no effect, the data would still be stored as a single copy.
Starting HDFS
1. Format the NameNode
hdfs namenode -format   # generates the NameNode's working directory
![image.png](https://upload-images.jianshu.io/upload_images/18296616-4b830d6f2894bf7a.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
You can see that the data directory has been created:
![image.png](https://upload-images.jianshu.io/upload_images/18296616-993edc4dd80f7a4e.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
Inspecting the VERSION file shows that our namespace ID and cluster ID have been generated.
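A quick way to view it from the shell (the path follows from the hadoop.tmp.dir we configured plus HDFS's default dfs/name/current layout):
```sh
cat /opt/module/hadoop-3.1.2/data/tmp/dfs/name/current/VERSION
```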
```
namespaceID=942797111
clusterID=CID-b853720f-e038-4541-a038-bb78bb01452a
```
![image.png](https://upload-images.jianshu.io/upload_images/18296616-e53ab4d76365e5b0.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
2. Starting the daemons
First, see what commands Hadoop provides:
![image.png](https://upload-images.jianshu.io/upload_images/18296616-ae3f4d541a80a92e.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
hadoop-daemon.sh   # the command for managing a single Hadoop daemon on this node
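Note that in Hadoop 3.x, hadoop-daemon.sh still works but is marked deprecated; the --daemon option is the preferred form. Either of these starts a single daemon on the local node:
```sh
hadoop-daemon.sh start namenode   # classic form, deprecated in 3.x
hdfs --daemon start namenode      # Hadoop 3.x preferred equivalent
```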
3. Start the NameNode
hadoop-daemon.sh start namenode
![image.png](https://upload-images.jianshu.io/upload_images/18296616-43b940b8b27ad363.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
Check whether it started successfully.
jps is a JDK tool that lists running Java processes (the Linux equivalent is ps -ef).
![image.png](https://upload-images.jianshu.io/upload_images/18296616-161dca0bfdcf2dbd.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
As shown, the NameNode started successfully.
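Roughly what jps prints at this stage, assuming only the NameNode is running (the PIDs here are illustrative):
```
[shaozhiqi@hadoop101 hadoop-3.1.2]$ jps
3456 NameNode
3521 Jps
```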
View the NameNode web UI:
192.168.1.101:9870 (Hadoop 3.x and later; older versions used port 50070)
If the page is unreachable, check whether the firewall is running; if it is, stop it:
[shaozhiqi@hadoop101 hadoop-3.1.2]$ systemctl stop firewalld.service
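Stopping the service only lasts until the next reboot; to keep firewalld off permanently you can also disable it (an optional extra step; may require root privileges):
```sh
# keep firewalld from starting again at boot
systemctl disable firewalld.service
```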
![image.png](https://upload-images.jianshu.io/upload_images/18296616-1a996e02d6fc5534.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
4. Start the DataNode
[shaozhiqi@hadoop101 hadoop-3.1.2]$ hadoop-daemon.sh start datanode
![image.png](https://upload-images.jianshu.io/upload_images/18296616-73072bac4a0c2aa4.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
Check the web UI:
![image.png](https://upload-images.jianshu.io/upload_images/18296616-42008d7378f4eb4a.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
Looking at the data directory, we find it now contains a data subdirectory in addition to the earlier name:
![image.png](https://upload-images.jianshu.io/upload_images/18296616-ca25d77711ad3295.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
We can see that the NameNode and DataNode belong to the same cluster (their clusterIDs match).
<font color="red">Note:</font>
When reformatting the NameNode, you must first delete the data directory and logs; otherwise the DataNode keeps the old clusterID, it no longer matches the newly generated one, and the cluster will not come up.
![image.png](https://upload-images.jianshu.io/upload_images/18296616-d92bfe01bf929f12.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
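A minimal sketch of that clean-reformat sequence, using the paths from this setup (destructive: it wipes all HDFS data):
```sh
hadoop-daemon.sh stop datanode    # stop the daemons first
hadoop-daemon.sh stop namenode
rm -rf /opt/module/hadoop-3.1.2/data /opt/module/hadoop-3.1.2/logs
hdfs namenode -format             # regenerates a fresh namespace/cluster ID
```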
The logs directory
Through the operations above we generated both data and logs, side by side.
Looking inside logs, there are separate log files for the DataNode and the NameNode:
![image.png](https://upload-images.jianshu.io/upload_images/18296616-22c92cb5d0df4625.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
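When a daemon fails to start, these logs are the first place to look. A sketch, assuming the usual hadoop-&lt;user&gt;-&lt;daemon&gt;-&lt;host&gt;.log naming (check ls logs/ for the exact filename):
```sh
tail -n 50 /opt/module/hadoop-3.1.2/logs/hadoop-shaozhiqi-namenode-hadoop101.log
```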
Running wordcount
Create an input directory on the HDFS file system:
[shaozhiqi@hadoop101 hadoop-3.1.2]$ hdfs dfs -mkdir -p /user/shaohadoop/input
This directory is created in the HDFS file system itself, not at a local path on CentOS.
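You can confirm this from the command line as well; the HDFS shell lists HDFS paths, not the local filesystem:
```sh
hdfs dfs -ls -R /user   # recursively list everything under /user in HDFS
```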
![image.png](https://upload-images.jianshu.io/upload_images/18296616-3e1d6712d45d04a9.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
Upload the wc.input file from earlier to our input directory on HDFS:
[shaozhiqi@hadoop101 hadoop-3.1.2]$ hdfs dfs -put wcinput/wc.input /user/shaohadoop/input
![image.png](https://upload-images.jianshu.io/upload_images/18296616-4b8ace4b7bf207f1.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
![image.png](https://upload-images.jianshu.io/upload_images/18296616-1f092a6c4e07ba9e.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
Run our wordcount job:
[shaozhiqi@hadoop101 hadoop-3.1.2]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar wordcount /user/shaohadoop/input user/shaohadoop/output
The input path can be any HDFS path we choose. The output, however, ended up prefixed with /user/shaozhiqi: because the output argument user/shaohadoop/output has no leading slash, HDFS resolves it relative to the current user's home directory (/user/shaozhiqi). An absolute path starting with / would be written exactly where specified.
![image.png](https://upload-images.jianshu.io/upload_images/18296616-34287aca55cf0bc7.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
Click part-r-00000; after downloading it we can see the job ran successfully:
![image.png](https://upload-images.jianshu.io/upload_images/18296616-051229f6b196eb02.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
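The result can also be checked without the web UI. A sketch, assuming the output landed at the home-directory-relative path explained above:
```sh
# the output path is an assumption based on the relative-path resolution above
hdfs dfs -cat /user/shaozhiqi/user/shaohadoop/output/part-r-00000
```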