org.apache.hadoop.ipc.RemoteException: java.io.IOException:XXXXXXXXXXX could only be replicated to 0 nodes, instead of 1

Cause: Configured Capacity is 0, i.e. no storage capacity has been allocated to the DataNode.

  [root@dev9106 bin]# ./hadoop dfsadmin -report
Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: ?%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

Solution:

  1. Check your filesystems:

[root@dev9106 /]# df -hl
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             1.9G  1.6G  302M  84% /
/dev/sda8             845G   47G  756G   6% /home
/dev/sda7             5.7G  147M  5.3G   3% /tmp
/dev/sda6             9.5G  4.0G  5.1G  45% /usr
/dev/sda5             9.5G  273M  8.8G   3% /var
/dev/sda1             190M   15M  167M   8% /boot
tmpfs                 7.8G     0  7.8G   0% /dev/shm
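To pick a partition automatically rather than eyeballing the table, a minimal sketch (not part of the original post) is to sort local mounts by available space; on the machine above this would pick /home:

```shell
# Print the local mount point with the most available space --
# a heuristic candidate for where hadoop.tmp.dir should live.
# -P: POSIX output format, -k: sizes in 1K blocks, -l: local filesystems only.
df -Pkl | awk 'NR > 1 { if ($4 + 0 > max) { max = $4 + 0; mount = $6 } } END { print mount }'
```

The fourth column of `df -Pk` output is the available space, and the sixth is the mount point, so the awk script simply keeps the row with the largest fourth field.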

  2. Set hadoop.tmp.dir in Hadoop's conf/core-site.xml to a directory on a partition with enough free space (here, under /home):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/dhfs/tmp</value>
  </property>
</configuration>

  3. Stop the Hadoop services and reformat the NameNode (note: reformatting erases any existing HDFS data).

  4. Restart the services.

  5. Done.
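Steps 3-5 can be sketched with the stock Hadoop 1.x control scripts (the script names and $HADOOP_HOME layout here are assumptions; adjust for your install):

```shell
# Assumes a Hadoop 1.x-style layout; run from $HADOOP_HOME.
./bin/stop-all.sh               # 3. stop all Hadoop daemons
./bin/hadoop namenode -format   #    reformat the NameNode
                                #    (WARNING: destroys existing HDFS metadata)
./bin/start-all.sh              # 4. restart the daemons
./bin/hadoop dfsadmin -report   # 5. Configured Capacity should now be nonzero
```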

Original post: https://www.cnblogs.com/lvlv/p/4750591.html