Summary of Hadoop Errors

1. hadoop3: mkdir: cannot create directory `/usr/local/hadoop/bin/../logs': Permission denied

Run the following command on every DataNode:

[hadoop@hadoop3 local]$ chown -R  hadoop:hadoop  hadoop-0.20.2/
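If the DataNodes are listed in the `conf/slaves` file, the chown can be pushed to all of them in one loop. A sketch, assuming passwordless SSH as root and the install path from this post (not runnable as-is outside such a cluster):

```shell
# Run the ownership fix on every datanode listed in conf/slaves.
# Assumes: root SSH access to each host, and that hadoop:hadoop and
# /usr/local/hadoop-0.20.2/ match this post's setup.
for host in $(cat /usr/local/hadoop/conf/slaves); do
    ssh root@"$host" 'chown -R hadoop:hadoop /usr/local/hadoop-0.20.2/'
done
```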


2.

FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-1257313099-10.10.208.38-1394679083528 (storage id DS-743638901-127.0.0.1-50010-1394616048958) service to Linux-hadoop-38/10.10.208.38:9000
java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop/tmp/dfs/data: namenode clusterID = CID-8e201022-6faa-440a-b61c-290e4ccfb006; datanode clusterID = clustername

After HDFS is reformatted, the VERSION file under the NameNode's storage path gets a new clusterID, but the DataNode's VERSION file is not updated, so the two no longer match.
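The mismatch can be confirmed by comparing the `clusterID` line of the two VERSION files (under `<dfs.namenode.name.dir>/current/VERSION` and `<dfs.datanode.data.dir>/current/VERSION`). A minimal sketch; mock files under `/tmp/dfs-check` stand in for the real ones so the comparison logic is visible:

```shell
# Mock VERSION files standing in for the real ones on the namenode
# and datanode (the values are the ones from the error above).
mkdir -p /tmp/dfs-check/name/current /tmp/dfs-check/data/current
printf 'clusterID=CID-8e201022-6faa-440a-b61c-290e4ccfb006\n' > /tmp/dfs-check/name/current/VERSION
printf 'clusterID=clustername\n' > /tmp/dfs-check/data/current/VERSION

# Extract the clusterID from each side and compare.
nn_id=$(grep '^clusterID=' /tmp/dfs-check/name/current/VERSION | cut -d= -f2)
dn_id=$(grep '^clusterID=' /tmp/dfs-check/data/current/VERSION | cut -d= -f2)

if [ "$nn_id" != "$dn_id" ]; then
    echo "MISMATCH: namenode=$nn_id datanode=$dn_id"
fi
```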

Solutions:

1. On every NameNode and DataNode, delete the HDFS storage directories set in the configuration files,

then format HDFS again.
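On a cluster, solution 1 amounts to the following. This is destructive (it wipes all HDFS data); the paths are the ones from this post, so adjust them to your own `dfs.namenode.name.dir` / `dfs.datanode.data.dir` settings:

```shell
# Stop HDFS first (script names vary slightly by Hadoop version).
stop-dfs.sh

# On every namenode and datanode: remove the configured storage dirs.
# WARNING: this deletes all HDFS data on the node.
rm -rf /usr/local/hadoop/tmp/dfs/name /usr/local/hadoop/tmp/dfs/data

# Reformat (Hadoop 2.x command; 0.20.x used "hadoop namenode -format").
hdfs namenode -format

start-dfs.sh
```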

2. Copy the clusterID from the NameNode's VERSION file into the DataNode's VERSION file,

then restart HDFS.
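Solution 2 can be scripted with grep and sed. A sketch using mock files under `/tmp/dfs-fix` in place of the real storage dirs; on a live cluster, point these at `<dfs.namenode.name.dir>/current/VERSION` and each DataNode's `<dfs.datanode.data.dir>/current/VERSION`, with HDFS stopped:

```shell
# Mock VERSION files; replace with the real current/VERSION paths on your nodes.
mkdir -p /tmp/dfs-fix/name/current /tmp/dfs-fix/data/current
echo 'clusterID=CID-8e201022-6faa-440a-b61c-290e4ccfb006' > /tmp/dfs-fix/name/current/VERSION
echo 'clusterID=clustername' > /tmp/dfs-fix/data/current/VERSION

# Read the authoritative clusterID from the namenode's VERSION file.
nn_id=$(grep '^clusterID=' /tmp/dfs-fix/name/current/VERSION | cut -d= -f2)

# Overwrite the datanode's clusterID line in place (GNU sed).
sed -i "s/^clusterID=.*/clusterID=$nn_id/" /tmp/dfs-fix/data/current/VERSION

cat /tmp/dfs-fix/data/current/VERSION
```

After the edit, restarting HDFS lets the DataNode register against the reformatted NameNode without wiping its block data.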

Original article: https://www.cnblogs.com/jchubby/p/4429703.html