Hadoop NameNode Fails to Start

   I set up a pseudo-distributed environment in a virtual machine. Everything worked on the first day, but after every reboot the cluster no longer came up cleanly: running jps after start-dfs.sh showed that the NameNode had not started. Following the hint, I opened the NameNode startup log under the logs directory and found the exception below.

hadoop@cgy-VirtualBox:~$ jps
5096 ResourceManager
5227 NodeManager
5559 Jps
4742 DataNode
4922 SecondaryNameNode
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop/hdfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
2014-12-27 21:48:50,921 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2014-12-27 21:48:50,927 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false. Rechecking.
2014-12-27 21:48:50,927 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false
2014-12-27 21:48:50,927 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2014-12-27 21:48:50,928 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2014-12-27 21:48:50,928 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2014-12-27 21:48:50,928 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop/hdfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)

    

Directory /usr/local/hadoop/hdfs/name

    The files under this directory are temporary files that get deleted periodically, so the bug seemed to be surfacing. To confirm, I rebooted the machine. Before rebooting I checked the tmp directory and verified that all the directories that should exist after formatting the NameNode were present; after rebooting, they had all been deleted. Running start-dfs.sh once more recreated some directories under tmp, but the dfs/name directory was still missing: start-dfs.sh only creates part of the directories and files, while dfs/name is created by hadoop namenode -format. The problem was now clear.
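The startup check behind the InconsistentFSStateException above can be sketched roughly like this (a simplified illustration, not Hadoop's actual FSImage code):

```python
import os
import tempfile

def check_namenode_storage(name_dir):
    """Simplified sketch of the NameNode startup check: the storage
    directory must exist and contain a formatted layout, i.e. a
    'current' subdirectory with a VERSION file, which is only written
    by 'hadoop namenode -format'."""
    if not os.path.isdir(name_dir):
        return "storage directory does not exist or is not accessible"
    if not os.path.isfile(os.path.join(name_dir, "current", "VERSION")):
        return "storage directory is not formatted"
    return "ok"

# Demo: an empty scratch area behaves like /tmp after a reboot.
with tempfile.TemporaryDirectory() as tmp:
    name_dir = os.path.join(tmp, "dfs", "name")
    print(check_namenode_storage(name_dir))  # directory was wiped -> inconsistent state
    os.makedirs(os.path.join(name_dir, "current"))
    open(os.path.join(name_dir, "current", "VERSION"), "w").close()
    print(check_namenode_storage(name_dir))  # formatted layout present -> ok
```

This is why restarting the daemons alone cannot fix the error: start-dfs.sh never recreates the formatted layout, so the check keeps failing until the metadata directory survives reboots.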

The fix is simple. The locations of these directories are all derived from hadoop.tmp.dir, so just configure the NameNode and DataNode storage directories explicitly in hdfs-site.xml, pointing them at a persistent location:

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/hadoop-2.0.2-alpha/dfs/name</value>
</property>
 
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop-2.0.2-alpha/dfs/data</value>
</property>
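Alternatively (a variation on the same idea, not from the original steps), hadoop.tmp.dir itself can be pointed at a persistent directory in core-site.xml, which moves every path derived from it out of the volatile temp area at once. The path below is an example and must exist and be writable by the hadoop user:

```xml
<!-- core-site.xml: example persistent location for hadoop.tmp.dir -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop/tmp</value>
</property>
```

Whichever property you change, remember that dfs/name is only created by hadoop namenode -format, so a fresh format is needed before the NameNode will start from the new location.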
Original source: https://www.cnblogs.com/birdhack/p/4194868.html