Hadoop Learning Notes: Exceptions

This post is a record of problems I ran into while learning Hadoop, together with how I resolved them.

1.

copyFromLocal: java.io.IOException: File /user/hadoop/slaves could only be replicated to 0 nodes, instead of 1
14/06/09 13:45:00 ERROR hdfs.DFSClient: Exception closing file /user/hadoop/slaves : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/slaves could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

This exception appeared when I uploaded a file while running Hadoop in pseudo-distributed mode. I first checked the dfs.replication value in my hdfs-site.xml and found it was configured correctly ("could only be replicated to 0 nodes" generally means the NameNode cannot find a single live DataNode to place the block on, not that replication is misconfigured).

Solution: reformat the file system:

hadoop namenode -format
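Formatting destroys everything stored in HDFS, so it is only a reasonable fix on a fresh or throwaway cluster. The fuller sequence below is a sketch: it assumes the default hadoop.tmp.dir of /tmp/hadoop-${USER}, so adjust the path if dfs.name.dir or dfs.data.dir point elsewhere. Removing the old data directory also avoids the namespaceID mismatch that stops DataNodes from starting after a reformat.

stop-all.sh                   # stop all HDFS and MapReduce daemons
rm -rf /tmp/hadoop-${USER}    # assumption: default hadoop.tmp.dir holds the name/data dirs
hadoop namenode -format       # recreate the file system metadata (erases all HDFS data)
start-all.sh                  # bring the daemons back up
jps                           # confirm NameNode and DataNode are both running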

2.

[hadoop@localhost logs]$ hadoop fs -ls
ls: Cannot access .: No such file or directory.

This error appeared when listing the contents of the file system. With no path argument, hadoop fs -ls lists the current user's HDFS home directory (/user/<username>), and that directory does not exist yet because nothing has been written to it.

Solution: create a directory or upload a file; either one brings the home directory into existence. For example:
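The directory name and local file below are just illustrative; any names will do:

hadoop fs -mkdir input              # creates /user/<username>/input
hadoop fs -put conf/slaves input    # or upload any local file
hadoop fs -ls                       # the home directory now exists and lists cleanly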

3.

Exception in thread "main" org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=dvqfq6prcjdsh4phadoop, access=WRITE, inode="hadoop":hadoop:supergroup:rwxr-xr-x
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
    at java.lang.reflect.Constructor.newInstance(Unknown Source)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:96)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:58)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:2710)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:492)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:195)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:484)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:384)
    at com.hadoop.hdfs.test.FileCopyWithProgess.main(FileCopyWithProgess.java:27)

This is a permission error that occurs when writing a file to HDFS from Eclipse on a local machine: the client runs as the local OS user (user=dvqfq6prcjdsh4phadoop in the trace above), which is not the hadoop user that owns the target directory (hadoop:supergroup:rwxr-xr-x), so HDFS denies the WRITE.

Solution: add the following property to hdfs-site.xml:

<property>
  <name>dfs.permissions</name>
  <value>false</value>
  <description>
    If "true", enable permission checking in HDFS.
    If "false", permission checking is turned off,
    but all other behavior is unchanged.
    Switching from one parameter value to the other does not change the mode,
    owner or group of files or directories.
  </description>
</property>
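Restart the NameNode for the change to take effect. Disabling dfs.permissions switches off permission checking for the whole file system, so for anything beyond a single-user test setup a gentler alternative is to open up only the target directory. A sketch, assuming the directory being written is /user/hadoop (as the trace suggests) and the commands are run as the hadoop superuser:

hadoop fs -chmod -R 777 /user/hadoop                      # let any user write under /user/hadoop
hadoop fs -chown -R dvqfq6prcjdsh4phadoop /user/hadoop    # or hand ownership to the client user from the trace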