Hadoop error

1. Error: DataXceiver error processing WRITE_BLOCK operation
2014-05-06 15:21:30,378 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: hadoop-datanode1:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.1.193:34147 dest: /192.168.1.191:50010
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:435)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:693)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:569)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
        at java.lang.Thread.run(Thread.java:722)

Cause: the file operation outlived its lease; in effect, the file was deleted while the data stream operation was still in progress.
Solution:
Modify hdfs-site.xml to raise the DataNode's cap on concurrent transfer (xceiver) threads (this applies to 2.x; in 1.x the property name is dfs.datanode.max.xcievers):
<property>
        <name>dfs.datanode.max.transfer.threads</name>
        <value>8192</value>
</property>
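For a 1.x cluster, the equivalent entry uses the older (deliberately misspelled) property name noted above. A minimal sketch, assuming you keep the same 8192 limit:
<property>
        <name>dfs.datanode.max.xcievers</name>
        <value>8192</value>
</property>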
Copy the updated hdfs-site.xml to every DataNode and restart the DataNode process on each; a sketch of this step follows.
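A minimal shell sketch of that rollout. It assumes passwordless SSH from the node you edit on, that $HADOOP_HOME points at a 2.x install on every host, and hypothetical hostnames hadoop-datanode1..3 (only hadoop-datanode1 appears in the log above; adjust the list to your cluster):
for host in hadoop-datanode1 hadoop-datanode2 hadoop-datanode3; do
        # push the updated config to each DataNode (paths are assumptions)
        scp "$HADOOP_HOME/etc/hadoop/hdfs-site.xml" "$host:$HADOOP_HOME/etc/hadoop/"
        # restart the DataNode so the new thread limit takes effect
        ssh "$host" "$HADOOP_HOME/sbin/hadoop-daemon.sh stop datanode; $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode"
done
Restarting one DataNode at a time, as this loop does, keeps the rest of the cluster serving blocks during the rollout.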

Original source: https://www.cnblogs.com/chengxin1982/p/3939682.html