Hadoop cluster web UI shows only one node

The Hadoop cluster's web management UI shows only one node, even though the DataNode process is running on every machine.

DataNode log:

java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:714)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:685)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788)
at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:410)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1550)
at org.apache.hadoop.ipc.Client.call(Client.java:1381)
at org.apache.hadoop.ipc.Client.call(Client.java:1345)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy15.versionRequest(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.versionRequest(DatanodeProtocolClientSideTranslatorPB.java:274)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.retrieveNamespaceInfo(BPServiceActor.java:215)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:261)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:750)
at java.lang.Thread.run(Thread.java:748)
2020-07-17 15:39:14,872 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hadoop102/192.168.1.122:9000
2020-07-17 15:39:20,876 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop102/192.168.1.122:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-07-17 15:39:21,878 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop102/192.168.1.122:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-07-17 15:39:22,880 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop102/192.168.1.122:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-07-17 15:39:23,882 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop102/192.168.1.122:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-07-17 15:39:24,884 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop102/192.168.1.122:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-07-17 15:39:25,886 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop102/192.168.1.122:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-07-17 15:39:26,888 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop102/192.168.1.122:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-07-17 15:39:27,890 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop102/192.168.1.122:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-07-17 15:39:28,892 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop102/192.168.1.122:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-07-17 15:39:29,894 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop102/192.168.1.122:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-07-17 15:39:29,896 WARN org.apache.hadoop.ipc.Client: Failed to connect to server: hadoop102/192.168.1.122:9000: retries get failed due to exceeded maximum allowed retries number: 10
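The stack trace and retry loop show every DataNode failing to reach the NameNode RPC endpoint hadoop102:9000. That address normally comes from fs.defaultFS in core-site.xml; the post does not show the file, so the fragment below is a hypothetical, representative example:

```xml
<!-- core-site.xml (hypothetical fragment): the NameNode address that
     DataNodes dial. The hostname here is resolved through /etc/hosts,
     which is why a bad hosts entry can break the whole cluster. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop102:9000</value>
</property>
```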

The root cause is an error in /etc/hosts on each machine: because the resolver returns the first matching entry, hadoop102 resolves to 127.0.1.1 on the NameNode host itself, so the NameNode's RPC server binds to the loopback interface and connections from the other machines are refused.

The file originally read:

127.0.0.1 localhost

127.0.1.1 hadoop102

192.168.x.x hadoop102
192.168.x.x hadoop103
192.168.x.x hadoop104
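The problem with this file can be reproduced without Hadoop: the first entry whose hostname matches wins, so hadoop102 maps to 127.0.1.1 rather than the routable address. A minimal sketch, using awk to mimic first-match lookup on a copy of the broken file (192.168.1.122 is taken from the log above; substitute your own address):

```shell
# Write a copy of the broken hosts file.
cat > /tmp/hosts.broken <<'EOF'
127.0.0.1 localhost
127.0.1.1 hadoop102
192.168.1.122 hadoop102
EOF

# First matching entry wins, just like the real resolver:
awk '$2 == "hadoop102" { print $1; exit }' /tmp/hosts.broken
# → 127.0.1.1
```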

Change it to the following (i.e., delete the 127.0.1.1 line):

127.0.0.1 localhost

192.168.x.x hadoop102
192.168.x.x hadoop103
192.168.x.x hadoop104
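With the 127.0.1.1 line removed, the same first-match lookup now yields the routable address, so remote DataNodes can reach the NameNode. A sketch of the check, again using the hadoop102 address from the log:

```shell
# Write a copy of the corrected hosts file (hadoop102 line only).
cat > /tmp/hosts.fixed <<'EOF'
127.0.0.1 localhost
192.168.1.122 hadoop102
EOF

# hadoop102 now resolves to its routable address:
awk '$2 == "hadoop102" { print $1; exit }' /tmp/hosts.fixed
# → 192.168.1.122

# Then restart HDFS and re-check (cluster commands, not run here):
#   stop-dfs.sh && start-dfs.sh
#   hdfs dfsadmin -report   # should now list every DataNode
```

After the restart, all nodes should appear in the web UI.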

Original article: https://www.cnblogs.com/qiu-hua/p/13334271.html