Fixing "Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep"

14/03/26 23:10:04 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
14/03/26 23:10:05 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
14/03/26 23:10:06 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
14/03/26 23:10:07 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

This message appeared, and kept retrying, while using Sqoop to export a Hive table to MySQL. Port 10020 is the default port of the MapReduce JobHistory Server, which the job client contacts after the export's MapReduce job finishes. Based on information found online, the problem is caused by an imprecisely specified HDFS path in the export command.
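Before editing the command, it can help to confirm which filesystem URI the client actually resolves. A minimal check, assuming the Hadoop client configuration is present on the machine running Sqoop (hdfs getconf is a standard Hadoop 2 tool):

# Print the default filesystem URI; on an HA cluster this is the logical
# nameservice (e.g. hdfs://cluster1) rather than a single NameNode host
hdfs getconf -confKey fs.defaultFS

# List the configured HA nameservices, if any
hdfs getconf -confKey dfs.nameservices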

The failing command

sqoop export --connect jdbc:mysql://c6h2:3306/log --username root --password 123 --table dailylog --fields-terminated-by '\001' --export-dir '/user/hive/warehouse/weblog_2013_05_30'

The fix

sqoop export --connect jdbc:mysql://c6h2:3306/log --username root --password 123 --table dailylog --fields-terminated-by '\001' --export-dir 'hdfs://cluster1/user/hive/warehouse/weblog_2013_05_30'

The working version prepends the hdfs:// scheme and the cluster name to the export directory. My environment is a Hadoop 2 HA cluster, so the authority is the logical nameservice (cluster1) rather than a single NameNode host.
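As a sanity check (a sketch reusing the path from this post), you can confirm that the fully qualified directory resolves through the nameservice and contains the table's data files before rerunning the export:

# Should list the Hive table's data files if the nameservice and path are correct
hdfs dfs -ls hdfs://cluster1/user/hive/warehouse/weblog_2013_05_30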

Original post: https://www.cnblogs.com/luguoyuanf/p/3627279.html