Connecting Spark to a Hive database

When Hive executes a query SQL, it fails with java.lang.IllegalArgumentException: Wrong FS: hdfs://node1:9000/user/hive/warehouse/test1.db/t1, expected: hdfs://cluster1
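The exception comes from Hadoop's path check: the scheme and authority of the table location stored in the metastore must match the filesystem named by fs.defaultFS. A rough Python sketch of that idea (check_path is a hypothetical helper for illustration, not actual Hadoop code):

```python
from urllib.parse import urlparse

def check_path(path, default_fs):
    """Loosely mimic Hadoop's FileSystem.checkPath: a fully-qualified
    path must have the same scheme and authority as fs.defaultFS."""
    p, d = urlparse(path), urlparse(default_fs)
    if p.scheme and (p.scheme, p.netloc) != (d.scheme, d.netloc):
        raise ValueError(f"Wrong FS: {path}, expected: {default_fs}")
    return path

# The stale warehouse path fails against the HA nameservice:
try:
    check_path("hdfs://node1:9000/user/hive/warehouse/test1.db/t1",
               "hdfs://cluster1")
except ValueError as e:
    print(e)
    # -> Wrong FS: hdfs://node1:9000/user/hive/warehouse/test1.db/t1, expected: hdfs://cluster1
```

This is why the error appears even though the path exists: the data is reachable, but its recorded URI names the wrong filesystem.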

The cause: after Hadoop was converted from an ordinary cluster to a high-availability (HA) cluster, the HDFS storage path of the warehouse in the Hive configuration was never updated.
Fix it by editing the value of hive.metastore.warehouse.dir in hive-site.xml:

Change the old hdfs://k200:9000/user/hive/warehouse to hdfs://k131/user/hive/warehouse (note: an HA nameservice URI uses the logical name without a port).

(Here hdfs://cluster1 is the value of fs.defaultFS in Hadoop's core-site.xml.)
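After the change, the warehouse property in hive-site.xml should point at the HA nameservice. A sketch following the k131 example above (adjust the value to your own fs.defaultFS):

```xml
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>hdfs://k131/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
</property>
```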

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://k131:3306/metastore?createDatabaseIfNotExist=true</value>
        <description>JDBC connect string for a JDBC metastore</description>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
        <description>username to use against metastore database</description>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>root</value>
        <description>password to use against metastore database</description>
    </property>

    <property>
        <name>hive.cli.print.header</name>
        <value>true</value>
    </property>

    <property>
        <name>hive.cli.print.current.db</name>
        <value>true</value>
    </property>

    <property>
        <name>hive.exec.mode.local.auto</name>
        <value>true</value>
    </property>

    <property>
        <name>hive.zookeeper.quorum</name>
        <value>k131</value>
        <description>The list of ZooKeeper servers to talk to. This is only needed for read/write locks.</description>
    </property>

    <property>
        <name>hive.zookeeper.client.port</name>
        <value>2181</value>
        <description>The port of ZooKeeper servers to talk to. This is only needed for read/write locks.</description>
    </property>
</configuration>
hive-site.xml

Spark cannot see the original contents of the Hive tables; only newly created tables work:

hive (default)> select * from emp;
FAILED: SemanticException Unable to determine if hdfs://k200:9000/user/hive/warehouse/emp is encrypted: java.lang.IllegalArgumentException: Wrong FS: hdfs://k200:9000/user/hive/warehouse/emp, expected: hdfs://k131:9000
hive (default)>
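This happens because tables created before the switch still carry the old hdfs://k200:9000 URI in their metastore records, so only freshly created tables resolve correctly. Rather than recreating every table, the stored locations can be rewritten with Hive's metastore tool. A sketch (verify the old and new URIs against your own metastore before running):

```shell
# Dry run: show which FS root the metastore records currently reference
hive --service metatool -listFSRoot

# Rewrite the FS root in place: -updateLocation <new-loc> <old-loc>
hive --service metatool -updateLocation hdfs://k131 hdfs://k200:9000
```

After the update, restart the metastore service and re-run the query in both Hive and Spark.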

Original post: https://www.cnblogs.com/Vowzhou/p/10882160.html