Spark on Hive: configuring Hive's metastore to use MySQL

Add the following properties to hive-site.xml:

<property>
  <name>hive.metastore.uris</name>
  <value></value>
  <description>Thrift URI for the remote metastore. Used by the metastore client to connect to a remote metastore; left empty here so the metastore runs locally.</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/mysql?useUnicode=true&amp;characterEncoding=UTF-8&amp;createDatabaseIfNotExist=true</value>
  <description>JDBC connection URL for the metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username to use against the metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>yangsiyi</value>
  <description>Password to use against the metastore database</description>
</property>
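For Spark to pick up this metastore configuration, hive-site.xml is typically copied into Spark's conf directory, and the MySQL JDBC driver named above (com.mysql.jdbc.Driver) must be on Spark's classpath. A minimal sketch, assuming $HIVE_HOME and $SPARK_HOME point at the two installations; the connector jar name and version are illustrative:

```shell
# Make the Hive metastore config visible to Spark (paths are assumptions)
cp "$HIVE_HOME/conf/hive-site.xml" "$SPARK_HOME/conf/"

# Put the MySQL JDBC driver on Spark's classpath; the jar version here is
# illustrative, and on Spark 1.x the directory is lib/ rather than jars/
cp mysql-connector-java-5.1.38.jar "$SPARK_HOME/jars/"
```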

After making these changes, start the Thrift server in Spark, then connect with beeline from Spark's bin directory.
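These two steps can be sketched as shell commands; the hostname is an assumption, and 10000 is the Thrift server's default port:

```shell
# Start the Spark Thrift JDBC/ODBC server
"$SPARK_HOME/sbin/start-thriftserver.sh"

# Connect to it with beeline from Spark's bin directory
"$SPARK_HOME/bin/beeline" -u jdbc:hive2://localhost:10000/default -n root
```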

Alternatively, put the command in a .sh file so it can be run directly each time.

For example, the .sh file could contain: ./beeline -u jdbc:hive2://yangsy132:10000/default -n root -p yangsiyi
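A minimal wrapper script along these lines, using the connection details from the command above ($SPARK_HOME and the script name are assumptions):

```shell
#!/usr/bin/env bash
# connect-beeline.sh — hypothetical wrapper around the beeline command above;
# host, port, user, and password are taken from the original command
cd "$SPARK_HOME/bin" || exit 1
./beeline -u jdbc:hive2://yangsy132:10000/default -n root -p yangsiyi
```

Make it executable once with `chmod +x connect-beeline.sh`, then run it directly each time.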

Original article: https://www.cnblogs.com/yangsy0915/p/4906071.html