Installing hive-0.11.0

I. Installation

1. Download and install Hive

hive-0.11.0.tar.gz (stable release)

Directory: /data

 

tar -zxvf hive-0.11.0.tar.gz
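After extracting, a minimal sketch that makes the install reachable at the /usr/local/hive path used in steps 4 and 5 (the symlink is an assumption, not part of the original steps):

# assumed: expose the unpacked directory at the path used in later steps
ln -s /data/hive-0.11.0 /usr/local/hive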

 

2. Configuration

 

Copy each of the template files:

 

cd /opt/hive-0.11.0/conf

cp hive-default.xml.template hive-site.xml
cp hive-env.sh.template hive-env.sh
cp hive-log4j.properties.template hive-log4j.properties
cp hive-exec-log4j.properties.template hive-exec-log4j.properties

3. Edit hive-site.xml

<property>
 
 <name>javax.jdo.option.ConnectionURL</name>
 
 <value>jdbc:mysql://192.168.0.6:3306/hive?createDatabaseIfNotExist=true</value>
 
 <description>JDBC connect string for a JDBC metastore</description>
 
</property>
 
 <property>
 
 <name>javax.jdo.option.ConnectionDriverName</name>
 
 <value>com.mysql.jdbc.Driver</value>
 
 <description>Driver class name for a JDBC metastore</description>
 
</property>
 
<property>
 
 <name>javax.jdo.option.ConnectionUserName</name>
 
 <value>hive</value>
 
 <description>username to use against metastore database</description>
 
</property>
 
<property>
 
 <name>javax.jdo.option.ConnectionPassword</name>
 
 <value>hive</value>
 
 <description>password to use against metastore database</description>
 
</property>
 
<property>
 
 <name>hive.metastore.schema.verification</name>
 
 <value>false</value>
 
 <description>


 Enforce metastore schema version consistency.

 True: Verify that the version information stored in the metastore matches the one from the Hive jars. Also disable automatic
       schema migration attempts. Users are required to manually migrate the schema after a Hive upgrade, which ensures
       proper metastore schema migration. (Default)
 False: Warn if the version information stored in the metastore doesn't match the one from the Hive jars.

 </description>

</property>
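The three connection properties above assume a MySQL server at 192.168.0.6 with a database named hive and a hive/hive account. A minimal sketch of preparing them on the MySQL side; the '%' host wildcard and the blanket GRANT are assumptions, tighten them to your environment (and since the URL carries createDatabaseIfNotExist=true, the explicit CREATE DATABASE is optional):

mysql -u root -p

-- run inside the mysql client; assumed grant scope
CREATE DATABASE IF NOT EXISTS hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;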

4. Edit hive-env.sh

# Set HADOOP_HOME to point to a specific hadoop install directory
export HADOOP_HOME=/usr/local/hadoop

# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/usr/local/hive/conf

5. Install the MySQL JDBC driver

hadoop@james-ubuntu32:~/tmp/tools$ cp mysql-connector-java-5.1.28-bin.jar /usr/local/hive/lib

 

6. Verify Hive

    1. Start Hive:

bin/hive

nohup hive --service hiveserver &

    2. Test with some SQL:

show tables;

create table shark_test01(id int, name string);

select * from shark_test01;

exit;

    Inspect the files Hive created: hadoop fs -ls -R /user/hive
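To go one step beyond the empty-table check, here is a minimal sketch that loads and queries a couple of rows; the sample file, the shark_test02 table name and the tab delimiter are assumptions made for illustration:

# on the shell: write a small tab-separated sample file (assumed data)
echo -e "1\tjames\n2\ttom" > /tmp/shark_test02.txt

-- in the hive CLI
create table shark_test02(id int, name string) row format delimited fields terminated by '\t';
load data local inpath '/tmp/shark_test02.txt' overwrite into table shark_test02;
select * from shark_test02;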

 

7. Configuration options in detail

http://blog.csdn.net/w13770269691/article/details/17232947

 

8. Errors and solutions

Error 3

MetaException(message:file:/user/hive/warehouse/xxxx is not a directory or unable to create one)

Solution:

Add HADOOP_CONF_DIR to the CLASSPATH.
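A minimal sketch of that fix in hive-env.sh; the Hadoop configuration path is an assumption, point it at the directory that actually holds core-site.xml and hdfs-site.xml:

# assumed location of the Hadoop client configuration
export HADOOP_CONF_DIR=/usr/local/hadoop/conf
export CLASSPATH=$CLASSPATH:$HADOOP_CONF_DIR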

 

Error 2

Error in metadata: MetaException(message: Got exception: org.apache.hadoop.hive.metastore.api.MetaException javax.jdo.JDODataStoreException: An exception was thrown while adding/validating class(es): Specified key was too long; max key length is 767 bytes

Solution:

Just change the character set of the MySQL database that backs the Hive metastore:

alter database dbname character set latin1;
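With the ConnectionURL from step 3 the metastore database is named hive, so the concrete statement becomes:

alter database hive character set latin1;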

 

Error 1

java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

Solution:

The MySQL JDBC driver must be on the CLASSPATH.
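Copying the connector jar into the lib directory, as in step 5, normally takes care of this; a quick check that it is actually there (same path as step 5):

ls /usr/local/hive/lib/mysql-connector-java-*.jar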

 

Original article: https://www.cnblogs.com/jamesf/p/4751609.html