Hive Installation

 
1 The usual three steps: upload the Hive tarball, extract it, and configure the environment variables (a sketch follows)
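A minimal sketch of those three steps, assuming the tarball was uploaded to /usr/local and using x.y.z as a placeholder for whatever version you downloaded:

cd /usr/local
tar -zxvf apache-hive-x.y.z-bin.tar.gz    # extract the uploaded tarball
mv apache-hive-x.y.z-bin hive             # rename to the /usr/local/hive path used below
echo 'export HIVE_HOME=/usr/local/hive' >> /etc/profile
echo 'export PATH=$PATH:$HIVE_HOME/bin' >> /etc/profile
source /etc/profile                       # make hive available in this shell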
 
2 Start Hadoop completely: start-all.sh
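To confirm the cluster is fully up, list the running Java daemons (the expected set below assumes a single-node setup; on a real cluster the processes are spread across machines):

jps
# should show NameNode, DataNode, SecondaryNameNode,
# ResourceManager and NodeManager among the processes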
 
3 Confirm that MySQL is up and running
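Two quick checks (the service name may be mysqld or mysql depending on your distribution):

systemctl status mysqld          # or: service mysql status
mysqladmin -uroot -proot ping    # should answer: mysqld is alive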
 
4 Edit the Hive configuration files
cd /usr/local/hive/conf
 
For Hive 2.x:
cp hive-env.sh.template hive-env.sh
cp hive-default.xml.template hive-site.xml
cp hive-log4j2.properties.template hive-log4j2.properties
cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties
 
For Hive 1.x:
cp hive-env.sh.template hive-env.sh
cp hive-default.xml.template hive-site.xml
cp hive-log4j.properties.template hive-log4j.properties
cp hive-exec-log4j.properties.template hive-exec-log4j.properties
 
5 Edit hive-env.sh and add the environment variables
export JAVA_HOME=/usr/java  
export HADOOP_HOME=/usr/local/hadoop 
export HIVE_HOME=/usr/local/hive   
export HIVE_CONF_DIR=/usr/local/hive/conf
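If your JDK or Hadoop lives elsewhere, adjust these paths to match; a quick sanity check that each one points at a real installation:

ls /usr/java/bin/java /usr/local/hadoop/bin/hadoop /usr/local/hive/bin/hive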
 
6 Edit hive-site.xml
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
<!-- adjust host and port if MySQL is not on this machine -->
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>root</value>
</property>
 
<property>
    <name>hive.exec.scratchdir</name>
    <value>/var/hive</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>
 
<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/var/hive/temp/${user.name}</value>
    <description>Local scratch space for Hive jobs</description>
</property>
 
<property>
    <name>hive.downloaded.resources.dir</name>
    <value>/var/hive/resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
</property>
 
<property>
    <name>hive.scratch.dir.permission</name>
    <value>777</value>
    <description>The permission for the user specific scratch directories that get created.</description>
</property>
 
<property>
    <name>hive.querylog.location</name>
    <value>/var/hive/querylog</value>
    <description>Location of Hive run time structured log file</description>
</property>
 
<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/var/hive/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
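hive-site.xml is long and easy to break while editing; if xmllint happens to be installed on your machine, it will catch malformed XML before Hive does:

xmllint --noout /usr/local/hive/conf/hive-site.xml    # silent when the file is well-formed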
 
7 Create the Hive-related objects in MySQL
mysql -uroot -proot
 
CREATE USER 'hive' IDENTIFIED BY 'hive';
 
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' IDENTIFIED BY 'hive' WITH GRANT OPTION;  
 
FLUSH PRIVILEGES;
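Note that GRANT ... IDENTIFIED BY is MySQL 5.x syntax; on MySQL 8.x the password has to be set when the user is created:

CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;

If you want Hive to connect as this hive user instead of root, change ConnectionUserName and ConnectionPassword in hive-site.xml to hive accordingly.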
 
8 Upload the MySQL connector JAR and the hadoop-common JAR into Hive's lib directory
Upload both JARs to /usr/local/hive/lib
Upload the hive-site.xml file to /usr/local/hive/conf
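A quick check that the connector actually landed where Hive looks for it:

ls /usr/local/hive/lib | grep -i mysql    # should list the mysql-connector JAR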
 
9 Create the directories Hive needs at runtime, locally and on HDFS
 
mkdir -p /var/hive/temp /var/hive/resources /var/hive/querylog /var/hive/operation_logs
 
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -chmod g+w /user/hive/warehouse
 
hdfs dfs -mkdir /tmp
hdfs dfs -chmod 777 /tmp
 
If your NameNode is in safe mode, leave it first:
hdfs dfsadmin -safemode leave
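You can check the current state before forcing it off:

hdfs dfsadmin -safemode get    # prints Safe mode is ON / Safe mode is OFF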
 
10 Initialize Hive
 
schematool -initSchema -dbType mysql
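If the command succeeds, the metastore tables now exist in MySQL; you can confirm with:

schematool -info -dbType mysql    # reports the metastore schema version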
 
If initialization fails, find the cause first, then run the following in MySQL and re-run schematool:
drop database hive;
 
A common culprit is the SLF4J multiple-bindings warning below, which appears because Hive and Hadoop each ship their own binding:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Rename Hive's binding out of the way so only Hadoop's remains:
 
mv /usr/local/hive/lib/log4j-slf4j-impl-2.6.2.jar /usr/local/hive/lib/log4j-slf4j-impl-2.6.2.jar.bak
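With the metastore initialized, a quick smoke test confirms the whole chain (Hive, HDFS, and the MySQL metastore) is working:

hive -e "show databases;"    # should print at least the default database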
Original article: https://www.cnblogs.com/dasiji/p/11245469.html