Hive Installation (Part 3)

1. Create an HBase-backed table from Hive:

hive> CREATE TABLE hbase_table_1(key int, value string)    
    > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'  
    > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")   
    > TBLPROPERTIES ("hbase.table.name" = "xyz");
OK
Time taken: 13.808 seconds



hbase.table.name specifies the name of the underlying table in HBase.
hbase.columns.mapping maps each Hive column, in order, to an HBase target: the special entry `:key` binds the first Hive column to the HBase row key, and `cf1:val` binds the second column to qualifier `val` in column family `cf1`.
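The mapping string is just a comma-separated list with one entry per Hive column. A minimal Python sketch (not Hive source code, purely illustrative) of how such a string pairs Hive columns with HBase targets:

```python
# Sketch of how "hbase.columns.mapping" pairs Hive columns with HBase
# targets. ":key" binds a Hive column to the HBase row key; every other
# entry is a "family:qualifier" cell address.

def parse_columns_mapping(mapping, hive_columns):
    """Pair each Hive column with its HBase target, in declaration order."""
    targets = mapping.split(",")
    if len(targets) != len(hive_columns):
        raise ValueError("mapping must list one entry per Hive column")
    result = {}
    for col, target in zip(hive_columns, targets):
        if target == ":key":
            result[col] = ("row key", None)
        else:
            family, qualifier = target.split(":", 1)
            result[col] = (family, qualifier)
    return result

# For hbase_table_1 above: key -> row key, value -> cf1:val
print(parse_columns_mapping(":key,cf1:val", ["key", "value"]))
# {'key': ('row key', None), 'value': ('cf1', 'val')}
```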

2. The table as seen from HBase:

[root@cluster3 log]# hbase shell
2014-11-18 10:12:37,283 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.7-hadoop2, r800c23e2207aa3f9bddb7e9514d8340bcfb89277, Wed Oct  8 15:58:11 PDT 2014

hbase(main):001:0> list
TABLE                                                                           
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hbase-0.98.7-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hadoop-2.5.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
xyz                                                                             
1 row(s) in 8.7660 seconds

=> ["xyz"]

3. Load data with SQL

Create a new Hive table:

hive> create table ccc(foo int, bar string) row format delimited fields terminated by '\t' lines terminated by '\n' stored as textfile;
OK
Time taken: 1.083 seconds

[root@cluster3 script]# vi text.txt
1       hello
2       world
  
hive> load data local inpath '/usr/local/hadoop/script/text.txt' overwrite into table ccc;
Copying data from file:/usr/local/hadoop/script/text.txt
Copying file: file:/usr/local/hadoop/script/text.txt
Loading data to table default.ccc
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted hdfs://cluster3:9000/hadoop/hive/warehouse/ccc
Table default.ccc stats: [numFiles=1, numRows=0, totalSize=16, rawDataSize=0]
OK
Time taken: 2.98 seconds
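The file loaded above is plain text in exactly the layout the ccc table declares: fields separated by tabs, rows terminated by newlines. A minimal Python sketch of producing and re-splitting such a file (the filename is illustrative, not the path from the transcript):

```python
# Write one line per row: fields joined by '\t', rows ended by '\n',
# matching the delimiters declared in the CREATE TABLE statement.
rows = [(1, "hello"), (2, "world")]

with open("text.txt", "w") as f:
    for foo, bar in rows:
        f.write(f"{foo}\t{bar}\n")

# Split it back the same way Hive's delimited SerDe would:
with open("text.txt") as f:
    parsed = [line.rstrip("\n").split("\t") for line in f]

print(parsed)  # [['1', 'hello'], ['2', 'world']]
```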



hive> select * from ccc;
OK
1       hello
2       world
Time taken: 0.562 seconds, Fetched: 2 row(s)

4. Insert into hbase_table_1 with SQL

hive> insert overwrite table hbase_table_1 select * from ccc where foo=1;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1416022192895_0002, Tracking URL = http://cluster3:8088/proxy/application_1416022192895_0002/
Kill Command = /usr/local/hadoop/hadoop-2.5.1/bin/hadoop job -kill job_1416022192895_0002
Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
2014-11-18 10:19:18,220 Stage-0 map = 0%, reduce = 0%
2014-11-18 10:19:33,039 Stage-0 map = 100%, reduce = 0%, Cumulative CPU 3.88 sec
MapReduce Total cumulative CPU time: 3 seconds 880 msec
Ended Job = job_1416022192895_0002
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 5.6 sec HDFS Read: 225 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 5 seconds 600 msec
OK
Time taken: 62.379 seconds


Querying the table now shows the row that was just inserted:
hive> select * from hbase_table_1;
OK
1 hello
Time taken: 0.226 seconds, Fetched: 1 row(s)

Back in the hbase shell, scan the loaded data:
hbase(main):002:0> scan "xyz"
ROW COLUMN+CELL
1 column=cf1:val, timestamp=1416277176266, value=hello
1 row(s) in 0.4420 seconds


Add more rows from the HBase side:
hbase(main):003:0> put 'xyz','100','cf1:val','www.gongchang.com'
0 row(s) in 0.4810 seconds

hbase(main):004:0> put 'xyz','200','cf1:val','hello,word!'
0 row(s) in 0.0380 seconds

hbase(main):005:0> scan "xyz"
ROW COLUMN+CELL
1 column=cf1:val, timestamp=1416277176266, value=hello
 100 column=cf1:val, timestamp=1416277292377, value=www.gongchang.com
200 column=cf1:val, timestamp=1416277314710, value=hello,word!
3 row(s) in 0.0280 seconds


Back in Hive, query the data again:
hive> select * from hbase_table_1;
OK
1 hello
100 www.gongchang.com
200 hello,word!
Time taken: 0.143 seconds, Fetched: 3 row(s)
The rows just written from the hbase shell are already visible in Hive: hbase_table_1 stores no data of its own, so each query scans the live HBase table.
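A toy model of this behavior (plain Python, no HBase) also explains the row order in the output: HBase sorts row keys as byte strings, so '100' and '200' sort after '1':

```python
# Toy model of an HBase-backed Hive table. The dict stands in for the
# HBase table "xyz"; hive_select() models SELECT * FROM hbase_table_1
# as a fresh scan in lexicographic row-key order.

hbase_xyz = {}  # row key -> {column: value}

def put(row, column, value):
    hbase_xyz.setdefault(row, {})[column] = value

def hive_select():
    """Each query re-scans the live table; nothing is cached in Hive."""
    return [(row, cells["cf1:val"]) for row, cells in sorted(hbase_xyz.items())]

put("1", "cf1:val", "hello")                # written via INSERT OVERWRITE
put("100", "cf1:val", "www.gongchang.com")  # written via the hbase shell
put("200", "cf1:val", "hello,word!")

print(hive_select())
# [('1', 'hello'), ('100', 'www.gongchang.com'), ('200', 'hello,word!')]
```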

Original article: https://www.cnblogs.com/huanhuanang/p/4105150.html