Spark SQL CLI configuration and usage

To use the Spark SQL CLI to query tables in Hive directly, only a few simple setup steps are needed.

1. Copy hive-site.xml into Spark's conf directory

$ cp /usr/local/hive/conf/hive-site.xml /usr/local/spark-1.5.1/conf/
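What Spark actually needs from hive-site.xml are the metastore connection properties. A minimal sketch, with placeholder host and database values (adjust to your environment):

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <!-- placeholder host/db; point this at your metastore MySQL instance -->
    <value>jdbc:mysql://metastore-host:3306/hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>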

2. Configure the Spark classpath, adding the MySQL driver

$ vim conf/spark-env.sh
export SPARK_CLASSPATH=$SPARK_CLASSPATH:$SPARK_LOCAL_DIRS/lib/mysql-connector-java-5.1.21.jar
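Note that SPARK_CLASSPATH is deprecated in Spark 1.x; an equivalent way is to pass the driver jar when launching the CLI (assuming the jar sits in the same lib directory as above):

$ bin/spark-sql --master yarn-client --driver-class-path $SPARK_LOCAL_DIRS/lib/mysql-connector-java-5.1.21.jar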

3. Start the Hive metastore

In testing, the Spark SQL CLI started successfully even without this step, likely because when hive-site.xml carries the JDBC connection settings rather than hive.metastore.uris, Spark can talk to the metastore database directly.

$ nohup hive --service metastore > metastore.log 2>&1 &
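To confirm the metastore service is up, you can check whether it is listening on its default port (9083):

$ netstat -tlnp | grep 9083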

4. Start the Spark SQL CLI

$ bin/spark-sql --master yarn-client
SET spark.sql.hive.version=1.2.1
SET spark.sql.hive.version=1.2.1
spark-sql>
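The CLI accepts the usual spark-submit options, so executor resources can be tuned at launch; a sketch with illustrative values:

$ bin/spark-sql --master yarn-client --num-executors 4 --executor-memory 2g --executor-cores 2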

That completes the setup.

Now run the same SQL in both Hive and spark-sql and compare.

Hive:

hive> select count(cookie) from dmp_data where age='21001';
.
.
.
Total MapReduce CPU Time Spent: 36 seconds 490 msec
OK
2839776
Time taken: 42.092 seconds, Fetched: 1 row(s)
hive> 

Elapsed time: 42 seconds.

Spark SQL CLI:

$ bin/spark-sql --master yarn-client
spark-sql> select count(cookie) from dmp_data where age='21001';
.
.
.
15/12/28 14:11:55 INFO scheduler.DAGScheduler: ResultStage 3 (processCmd at CliDriver.java:376) finished in 2.402 s
15/12/28 14:11:55 INFO scheduler.DAGScheduler: Job 2 finished: processCmd at CliDriver.java:376, took 22.894938 s
2839776
Time taken: 23.917 seconds, Fetched 1 row(s)

Elapsed time: 23 seconds, almost half of Hive's, largely because Spark executes the query as an in-memory DAG rather than launching MapReduce jobs.

For tasks whose logic is not very complex, you can now get them done directly in SQL.
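For example, a simple aggregation over the same dmp_data table used above can be run straight from the CLI:

spark-sql> select age, count(distinct cookie) from dmp_data group by age;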

Original post: https://www.cnblogs.com/pingjie/p/5082610.html