Data Warehouse Project 04: Environment Setup (MySQL HA + Hive)

1. MySQL Installation

Install MySQL on hadoop102 and hadoop103.

Installation commands

1. Check whether any MySQL packages are already installed on this machine, to avoid conflicts
rpm -qa | grep mysql
rpm -qa | grep MySQL

Remove any leftover packages:
sudo rpm -e --nodeps mysql-libs-5.1.73-7.el6.x86_64

2. Install MySQL 5.6
sudo rpm -ivh MySQL-client-5.6.24-1.el6.x86_64.rpm
sudo rpm -ivh MySQL-server-5.6.24-1.el6.x86_64.rpm

3. Set the root user's password
View the generated random password: sudo cat /root/.mysql_secret
Log in with the random password and set a new one:
	Start the service: sudo service mysql start
	Log in with the random password, then change it: set password=password('123456');
	
4. Allow the root user to log in from any host
① List all accounts on this machine
select host,user,password from mysql.user;

② Delete root accounts whose host is not localhost
delete from mysql.user where host <> 'localhost';

③ Change host from localhost to %

update mysql.user set host='%' where user='root';
④ Flush privileges
flush privileges;

⑤ Verify that root can log in via localhost
mysql -uroot -p123456

⑥ Verify that root can log in via the hadoop103 hostname (an external address)
 mysql -h hadoop103 -uroot -p123456
 
⑦ Check which client connections the MySQL server currently has
sudo mysqladmin processlist -uroot -p123456


5. Locations where MySQL looks for a custom configuration file (searched in this order; settings in later files override earlier ones)
/etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf
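As a quick sketch, a small shell loop can report which of these files actually exist on a host. The path list mirrors the one above; `check_cnf_paths` is a hypothetical helper name introduced here for illustration:

```shell
#!/bin/sh
# Report which of MySQL's default option files exist on this host.
# MySQL reads these in order; settings in later files override earlier ones.
check_cnf_paths() {
    for f in "$@"; do
        if [ -f "$f" ]; then
            echo "found:   $f"
        else
            echo "missing: $f"
        fi
    done
}

check_cnf_paths /etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf "$HOME/.my.cnf"
```

To see the search order your particular build actually uses, `mysql --help | grep -A 1 "Default options"` prints it directly.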

  Installation log

[hadoop@hadoop103 ~]$ rpm -qa | grep mysql
mysql-libs-5.1.73-8.el6_8.x86_64
[hadoop@hadoop103 ~]$ rpm -qa | grep MySQL
[hadoop@hadoop103 ~]$ sudo rpm -e --nodeps mysql-libs-5.1.73-8.el6_8.x86_64
[hadoop@hadoop103 ~]$ ls /opt/soft/
apache-flume-1.7.0-bin.tar.gz          hadoop-lzo-0.4.20.jar       mysql-libs.zip
apache-hive-1.2.1-bin.tar.gz           hadoop-lzo-master.zip       presto-cli-0.196-executable.jar
apache-kylin-2.5.1-bin-hbase1x.tar.gz  hbase-1.3.1-bin.tar.gz      presto-server-0.196.tar.gz
apache-tez-0.9.1-bin.tar.gz            imply-2.7.10.tar.gz         sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz
azkaban-executor-server-2.5.0.tar.gz   jdk-8u144-linux-x64.tar.gz  zeppelin-0.8.0-bin-all.tgz
azkaban-sql-script-2.5.0.tar.gz        kafka_2.11-0.11.0.2.tgz     zookeeper-3.4.10.tar.gz
azkaban-web-server-2.5.0.tar.gz        kafka-clients-0.11.0.2.jar
hadoop-2.7.2.tar.gz                    kafka-manager-1.3.3.22.zip
[hadoop@hadoop103 ~]$ unzip /opt/soft/mysql-libs.zip -d /opt/module/
-bash: unzip: command not found
[hadoop@hadoop103 ~]$ yum install -y unzip zip
Loaded plugins: fastestmirror
You need to be root to perform this command.
[hadoop@hadoop103 ~]$ sudo yum install -y unzip zip
Loaded plugins: fastestmirror
Setting up Install Process
Loading mirror speeds from cached hostfile
base                                                                                           | 3.7 kB     00:00     
epel                                                                                           | 4.7 kB     00:00     
epel/primary_db                                                                                | 6.1 MB     00:00     
extras                                                                                         | 3.4 kB     00:00     
updates                                                                                        | 3.4 kB     00:00     
Resolving Dependencies
--> Running transaction check
---> Package unzip.x86_64 0:6.0-5.el6 will be installed
---> Package zip.x86_64 0:3.0-1.el6_7.1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================================================
 Package                  Arch                      Version                             Repository               Size
======================================================================================================================
Installing:
 unzip                    x86_64                    6.0-5.el6                           base                    152 k
 zip                      x86_64                    3.0-1.el6_7.1                       base                    259 k

Transaction Summary
======================================================================================================================
Install       2 Package(s)

Total download size: 411 k
Installed size: 1.1 M
Downloading Packages:
(1/2): unzip-6.0-5.el6.x86_64.rpm                                                              | 152 kB     00:00     
(2/2): zip-3.0-1.el6_7.1.x86_64.rpm                                                            | 259 kB     00:00     
----------------------------------------------------------------------------------------------------------------------
Total                                                                                 4.3 MB/s | 411 kB     00:00     
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
** Found 3 pre-existing rpmdb problem(s), 'yum check' output follows:
2:postfix-2.6.6-8.el6.x86_64 has missing requires of libmysqlclient.so.16()(64bit)
2:postfix-2.6.6-8.el6.x86_64 has missing requires of libmysqlclient.so.16(libmysqlclient_16)(64bit)
2:postfix-2.6.6-8.el6.x86_64 has missing requires of mysql-libs
  Installing : zip-3.0-1.el6_7.1.x86_64                                                                           1/2 
  Installing : unzip-6.0-5.el6.x86_64                                                                             2/2 
  Verifying  : unzip-6.0-5.el6.x86_64                                                                             1/2 
  Verifying  : zip-3.0-1.el6_7.1.x86_64                                                                           2/2 

Installed:
  unzip.x86_64 0:6.0-5.el6                                 zip.x86_64 0:3.0-1.el6_7.1                                

Complete!
[hadoop@hadoop103 ~]$ unzip /opt/soft/mysql-libs.zip -d /opt/module/
Archive:  /opt/soft/mysql-libs.zip
   creating: /opt/module/mysql-libs/
  inflating: /opt/module/mysql-libs/MySQL-client-5.6.24-1.el6.x86_64.rpm  
  inflating: /opt/module/mysql-libs/mysql-connector-java-5.1.27.tar.gz  
  inflating: /opt/module/mysql-libs/MySQL-server-5.6.24-1.el6.x86_64.rpm  
[hadoop@hadoop103 ~]$ cd /opt/module/mysql-libs/
[hadoop@hadoop103 mysql-libs]$ ll
total 76048
-rw-rw-r-- 1 hadoop hadoop 18509960 Mar 26  2015 MySQL-client-5.6.24-1.el6.x86_64.rpm
-rw-rw-r-- 1 hadoop hadoop  3575135 Dec  1  2013 mysql-connector-java-5.1.27.tar.gz
-rw-rw-r-- 1 hadoop hadoop 55782196 Mar 26  2015 MySQL-server-5.6.24-1.el6.x86_64.rpm
[hadoop@hadoop103 mysql-libs]$ sudo rpm -ivh MySQL-client-5.6.24-1.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:MySQL-client           ########################################### [100%]
[hadoop@hadoop103 mysql-libs]$ sudo rpm -ivh MySQL-server-5.6.24-1.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:MySQL-server           ########################################### [100%]
warning: user mysql does not exist - using root
warning: group mysql does not exist - using root
2020-11-20 14:48:14 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2020-11-20 14:48:14 0 [Note] /usr/sbin/mysqld (mysqld 5.6.24) starting as process 3314 ...
2020-11-20 14:48:14 3314 [Note] InnoDB: Using atomics to ref count buffer pool pages
2020-11-20 14:48:14 3314 [Note] InnoDB: The InnoDB memory heap is disabled
2020-11-20 14:48:14 3314 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-11-20 14:48:14 3314 [Note] InnoDB: Memory barrier is not used
2020-11-20 14:48:14 3314 [Note] InnoDB: Compressed tables use zlib 1.2.3
2020-11-20 14:48:14 3314 [Note] InnoDB: Using Linux native AIO
2020-11-20 14:48:14 3314 [Note] InnoDB: Using CPU crc32 instructions
2020-11-20 14:48:14 3314 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2020-11-20 14:48:14 3314 [Note] InnoDB: Completed initialization of buffer pool
2020-11-20 14:48:14 3314 [Note] InnoDB: The first specified data file ./ibdata1 did not exist: a new database to be created!
2020-11-20 14:48:14 3314 [Note] InnoDB: Setting file ./ibdata1 size to 12 MB
2020-11-20 14:48:14 3314 [Note] InnoDB: Database physically writes the file full: wait...
2020-11-20 14:48:15 3314 [Note] InnoDB: Setting log file ./ib_logfile101 size to 48 MB
2020-11-20 14:48:15 3314 [Note] InnoDB: Setting log file ./ib_logfile1 size to 48 MB
2020-11-20 14:48:15 3314 [Note] InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
2020-11-20 14:48:15 3314 [Warning] InnoDB: New log files created, LSN=45781
2020-11-20 14:48:15 3314 [Note] InnoDB: Doublewrite buffer not found: creating new
2020-11-20 14:48:15 3314 [Note] InnoDB: Doublewrite buffer created
2020-11-20 14:48:15 3314 [Note] InnoDB: 128 rollback segment(s) are active.
2020-11-20 14:48:15 3314 [Warning] InnoDB: Creating foreign key constraint system tables.
2020-11-20 14:48:16 3314 [Note] InnoDB: Foreign key constraint system tables created
2020-11-20 14:48:16 3314 [Note] InnoDB: Creating tablespace and datafile system tables.
2020-11-20 14:48:16 3314 [Note] InnoDB: Tablespace and datafile system tables created.
2020-11-20 14:48:16 3314 [Note] InnoDB: Waiting for purge to start
2020-11-20 14:48:16 3314 [Note] InnoDB: 5.6.24 started; log sequence number 0
A random root password has been set. You will find it in '/root/.mysql_secret'.
2020-11-20 14:48:16 3314 [Note] Binlog end
2020-11-20 14:48:16 3314 [Note] InnoDB: FTS optimize thread exiting.
2020-11-20 14:48:16 3314 [Note] InnoDB: Starting shutdown...
2020-11-20 14:48:17 3314 [Note] InnoDB: Shutdown completed; log sequence number 1625977


2020-11-20 14:48:17 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2020-11-20 14:48:17 0 [Note] /usr/sbin/mysqld (mysqld 5.6.24) starting as process 3336 ...
2020-11-20 14:48:17 3336 [Note] InnoDB: Using atomics to ref count buffer pool pages
2020-11-20 14:48:17 3336 [Note] InnoDB: The InnoDB memory heap is disabled
2020-11-20 14:48:17 3336 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-11-20 14:48:17 3336 [Note] InnoDB: Memory barrier is not used
2020-11-20 14:48:17 3336 [Note] InnoDB: Compressed tables use zlib 1.2.3
2020-11-20 14:48:17 3336 [Note] InnoDB: Using Linux native AIO
2020-11-20 14:48:17 3336 [Note] InnoDB: Using CPU crc32 instructions
2020-11-20 14:48:17 3336 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2020-11-20 14:48:17 3336 [Note] InnoDB: Completed initialization of buffer pool
2020-11-20 14:48:17 3336 [Note] InnoDB: Highest supported file format is Barracuda.
2020-11-20 14:48:17 3336 [Note] InnoDB: 128 rollback segment(s) are active.
2020-11-20 14:48:17 3336 [Note] InnoDB: Waiting for purge to start
2020-11-20 14:48:18 3336 [Note] InnoDB: 5.6.24 started; log sequence number 1625977
2020-11-20 14:48:18 3336 [Note] Binlog end
2020-11-20 14:48:18 3336 [Note] InnoDB: FTS optimize thread exiting.
2020-11-20 14:48:18 3336 [Note] InnoDB: Starting shutdown...
2020-11-20 14:48:19 3336 [Note] InnoDB: Shutdown completed; log sequence number 1625987




A RANDOM PASSWORD HAS BEEN SET FOR THE MySQL root USER !
You will find that password in '/root/.mysql_secret'.

You must change that password on your first connect,
no other statement but 'SET PASSWORD' will be accepted.
See the manual for the semantics of the 'password expired' flag.

Also, the account for the anonymous user has been removed.

In addition, you can run:

  /usr/bin/mysql_secure_installation

which will also give you the option of removing the test database.
This is strongly recommended for production servers.

See the manual for more instructions.

Please report any problems at http://bugs.mysql.com/

The latest information about MySQL is available on the web at

  http://www.mysql.com

Support MySQL by buying support/licenses at http://shop.mysql.com

New default config file was created as /usr/my.cnf and
will be used by default by the server when you start it.
You may edit this file to change server settings

[hadoop@hadoop103 mysql-libs]$ sudo cat /root/.mysql_secret
# The random password set for the root user at Fri Nov 20 14:48:16 2020 (local time): TMXkwqqmdeBpiD99

[hadoop@hadoop103 mysql-libs]$ mysql -uroot -pTMXkwqqmdeBpiD99
Warning: Using a password on the command line interface can be insecure.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
[hadoop@hadoop103 mysql-libs]$ mysql -uroot -p TMXkwqqmdeBpiD99
Enter password: 
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
[hadoop@hadoop103 mysql-libs]$ mysql -uroot -p^CXkwqqmdeBpiD99
[hadoop@hadoop103 mysql-libs]$ sudo service mysql status
 ERROR! MySQL is not running
[hadoop@hadoop103 mysql-libs]$ sudo service mysql start
Starting MySQL. SUCCESS! 
[hadoop@hadoop103 mysql-libs]$ mysql -uroot -pTMXkwqqmdeBpiD99
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.24

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> set password=password('123456');
Query OK, 0 rows affected (0.00 sec)

mysql> select host,user,password from mysql.user;
+-----------+------+-------------------------------------------+
| host      | user | password                                  |
+-----------+------+-------------------------------------------+
| localhost | root | *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 |
| hadoop103 | root | *7A5A4D8630A3EAB7B5C3E79C33DDC69E9F67B39A |
| 127.0.0.1 | root | *7A5A4D8630A3EAB7B5C3E79C33DDC69E9F67B39A |
| ::1       | root | *7A5A4D8630A3EAB7B5C3E79C33DDC69E9F67B39A |
+-----------+------+-------------------------------------------+
4 rows in set (0.00 sec)

mysql> delete from mysql.user where host <> 'localhost';
Query OK, 3 rows affected (0.00 sec)

mysql> update mysql.user set host='%' where user='root';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> quit
Bye
[hadoop@hadoop103 mysql-libs]$ mysql -uroot -p123456
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.24 MySQL Community Server (GPL)

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> Ctrl-C -- exit!
Aborted
[hadoop@hadoop103 mysql-libs]$ mysql -h hadoop103  -uroot -p123456
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.6.24 MySQL Community Server (GPL)

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> Ctrl-C -- exit!
Aborted
[hadoop@hadoop103 mysql-libs]$ sudo mysqladmin processlist -uroot -p123456
Warning: Using a password on the command line interface can be insecure.
+----+------+-----------+----+---------+------+-------+------------------+
| Id | User | Host      | db | Command | Time | State | Info             |
+----+------+-----------+----+---------+------+-------+------------------+
| 4  | root | localhost |    | Query   | 0    | init  | show processlist |
+----+------+-----------+----+---------+------+-------+------------------+
[hadoop@hadoop103 mysql-libs]$ mysql -h hadoop103  -uroot -p123456
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.6.24 MySQL Community Server (GPL)

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> Ctrl-C -- exit!
Aborted
[hadoop@hadoop103 mysql-libs]$ sudo mysqladmin processlist -uroot -p123456
Warning: Using a password on the command line interface can be insecure.
+----+------+-----------------+----+---------+------+-------+------------------+
| Id | User | Host            | db | Command | Time | State | Info             |
+----+------+-----------------+----+---------+------+-------+------------------+
| 8  | root | hadoop103:53254 |    | Sleep   | 9    |       |                  |
| 9  | root | localhost       |    | Query   | 0    | init  | show processlist |
+----+------+-----------------+----+---------+------+-------+------------------+
[hadoop@hadoop103 mysql-libs]$

2. Configure MySQL Master-Master Replication

1. In /usr/share/mysql, find the template for the MySQL server configuration
sudo cp my-default.cnf /etc/my.cnf

2. Edit my.cnf
Under [mysqld], add:

server_id = 103
log-bin=mysql-bin
binlog_format=mixed
relay_log=mysql-relay

The other machine uses the same configuration; only server_id needs to change.

3. Restart the MySQL service
sudo service mysql restart

4. On the master, log in as root@localhost and grant a replication account for the slave to use

GRANT replication slave ON *.* TO 'slave'@'%' IDENTIFIED BY '123456';

5. Check the master's current binlog file and position
show master status;

6. On the slave, execute the following statement
 change master to master_user='slave', master_password='123456',master_host='192.168.6.103',master_log_file='mysql-bin.000001',master_log_pos=311;

7. On the slave, start the replication threads
start slave;

8. Check the replication thread status
show slave status \G

For the reverse direction, run the corresponding statement on the other node (using that master's own binlog file and position):
 change master to master_user='slave', master_password='123456',master_host='192.168.6.102',master_log_file='mysql-bin.000001',master_log_pos=311;
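Replication is healthy only when both the IO and SQL threads report Yes. A minimal sketch of that check (the `slave_ok` helper and the sample text are made up for illustration; on a real slave the status text would come from `mysql -uroot -p123456 -e 'show slave status\G'`):

```shell
#!/bin/sh
# Return success only if both replication threads report Yes.
slave_ok() {
    echo "$1" | grep -q 'Slave_IO_Running: Yes' &&
    echo "$1" | grep -q 'Slave_SQL_Running: Yes'
}

# Sample status text; a real check would capture `show slave status\G` output.
sample='             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
      Seconds_Behind_Master: 0'
if slave_ok "$sample"; then
    echo "replication threads OK"
else
    echo "replication broken: check Last_IO_Error / Last_SQL_Error"
fi
```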

3. MySQL HA Setup (Installing Keepalived)

Keepalived cannot be used on Alibaba Cloud servers: ordinary ECS instances have no HA VIP, so our HA setup failed.

High-availability virtual IPs can only be used inside a VPC (Virtual Private Cloud).

https://help.aliyun.com/document_detail/110065.html

 Elastic IP Address (EIP)

MySQL HA via Alibaba Cloud SLB:

https://blog.csdn.net/weixin_41507897/article/details/108331285

Two ways to set up MySQL dual-machine hot standby manually on Alibaba Cloud servers:

https://www.jb51.net/article/171848.htm

 

For ordinary (non-cloud) hosts, proceed as follows:

1) Install Keepalived via yum
sudo yum install -y keepalived
2) Edit the Keepalived configuration file /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id MySQL-ha
}
vrrp_instance VI_1 {
    state MASTER #initial state
    interface eth0 #network interface
    virtual_router_id 51 #virtual router id
    priority 100 #priority
    advert_int 1 #Keepalived heartbeat interval
    nopreempt #set only on the higher-priority node: the original master does not preempt after recovering
    authentication {
        auth_type PASS #authentication settings
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100 #virtual IP
    }
    }
} 

#declare the virtual server
virtual_server 192.168.1.100 3306 {
    delay_loop 6
    persistence_timeout 30
    protocol TCP
    #declare the real server
    real_server 192.168.1.103 3306 {
        notify_down /var/lib/mysql/killkeepalived.sh #script invoked when the real server fails
        TCP_CHECK {
            connect_timeout 3 #connect timeout
            nb_get_retry 1 #retry count
            delay_before_retry 1 #delay between retries
        }
    }
}
3) Create the script /var/lib/mysql/killkeepalived.sh
#! /bin/bash
sudo service keepalived stop
4) Make it executable
sudo chmod +x /var/lib/mysql/killkeepalived.sh
5) Start the Keepalived service
sudo service keepalived start
6) Enable Keepalived at boot
sudo chkconfig keepalived on
7) Ensure that MySQL starts before Keepalived at boot
Step 1: Check MySQL's start order
sudo vim /etc/init.d/mysql

Step 2: Check Keepalived's start order
sudo vim /etc/init.d/keepalived

Step 3: If Keepalived starts before MySQL, adjust their start order as follows
1. Edit /etc/init.d/mysql
sudo vim /etc/init.d/mysql

2. Re-register MySQL for autostart
sudo chkconfig --del mysql
sudo chkconfig --add mysql
sudo chkconfig mysql on
3. Edit /etc/init.d/keepalived
sudo vim /etc/init.d/keepalived

4. Re-register Keepalived for autostart
sudo chkconfig --del keepalived
sudo chkconfig --add keepalived
sudo chkconfig keepalived on
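To see which node currently holds the VIP, check whether the virtual address is bound on the interface. A sketch under the configuration above (192.168.1.100 on eth0); `has_vip` is a made-up helper, and the sample text stands in for real `ip` output:

```shell
#!/bin/sh
# Succeed if the given `ip -4 addr show` output contains the VIP.
# On a live node: has_vip "$(ip -4 addr show eth0)" 192.168.1.100
has_vip() {
    echo "$1" | grep -q "inet $2/"
}

sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.1.103/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.100/32 scope global eth0'
if has_vip "$sample" 192.168.1.100; then
    echo "this node holds the VIP"
fi
```

Running this on both nodes before and after stopping MySQL on the master is a quick way to confirm failover actually moves the VIP.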

5. Installing Hive

Since Hive is just a client, it is installed on hadoop103 only.

1. Configuration
Just make sure JAVA_HOME, HADOOP_HOME, and HIVE_HOME are set in the environment.

2. Store Hive's metadata in MySQL
① Copy the MySQL JDBC driver into $HIVE_HOME/lib
② Edit hive-site.xml to configure where the metadata is stored

[hadoop@hadoop103 hive]$ cat conf/hive-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
	<property>
	  <name>javax.jdo.option.ConnectionURL</name>
	  <value>jdbc:mysql://hadoop103:3306/metastore?createDatabaseIfNotExist=true</value>
	  <description>JDBC connect string for a JDBC metastore</description>
	</property>

	<property>
	  <name>javax.jdo.option.ConnectionDriverName</name>
	  <value>com.mysql.jdbc.Driver</value>
	  <description>Driver class name for a JDBC metastore</description>
	</property>

	<property>
	  <name>javax.jdo.option.ConnectionUserName</name>
	  <value>root</value>
	  <description>username to use against metastore database</description>
	</property>

	<property>
	  <name>javax.jdo.option.ConnectionPassword</name>
	  <value>123456</value>
	  <description>password to use against metastore database</description>
	</property>
	<property>
	<name>hive.cli.print.header</name>
	<value>true</value>
</property>

<property>
	<name>hive.cli.print.current.db</name>
	<value>true</value>
</property>

</configuration>

 ③ The metastore database's character set must be latin1 (create it manually).
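The creation step can be scripted. A minimal sketch, assuming the root password 123456 set in the MySQL section; the mysql invocation is left commented out so the statement can be reviewed first:

```shell
#!/bin/sh
# Create the Hive metastore database with latin1 before Hive's first start;
# letting Hive auto-create it can leave the wrong character set.
create_sql='CREATE DATABASE IF NOT EXISTS metastore DEFAULT CHARACTER SET latin1;'
echo "$create_sql"
# Run on the metastore host (password assumed from the MySQL section):
# mysql -uroot -p123456 -e "$create_sql"
```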

6. Installing Tez

① Extract Tez, and upload the Tez tarball to HDFS

② In $HIVE_HOME/conf/, create tez-site.xml

③ Edit $HIVE_HOME/conf/hive-site.xml

④ Edit $HIVE_HOME/conf/hive-env.sh so that Hive loads the Tez jars at startup

⑤ Edit yarn-site.xml, distribute it, and disable the virtual memory check

[hadoop@hadoop103 soft]$ tar -zxvf apache-tez-0.9.1-bin.tar.gz -C  ../module/

[hadoop@hadoop103 module]$ mv apache-tez-0.9.1-bin/ tez-0.9.1

[hadoop@hadoop103 tez-0.9.1]$ hadoop fs -mkdir /tez

[hadoop@hadoop103 tez-0.9.1]$ hadoop fs -put /opt/soft/apache-tez-0.9.1-bin.tar.gz  /tez

[hadoop@hadoop103 conf]$ pwd
/opt/module/hive/conf
[hadoop@hadoop103 conf]$ cat tez-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
	<name>tez.lib.uris</name>
    <value>${fs.defaultFS}/tez/apache-tez-0.9.1-bin.tar.gz</value>
</property>
<property>
     <name>tez.use.cluster.hadoop-libs</name>
     <value>true</value>
</property>
<property>
     <name>tez.history.logging.service.class</name>
     <value>org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService</value>
</property>
</configuration>

[hadoop@hadoop103 conf]$ mv hive-env.sh.template hive-env.sh
[hadoop@hadoop103 conf]$ vi hive-env.sh
#export HADOOP_HOME=/opt/module/hadoop-2.7.2

# Hive Configuration Directory can be controlled by:
#export HIVE_CONF_DIR=/opt/module/hive/conf

# Folder containing extra libraries required for hive compilation/execution can be controlled by:
export TEZ_HOME=/opt/module/tez-0.9.1    # your Tez extraction directory
export TEZ_JARS=""
for jar in `ls $TEZ_HOME |grep jar`; do
    export TEZ_JARS=$TEZ_JARS:$TEZ_HOME/$jar
done
for jar in `ls $TEZ_HOME/lib`; do
    export TEZ_JARS=$TEZ_JARS:$TEZ_HOME/lib/$jar
done

export HIVE_AUX_JARS_PATH=/opt/module/hadoop-2.7.2/share/hadoop/common/hadoop-lzo-0.4.20.jar$TEZ_JARS
[hadoop@hadoop103 conf]$
Add the following to hive-site.xml to switch Hive's execution engine:
<property>
    <name>hive.execution.engine</name>
    <value>tez</value>
</property>
Disable metastore schema verification
[atguigu@hadoop102 conf]$ pwd
/opt/module/hive/conf
[atguigu@hadoop102 conf]$ vim hive-site.xml
Add the following:
<property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
</property>
Disable the virtual memory check by modifying yarn-site.xml; after the change, be sure to distribute it and restart the Hadoop cluster.
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
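Before restarting the cluster, it can be worth verifying that the property actually landed in the distributed file. A rough sketch (`check_prop` is a made-up helper; the grep-based check assumes name/value on adjacent lines as in the snippet above, not a real XML parse):

```shell
#!/bin/sh
# Check that a Hadoop-style XML config contains a property with the
# expected value (crude line-based check; assumes <name> then <value>).
check_prop() {
    # $1: file  $2: property name  $3: expected value
    grep -A 1 "<name>$2</name>" "$1" | grep -q "<value>$3</value>"
}

# Example against a throwaway file standing in for yarn-site.xml:
cat > /tmp/yarn-site-sample.xml <<'EOF'
<configuration>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
</configuration>
EOF
if check_prop /tmp/yarn-site-sample.xml yarn.nodemanager.vmem-check-enabled false; then
    echo "vmem check disabled"
fi
```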

[hadoop@hadoop103 hadoop]$ xsync yarn-site.xml
[hadoop@hadoop103 conf]$ hd stop
Stopping namenodes on [hadoop102]
hadoop102: stopping namenode
hadoop104: stopping datanode
hadoop102: stopping datanode
hadoop103: stopping datanode
Stopping secondary namenodes [hadoop104]
hadoop104: no secondarynamenode to stop
stopping yarn daemons
stopping resourcemanager
hadoop104: stopping nodemanager
hadoop102: stopping nodemanager
hadoop103: stopping nodemanager
no proxyserver to stop
stopping historyserver
[hadoop@hadoop103 conf]$ hd start
Starting namenodes on [hadoop102]
hadoop102: starting namenode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-hadoop-namenode-hadoop102.out
hadoop104: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-hadoop-datanode-hadoop104.out
hadoop103: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-hadoop-datanode-hadoop103.out
hadoop102: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-hadoop-datanode-hadoop102.out
Starting secondary namenodes [hadoop104]
hadoop104: starting secondarynamenode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-hadoop-secondarynamenode-hadoop104.out
starting yarn daemons
starting resourcemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-hadoop-resourcemanager-hadoop103.out
hadoop104: starting nodemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-hadoop-nodemanager-hadoop104.out
hadoop102: starting nodemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-hadoop-nodemanager-hadoop102.out
hadoop103: starting nodemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-hadoop-nodemanager-hadoop103.out
starting historyserver, logging to /opt/module/hadoop-2.7.2/logs/mapred-hadoop-historyserver-hadoop102.out
[hadoop@hadoop103 conf]$ hive

Logging initialized using configuration in jar:file:/opt/module/hive/lib/hive-common-1.2.1.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/module/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/module/tez-0.9.1/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
hive (default)> insert into student values(1,"zhangsan");
Query ID = hadoop_20201130083630_3c6049ee-0b0b-434b-b35d-29773dd87ce5
Total jobs = 1
Launching Job 1 out of 1


Status: Running (Executing on YARN cluster with App id application_1606696495844_0001)

--------------------------------------------------------------------------------
        VERTICES      STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
--------------------------------------------------------------------------------
Map 1 ..........   SUCCEEDED      1          1        0        0       0       0
--------------------------------------------------------------------------------
VERTICES: 01/01  [==========================>>] 100%  ELAPSED TIME: 11.11 s    
--------------------------------------------------------------------------------
Loading data to table default.student
Table default.student stats: [numFiles=1, numRows=1, totalSize=11, rawDataSize=10]
OK
_col0	_col1
Time taken: 18.439 seconds
hive (default)> 

7. Testing the Database with DBeaver

 

  

  

Original source: https://www.cnblogs.com/jycjy/p/6767102.html