CentOS 7.6 Silent Installation of Oracle 12cR2 RAC HUB CDB

1 Planning

  • Basic system information

    OS version   Public IP      Private IP    VIP           GI version  RDBMS version
    CentOS 7.6   192.168.1.13   172.26.9.30   192.168.1.4   12.2.0.2    12.2.0.2
    CentOS 7.6   192.168.1.14   172.26.9.31   192.168.1.5   12.2.0.2    12.2.0.2
  • ASM storage plan

    Purpose    Minimum  Actual size per raw device  LUNs carved  LUNs used  Redundancy
    OCR+VOTE   >=5G     10G                         5            3          normal
    +SYSTEM    >=200G   500G                        1            1          external
    +ARCH      >=500G   500G                        1            1          external
    +DATA      >=2T     500G                        5            5          external
    +MGMT      >=100G   500G                        1            1          external

    The remaining usable raw devices are kept in reserve and can be added later when needed.

    note
    • +SYSTEM stores the database's own tablespaces, redo, undo, temporary tablespaces, and so on.
    • +ARCH stores the archived logs; size it to cover the daily archive volume. Month-end/month-start billing runs, for example, generate far more archive logs than usual.
    • +DATA stores the business data.
    • This installation does not use a CDB. Once installed, it is used exactly the same way as an 11g RAC.

2 Download the Software

Download from the OTN Oracle Database download page, then upload the files to the server:

[root@bossdb2 ~]# ls ~/
anaconda-ks.cfg  linuxx64_12201_database.zip  linuxx64_12201_grid_home.zip p29963428_12201190716ACFSJUL2019RU_Linux-x86-64.zip p6880880_180000_Linux-x86-64.zip

Here the database zip is the RDBMS installer, grid is the cluster (GI) installer, p6880880 is the latest OPatch, and p29963428 is the patch for Oracle 12.2 on CentOS 7.6. Without it, the GI installation may fail with errors such as:

- AFD-620: AFD is not supported on this operating system version: 'centos-release-7-6.1810.2.el7.centos.x86_64'
- AFD-9201: Not Supported

3 Installation Preparation

Note: unless stated otherwise, every step below must be performed on both nodes.

3.1 Install software dependencies

yum install binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 gcc gcc-c++ glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libgcc libgcc.i686 libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 libaio libaio.i686 libaio-devel libaio-devel.i686 libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 make sysstat unixODBC unixODBC-devel readline libtermcap-devel pdksh -y
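
Before moving on, it can help to confirm that the packages actually installed; a quick check along these lines can be run as root on both nodes (the list here covers the main packages; extend it to the full set from the yum command above if you like):

for pkg in binutils compat-libcap1 gcc gcc-c++ glibc glibc-devel ksh libaio libaio-devel libXext libXtst libX11 libXau libxcb libXi make sysstat unixODBC unixODBC-devel
do
    rpm -q "$pkg" > /dev/null 2>&1 || echo "MISSING: $pkg"   # report any package that is still absent
done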

3.2 Edit the hosts file

# On both hosts, append the following to /etc/hosts:

vim /etc/hosts
#public ip
192.168.1.13      bossdb1
192.168.1.14      bossdb2
# oracle vip
192.168.1.6       bossdb1-vip
192.168.1.7       bossdb2-vip

# oracle priv-ip
172.26.9.30     bossdb1-priv
172.26.9.31     bossdb2-priv

# oracle scan-ip
192.168.1.4       racscan
192.168.1.5       racscan
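
An optional sanity check that every name added above resolves correctly on both nodes (the host names are the ones from this /etc/hosts):

for h in bossdb1 bossdb2 bossdb1-vip bossdb2-vip bossdb1-priv bossdb2-priv racscan
do
    getent hosts "$h" || echo "NOT RESOLVED: $h"   # each name should print exactly one address line
done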

3.3 Disable SELinux and the firewall

setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # make the change persistent across reboots
firewall-cmd --set-default-zone=trusted
systemctl stop firewalld
systemctl disable firewalld

3.4 Create groups and users

# Create the groups and users on both nodes:

groupadd -g 5001 oinstall
groupadd -g 5002 dba
groupadd -g 5003 oper
groupadd -g 5004 backupdba
groupadd -g 5005 dgdba
groupadd -g 5006 kmdba
groupadd -g 5007 asmdba
groupadd -g 5008 asmoper
groupadd -g 5009 asmadmin
useradd -u 601 -g oinstall -G asmadmin,asmdba,dba,asmoper grid
useradd -u 602 -g oinstall -G dba,backupdba,dgdba,kmdba,asmadmin,oper,asmdba oracle

echo "grid" | passwd --stdin grid
echo "oracle" | passwd --stdin oracle

Set up the environment variables for each user:

bossdb1:
su - grid
# quote EOF so that $ORACLE_HOME etc. are expanded at login time, not when the file is written
cat >> .bash_profile << 'EOF'
ORACLE_BASE=/g01/app/grid
ORACLE_HOME=/g01/app/12.2.0
ORACLE_SID=+ASM1
JAVA_HOME=$ORACLE_HOME/jdk
PATH=$JAVA_HOME/bin:$PATH:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH JAVA_HOME
umask 022
EOF

su - oracle
cat >> .bash_profile << 'EOF'
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=$ORACLE_BASE/product/12.2/dbhome_1
ORACLE_SID=boss1
JAVA_HOME=$ORACLE_HOME/jdk
PATH=$JAVA_HOME/bin:$PATH:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH JAVA_HOME
umask 022
EOF

bossdb2:
su - grid
cat >> .bash_profile << 'EOF'
ORACLE_BASE=/g01/app/grid
ORACLE_HOME=/g01/app/12.2.0
ORACLE_SID=+ASM2
JAVA_HOME=$ORACLE_HOME/jdk
PATH=$JAVA_HOME/bin:$PATH:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH JAVA_HOME
umask 022
EOF

su - oracle
cat >> .bash_profile << 'EOF'
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=$ORACLE_BASE/product/12.2/dbhome_1
ORACLE_SID=boss2
JAVA_HOME=$ORACLE_HOME/jdk
PATH=$JAVA_HOME/bin:$PATH:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH JAVA_HOME
umask 022
EOF

3.5 Create directories

mkdir -p /g01/app/grid
mkdir -p /g01/app/12.2.0
mkdir -p /g01/app/oraInventory
chown -R grid:oinstall /g01

mkdir -p /u01/app/oracle/product/12.2/dbhome_1
mkdir -p /u01/app/oraInventory
chown -R oracle:oinstall /u01
chmod -R 775 /u01

3.6 Tune operating system parameters

vim /etc/security/limits.d/99-grid-oracle-limits.conf
#ORACLE SETTING
grid                soft    nproc  2047
grid                hard    nproc  16384
grid                soft    nofile 1024
grid                hard    nofile 65536
grid                soft  stack  10240
grid                hard  stack  32768
oracle              soft    nproc  2047
oracle              hard    nproc  16384
oracle              soft    nofile 1024
oracle              hard    nofile 65536
oracle              soft  stack  10240
oracle              hard  stack  32768


MEMTOTAL=$(free -b | sed -n '2p' | awk '{print $2}')
SHMMAX=$(expr $MEMTOTAL \* 4 / 5)
SHMMNI=4096
SHMALL=$(expr $MEMTOTAL / 4096)

cp /etc/sysctl.conf /etc/sysctl.conf.bak
cat >> /etc/sysctl.conf << EOF
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmax = $SHMMAX
kernel.shmall = $SHMALL
kernel.shmmni = $SHMMNI
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
kernel.panic_on_oops = 1
EOF

# kernel.shmmax should be larger than the shared memory area (SGA) but smaller than physical memory
# kernel.shmall = physical memory / 4K page size (value is in pages)

Apply the settings:
sysctl -p
# When installing grid on CentOS 7.x, this parameter must be changed, otherwise the installation fails
vim /etc/systemd/logind.conf
RemoveIPC=no
systemctl daemon-reload
systemctl restart systemd-logind
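
To confirm the kernel parameters really are in effect, you can read the live values back and compare them with what was written to /etc/sysctl.conf; a small check (the parameter names are the ones from the block above):

sysctl fs.aio-max-nr fs.file-max kernel.shmmax kernel.shmall kernel.shmmni kernel.sem \
       net.ipv4.ip_local_port_range net.core.rmem_default net.core.rmem_max \
       net.core.wmem_default net.core.wmem_max kernel.panic_on_oops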

3.7 Configure passwordless SSH (both nodes)

  • grid user

    Run on both nodes:

    su - grid
    ssh-keygen
    ssh-copy-id -i ~/.ssh/id_rsa.pub grid@bossdb1
    ssh-copy-id -i ~/.ssh/id_rsa.pub grid@bossdb2
    ssh bossdb1 date
    ssh bossdb2 date
    ssh bossdb2-priv date
    ssh bossdb1-priv date
    
  • oracle user

    Run on both nodes:

    su - oracle
    ssh-keygen
    ssh-copy-id -i ~/.ssh/id_rsa.pub oracle@bossdb1
    ssh-copy-id -i ~/.ssh/id_rsa.pub oracle@bossdb2
    
    ssh bossdb1 date
    ssh bossdb2 date
    ssh bossdb2-priv date
    ssh bossdb1-priv date
    

3.8 Configure the central inventory

Configure this only on the installation node.

echo -e "inventory_loc=/g01/app/oraInventory
inst_group=oinstall" > /etc/oraInst.loc

I misconfigured inventory_loc here once and wasted a great deal of time chasing all sorts of strange errors.
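
Given how much time a wrong inventory_loc can cost, it is worth double-checking the file before starting the installation; for example:

# both lines must point at the grid inventory created in section 3.5
cat /etc/oraInst.loc
grep -q '^inventory_loc=/g01/app/oraInventory$' /etc/oraInst.loc || echo "inventory_loc does not match the planned location"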

4 Multipath Configuration

After the storage LUNs are mapped to the servers, the operating system does not recognize the new disks automatically; we have to rescan the SCSI bus and configure multipathing.

4.1 Scan for disks

Run the following as root on both nodes:

for scsi_host in `ls /sys/class/scsi_host/`
do
echo "- - -" > /sys/class/scsi_host/$scsi_host/scan
done

Afterwards, fdisk -l lists all attached disks.

4.2 Install and enable multipath

Run the following as root on both nodes:

# install the multipath packages
yum install -y device-mapper device-mapper-multipath
# load the multipath kernel modules
modprobe dm-multipath
modprobe dm-round-robin
# enable multipathd at boot
systemctl enable multipathd
  • Verify the packages are installed

    [root@bossdb1 ~]# rpm -qa|grep device-mapper
    device-mapper-persistent-data-0.7.3-3.el7.x86_64
    device-mapper-event-libs-1.02.149-8.el7.x86_64
    device-mapper-event-1.02.149-8.el7.x86_64
    device-mapper-multipath-0.4.9-123.el7.x86_64
    device-mapper-1.02.149-8.el7.x86_64
    device-mapper-libs-1.02.149-8.el7.x86_64
    device-mapper-multipath-libs-0.4.9-123.el7.x86_64
    
  • Verify the modules are loaded

    [root@bossdb1 ~]# lsmod |grep multipath
    dm_multipath           27792  1 dm_round_robin
    dm_mod                124407  25 dm_multipath,dm_log,dm_mirror
    

4.3 Configure multipath

Multipath cannot be started right after the previous step because its configuration file does not exist yet.

4.3.1 Generate the multipath configuration file

As root on both nodes, generate the configuration file with /sbin/mpathconf --enable:

[root@bossdb2 ~]# ls /etc/multipath.conf
ls: cannot access /etc/multipath.conf: No such file or directory
[root@bossdb2 ~]# /sbin/mpathconf --enable
[root@bossdb2 ~]# ls /etc/multipath.conf
/etc/multipath.conf

4.3.2 Edit the configuration file

This only needs to be done on one of the nodes, as root.

Back up the sample file:

cp /etc/multipath.conf /etc/multipath-sample.conf

The edited configuration file looks like this:

blacklist_exceptions {
        device {
                vendor  "NETAPP"
                product "LUN C-Mode"
        }
}

defaults {
        user_friendly_names yes
        find_multipaths yes
}

blacklist {
          wwid  3600508b1001c205b7b67af3b895fda77
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}

devices {
        device {
                vendor                  "NETAPP"
                product                 "LUN C-Mode"
                path_grouping_policy    multibus
                getuid_callout          "/usr/lib/udev/scsi_id -g -u  /dev/%n"
                path_checker            readsector0
                path_selector           "round-robin 0"
                hardware_handler        "0"
                failback                15
                rr_weight               priorities
                no_path_retry           queue
        }
}

Once this is in place and multipath is started, a set of symlinks is created automatically under /dev/mapper, each pointing to a dm block device under /dev/.

However, those block devices are all named dm-*, which is inconvenient for Oracle, so we edit multipath.conf again in the next section to add aliases.

For how to look up the vendor and product values, see: Linux 日常操作 1.7 查看磁盘基本信息 (viewing basic disk information).

4.3.3 Configure an alias for each disk

On the same node as the previous section, as root.

When multipath names devices automatically, it uses the WWID (UUID) as the full name, for example:

# multipath -ll
..... omitted .....
3600a098038304448642b504c49475830 dm-40 NETAPP  ,LUN C-Mode
size=500G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=30 status=active
  |- 1:0:6:19 sdbq 68:64   active ready running
  |- 4:0:6:19 sdga 131:96  active ready running
  |- 1:0:7:19 sdct 70:16   active ready running
  `- 4:0:7:19 sdhd 133:48  active ready running
..... omitted .....
  • Check the disk size behind each UUID

    # fdisk -l |grep "dev/mapper/3600a"|awk '{print $2" "$3}'|sort -k2n|awk -F '/' '{print $4}'|sed 's/://g'
    ..... omitted .....
    3600a098038304448642b504c4947576f 10.7
    ..... omitted .....
    3600a098038304448642b504c49475770 10.7
    ..... omitted .....
    3600a098038304448642b504c49475774 537.0
    3600a098038304448642b504c4947577a 537.0
    ..... omitted .....

For LUNs coming from the shared storage, the first N characters of the UUID are identical; here 3600a is the common prefix of the shared-storage UUIDs, which distinguishes them from local disks.
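
The alias script in the next bullet reads these WWID/size pairs from /tmp/uuids. That file is not created anywhere above, so here is a minimal sketch of how it could be produced from the same fdisk pipeline (3600a is this array's UUID prefix; adjust it to your storage):

# save "wwid size-in-GB" pairs for every shared-storage LUN into /tmp/uuids
fdisk -l | grep "dev/mapper/3600a" | awk '{print $2" "$3}' | sort -k2n | awk -F '/' '{print $4}' | sed 's/://g' > /tmp/uuids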

  • Assign aliases based on disk size

    Now that we know the size behind each UUID, we can assign the aliases: the 10G LUNs are used for OCR+VOTEDISK, the rest for data. The following script gives each disk an alias (it reads the UUID/size pairs from /tmp/uuids prepared above):

    echo "multipaths {" >> /etc/multipath.conf
    ocr_count=1
    data_count=1
    while read a b
    do
    if [[ `echo "$b > 11"|bc` -eq 1 ]]; then
         echo "          multipath {
                    wwid                    $a
                    alias                   asm-data$data_count
                    path_grouping_policy    multibus
                    path_selector           "round-robin 0"
                    failback                auto
                    rr_weight               priorities
                    no_path_retry           5
            }" >> /etc/multipath.conf
         let data_count++
    else
         echo "          multipath {
                    wwid                    $a
                    alias                   asm-ocr$ocr_count
                    path_grouping_policy    multibus
                    path_selector           "round-robin 0"
                    failback                auto
                    rr_weight               priorities
                    no_path_retry           5
            }" >> /etc/multipath.conf
        let ocr_count++
     fi
    done</tmp/uuids
    echo "}" >> /etc/multipath.conf
    
    

    Copy this configuration file to the other node in the cluster:

    scp /etc/multipath.conf bossdb2:/etc/multipath.conf
    

4.3.4 Apply the configuration

As root on both nodes, run systemctl restart multipathd, or alternatively:

multipath -F # flush the current multipath maps
multipath -v2 # rebuild the multipath maps
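
After the restart you can confirm that every shared LUN now shows up under its alias rather than its raw WWID, for example:

multipath -ll | grep ^asm-      # every LUN should be listed by its asm-ocr* / asm-data* alias
ls -l /dev/mapper/asm-*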

5 Configure Disk Permissions with udev

Run as root.

5.1 Check whether the udev service is running

systemctl |grep udev
or
systemctl list-units --type=service |grep udevd

Sample output:

[root@bossdb2 dev]# systemctl start systemd-udevd
[root@bossdb2 dev]#
[root@bossdb2 dev]#
[root@bossdb2 dev]# systemctl list-units --type=service |grep udevd
systemd-udevd.service              loaded active running udev Kernel Device Manager
[root@bossdb2 dev]# systemctl |grep udev
  systemd-udev-trigger.service                                                                                                       loaded active exited    udev Coldplug all Devices
  systemd-udevd.service                                                                                                              loaded active running   udev Kernel Device Manager
  systemd-udevd-control.socket                                                                                                       loaded active running   udev Control Socket
  systemd-udevd-kernel.socket                                                                                                        loaded active running   udev Kernel Socket

5.2 Configure the permission rules

cat >> /etc/udev/rules.d/90-oracleasm.rules<<EOF
ENV{DM_NAME}=="asm*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
EOF

5.3 Apply the configuration

/sbin/udevadm trigger --type=devices --action=change
/sbin/udevadm control --reload

Make the rules take effect after reboots as well:

chmod +x /etc/rc.d/rc.local      # on CentOS 7, rc.local is not executable by default
cat >> /etc/rc.d/rc.local<<EOF
/sbin/udevadm trigger --type=devices --action=change
/sbin/udevadm control --reload
EOF

5.4 Verify the permissions

# ls -la /dev/mapper/asm*

lrwxrwxrwx 1 root root 8 May 14 18:48 /dev/mapper/asm-data1 -> ../dm-39
lrwxrwxrwx 1 root root 8 May 14 18:48 /dev/mapper/asm-data10 -> ../dm-48
lrwxrwxrwx 1 root root 8 May 14 18:48 /dev/mapper/asm-data11 -> ../dm-49
lrwxrwxrwx 1 root root 8 May 14 18:48 /dev/mapper/asm-data12 -> ../dm-50
lrwxrwxrwx 1 root root 8 May 14 18:48 /dev/mapper/asm-data13 -> ../dm-25
lrwxrwxrwx 1 root root 8 May 14 18:48 /dev/mapper/asm-data14 -> ../dm-15
lrwxrwxrwx 1 root root 8 May 14 18:48 /dev/mapper/asm-data15 -> ../dm-16
....... omitted .......
ls -la /dev/dm*
...... omitted ......
brw-rw---- 1 grid asmadmin 253, 38 May 15 12:50 /dev/dm-38
brw-rw---- 1 grid asmadmin 253, 39 May 15 12:50 /dev/dm-39
...... omitted ......
brw-rw---- 1 grid asmadmin 253, 48 May 15 12:50 /dev/dm-48
brw-rw---- 1 grid asmadmin 253, 49 May 15 12:50 /dev/dm-49
brw-rw---- 1 grid asmadmin 253, 50 May 15 12:50 /dev/dm-50
...... omitted ......

The configuration above has taken effect.

6 Configure DNS

DNS can be configured if needed; it is also fine to skip it. Below is a sample configuration; I did not set it up this time. The main advantage of DNS is round-robin resolution of the SCAN IPs for high availability: when one SCAN IP is unreachable, another one is used automatically.

Of course, DNS resolution also requires the other servers to be pointed at the DNS server, and applications would have to change their connection settings. With those knock-on effects in mind, DNS resolution is not used for this installation.

# install with yum
yum -y install unbound
yum install -y bind-utils

# edit the configuration file /etc/unbound/unbound.conf
vi /etc/unbound/unbound.conf
……
38 # interface: 0.0.0.0
39 interface: 0.0.0.0
……
//At line 38, copy the commented line, remove the comment, and listen on all interfaces.
177         # access-control: 0.0.0.0/0 refuse
178         access-control: 192.168.10.0/24 allow
179         # access-control: 127.0.0.0/8 allow
// At line 177 of /etc/unbound/unbound.conf (commented out by default and set to refuse), copy the line below it, remove the '#', and change refuse to allow. Save, exit, and restart the service.
155         # do-ip6: yes
156          do-ip6: no
//At line 155, copy the line below it, remove the comment, and change yes to no to stop listening on IPv6; restart the service.

# create the zone file
cat > /etc/unbound/local.d/example.conf << EOF
local-zone: "example.com." static
local-data: "example.com. 86400 IN SOA ns.example.com. root 1 1D 1H 1W 1H"
local-data: "ns.example.com.            IN A 192.168.10.166"
local-data: "orc1.example.com.            IN A 192.168.10.166"
local-data: "orc12c-scan.example.com. IN A 192.168.10.170"
local-data: "orc12c-scan.example.com. IN A 192.168.10.171"
local-data: "orc12c-scan.example.com. IN A 192.168.10.172"
local-data-ptr: "192.168.10.170 orc12c-scan.example.com."
local-data-ptr: "192.168.10.171 orc12c-scan.example.com."
local-data-ptr: "192.168.10.172 orc12c-scan.example.com."
EOF

# start the service and verify
systemctl start unbound
systemctl restart unbound
systemctl status unbound
netstat -tunlp |grep unbound
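
If you do set up unbound, you can verify that the SCAN name resolves (round-robin) before pointing the cluster at it; the names and addresses below follow the example zone above:

# query the unbound instance directly; all three SCAN addresses should come back
nslookup orc12c-scan.example.com 192.168.10.166
dig @192.168.10.166 orc12c-scan.example.com +short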

7 Silent Installation Setup

 

7.1 Unzip

Note that in 12c the grid installation zip must be extracted directly into $ORACLE_HOME.

# run as the grid user
su - grid
mv linuxx64_12201_grid_home.zip $ORACLE_HOME/
cd $ORACLE_HOME
unzip linuxx64_12201_grid_home.zip

The cvuqdisk package also has to be installed; it ships inside the grid zip.

[grid@bossdb1 12.2.0]$ find ./ -name cvuqdisk*
./cv/rpm/cvuqdisk-1.0.10-1.rpm
./cv/remenv/cvuqdisk-1.0.10-1.rpm
[grid@bossdb1 12.2.0]$ pwd
/g01/app/12.2.0
[grid@bossdb1 12.2.0]$ exit         # installing the rpm requires root privileges
logout
[root@bossdb1 log]# rpm -ivh /g01/app/12.2.0/cv/rpm/cvuqdisk-1.0.10-1.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:cvuqdisk-1.0.10-1                ################################# [100%]

Copy the package to the other node:

[root@bossdb1 ~]# scp /g01/app/12.2.0/cv/rpm/cvuqdisk-1.0.10-1.rpm bossdb2:~/

Log in to the other node and install it:

rpm -ivh cvuqdisk-1.0.10-1.rpm

7.2 Configure the response file

oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v12.2.0
INVENTORY_LOCATION=/g01/app/oraInventory/
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/g01/app/grid
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.gpnp.scanName=racscan                # the SCAN name; it must be resolvable via /etc/hosts
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.ClusterConfiguration=STANDALONE      # not a standalone server: STANDALONE here means an ordinary standalone cluster (as opposed to a member cluster)
oracle.install.crs.config.configureAsExtendedCluster=false     # whether to configure an Extended Cluster; false for a new RAC install
oracle.install.crs.config.memberClusterManifestFile=
oracle.install.crs.config.clusterName=bossCluster              # cluster name
oracle.install.crs.config.gpnp.configureGNS=false              # do not configure GNS
oracle.install.crs.config.autoConfigureClusterNodeVIP=false    # do not auto-assign VIPs
oracle.install.crs.config.gpnp.gnsOption=
oracle.install.crs.config.gpnp.gnsClientDataFile=
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
oracle.install.crs.config.sites=
oracle.install.crs.config.clusterNodes=bossdb1:bossdb1-vip:HUB,bossdb2:bossdb2-vip:HUB  # format: hostname:vip-name:HUB,hostname:vip-name:HUB, using the names from /etc/hosts; for HUB, see Flex ASM hub nodes vs. leaf nodes
oracle.install.crs.config.networkInterfaceList=eno1:192.168.1.0:1,eno3:172.26.9.0:5       # format: nic:subnet:1,nic:subnet:5, where 1 = public and 5 = ASM & private
oracle.install.asm.configureGIMRDataDG=true                    # configure the GIMR (MGMT) disk group
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE       # configure Flex ASM storage
oracle.install.crs.config.useIPMI=false                        # do not enable IPMI
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
oracle.install.asm.storageOption=ASM                           # ASM is the usual choice
oracle.install.asmOnNAS.ocrLocation=
oracle.install.asmOnNAS.configureGIMRDataDG=
oracle.install.asmOnNAS.gimrLocation=
oracle.install.asm.SYSASMPassword=Sys123ora                    # password for the SYSASM user
oracle.install.asm.diskGroup.name=OCR                          # disk group holding OCR and voting files, usually named OCR
oracle.install.asm.diskGroup.redundancy=NORMAL                 # disk group redundancy
oracle.install.asm.diskGroup.AUSize=4                          # ASM AU size in MB; default 4
oracle.install.asm.diskGroup.FailureGroups=
oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/mapper/asm-ocr1,ocr1,/dev/mapper/asm-ocr9,ocr2,/dev/mapper/asm-ocr8,ocr3       # disk,failure-group name pairs
oracle.install.asm.diskGroup.disks=/dev/mapper/asm-ocr1,/dev/mapper/asm-ocr2,/dev/mapper/asm-ocr3
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/mapper/asm*
oracle.install.asm.monitorPassword=Sys123ora                      # password for the ASMSNMP account
oracle.install.asm.gimrDG.name=MGMT                            # ASM disk group for the GIMR (management repository), named MGMT here
oracle.install.asm.gimrDG.redundancy=EXTERNAL                  # note: must be upper case
oracle.install.asm.gimrDG.AUSize=4
oracle.install.asm.gimrDG.FailureGroups=
oracle.install.asm.gimrDG.disksWithFailureGroupNames=          # with EXTERNAL redundancy, failure group names must not be set
oracle.install.asm.gimrDG.disks=/dev/mapper/asm-data1
oracle.install.asm.gimrDG.quorumFailureGroupNames=
oracle.install.asm.configureAFD=false                          # AFD (a 12c new feature) is still not recommended in 12.2; consider it from 18c on. It is very strict about OS versions and still has quite a few bugs
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsHost=
oracle.install.config.omsPort=
oracle.install.config.emAdminUser=
oracle.install.config.emAdminPassword=
oracle.install.crs.rootconfig.executeRootScript=false
oracle.install.crs.rootconfig.configMethod=
oracle.install.crs.rootconfig.sudoPath=
oracle.install.crs.rootconfig.sudoUserName=
oracle.install.crs.config.batchinfo=
oracle.install.crs.app.applicationAddress=

8 Pre-installation Checks

$ORACLE_HOME/runcluvfy.sh stage -pre crsinst -n bossdb1,bossdb2 -verbose

The output is not reproduced here; it is far too long.
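
Since the full report is very long, one way to review it is to save it to a file and pull out only the checks that did not pass; a small sketch:

# keep the full report for reference and show only the failures
$ORACLE_HOME/runcluvfy.sh stage -pre crsinst -n bossdb1,bossdb2 -verbose | tee /tmp/cluvfy_pre.log
grep -iE 'failed|error|unsuccessful' /tmp/cluvfy_pre.log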

9 Silent GRID Installation

 

9.1 Install the software

This step only installs the software; it does not configure the cluster. The root.sh script you are prompted to run afterwards is what actually configures the cluster. Run this only on the installation node.

${ORACLE_HOME}/gridSetup.sh -ignorePrereq -waitforcompletion -silent -responseFile ${ORACLE_HOME}/install/response/gridsetup.rsp

Alternatively, the settings can be passed directly on the command line, as follows:

${ORACLE_HOME}/gridSetup.sh -skipPrereqs -waitforcompletion -ignoreInternalDriverError -silent \
-responseFile ${ORACLE_HOME}/install/response/gridsetup.rsp \
INVENTORY_LOCATION=/g01/app/oraInventory/ \
oracle.install.option=CRS_CONFIG \
ORACLE_BASE=/g01/app/grid \
oracle.install.asm.OSDBA=asmdba \
oracle.install.asm.OSOPER=asmoper \
oracle.install.asm.OSASM=asmadmin \
oracle.install.crs.config.gpnp.scanName=racscan \
oracle.install.crs.config.gpnp.scanPort=1521 \
oracle.install.crs.config.ClusterConfiguration=STANDALONE \
oracle.install.crs.config.configureAsExtendedCluster=false \
oracle.install.crs.config.clusterName=bossCluster \
oracle.install.crs.config.gpnp.configureGNS=false \
oracle.install.crs.config.autoConfigureClusterNodeVIP=false \
oracle.install.crs.config.clusterNodes=bossdb1:bossdb1-vip:HUB,bossdb2:bossdb2-vip:HUB \
oracle.install.crs.config.networkInterfaceList=eno1:192.168.1.0:1,eno3:172.26.9.0:5 \
oracle.install.asm.configureGIMRDataDG=true \
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE \
oracle.install.crs.config.useIPMI=false \
oracle.install.asm.storageOption=ASM \
oracle.install.asm.SYSASMPassword=Sys123ora \
oracle.install.asm.diskGroup.name=crs \
oracle.install.asm.diskGroup.redundancy=NORMAL \
oracle.install.asm.diskGroup.AUSize=4 \
oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/mapper/asm-ocr1,ocr1,/dev/mapper/asm-ocr9,ocr2,/dev/mapper/asm-ocr8,ocr3 \
oracle.install.asm.diskGroup.disks=/dev/mapper/asm-ocr1,/dev/mapper/asm-ocr2,/dev/mapper/asm-ocr3 \
oracle.install.asm.diskGroup.diskDiscoveryString='/dev/mapper/asm*' \
oracle.install.asm.monitorPassword=Sys123ora \
oracle.install.asm.gimrDG.name=gimr \
oracle.install.asm.gimrDG.redundancy=EXTERNAL \
oracle.install.asm.gimrDG.AUSize=4 \
oracle.install.asm.gimrDG.disksWithFailureGroupNames=/dev/mapper/asm-data1, \
oracle.install.asm.gimrDG.disks=/dev/mapper/asm-data1 \
oracle.install.asm.configureAFD=false \
oracle.install.crs.configureRHPS=false \
oracle.install.crs.config.ignoreDownNodes=false \
oracle.install.config.managementOption=NONE \
oracle.install.crs.rootconfig.executeRootScript=false

The installer prints the location of the log file:

Launching Oracle Grid Infrastructure Setup Wizard...

You can find the log of this install session at:
 /g01/app/oraInventory/logs/GridSetupActions2020-05-15_11-21-02PM/gridSetupActions2020-05-15_11-21-02PM.log

You can watch this log during the installation to follow its progress.

When the installation completes, it prompts:

As a root user, execute the following script(s):
        1. /g01/app/oraInventory/orainstRoot.sh
        2. /g01/app/12.2.0/root.sh

Execute /g01/app/oraInventory/orainstRoot.sh on the following nodes:
[bossdb2]
Execute /g01/app/12.2.0/root.sh on the following nodes:
[bossdb1, bossdb2]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes.

9.2 Configure the cluster

 

9.2.1 orainstRoot.sh

Run as root on the non-installation node (bossdb2).

sh  /g01/app/oraInventory/orainstRoot.sh

Output:

[root@bossdb2 ~]#   sh  /g01/app/oraInventory/orainstRoot.sh
Changing permissions of /g01/app/oraInventory/.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /g01/app/oraInventory/ to oinstall.
The execution of the script is complete.

9.2.2 root.sh

Run it on the installation node first; only after it succeeds there, run it on the other node.

- Installation node (bossdb1)
sh  /g01/app/12.2.0/root.sh

- Other node (bossdb2)
sh /g01/app/12.2.0/root.sh

The script runs through 19 steps:

  1. install 'SetupTFA'
  2. install 'ValidateEnv'
  3. install 'CheckFirstNode'
  4. install 'GenSiteGUIDs'
  5. install 'SaveParamFile'
  6. install 'SetupOSD'
  7. install 'CheckCRSConfig'
  8. install 'SetupLocalGPNP'
  9. install 'ConfigOLR'
  10. install 'ConfigCHMOS'
  11. install 'CreateOHASD'
  12. install 'ConfigOHASD', add cluster entry into 'oracle-ohasd.service'
  13. 'InstallAFD'
  14. 'InstallACFS', then restart ohasd
  15. 'InstallKA'
  16. 'InitConfig', then restart OHASD, start the various services, configure the voting disks, and stop CRS
  17. 'StartCluster'
  18. 'ConfigNode', mainly configures the listeners
  19. 'PostConfig'

I ran into trouble at this step: when Oracle configured the ASM network, it set up the same route as another NIC, so the ASM instance on node 2 could not reach the cluster and failed to start.
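
If you run into the same problem, comparing the routing table with the networks Grid registered can help narrow it down; a rough sketch, run as grid on each node (the interface and subnet follow this installation):

$ORACLE_HOME/bin/oifcfg getif        # networks registered for public and cluster_interconnect/ASM traffic
ip route show | grep 172.26.9.0      # check that two interfaces do not carry the same route for the private/ASM subnet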

9.2.3 Configure MGMT

Finally, another difference between the silent GRID installation and the GUI installation is that, after the root scripts have been run on both nodes, you still have to run the command below (as prompted during the GI software installation) to finish configuring mgmtdb. Nothing reminds you of this after root.sh, so take care not to skip it.

Run the following as the grid user on the installation node:

$ORACLE_HOME/gridSetup.sh -executeConfigTools -silent -responseFile ${ORACLE_HOME}/install/response/gridsetup.rsp

It reported some errors; first check the state of the MGMT resource:

[grid@bossdb1 addnode]$ crsctl stat res ora.MGMT.dg -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.MGMT.dg
               ONLINE  ONLINE       bossdb1                  STABLE
               ONLINE  ONLINE       bossdb2                  STABLE
--------------------------------------------------------------------------------

Since the MGMT disk group is ONLINE and its state is STABLE, the errors can be ignored; they turn out to be harmless.

10 Create ASM Disk Groups

In a silent installation there are two ways to create ASM disk groups: SQL and ASMCA. ASMCA is recommended as the more reliable option. Create the disk groups according to the plan:

+SYSTEM >=200G 500G 1 1 external
+ARCH >=500G 500G 1 1 external
+DATA >=2T 500G 5 5 external
+MGMT >=100G 500G 1 1 external

10.1 SQL method

Syntax:

create diskgroup <groupname> [external|normal|high redundancy] disk 'asmdisk-file'
su - grid
sqlplus / as sysasm
set lines 32767 pages 5000
col path for a40
create diskgroup arch external redundancy disk '/dev/mapper/asm-data3';

10.2 Silent ASMCA method

[grid@bossdb1 addnode]$ asmca -silent -createDiskGroup -diskGroupName SYSTEM -DISKLIST '/dev/mapper/asm-data4' -redundancy external -au_size 4 -compatible.asm '12.2' -sysAsmPassword Sys123ora

Disk groups created successfully. Check /g01/app/grid/cfgtoollogs/asmca/asmca-200516PM030212.log for details.
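
The remaining disk groups from the plan can be created the same way; a sketch following the same asmca pattern (the asm-data device numbers here are placeholders, substitute the LUNs you actually mapped for +DATA):

# +DATA: external redundancy over five 500G LUNs (device names are examples)
asmca -silent -createDiskGroup -diskGroupName DATA -redundancy external -au_size 4 -compatible.asm '12.2' -sysAsmPassword Sys123ora -DISKLIST '/dev/mapper/asm-data5,/dev/mapper/asm-data6,/dev/mapper/asm-data7,/dev/mapper/asm-data8,/dev/mapper/asm-data9'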

After creation, check the current disk groups:

col COMPATIBILITY for a13
col database_compatibility for a20
col name for a10
select group_number,name,state,COMPATIBILITY,DATABASE_COMPATIBILITY from v$asm_diskgroup;

GROUP_NUMBER NAME                           STATE       COMPATIBILITY                                                DATABASE_COMPATIBILITY
------------ ------------------------------ ----------- ------------------------------------------------------------ ------------------------------------------------------------
           1 MGMT                           MOUNTED     12.2.0.1.0                                                   10.1.0.0.0
           2 OCR                            MOUNTED     12.2.0.1.0                                                   10.1.0.0.0
           3 DATA                           MOUNTED     12.2.0.0.0                                                   10.1.0.0.0
           4 ARCH                           MOUNTED     12.2.0.0.0                                                   10.1.0.0.0
           5 SYSTEM                         MOUNTED     12.2.0.0.0                                                   10.1.0.0.0

10.3 View disk and disk group information

select group_number,name,state,total_mb,free_mb,type,offline_disks from v$asm_diskgroup;

GROUP_NUMBER NAME       STATE         TOTAL_MB    FREE_MB TYPE   OFFLINE_DISKS
------------ ---------- ----------- ---------- ---------- ------ -------------
           1 ARCH       MOUNTED        1024156     813981 EXTERN             0
           2 DATA       MOUNTED        3277268     649260 EXTERN             0
           3 MGMT       MOUNTED         512076     477828 EXTERN             0
           4 OCR        MOUNTED          30720      29852 NORMAL             0
           5 SYSTEM     MOUNTED         512076     489328 EXTERN             0

11 Silent DB Installation

 

11.1 Unzip the files

Note that, unlike grid, the Oracle RDBMS zip must not be extracted into $ORACLE_HOME; extract it somewhere else (here, the oracle user's home directory).

# run as root
chown oracle:oinstall  linuxx64_12201_database.zip
cp  linuxx64_12201_database.zip /home/oracle/
# run as the oracle user
su - oracle
unzip  linuxx64_12201_database.zip

11.2 Configure the response file

This response file both installs the software and creates the database; there is no need to install the software first and then run DBCA separately. Pay attention to INVENTORY_LOCATION: the GI and RDBMS installations must share the same inventory.

oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v12.2.0
oracle.install.option=INSTALL_DB_AND_CONFIG
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/g01/app/oraInventory
ORACLE_HOME=/u01/app/oracle/product/12.2/dbhome_1
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=dba
oracle.install.db.OSOPER_GROUP=oper
oracle.install.db.OSBACKUPDBA_GROUP=backupdba
oracle.install.db.OSDGDBA_GROUP=dgdba
oracle.install.db.OSKMDBA_GROUP=kmdba
oracle.install.db.OSRACDBA_GROUP=racdba
oracle.install.db.rac.configurationType=
oracle.install.db.CLUSTER_NODES=bossdb1,bossdb2
oracle.install.db.isRACOneInstall=false
oracle.install.db.racOneServiceName=
oracle.install.db.rac.serverpoolName=
oracle.install.db.rac.serverpoolCardinality=
oracle.install.db.config.starterdb.type=GENERAL_PURPOSE
oracle.install.db.config.starterdb.globalDBName=boss
oracle.install.db.config.starterdb.SID=boss
oracle.install.db.ConfigureAsContainerDB=false
oracle.install.db.config.PDBName=                       # left empty: with the previous line set to false, the database created is an ordinary (non-CDB) RAC database. To create a CDB+PDB, set the previous line to true and provide a PDB name here
oracle.install.db.config.starterdb.characterSet=ZHS16GBK
oracle.install.db.config.starterdb.memoryOption=false
oracle.install.db.config.starterdb.memoryLimit=51300
oracle.install.db.config.starterdb.installExampleSchemas=false
oracle.install.db.config.starterdb.password.ALL=Sys123ora
oracle.install.db.config.starterdb.password.SYS=
oracle.install.db.config.starterdb.password.SYSTEM=
oracle.install.db.config.starterdb.password.DBSNMP=
oracle.install.db.config.starterdb.password.PDBADMIN=
oracle.install.db.config.starterdb.managementOption=DEFAULT
oracle.install.db.config.starterdb.omsHost=
oracle.install.db.config.starterdb.omsPort=
oracle.install.db.config.starterdb.emAdminUser=
oracle.install.db.config.starterdb.emAdminPassword=
oracle.install.db.config.starterdb.enableRecovery=true
oracle.install.db.config.starterdb.storageType=ASM_STORAGE
oracle.install.db.config.starterdb.fileSystemStorage.dataLocation=
oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation=
oracle.install.db.config.asm.diskGroup=SYSTEM
oracle.install.db.config.asm.ASMSNMPPassword=Sys123ora
MYORACLESUPPORT_USERNAME=
MYORACLESUPPORT_PASSWORD=
SECURITY_UPDATES_VIA_MYORACLESUPPORT=
DECLINE_SECURITY_UPDATES=
PROXY_HOST=
PROXY_PORT=
PROXY_USER=
PROXY_PWD=
COLLECTOR_SUPPORTHUB_URL=

11.3 Run the installation

$HOME/database/runInstaller -silent -skipPrereqs -responseFile $HOME/database/response/db_install.rsp

Example run:

[oracle@bossdb1 ~]$ $HOME/database/runInstaller -silent -skipPrereqs -responseFile $HOME/database/response/db_install.rsp
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 23593 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 32191 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-05-16_06-23-30PM. Please wait ...[oracle@bossdb1 ~]$ You can find the log of this install session at:
 /g01/app/oraInventory/logs/installActions2020-05-16_06-23-30PM.log
The installation of Oracle Database 12c was successful.
Please check '/g01/app/oraInventory/logs/silentInstall2020-05-16_06-23-30PM.log' for more details.
The Cluster Node Addition of /u01/app/oracle/product/12.2/dbhome_1 was successful.
Please check '/g01/app/oraInventory/logs/silentInstall2020-05-16_06-23-30PM.log' for more details.

As a root user, execute the following script(s):
        1. /u01/app/oracle/product/12.2/dbhome_1/root.sh

Execute /u01/app/oracle/product/12.2/dbhome_1/root.sh on the following nodes:
[bossdb1, bossdb2]


Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /home/oracle/database/runInstaller -executeConfigTools -responseFile /home/oracle/database/response/db_install.rsp [-silent]

When it finishes, it prompts you to run root.sh as root on both nodes and then run executeConfigTools on the installation node as the RDBMS installation user (usually oracle). Follow the instructions:

  • root.sh

    Node 1 (bossdb1):
    [root@bossdb1 ~]# sh /u01/app/oracle/product/12.2/dbhome_1/root.sh
    Check /u01/app/oracle/product/12.2/dbhome_1/install/root_bossdb1_2020-05-16_18-45-31-958812474.log for the output of root script
    Node 2 (bossdb2):
    [root@bossdb2 ~]#  sh /u01/app/oracle/product/12.2/dbhome_1/root.sh
    Check /u01/app/oracle/product/12.2/dbhome_1/install/root_bossdb2_2020-05-16_18-45-57-937516434.log for the output of root script
    
  • executeConfigTools

    [oracle@bossdb1 ~]$  /home/oracle/database/runInstaller -executeConfigTools -responseFile /home/oracle/database/response/db_install.rsp -silent
    Starting Oracle Universal Installer...
    
    Checking Temp space: must be greater than 500 MB.   Actual 16204 MB    Passed
    Checking swap space: must be greater than 150 MB.   Actual 32191 MB    Passed
    Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-05-16_06-46-44PM. Please wait ...[oracle@bossdb1 ~]$ You can find the logs of this session at:
    /g01/app/oraInventory/logs
    
    Successfully Configured Software.
    

11.4 Check the cluster status

The db resource now exists in the cluster and is ONLINE, which shows that everything so far has worked.

[grid@bossdb1 ~]$ crsctl stat res ora.boss.db -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.boss.db
      1        ONLINE  ONLINE       bossdb1                  Open,HOME=/u01/app/o
                                                             racle/product/12.2/d
                                                             bhome_1,STABLE
      2        ONLINE  ONLINE       bossdb2                  Open,HOME=/u01/app/o
                                                             racle/product/12.2/d
                                                             bhome_1,STABLE
--------------------------------------------------------------------------------

12 Install Patches

The latest patches at the time of writing are 30501932 (for grid) and 30593149 (for db). See MOS (Doc ID 2558817.1) for the latest 2020 RUs/RURs for Oracle 12c. Applying the patches no longer requires stopping CRS or the database.

  • Unzip the patch packages (run as root)

    # update OPatch; this must be done on both nodes
    # for grid
    cp /opt/p6880880_180000_Linux-x86-64.zip  /g01/app/12.2.0/
    cd /g01/app/12.2.0
    rm -rf ./OPatch
    unzip p6880880_180000_Linux-x86-64.zip
    chown -R grid:oinstall ./OPatch
    # for oracle rdbms
    cp /opt/p6880880_180000_Linux-x86-64.zip /u01/app/oracle/product/12.2/dbhome_1/
    cd /u01/app/oracle/product/12.2/dbhome_1/
    rm -rf ./OPatch
    unzip p6880880_180000_Linux-x86-64.zip
    chown -R oracle:oinstall ./OPatch
    
    
  • Confirm the installed components

      $ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME
    Oracle Interim Patch Installer version 12.2.0.1.21
    Copyright (c) 2020, Oracle Corporation.  All rights reserved.
    
    
    Oracle Home       : /g01/app/12.2.0
    Central Inventory : /g01/app/oraInventory
       from           : /g01/app/12.2.0/oraInst.loc
    OPatch version    : 12.2.0.1.21
    OUI version       : 12.2.0.1.4
    Log file location : /g01/app/12.2.0/cfgtoollogs/opatch/opatch2020-05-17_16-06-38PM_1.log
    
    Lsinventory Output file location : /g01/app/12.2.0/cfgtoollogs/opatch/lsinv/lsinventory2020-05-17_16-06-38PM.txt
    --------------------------------------------------------------------------------
    Local Machine Information::
    Hostname: bossdb1
    ARU platform id: 226
    ARU platform description:: Linux x86-64
    
    Installed Top-level Products (1):
    
    Oracle Grid Infrastructure 12c                                       12.2.0.1.0
    There are 1 products installed in this Oracle Home.
    
    
    Installed Products (99):
    
    Assistant Common Files                                               12.2.0.1.0
    Automatic Storage Management Assistant                               12.2.0.1.0
    BLASLAPACK Component                                                 12.2.0.1.0
    Buildtools Common Files                                              12.2.0.1.0
    Cluster Ready Services Files                                         12.2.0.1.0
    Cluster Verification Utility Common Files                            12.2.0.1.0
    Cluster Verification Utility Files                                   12.2.0.1.0
    Database Configuration and Upgrade Assistants                        12.2.0.1.0
    Database Migration Assistant for Unicode                             12.2.0.1.0
    Database SQL Scripts                                                 12.2.0.1.0
    Database Workspace Manager                                           12.2.0.1.0
    DB TOOLS Listener                                                    12.2.0.1.0
    Deinstallation Tool                                                  12.2.0.1.0
    Expat libraries                                                       2.0.1.0.3
    Hadoopcore Component                                                 12.2.0.1.0
    HAS Common Files                                                     12.2.0.1.0
    HAS Files for DB                                                     12.2.0.1.0
    Installation Common Files                                            12.2.0.1.0
    Installation Plugin Files                                            12.2.0.1.0
    Installer SDK Component                                              12.2.0.1.4
    Java Development Kit                                                 1.8.0.91.0
    LDAP Required Support Files                                          12.2.0.1.0
    OLAP SQL Scripts                                                     12.2.0.1.0
    Oracle Advanced Security                                             12.2.0.1.0
    Oracle Bali Share                                                    11.1.1.6.0
    Oracle Clusterware RDBMS Files                                       12.2.0.1.0
    Oracle Configuration Manager Deconfiguration                         10.3.1.0.0
    Oracle Core Required Support Files                                   12.2.0.1.0
    Oracle Core Required Support Files for Core DB                       12.2.0.1.0
    Oracle Database 12c                                                  12.2.0.1.0
    Oracle Database 12c Multimedia Files                                 12.2.0.1.0
    Oracle Database Deconfiguration                                      12.2.0.1.0
    Oracle Database Utilities                                            12.2.0.1.0
    Oracle DBCA Deconfiguration                                          12.2.0.1.0
    Oracle Extended Windowing Toolkit                                    11.1.1.6.0
    Oracle Globalization Support                                         12.2.0.1.0
    Oracle Globalization Support                                         12.2.0.1.0
    Oracle Globalization Support For Core                                12.2.0.1.0
    Oracle Grid Infrastructure 12c                                       12.2.0.1.0
    Oracle Grid Infrastructure Bundled Agents                            12.2.0.1.0
    Oracle Grid Management Database                                      12.2.0.1.0
    Oracle Help for Java                                                 11.1.1.7.0
    Oracle Help Share Library                                            11.1.1.7.0
    Oracle Ice Browser                                                   11.1.1.7.0
    Oracle Internet Directory Client                                     12.2.0.1.0
    Oracle Java Client                                                   12.2.0.1.0
    Oracle JDBC/OCI Instant Client                                       12.2.0.1.0
    Oracle JDBC/THIN Interfaces                                          12.2.0.1.0
    Oracle JFC Extended Windowing Toolkit                                11.1.1.6.0
    Oracle JVM                                                           12.2.0.1.0
    Oracle JVM For Core                                                  12.2.0.1.0
    Oracle LDAP administration                                           12.2.0.1.0
    Oracle Locale Builder                                                12.2.0.1.0
    Oracle Multimedia                                                    12.2.0.1.0
    Oracle Multimedia Client Option                                      12.2.0.1.0
    Oracle Multimedia Java Advanced Imaging                              12.2.0.1.0
    Oracle Multimedia Locator                                            12.2.0.1.0
    Oracle Multimedia Locator Java Required Support Files                12.2.0.1.0
    Oracle Multimedia Locator RDBMS Files                                12.2.0.1.0
    Oracle Net                                                           12.2.0.1.0
    Oracle Net Listener                                                  12.2.0.1.0
    Oracle Net Required Support Files                                    12.2.0.1.0
    Oracle Netca Client                                                  12.2.0.1.0
    Oracle Notification Service                                          12.2.0.1.0
    Oracle Notification Service for Instant Client                       12.2.0.1.0
    Oracle One-Off Patch Installer                                       12.2.0.1.6
    Oracle Quality of Service Management (Server)                        12.2.0.1.0
    Oracle RAC Deconfiguration                                           12.2.0.1.0
    Oracle RAC Required Support Files-HAS                                12.2.0.1.0
    Oracle Recovery Manager                                              12.2.0.1.0
    Oracle Security Developer Tools                                      12.2.0.1.0
    Oracle Text Required Support Files                                   12.2.0.1.0
    Oracle Universal Connection Pool                                     12.2.0.1.0
    Oracle Universal Installer                                           12.2.0.1.4
    Oracle USM Deconfiguration                                           12.2.0.1.0
    Oracle Wallet Manager                                                12.2.0.1.0
    oracle.swd.commonlogging                                             13.3.0.0.0
    oracle.swd.opatchautodb                                              12.2.0.1.5
    oracle.swd.oui.core.min                                              12.2.0.1.4
    Parser Generator Required Support Files                              12.2.0.1.0
    Perl Interpreter                                                     5.22.0.0.0
    Perl Modules                                                         5.22.0.0.0
    PL/SQL                                                               12.2.0.1.0
    PL/SQL Embedded Gateway                                              12.2.0.1.0
    Platform Required Support Files                                      12.2.0.1.0
    Precompiler Required Support Files                                   12.2.0.1.0
    RDBMS Required Support Files                                         12.2.0.1.0
    RDBMS Required Support Files for Instant Client                      12.2.0.1.0
    Required Support Files                                               12.2.0.1.0
    Secure Socket Layer                                                  12.2.0.1.0
    SQL*Plus                                                             12.2.0.1.0
    SQL*Plus Files for Instant Client                                    12.2.0.1.0
    SQL*Plus Required Support Files                                      12.2.0.1.0
    SSL Required Support Files for InstantClient                         12.2.0.1.0
    Tomcat Container                                                     12.2.0.1.0
    Tracle File Analyzer                                                 12.2.0.1.0
    Universal Storage Manager Files                                      12.2.0.1.0
    XDK Required Support Files                                           12.2.0.1.0
    XML Parser for Java                                                  12.2.0.1.0
    There are 99 products installed in this Oracle Home.
    
    
    There are no Interim patches installed in this Oracle Home.
    
    
    --------------------------------------------------------------------------------
    
    OPatch succeeded.
    
  • Check for OPatch conflicts

    Since this is a freshly installed cluster, this step was skipped. The commands would be:

    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30501932/30593149
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30501932/30585969
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30501932/30586063
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30501932/26839277
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30501932/30591794
    
  • Check free space

      cat >> /tmp/patch_list_gihome.txt <<EOF
    /g01/app/patches/30501932/30593149
    /g01/app/patches/30501932/30585969
    /g01/app/patches/30501932/30586063
    /g01/app/patches/30501932/26839277
    /g01/app/patches/30501932/30591794
    EOF
    
     $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
    

    Sample output:

    Oracle Interim Patch Installer version 12.2.0.1.21
    Copyright (c) 2020, Oracle Corporation.  All rights reserved.
    
    PREREQ session
    
    Oracle Home       : /g01/app/12.2.0
    Central Inventory : /g01/app/oraInventory
       from           : /g01/app/12.2.0/oraInst.loc
    OPatch version    : 12.2.0.1.21
    OUI version       : 12.2.0.1.4
    Log file location : /g01/app/12.2.0/cfgtoollogs/opatch/opatch2020-05-17_16-20-21PM_1.log
    
    Invoking prereq "checksystemspace"
    
    Prereq "checkSystemSpace" passed.
    
    OPatch succeeded.
    
    
    
  • Check for one-off patch conflicts (run as root)

    /g01/app/12.2.0/OPatch/opatchauto apply /g01/app/patches/30501932 -analyze -oh /g01/app/12.2.0
    
  • Apply the patches (run as root)

    export PATH=$PATH:/g01/app/12.2.0/OPatch
    # apply on all nodes
    opatchauto apply /g01/app/patches/30501932
    # apply to the local home only
    opatchauto apply /g01/app/patches/30501932 -oh /g01/app/12.2.0 -nonrolling
    

13 Tuning

 

13.1 Parameter tuning

alter system set memory_max_target=50000M sid='*' scope=spfile;
alter system set memory_target=50000M sid='*' scope=spfile;
alter system set audit_trail='none' sid='*' scope=spfile;
alter system set db_files=2000 sid='*' scope=spfile;
alter system set deferred_segment_creation=false SCOPE=BOTH SID='*';          -- disable deferred segment creation
ALTER SYSTEM SET CONTROL_FILE_RECORD_KEEP_TIME=31 SCOPE=BOTH SID='*';         -- keep control file records for 31 days
ALTER SYSTEM SET MAX_DUMP_FILE_SIZE='2048M' SCOPE=BOTH SID='*';               -- cap the dump file size
ALTER SYSTEM SET PROCESSES=2048 SCOPE=SPFILE SID='*';                         -- raise the maximum number of processes
ALTER SYSTEM SET "_UNDO_AUTOTUNE"=FALSE SCOPE=BOTH SID='*';                   -- disable undo retention auto-tuning
ALTER SYSTEM SET "_USE_ADAPTIVE_LOG_FILE_SYNC"=FALSE SCOPE=BOTH SID='*';      -- disable adaptive log file sync
ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON=FALSE SCOPE=SPFILE SID='*';         -- disable case-sensitive passwords; deprecated in 12.2 (needs matching sqlnet.ora settings)
alter system set cursor_sharing=force scope=both sid='*';                     -- force cursor sharing (bind variables) to reduce hard parsing
alter system set result_cache_max_size=0 scope=both sid='*';                  -- the result cache has serious bugs that can hang the whole cluster, so disable it
-- alter database enable block change tracking using file '+data';               -- enable block change tracking; it has some overhead, decide based on past workload
-- alter system set "_resource_manager_always_on"=FALSE SCOPE=SPFILE SID='*';    -- in 11g this is used together with the next parameter; in 12c it is not needed, the feature is disabled via the scheduler windows (see below)
-- alter system set "_resource_manager_always_off"=true scope=spfile sid='*';

13.2 sqlnet.ora

After a database upgrade, older JDBC/ODBC clients may fail to connect, so compatibility needs to be configured.

cat $ORACLE_HOME/network/admin/sqlnet.ora

# before 12c
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8
# from 12c on
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=8

13.3 Enabling and disabling features

 

13.3.1 Enable archive logging

shutdown immediate
startup mount;
alter database archivelog;
alter system set log_archive_dest_1='LOCATION=+ARCH' scope=both sid='*';   -- archive to the +ARCH disk group per the storage plan
alter database open;
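
Because this is a RAC database registered in Clusterware, a cleaner way to apply the change cluster-wide is to stop the whole database with srvctl first; a minimal sketch, assuming the database is registered as "boss" as configured above:

# run as oracle on one node
srvctl stop database -d boss                  # stop all instances cleanly
sqlplus -s / as sysdba <<'SQL'
startup mount
alter database archivelog;
alter system set log_archive_dest_1='LOCATION=+ARCH' scope=spfile sid='*';
shutdown immediate
SQL
srvctl start database -d boss                 # restart both instances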

13.3.2 Disable resource manager plans

execute dbms_scheduler.set_attribute('WEEKNIGHT_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('WEEKEND_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('SATURDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('SUNDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('MONDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('TUESDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('WEDNESDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('THURSDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('FRIDAY_WINDOW','RESOURCE_PLAN','');

13.3.3 Enable and configure profiles

alter system set resource_limit=true scope=both sid='*';                      -- enable resource limits, mainly to cut off applications' never-ending idle connections
alter profile default limit idle_time=180;                                    -- cap idle sessions at 180 minutes; adjust to your actual workload

Author: halberd.lee

Created: 2020-05-22 Fri 14:18


Original post: https://www.cnblogs.com/halberd-lee/p/12936815.html