Notes from a recovery after deleting the OCR and dbfile disk groups

A case entirely of my own making:

Scenario: I had dd'd over the OCR disk group and also dd'd over the dbfile disk group, so the RAC stack would no longer start. The OCR had previously been moved from the DATA disk group onto the ABC disks. Fortunately, backups were available.
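For context, this is the kind of accidental command that causes such damage; the device path below is hypothetical and is shown only to illustrate the mistake. Writing zeros over the first megabytes of an ASM disk destroys its header and, with it, the disk group metadata.

# Hypothetical illustration only -- never run this against a real ASM disk
dd if=/dev/zero of=/dev/oracleasm/disks/OCR1 bs=1M count=10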

   

    Reloading the OCR disk

[root@rhel1 ~]# /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose

defined(@array) is deprecated at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1702.

        (Maybe you should just omit the defined()?)

defined(@array) is deprecated at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1702.

        (Maybe you should just omit the defined()?)

defined(@array) is deprecated at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1780.

        (Maybe you should just omit the defined()?)

2019-05-22 10:31:57: Checking for super user privileges

2019-05-22 10:31:57: User has super user privileges

2019-05-22 10:31:57: Parsing the host name

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Delete failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

You must kill ohasd processes or reboot the system to properly

cleanup the processes started by Oracle clusterware

ADVM/ACFS is not supported on redhat-release-server-7.1-1.el7.x86_64

ACFS-9201: Not Supported

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

/bin/dd: failed to open ‘’: No such file or directory

Successfully deconfigured Oracle Restart stack

[root@rhel1 ~]# less /etc/oracle/olr.loc

/etc/oracle/olr.loc: No such file or directory

[root@rhel1 ~]# cd /u01/app/11.2.0/grid/

[root@rhel1 grid]# ./root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]:

The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]:

The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]:

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2019-05-22 10:33:37: Parsing the host name

2019-05-22 10:33:37: Checking for super user privileges

2019-05-22 10:33:37: User has super user privileges

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params   ---> this is the parameter file that root.sh reads

LOCAL ADD MODE

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Adding daemon to inittab   ----> (on 11.2.0.1 you need to run dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1; see the note below)
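On 11.2.0.1, root.sh is known to hang at "Adding daemon to inittab" because the ohasd startup handshake does not complete on newer Linux releases; the usual workaround, already quoted in the annotation above, is to run the dd from a second root session while root.sh is waiting:

# run as root in another terminal while root.sh hangs at "Adding daemon to inittab"
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1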

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

ADVM/ACFS is not supported on redhat-release-server-7.1-1.el7.x86_64

CRS-2672: Attempting to start 'ora.gipcd' on 'rhel1'

CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel1'

CRS-2676: Start of 'ora.gipcd' on 'rhel1' succeeded

CRS-2676: Start of 'ora.mdnsd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel1'

CRS-2676: Start of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel1'

CRS-2676: Start of 'ora.cssdmonitor' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rhel1'

CRS-2672: Attempting to start 'ora.diskmon' on 'rhel1'

CRS-2676: Start of 'ora.diskmon' on 'rhel1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'rhel1'

CRS-2676: Start of 'ora.ctssd' on 'rhel1' succeeded

Disk Group DATA already exists. Cannot be created again   ---> this disk group had not been dd'd after the earlier replacement, so the ASM configuration step fails here.

Configuration of ASM failed, see logs for details

Did not succssfully configure and start ASM

CRS-2500: Cannot stop resource 'ora.crsd' as it is not running

CRS-4000: Command Stop failed, or completed with errors.

Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl stop resource ora.crsd -init

Stop of resource "ora.crsd -init" failed

Failed to stop CRSD

CRS-2500: Cannot stop resource 'ora.asm' as it is not running

CRS-4000: Command Stop failed, or completed with errors.

Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl stop resource ora.asm -init

Stop of resource "ora.asm -init" failed

Failed to stop ASM

CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel1'

CRS-2677: Stop of 'ora.ctssd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rhel1'

CRS-2677: Stop of 'ora.cssdmonitor' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rhel1'

CRS-2677: Stop of 'ora.cssd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel1'

CRS-2677: Stop of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2679: Attempting to clean 'ora.gpnpd' on 'rhel1'

CRS-2681: Clean of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel1'

CRS-2677: Stop of 'ora.gipcd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel1'

CRS-2677: Stop of 'ora.mdnsd' on 'rhel1' succeeded

Initial cluster configuration failed.  See /u01/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_rhel1.log for details
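Before rerunning root.sh it can help to confirm which disks still carry ASM metadata. A sketch using kfed from the grid home; the ASMLib device path is an assumption, adjust it to the environment:

/u01/app/11.2.0/grid/bin/kfed read /dev/oracleasm/disks/AAA | grep -E 'kfdhdb.(grpname|hdrsts)'
# hdrsts of KFDHDR_MEMBER together with grpname=DATA would explain the "already exists" error above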

[root@rhel1 grid]# /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose

defined(@array) is deprecated at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1702.

        (Maybe you should just omit the defined()?)

defined(@array) is deprecated at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1702.

        (Maybe you should just omit the defined()?)

defined(@array) is deprecated at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1780.

        (Maybe you should just omit the defined()?)

2019-05-22 10:38:55: Checking for super user privileges

2019-05-22 10:38:55: User has super user privileges

2019-05-22 10:38:55: Parsing the host name

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

CRS-4535: Cannot communicate with Cluster Ready Services

CRS-4000: Command Stop failed, or completed with errors.

CRS-4535: Cannot communicate with Cluster Ready Services

CRS-4000: Command Delete failed, or completed with errors.

CRS-4133: Oracle High Availability Services has been stopped.

ADVM/ACFS is not supported on redhat-release-server-7.1-1.el7.x86_64

ACFS-9201: Not Supported

Successfully deconfigured Oracle Restart stack

[root@rhel1 ~]# vi  /u01/app/11.2.0/grid/crs/install/crsconfig_params  -----> edit the configuration file that root.sh reads; here I changed the ASM disk group settings (a quick check follows the excerpt)

(

ASM_DISK_GROUP=ABC 

ASM_DISCOVERY_STRING=

ASM_DISKS=ORCL:AAA,ORCL:BBB,ORCL:CCC

)
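A quick sanity check of the edited values before rerunning root.sh, assuming the same file path as above:

grep -E '^ASM_(DISK_GROUP|DISCOVERY_STRING|DISKS)=' /u01/app/11.2.0/grid/crs/install/crsconfig_params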

[root@rhel1 grid]# ./root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]:

The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]:

The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]:

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2019-05-22 10:40:41: Parsing the host name

2019-05-22 10:40:41: Checking for super user privileges

2019-05-22 10:40:41: User has super user privileges

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

LOCAL ADD MODE

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Adding daemon to inittab

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

ADVM/ACFS is not supported on redhat-release-server-7.1-1.el7.x86_64

CRS-2672: Attempting to start 'ora.gipcd' on 'rhel1'

CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel1'

CRS-2676: Start of 'ora.gipcd' on 'rhel1' succeeded

CRS-2676: Start of 'ora.mdnsd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel1'

CRS-2676: Start of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel1'

CRS-2676: Start of 'ora.cssdmonitor' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rhel1'

CRS-2672: Attempting to start 'ora.diskmon' on 'rhel1'

CRS-2676: Start of 'ora.diskmon' on 'rhel1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'rhel1'

CRS-2676: Start of 'ora.ctssd' on 'rhel1' succeeded

ASM created and started successfully.

DiskGroup ABC created successfully.

clscfg: -install mode specified

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

CRS-2672: Attempting to start 'ora.crsd' on 'rhel1'

CRS-2676: Start of 'ora.crsd' on 'rhel1' succeeded

Successful addition of voting disk 5967fa6e88834f56bf236d0128957259.

Successful addition of voting disk 49c2405c6c6e4f96bf7a417022905afb.

Successful addition of voting disk 9f7ea2d3d0fc4f94bf9e7fd81a2228d2.

Successfully replaced voting disk group with +ABC.

CRS-4266: Voting file(s) successfully replaced

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

 1. ONLINE   5967fa6e88834f56bf236d0128957259 (ORCL:AAA) [ABC]

 2. ONLINE   49c2405c6c6e4f96bf7a417022905afb (ORCL:BBB) [ABC]

 3. ONLINE   9f7ea2d3d0fc4f94bf9e7fd81a2228d2 (ORCL:CCC) [ABC]

Located 3 voting disk(s).

CRS-2673: Attempting to stop 'ora.crsd' on 'rhel1'

CRS-2677: Stop of 'ora.crsd' on 'rhel1' succeeded

CRS-2679: Attempting to clean 'ora.crsd' on 'rhel1'

CRS-2681: Clean of 'ora.crsd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'rhel1'

CRS-2677: Stop of 'ora.asm' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel1'

CRS-2677: Stop of 'ora.ctssd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rhel1'

CRS-2677: Stop of 'ora.cssdmonitor' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rhel1'

CRS-2677: Stop of 'ora.cssd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel1'

CRS-2677: Stop of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2679: Attempting to clean 'ora.gpnpd' on 'rhel1'

CRS-2681: Clean of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel1'

CRS-2677: Stop of 'ora.gipcd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel1'

CRS-2677: Stop of 'ora.mdnsd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel1'

CRS-2676: Start of 'ora.mdnsd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 'rhel1'

CRS-2676: Start of 'ora.gipcd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel1'

CRS-2676: Start of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel1'

CRS-2676: Start of 'ora.cssdmonitor' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rhel1'

CRS-2672: Attempting to start 'ora.diskmon' on 'rhel1'

CRS-2676: Start of 'ora.diskmon' on 'rhel1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'rhel1'

CRS-2676: Start of 'ora.ctssd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'rhel1'

CRS-2676: Start of 'ora.asm' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'rhel1'

CRS-2676: Start of 'ora.crsd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.evmd' on 'rhel1'

CRS-2676: Start of 'ora.evmd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'rhel1'

CRS-2676: Start of 'ora.asm' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.ABC.dg' on 'rhel1'

CRS-2676: Start of 'ora.ABC.dg' on 'rhel1' succeeded

rhel1     2019/05/22 10:46:57     /u01/app/11.2.0/grid/cdata/rhel1/backup_20190522_104657.olr

Preparing packages...

cvuqdisk-1.0.7-1.x86_64

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Updating inventory properties for clusterware

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3054 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

[root@rhel1 grid]#

Perform the same steps on node two.

However, when checking the ASM disk groups afterwards, the earlier DBFILE and FRA disk group information was gone.

[grid@rhel2 ~]$ asmcmd

ASMCMD> ls

ABC/

ASMCMD> exit

The steps above were run on both nodes.

 

     Restoring the OCR disk

[root@rhel2 ~]#  /u01/app/11.2.0/grid/bin/crsctl stop crs

[root@rhel1 grid]# ll /u1/ocrdisk/   ----> list the automatic OCR backups; the backup location /u1/ocrdisk/ is one I had configured earlier (see the note after the listing)

backup00.ocr                day.ocr

backup01.ocr                ocr

backup02.ocr                week_.ocr

backup_20190507_111618.ocr  week.ocr

backup_20190521_161536.ocr
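For reference, a non-default OCR backup location such as /u1/ocrdisk is configured with ocrconfig; presumably this is how the directory above was set up beforehand:

/u01/app/11.2.0/grid/bin/ocrconfig -backuploc /u1/ocrdisk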

[root@rhel1 grid]# ./bin/ocrcheck

Status of Oracle Cluster Registry is as follows :

         Version                  :          3

         Total space (kbytes)     :     262120

         Used space (kbytes)      :       2032

         Available space (kbytes) :     260088

         ID                       :  849232275

         Device/File Name         :       +ABC

                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@rhel1 grid]# ./bin/ocrconfig -showbackup

PROT-24: Auto backups for the Oracle Cluster Registry are not available

PROT-25: Manual backups for the Oracle Cluster Registry are not available

[root@rhel1 grid]# ./bin/crsctl start crs -excl

CRS-4123: Oracle High Availability Services has been started.

CRS-2672: Attempting to start 'ora.gipcd' on 'rhel1'

CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel1'

CRS-2676: Start of 'ora.mdnsd' on 'rhel1' succeeded

CRS-2676: Start of 'ora.gipcd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel1'

CRS-2676: Start of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel1'

CRS-2676: Start of 'ora.cssdmonitor' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rhel1'

CRS-2679: Attempting to clean 'ora.diskmon' on 'rhel1'

CRS-2681: Clean of 'ora.diskmon' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.diskmon' on 'rhel1'

CRS-2676: Start of 'ora.diskmon' on 'rhel1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'rhel1'

CRS-2676: Start of 'ora.ctssd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'rhel1'

CRS-2676: Start of 'ora.asm' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'rhel1'

CRS-2676: Start of 'ora.crsd' on 'rhel1' succeeded

[root@rhel1 grid]# ll /u1/ocrdisk/                  ----> these are the earlier OCR backups; restore from one of them

backup00.ocr                day.ocr

backup01.ocr                ocr

backup02.ocr                week_.ocr

backup_20190507_111618.ocr  week.ocr

backup_20190521_161536.ocr

[root@rhel1 grid]# ll /u1/ocrdisk/backup00.ocr

-rw------- 1 root root 7012352 May 20 16:37 /u1/ocrdisk/backup00.ocr

[root@rhel1 grid]# ./bin/ocrconfig -restore /u1/ocrdisk/backup00.ocr

PROT-19: Cannot proceed while the Cluster Ready Service is running

[root@rhel1 grid]# ./bin/crsctl stop resource ora.crsd -init

CRS-2673: Attempting to stop 'ora.crsd' on 'rhel1'

CRS-2677: Stop of 'ora.crsd' on 'rhel1' succeeded

CRS-2679: Attempting to clean 'ora.crsd' on 'rhel1'

CRS-2681: Clean of 'ora.crsd' on 'rhel1' succeeded

[root@rhel1 grid]# ./bin/ocrconfig -restore /u1/ocrdisk/backup00.ocr
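The restore returns silently on success; before bouncing the stack, a quick sanity check of the restored registry does no harm:

/u01/app/11.2.0/grid/bin/ocrcheck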

[root@rhel1 grid]# ./bin/crsctl stop crs

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel1'

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rhel1'

CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel1'

CRS-2673: Attempting to stop 'ora.asm' on 'rhel1'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel1'

CRS-2677: Stop of 'ora.cssdmonitor' on 'rhel1' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'rhel1' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rhel1' succeeded

CRS-2677: Stop of 'ora.asm' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rhel1'

CRS-2677: Stop of 'ora.cssd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel1'

CRS-2673: Attempting to stop 'ora.diskmon' on 'rhel1'

CRS-2677: Stop of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2679: Attempting to clean 'ora.gpnpd' on 'rhel1'

CRS-2681: Clean of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel1'

CRS-2677: Stop of 'ora.gipcd' on 'rhel1' succeeded

CRS-2677: Stop of 'ora.diskmon' on 'rhel1' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel1' has completed

CRS-4133: Oracle High Availability Services has been stopped.

[root@rhel1 grid]# ./bin/crsctl start crs

CRS-4123: Oracle High Availability Services has been started.

[root@rhel1 grid]# su - grid

Last login: Wed May 22 10:46:59 CST 2019 on pts/0

[grid@rhel1 ~]$ asmcmd    ---> as shown below, the FRA disk group is visible again.

ASMCMD> ls           

ABC/

DATA/

FRA/

ASMCMD>

     The OCR disk group has been restored successfully; next, restore the database.

     Restoring the database

[grid@rhel1 ~]$ srvctl start database -d ORCL -o nomount

PRCR-1079 : Failed to start resource ora.orcl.db

ORA-15032: not all alterations performed

ORA-15017: diskgroup "DBFILE" cannot be mounted

ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DBFILE"

CRS-2674: Start of 'ora.DBFILE.dg' on 'rhel1' failed

ORA-15032: not all alterations performed

ORA-15017: diskgroup "DBFILE" cannot be mounted

ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DBFILE"

CRS-2674: Start of 'ora.DBFILE.dg' on 'rhel2' failed

CRS-2632: There are no more servers to try to place resource 'ora.orcl.db' on that would satisfy its placement policy
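Before recreating the disk group it is worth confirming from the ASM instance which disk groups and candidate disks are actually visible; a minimal sketch:

SQL> select name, state from v$asm_diskgroup;
SQL> select path, header_status from v$asm_disk;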

[grid@rhel1 ~]$ sqlplus "/as sysasm" 

SQL*Plus: Release 11.2.0.1.0 Production on Wed May 22 11:25:00 2019

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

With the Real Application Clusters and Automatic Storage Management options

SQL> create diskgroup DBFILE external redundancy disk 'ORCL:DBFILE' name DBFILE;

Diskgroup created.

[oracle@rhel1 ~]$ cat /u01/app/oracle/product/11.2.0/db_1/dbs/initORCL1.ora  ---> check the pfile; it points to the spfile stored in the DBFILE disk group

SPFILE='+DBFILE/ORCL/spfileORCL.ora'

[oracle@rhel1 ~]$ cat /u01/app/oracle/admin/ORCL/pfile/init.ora.3182019191943 ---> this directory keeps the parameter file generated when the database was created; use it to bring the instance to NOMOUNT

##############################################################################

# Copyright (c) 1991, 2001, 2002 by Oracle Corporation

##############################################################################

###########################################

# Cache and I/O

###########################################

db_block_size=8192

###########################################

# Cluster Database

###########################################

remote_listener=rhel-cluster-scan.grid.example.com:1521

###########################################

# Cursors and Library Cache

###########################################

open_cursors=300

###########################################

# Database Identification

###########################################

db_domain=""

db_name=ORCL

###########################################

# File Configuration

###########################################

db_create_file_dest=+DBFILE

db_recovery_file_dest=+FRA

db_recovery_file_dest_size=5218762752

###########################################

# Miscellaneous

###########################################

compatible=11.2.0.0.0

diagnostic_dest=/u01/app/oracle

memory_target=764411904

###########################################

# Processes and Sessions

###########################################

processes=150

###########################################

# Security and Auditing

###########################################

audit_file_dest=/u01/app/oracle/admin/ORCL/adump

audit_trail=db

remote_login_passwordfile=exclusive

###########################################

# Shared Server

###########################################

dispatchers="(PROTOCOL=TCP) (SERVICE=ORCLXDB)"

control_files=("+DBFILE/orcl/controlfile/current.256.1005931173", "+FRA/orcl/controlfile/current.256.1005931177")

cluster_database=true

ORCL1.instance_number=1

ORCL2.instance_number=2

ORCL2.thread=2

ORCL1.undo_tablespace=UNDOTBS1

ORCL2.undo_tablespace=UNDOTBS2

ORCL1.thread=1

     

[oracle@rhel1 ~]$ sqlplus "/as sysdba"

SQL> startup nomount pfile='/u01/app/oracle/admin/ORCL/pfile/init.ora.3182019191943'; ---> this pfile can later be converted to an spfile and placed back into the ASM disk group (see the example after the startup output)

ORACLE instance started.

Total System Global Area  764121088 bytes

Fixed Size                  2217264 bytes

Variable Size             511707856 bytes

Database Buffers          247463936 bytes

Redo Buffers                2732032 bytes
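As noted on the STARTUP line, once the database is back the pfile can be turned into an spfile stored inside ASM again; a sketch reusing the paths above (the spfile location matches what initORCL1.ora already points to):

SQL> create spfile='+DBFILE/ORCL/spfileORCL.ora' from pfile='/u01/app/oracle/admin/ORCL/pfile/init.ora.3182019191943';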

[oracle@rhel1 ~]$ rman target /

RMAN> restore controlfile from '+FRA/orcl/controlfile/current.256.1005931177'; ---> the FRA disks were not wiped, so the control file copy and the archived logs are still there

Starting restore at 22-MAY-19

using target database control file instead of recovery catalog

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=30 instance=ORCL1 device type=DISK

channel ORA_DISK_1: copied control file copy

output file name=+DBFILE/orcl/controlfile/current.256.1008936907

output file name=+FRA/orcl/controlfile/current.256.1005931177

Finished restore at 22-MAY-19

RMAN> alter database mount;

database mounted

released channel: ORA_DISK_1

RMAN> list backup;

List of Backup Sets

===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time

------- ---- -- ---------- ----------- ------------ ---------------

3       Full    947.01M    DISK        00:02:47     14-MAY-19

        BP Key: 3   Status: AVAILABLE  Compressed: NO  Tag: TAG20190514T170446

        Piece Name: /home/oracle/03u1hntf_1_1.bak

  List of Datafiles in backup set 3

  File LV Type Ckp SCN    Ckp Time  Name

  ---- -- ---- ---------- --------- ----

  1       Full 1240032    14-MAY-19 +DBFILE/orcl/datafile/system.259.1005931193

  2       Full 1240032    14-MAY-19 +DBFILE/orcl/datafile/sysaux.260.1005931273

  3       Full 1240032    14-MAY-19 +DBFILE/orcl/datafile/undotbs1.261.1005931315

  4       Full 1240032    14-MAY-19 +DBFILE/orcl/datafile/undotbs2.263.1005931371

  5       Full 1240032    14-MAY-19 +DBFILE/orcl/datafile/users.264.1005931391

  6       Full 1240032    14-MAY-19 +DBFILE/orcl/datafile/tbs1.269.1008256827

BS Key  Type LV Size       Device Type Elapsed Time Completion Time

------- ---- -- ---------- ----------- ------------ ---------------

4       Full    17.70M     DISK        00:00:15     14-MAY-19

        BP Key: 4   Status: AVAILABLE  Compressed: NO  Tag: TAG20190514T170446

        Piece Name: /home/oracle/04u1ho2o_1_1.bak   ---> conveniently, a backup piece also exists at this path

  SPFILE Included: Modification time: 14-MAY-19

  SPFILE db_unique_name: ORCL

  Control File Included: Ckp SCN: 1240638      Ckp time: 14-MAY-19
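Had the FRA copy of the control file not survived, backup set 4 above also contains the control file and spfile, so the restore could have come straight from that backup piece instead, for example:

RMAN> restore controlfile from '/home/oracle/04u1ho2o_1_1.bak';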

RMAN> restore database;

Starting restore at 22-MAY-19

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=30 instance=ORCL1 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_DISK_1: restoring datafile 00001 to +DBFILE/orcl/datafile/system.259.1005931193

channel ORA_DISK_1: restoring datafile 00002 to +DBFILE/orcl/datafile/sysaux.260.1005931273

channel ORA_DISK_1: restoring datafile 00003 to +DBFILE/orcl/datafile/undotbs1.261.1005931315

channel ORA_DISK_1: restoring datafile 00004 to +DBFILE/orcl/datafile/undotbs2.263.1005931371

channel ORA_DISK_1: restoring datafile 00005 to +DBFILE/orcl/datafile/users.264.1005931391

channel ORA_DISK_1: restoring datafile 00006 to +DBFILE/orcl/datafile/tbs1.269.1008256827

channel ORA_DISK_1: reading from backup piece /home/oracle/03u1hntf_1_1.bak

channel ORA_DISK_1: piece handle=/home/oracle/03u1hntf_1_1.bak tag=TAG20190514T170446

channel ORA_DISK_1: restored backup piece 1

channel ORA_DISK_1: restore complete, elapsed time: 00:02:30

Finished restore at 22-MAY-19

RMAN> recover database;

Starting recover at 22-MAY-19

using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 66 is already on disk as file +FRA/orcl/archivelog/2019_05_14/thread_1_seq_66.263.1008264651

archived log for thread 1 with sequence 67 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_1_seq_67.262.1008754305

archived log for thread 1 with sequence 68 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_1_seq_68.269.1008756045

archived log for thread 1 with sequence 69 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_1_seq_69.270.1008757911

archived log for thread 1 with sequence 70 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_1_seq_70.272.1008757951

archived log for thread 1 with sequence 71 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_1_seq_71.273.1008762899

archived log for thread 1 with sequence 72 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_1_seq_72.276.1008764651

archived log for thread 1 with sequence 73 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_1_seq_73.277.1008774719

archived log for thread 1 with sequence 74 is already on disk as file +FRA/orcl/archivelog/2019_05_21/thread_1_seq_74.278.1008858631

archived log for thread 1 with sequence 75 is already on disk as file +FRA/orcl/archivelog/2019_05_21/thread_1_seq_75.282.1008863235

archived log for thread 1 with sequence 76 is already on disk as file +FRA/orcl/archivelog/2019_05_21/thread_1_seq_76.283.1008864805

archived log for thread 2 with sequence 13 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_2_seq_13.267.1008754309

archived log for thread 2 with sequence 14 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_2_seq_14.268.1008754309

archived log for thread 2 with sequence 15 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_2_seq_15.271.1008757919

archived log for thread 2 with sequence 16 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_2_seq_16.274.1008762921

archived log for thread 2 with sequence 17 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_2_seq_17.275.1008762923

archived log for thread 2 with sequence 18 is already on disk as file +FRA/orcl/archivelog/2019_05_21/thread_2_seq_18.279.1008863229

archived log for thread 2 with sequence 19 is already on disk as file +FRA/orcl/archivelog/2019_05_21/thread_2_seq_19.280.1008863233

archived log for thread 2 with sequence 20 is already on disk as file +FRA/orcl/archivelog/2019_05_21/thread_2_seq_20.281.1008863233

archived log file name=+FRA/orcl/archivelog/2019_05_14/thread_1_seq_66.263.1008264651 thread=1 sequence=66

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_2_seq_13.267.1008754309 thread=2 sequence=13

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_1_seq_67.262.1008754305 thread=1 sequence=67

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_2_seq_14.268.1008754309 thread=2 sequence=14

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_1_seq_68.269.1008756045 thread=1 sequence=68

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_2_seq_15.271.1008757919 thread=2 sequence=15

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_1_seq_69.270.1008757911 thread=1 sequence=69

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_1_seq_70.272.1008757951 thread=1 sequence=70

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_2_seq_16.274.1008762921 thread=2 sequence=16

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_1_seq_71.273.1008762899 thread=1 sequence=71

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_2_seq_17.275.1008762923 thread=2 sequence=17

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_1_seq_72.276.1008764651 thread=1 sequence=72

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_1_seq_73.277.1008774719 thread=1 sequence=73

archived log file name=+FRA/orcl/archivelog/2019_05_21/thread_2_seq_18.279.1008863229 thread=2 sequence=18

archived log file name=+FRA/orcl/archivelog/2019_05_21/thread_1_seq_74.278.1008858631 thread=1 sequence=74

archived log file name=+FRA/orcl/archivelog/2019_05_21/thread_2_seq_19.280.1008863233 thread=2 sequence=19

media recovery complete, elapsed time: 00:00:42

Finished recover at 22-MAY-19

RMAN> alter database open;

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of alter db command at 05/22/2019 12:48:55

ORA-19751: could not create the change tracking file

ORA-19750: change tracking file: '+DBFILE/orcl/changetracking/ctf.268.1008159369'

ORA-17502: ksfdcre:4 Failed to create file +DBFILE/orcl/changetracking/ctf.268.1008159369

ORA-15046: ASM file name '+DBFILE/orcl/changetracking/ctf.268.1008159369' is not in single-file creation form

The block change tracking file is used to speed up incremental backups.

Connect to the database:

SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;  -----> disable it for now

Database altered.

SQL> alter database open resetlogs;

Database altered.

SQL> archive log list;

Database log mode              Archive Mode

Automatic archival             Enabled

Archive destination            USE_DB_RECOVERY_FILE_DEST

Oldest online log sequence     2

Next log sequence to archive   3

Current log sequence           3

SQL>  ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

Database altered.

SQL> SELECT STATUS, FILENAME FROM V$BLOCK_CHANGE_TRACKING;  ----> because OMF is in use, there is no need to specify USING FILE when re-enabling it (see the note after the query output)

STATUS

--------------------

FILENAME

--------------------------------------------------------------------------------

ENABLED

+DBFILE/orcl/changetracking/ctf.277.1008949385
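Without OMF, the tracking file location would have to be given explicitly when re-enabling, for example:

SQL> alter database enable block change tracking using file '+DBFILE';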

[grid@rhel1 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.ABC.dg

               ONLINE  ONLINE       rhel1

               ONLINE  ONLINE       rhel2

ora.DATA.dg

               ONLINE  ONLINE       rhel1

               ONLINE  ONLINE       rhel2

ora.DBFILE.dg

               ONLINE  ONLINE       rhel1

               ONLINE  ONLINE       rhel2

ora.FRA.dg

               ONLINE  ONLINE       rhel1

               ONLINE  ONLINE       rhel2

ora.LISTENER.lsnr

               ONLINE  ONLINE       rhel1

               ONLINE  ONLINE       rhel2

ora.asm

               ONLINE  ONLINE       rhel1                    Started

               ONLINE  ONLINE       rhel2                    Started

ora.eons

               ONLINE  ONLINE       rhel1

               ONLINE  ONLINE       rhel2

ora.gsd

               OFFLINE OFFLINE      rhel1

               OFFLINE OFFLINE      rhel2

ora.net1.network

               ONLINE  ONLINE       rhel1

               ONLINE  ONLINE       rhel2

ora.ons

               ONLINE  ONLINE       rhel1

               ONLINE  ONLINE       rhel2

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       rhel2

ora.LISTENER_SCAN2.lsnr

      1        ONLINE  ONLINE       rhel1

ora.LISTENER_SCAN3.lsnr

      1        ONLINE  ONLINE       rhel1

ora.oc4j

      1        OFFLINE OFFLINE

ora.orcl.db

      1        ONLINE  ONLINE       rhel1                    Open

      2        ONLINE  ONLINE       rhel2                    Open

ora.rhel1.vip

      1        ONLINE  ONLINE       rhel1

ora.rhel2.vip

      1        ONLINE  ONLINE       rhel2

ora.scan1.vip

      1        ONLINE  ONLINE       rhel2

ora.scan2.vip

      1        ONLINE  ONLINE       rhel1

ora.scan3.vip

      1        ONLINE  ONLINE       rhel1

Original post: https://www.cnblogs.com/hfjiang/p/10906512.html