Thursday, 23 April 2015

What is SCAN & How It Works

Single Client Access Name (SCAN) is a feature of Oracle Real Application Clusters (RAC) environments that provides a single name for clients to access any Oracle database running in a cluster. You can think of SCAN as a cluster alias for the databases in the cluster.
 
The benefit is that the client's connect information does not need to change if you add or remove nodes or databases in the cluster. SCAN was first introduced with Oracle RAC 11g Release 2 and provides additional functionality in Oracle RAC 12c. Having a single name to connect to any database in the cluster allows clients to use EZConnect and the simple JDBC thin URL, independently of the number of databases or servers running in the cluster and regardless of which server(s) in the cluster the requested database is actually active on. The terms SCAN NAME, SCAN VIP, SCAN Listener, Node VIP, Local_Listener and Remote_Listener each have their own meaning and role to play in a RAC environment.
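For example, with a SCAN name of orarac-scan, a service newdb and the default listener port 1521 (illustrative values), the two client connect strings the paragraph mentions look like this; a small Python sketch builds both forms:

```python
# Sketch: how a single SCAN name yields the client connect strings.
# "orarac-scan", 1521 and "newdb" are placeholder values for illustration.

def ezconnect(scan_name, port, service):
    """EZConnect string: host:port/service."""
    return f"{scan_name}:{port}/{service}"

def jdbc_thin(scan_name, port, service):
    """JDBC thin URL using the same SCAN name."""
    return f"jdbc:oracle:thin:@//{scan_name}:{port}/{service}"

print(ezconnect("orarac-scan", 1521, "newdb"))
# orarac-scan:1521/newdb
print(jdbc_thin("orarac-scan", 1521, "newdb"))
# jdbc:oracle:thin:@//orarac-scan:1521/newdb
```

Because both strings reference only the SCAN name, neither changes when nodes are added to or removed from the cluster.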


SCAN NAME: The SCAN name was introduced in Oracle 11g R2. It is a name that can resolve to a maximum of three IP addresses, and a minimum of one is required for an Oracle 11g R2 Real Application Clusters installation. The SCAN name must be unique in the entire organization's network. This name is used in the tnsnames.ora file on client systems, and all database connections are made through it. A DBA can use the commands below to find the SCAN name details.

[oracle@database bin]$ srvctl config scan

SCAN name: orarac-scan, Network: 1/10.141.132.0/255.255.254.0/eth8

SCAN VIP name: scan1, IP: /orarac-scan/10.191.111.57

SCAN VIP name: scan2, IP: /orarac-scan/10.191.111.55

SCAN VIP name: scan3, IP: /orarac-scan/10.191.111.56


[oracle@database bin]$ nslookup orarac-scan

Server: 111.12.128.13

Address: 111.12.128.13#53

Name: orarac-scan.oracle.com

Address: 10.191.111.55

Name: orarac-scan.oracle.com

Address: 10.191.111.56

Name: orarac-scan.oracle.com

Address: 10.191.111.57


SCAN VIP: The SCAN name resolves to one or more IP addresses; these are called SCAN VIPs or SCAN IPs. Each SCAN VIP has a SCAN listener created for it: one SCAN IP means one SCAN listener, and three SCAN IPs mean three SCAN listeners. These SCAN listeners run on any of the nodes in the RAC environment; with three SCAN listeners and only two nodes, one node hosts two of them.
A SCAN VIP and its SCAN listener work as a pair: when the SCAN VIP fails over to another node, the corresponding SCAN listener fails over to the same node. Whenever a SCAN VIP failover happens, the node with the fewest running SCAN VIPs is selected. For example, if SCAN VIPs run on node1, node2 and node3 of a 4-node cluster and node3 goes down, its SCAN VIP and corresponding SCAN listener fail over to node4, because the other two nodes already host one SCAN VIP each. A database administrator can use the commands below to find the SCAN listeners running on a node and the SCAN VIP corresponding to each.
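The failover rule described above can be sketched as a small selection function; node names are illustrative and tie-breaking by name is an assumption for determinism:

```python
# Sketch of SCAN VIP failover target selection: when a node hosting a SCAN VIP
# goes down, the VIP (and its paired listener) relocates to the surviving node
# that currently runs the fewest SCAN VIPs.

def failover_target(vip_placement, failed_node, all_nodes):
    """Return the surviving node with the fewest SCAN VIPs."""
    survivors = [n for n in all_nodes if n != failed_node]
    counts = {n: 0 for n in survivors}
    for vip, node in vip_placement.items():
        if node in counts:
            counts[node] += 1
    # least-loaded survivor; ties broken by node name for determinism
    return min(survivors, key=lambda n: (counts[n], n))

# The 4-node example from the text: VIPs on node1..node3, node3 fails.
placement = {"scan1": "node1", "scan2": "node2", "scan3": "node3"}
print(failover_target(placement, "node3", ["node1", "node2", "node3", "node4"]))
# node4 -- node1 and node2 already host one SCAN VIP each
```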
 
 
[oracle@database ~]$ ps -ef | grep tnslsnr

oracle 18181 10705 0 15:07 pts/0 00:00:00 grep tnslsnr

grid 22438 1 0 Jul24 ? 00:00:39 /u01/app/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN2 -inherit

grid 25927 1 0 Jul24 ? 00:01:31 /u01/app/11.2.0.3/grid/bin/tnslsnr LISTENER -inherit

grid 29211 1 0 Jul28 ? 00:00:27 /u01/app/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN3 -inherit


[oracle@database bin]$ lsnrctl status LISTENER_SCAN3

LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 06-AUG-2013 16:16:22

Copyright (c) 1991, 2011, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3)))

STATUS of the LISTENER

------------------------

Alias LISTENER_SCAN3

Version TNSLSNR for Linux: Version 11.2.0.3.0 - Production

Start Date 28-JUL-2013 08:38:01

Uptime 9 days 7 hr. 38 min. 21 sec

Trace Level off

Security ON: Local OS Authentication

SNMP OFF

Listener Parameter File /u01/app/11.2.0.3/grid/network/admin/listener.ora

Listener Log File /u01/app/11.2.0.3/grid/log/diag/tnslsnr/node2/listener_scan3/alert/log.xml

Listening Endpoints Summary...

(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN3)))
 
 
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.191.111.57)(PORT=1521)))

Services Summary...

Service "newdb" has 2 instance(s).

Instance "newdb1", status READY, has 1 handler(s) for this service...

Instance "newdb2", status READY, has 1 handler(s) for this service...

Service "newdbXDB" has 2 instance(s).

Instance "newdb1", status READY, has 1 handler(s) for this service...

Instance "newdb2", status READY, has 1 handler(s) for this service...

The command completed successfully
 
From the grep output above, the DBA can see that this system currently has two SCAN listeners running. The SCAN listener named LISTENER_SCAN3 is running on SCAN VIP 10.191.111.57, as shown by the HOST parameter in the lsnrctl status output.
 
 
SCAN Listener: As explained above, each SCAN VIP has a SCAN listener associated with it, and the two work as a pair: one SCAN listener is created for each SCAN VIP.

SCAN listeners run on the database nodes, and there can be at most three SCAN listeners in the whole cluster. To find where these SCAN listeners are running, use:
 
 
[oracle@database ~]$ srvctl status scan_listener

SCAN Listener LISTENER_SCAN1 is enabled

SCAN listener LISTENER_SCAN1 is running on node node1

SCAN Listener LISTENER_SCAN2 is enabled

SCAN listener LISTENER_SCAN2 is running on node node2

SCAN Listener LISTENER_SCAN3 is enabled

SCAN listener LISTENER_SCAN3 is running on node node2

In this example, this is a two-node RAC with three SCAN listeners: Node1 has LISTENER_SCAN1 running,

while Node2 has LISTENER_SCAN2 and LISTENER_SCAN3 running.


Node VIP: Each database node in a Real Application Clusters environment has one node IP and one node VIP address. The main difference between the two is that the node VIP can move to another node if its current owning node goes down, while the node IP cannot. Whenever a new connection request is made, the SCAN listener listening on a SCAN IP address and the SCAN port is contacted on the client's behalf.
Because all services on the cluster are registered with the SCAN listeners, the SCAN listener replies with the address of the local listener (the node VIP address) on the least-loaded node (each SCAN listener keeps updated cluster load statistics), and the connection is routed to that node.

The local listener on each database server is registered with the node VIP and node IP addresses.
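The connect-time load balancing described above can be sketched as follows; the service name, node VIP addresses and load figures are all made-up illustration values:

```python
# Sketch of connect-time load balancing: the SCAN listener knows each node's
# load (reported by the instances at registration time) and hands the client
# the local listener address (node VIP) of the least-loaded node offering the
# requested service.

def route_connection(service, registrations):
    """registrations: list of (service, node_vip, load) tuples."""
    candidates = [(vip, load) for svc, vip, load in registrations if svc == service]
    if not candidates:
        raise LookupError(f"service {service!r} not registered")
    vip, _ = min(candidates, key=lambda c: c[1])
    return vip  # the client then connects to this local listener address

regs = [
    ("newdb", "10.141.21.66", 0.70),  # node1 VIP, 70% loaded (illustrative)
    ("newdb", "10.141.21.67", 0.20),  # node2 VIP, 20% loaded (illustrative)
]
print(route_connection("newdb", regs))
# 10.141.21.67
```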
 
 
[oracle@database ~]$ lsnrctl status listener

LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 07-AUG-2013 05:12:02

Copyright (c) 1991, 2011, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))

STATUS of the LISTENER

------------------------

Alias LISTENER

Version TNSLSNR for Linux: Version 11.2.0.3.0 - Production

Start Date 24-JUL-2013 07:14:10

Uptime 13 days 21 hr. 57 min. 52 sec

Trace Level off

Security ON: Local OS Authentication

SNMP OFF

Listener Parameter File /u01/app/11.2.0.3/grid/network/admin/listener.ora

Listener Log File /u01/app/grid/diag/tnslsnr/node2/listener/alert/log.xml

Listening Endpoints Summary...

(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
 
 
 
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.141.21.360)(PORT=1521)))

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.141.21.366)(PORT=1521)))

Services Summary...

Service "+ASM" has 1 instance(s).

Instance "+ASM2", status READY, has 1 handler(s) for this service...

Service "newdb" has 1 instance(s).

Instance "newdb2", status READY, has 1 handler(s) for this service...

Service "newdbXDB" has 1 instance(s).

Instance "newdb2", status READY, has 1 handler(s) for this service...

The command completed successfully


Here, the DBA can see the listener is registered with two IP addresses: 10.141.21.360 is the node IP address

and 10.141.21.366 is the node VIP address.

Local_Listener: This is a database parameter that provides the details of the local listener running on the database node itself. In a Real Application Clusters environment its value is the node VIP address.
   
SQL> show parameter local_listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.141.21.366)(PORT=1521))))

When a new connection request reaches a SCAN listener, this address is returned to the SCAN listener and the connection is then made to the database's local listener. The difference between the SCAN listener and the local listener is that a SCAN listener runs against a SCAN VIP, while a local listener runs against the node VIP or node IP address. A SCAN listener can move to another database node in case of node failure, but the local listener does not behave this way.


Remote_Listener: Each database instance has this parameter set to the SCAN name of the cluster. The SCAN name in turn resolves to one or more IP addresses (the SCAN VIPs), and each SCAN VIP has a SCAN listener running on it, so each database instance ends up registering itself with all SCAN listeners running across the cluster.
SQL> show parameter remote_listener

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
remote_listener string orarac-scan:1521

The sole purpose of the remote_listener parameter is to register each database instance with all SCAN listeners in the RAC. This registration provides information on which services each instance offers, its current load, and a recommendation on how many incoming connections should be directed to the instance.


 

Statistics Gathering

Script to check whether automated stats gathering is in place.
SQL> select owner, job_name, program_name, schedule_name, schedule_type, failure_count, last_start_date, comments from dba_scheduler_jobs;
Script to check the window for automated stats gathering.
SQL> select window_name, repeat_interval, duration, window_priority, next_start_date, last_start_date, comments from dba_scheduler_windows;

Script to check stale statistics and the affected tables
SQL> select * from sys.dba_tab_modifications where table_owner not in ('SYS','SYSTEM') order by timestamp;
Check the last-analyzed date for a table/index
SQL> select table_name, stale_stats, last_analyzed from dba_tab_statistics where owner= '&OWNER' and stale_stats='YES' order by last_analyzed desc;

SQL> select table_name, partition_name, cast(last_analyzed as timestamp) last_analyzed from dba_tab_statistics order by 3, 1, 2;

Script to gather the stats.

a) For Table:
EXEC DBMS_STATS.GATHER_TABLE_STATS (ownname => 'SMART' , tabname => 'AGENT',cascade => true, estimate_percent => 10, granularity => 'ALL', degree => 1);

b) For Schema:
EXEC dbms_stats.gather_schema_stats(ownname=>'XES_MGR',estimate_percent=>100,CASCADE=>TRUE);

c) For Database:
EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15, cascade => TRUE);
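The three calls above differ only in scope (table, schema, database). As an illustrative sketch (not an Oracle-supplied tool), a small script can assemble GATHER_TABLE_STATS calls for a list of stale tables found by the queries earlier; the owner/table names are placeholders:

```python
# Sketch: assemble the DBMS_STATS table-level calls shown above for a list of
# (owner, table) pairs. Parameter values mirror the example in the text.

def gather_table_stats_sql(owner, table, estimate_percent=10, degree=1):
    return (
        "EXEC DBMS_STATS.GATHER_TABLE_STATS("
        f"ownname => '{owner}', tabname => '{table}', "
        f"cascade => true, estimate_percent => {estimate_percent}, "
        f"granularity => 'ALL', degree => {degree});"
    )

# Hypothetical stale tables for illustration:
for owner, table in [("SMART", "AGENT"), ("SMART", "POLICY")]:
    print(gather_table_stats_sql(owner, table))
```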

SRVCTL Commands

SRVCTL Commands for administering Oracle RAC RDBMS Instances

Start Database with different option


srvctl start database -d SAPDB -o nomount

srvctl start database -d SAPDB -o mount

srvctl start db -d SAPDB

srvctl start database -d SAPDB -o open

Stop Database with different option


srvctl stop database -d SAPDB -o transactional

srvctl stop database -d SAPDB -o immediate

srvctl stop database -d SAPDB -o abort

srvctl stop db -d SAPDB -o immediate

Check the status of the database


srvctl status database -d SAPDB -v service_name

srvctl status database -d SAPDB

Enable and disable database


srvctl enable database -d SAPDB

srvctl disable db -d SAPDB

Check the current database configuration


srvctl config database -d SAPDB

Add the instance


Command: srvctl add instance -d db_name -i inst_name -n node_name

Example: srvctl add instance -d SAPDB -i rac01 -n server1

Remove the instance


Command: srvctl remove instance -d db_name -i inst_name

Example: srvctl remove instance -d SAPDB -i rac01

Start the instance


Command: srvctl start instance -d db_name -i inst_names [-o start_options] [-c connect_str|-q]

Command: srvctl start instance -d db_name -i inst_names [-o open]

Command: srvctl start instance -d db_name -i inst_names -o nomount

Command: srvctl start instance -d db_name -i inst_names -o mount

Example: srvctl start instance -d SAPDB -i rac02

To start the instance in Oracle 11g:


Command: srvctl start instance -d db_unique_name {-n node_name -i "instance_name_list"} [-o

start_options]

Example: srvctl start instance -d SAPDB -n server2

Example: srvctl start instance -d SAPDB -i "rac02, rac03"

To stop the instance


Command: srvctl stop instance -d db_name -i inst_names [-o stop_options] [-c connect_str|-q]

Command: srvctl stop instance -d db_name -i inst_names [-o normal]

Command: srvctl stop instance -d db_name -i inst_names -o transactional

Command: srvctl stop instance -d db_name -i inst_names -o immediate

Command: srvctl stop instance -d db_name -i inst_names -o abort

Example: srvctl stop instance -d SAPDB -i rac03

To stop the instance in Oracle 11g:


Command: srvctl stop instance -d db_unique_name {[-n node_name]|[-i "instance_name_list"]} [-o

stop_options] [-f]

Example: srvctl stop instance -d SAPDB -n server1

Example: srvctl stop instance -d SAPDB -i rac01

To check the status of the instance


Command: srvctl status instance -d db_name -i inst_names [-f] [-v] [-S level]

Example: srvctl status instance -d SAPDB -i rac02

To check the status of the instance in Oracle 11g:


Command: srvctl status instance -d db_unique_name {-n node_name | -i "instance_name_list"} [-f] [-v]

Example: srvctl status instance -d SAPDB -i "rac01,rac02" -v

To enable/disable the instance


Command: srvctl enable instance -d db_name -i inst_names

Example: srvctl enable instance -d SAPDB -i "rac01,rac02"

Command: srvctl disable instance -d db_name -i inst_names

Example: srvctl disable inst -d SAPDB -i "rac01,rac03"

To set dependency of instance to ASM


Command: srvctl modify instance -d db_name -i inst_name {-s asm_inst_name|-r}

Command: srvctl modify instance -d db_unique_name -i instance_name {-n node_name|-z}

Example: srvctl modify instance -d SAPDB -i rac01 -n server1

Example: srvctl modify instance -d SAPDB -i rac01 -z

To move the database instance


Command: srvctl modify instance -d db_name -i inst_name -n node_name

Example: srvctl modify instance -d SAPDB -i rac02 -n server1

To remove the database instance


Command: srvctl modify instance -d db_name -i inst_name -r

Example: srvctl modify instance -d SAPDB -i rac01 -r

To get/set/unset the environment configuration of the instance


Command: srvctl getenv instance -d db_name -i inst_name [-t name_list]

Example: srvctl getenv instance -d SAPDB -i rac01

Command: srvctl setenv instance -d db_name [-i inst_name] {-t "name=val[,name=val,...]" | -T

"name=val"}

Example: srvctl setenv instance -d SAPDB -i rac01 [options]

Command: srvctl unsetenv instance -d db_name [-i inst_name] [-t name_list]

Example: srvctl unsetenv instance -d SAPDB -i rac01 [-t options]

SRVCTL Commands for administering Oracle ASM Instances

Add/Remove the ASM


Command: srvctl add asm [-l lsnr_name] [-p spfile] [-d asm_diskstring]

Command: srvctl remove asm -l [-f]

Example: srvctl add asm -l LISTENER_ASM01 -p +diskg_data/spfile.ora

Example: srvctl remove asm -l LISTENER_ASM01 -f

Start/Stop the ASM


Command: srvctl start asm [-n node_name] [-o start_options]

Example: srvctl start asm -n server1

Command: srvctl stop asm [-n node_name] [-o stop_options] [-f]

Example: srvctl stop asm -n server1 -f

Check the status of the ASM


Command: srvctl status asm [-n node_name] [-a]

Example: srvctl status asm -n server1 -a

Enable/Disable ASM


Command: srvctl enable asm [-n node_name]

Example: srvctl enable asm -n server1

Command: srvctl disable asm [-n node_name]

Example: srvctl disable asm -n server1

Configure the ASM


Command: srvctl config asm [-a]

Example: srvctl config asm -a

SRVCTL command for SCAN(Single Client Access Name):

Add a SCAN


Command: srvctl add scan -n nodename

Example: srvctl add scan -n server1

Remove a SCAN


Example: srvctl remove scan [-f]

Note: The commands below use an ordinal_number parameter, which identifies the

SCAN VIP you want to operate on. The range of values you can specify for this option is 1 to 3.

Start a SCAN


Command: srvctl start scan [-i ordinal_number] [-n node_name]

Example: srvctl start scan -i 1 -n server1

Stop a SCAN


Command: srvctl stop scan [-i ordinal_number] [-f]

Example: srvctl stop scan -i 1

Check status of SCAN


Command: srvctl status scan [-i ordinal_number]

Example: srvctl status scan -i 1

Enable a SCAN


Command: srvctl enable scan [-i ordinal_number]

Example: srvctl enable scan -i 1

Disable a SCAN


Command: srvctl disable scan [-i ordinal_number]

Example: srvctl disable scan -i 3

Configure a SCAN


Command: srvctl config scan [-i ordinal_number]

Example: srvctl config scan -i 2

Modify SCAN


Command: srvctl modify scan -n scan_name

Example: srvctl modify scan -n SCAN1

Relocate a SCAN


Command: srvctl relocate scan -i ordinal_number [-n node_name]

Example: srvctl relocate scan -i 2 -n server2

Add service with service goals


Example: srvctl add service -d rac -s rac1 -q TRUE -m BASIC -e SELECT -z 180 -w 5 -j LONG

Add service with preconnect TAF


Command: srvctl add service -d db_unique_name -s service_name -u {-r preferred_list | -a available_list}

Example: srvctl add serv -d rac -s report -r rac01,rac02 -a rac03 -P PRECONNECT

Start service


Command: srvctl start service -d db_unique_name [-s "service_name_list" [-n node_name | -i

instance_name]] [-o start_options]

Example: srvctl start serv -d rac -s rac

Example: srvctl start service -d rac -s rac -i rac2

Stop service


srvctl stop service -d db_unique_name [-s "service_name_list"] [-n node_name | -i instance_name] [-f]

srvctl stop service -d rac -s rac

srvctl stop serv -d rac -s rac -i rac2

View service


srvctl status service -d db_unique_name [-s "service_name_list"] [-f] [-v]

srvctl status service -d rac -s rac -v

Modify the service


Command: srvctl modify service -d db_unique_name -s service_name

[-c {UNIFORM|SINGLETON}] [-P {BASIC|PRECONNECT|NONE}]

[-l {[PRIMARY]|[PHYSICAL_STANDBY]|[LOGICAL_STANDBY]|[SNAPSHOT_STANDBY]} [-q {TRUE|FALSE}] [-

x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e

{NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z failover_retries] [-w failover_delay] [-y

{AUTOMATIC|MANUAL}]

Command: srvctl modify service -d db_unique_name -s service_name -i old_instance_name -t

new_instance_name [-f]

Command: srvctl modify service -d db_unique_name -s service_name -i avail_inst_name -r [-f]

Command: srvctl modify service -d db_unique_name -s service_name -n -i preferred_list [-a

available_list] [-f]

Example: srvctl modify service -d rac -s rac -i rac1 -t rac2

Example: srvctl modify serv -d rac -s rac -i rac1 -r

Example: srvctl modify service -d rac -s rac -n -i rac1 -a rac2

Relocate the service


Command: srvctl relocate service -d db_unique_name -s service_name {-c source_node -n

target_node|-i old_instance_name -t new_instance_name} [-f]

Example: srvctl relocate service -d rac -s rac -i rac1 -t rac3


RMAN cloning from Active Database without Catalog DB

Active database duplication copies the target database over the network to the destination and then creates the duplicate database.

The duplication work is performed by an auxiliary channel. This channel corresponds to a server session  on the auxiliary instance on the auxiliary host

This document focuses on the steps that need to be performed while cloning a database using RMAN.
Roadmap:
 
 
1. Create the parameter file (initdup.ora) with the proper parameters, e.g. db_file_name_convert, log_file_name_convert, undo_tablespace, control_files, etc.

2. Create a password file.

3. Establish Oracle Net connectivity to the auxiliary instance.

4. Create the directories required for the duplicate database.

5. Start the auxiliary instance from SQL*Plus.

6. Test connectivity to the auxiliary and target instances from both sides.

7. Connect to the auxiliary database instance & check space availability on the file system / ASM.

8. Run the RMAN duplicate command.




1. Creating initialization Parameter file for the Duplicate database.
 
 
If we don't have a pfile/spfile for the new database, we need to create a new pfile to start the instance, containing at minimum the parameters below.

DB_NAME=

CONTROL_FILES=

DB_BLOCK_SIZE=

DB_FILE_NAME_CONVERT=

LOG_FILE_NAME_CONVERT=
 
 
UNDO_TABLESPACE = must have the same name as the source DB's UNDO tablespace.

Create a pfile in the "$ORACLE_HOME/dbs/" path with only the above parameters.
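As a sketch, the minimal pfile can be assembled from a parameter dictionary; all values below are placeholders standing in for the real paths and names of the duplicate database:

```python
# Sketch: assemble the minimal auxiliary pfile from a parameter dict.
# Every value here is a placeholder; substitute the real paths of DUP_DB.

params = {
    "DB_NAME": "DUP_DB",
    "CONTROL_FILES": "'/u01/oradata/DUP_DB/control01.ctl'",
    "DB_BLOCK_SIZE": "8192",
    "DB_FILE_NAME_CONVERT": "'/prod/oradata/','/dup/oradata/'",
    "LOG_FILE_NAME_CONVERT": "'/prod/oradata/','/dup/oradata/'",
    "UNDO_TABLESPACE": "UNDOTBS1",  # must match the source DB undo tablespace
}

# One NAME=VALUE line per parameter, in the order listed above.
pfile_text = "\n".join(f"{k}={v}" for k, v in params.items())
print(pfile_text)
```

Writing pfile_text to $ORACLE_HOME/dbs/initDUP_DB.ora would complete this step.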




2. Create a password file.
Now we have to create the password file with the same password as that of the source database
cd $ORACLE_HOME/dbs

orapwd password=ORCL file=orapwDUP_DB

3. Establish Oracle Net Connectivity to the Auxiliary Instance
 
 
Now we need to make an entry for the source database in the tnsnames.ora file of the auxiliary database, as shown below.

PROD_DB =

(DESCRIPTION =

(ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521))

(CONNECT_DATA =

(SERVER = DEDICATED)

(SERVICE_NAME = PROD_DB)

)

)

DUP_DB =

(DESCRIPTION =

(ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521))

(CONNECT_DATA =

(SERVER = DEDICATED)

(SERVICE_NAME = DUP_DB)

)

)


4. Create the directories which are required for the duplicate database.
Now, if we are using a conventional file system, we need to create the desired directory structure on the destination server using the mkdir command. If we are using ASM as the underlying storage, we do not need to create a directory structure, since we have already set the DB_FILE_NAME_CONVERT & LOG_FILE_NAME_CONVERT parameters in the pfile.

5. Start the Auxiliary instance from SQL*Plus
Now we need to start the auxiliary database instance up to the NOMOUNT state only, with the pfile newly created in step 1.

export ORACLE_SID=DUP_DB

export ORACLE_HOME=

sqlplus / as sysdba

SQL> startup nomount pfile='<PATH OF PFILE>';

Eg: SQL> startup nomount pfile='/oracle/RQ1/112_64/dbs/initRQ1.ora';
 
6. Test connectivity to the auxiliary and target instances from both sides.
Now we have to test the connectivity between the source & auxiliary databases that we set up earlier.
sqlplus sys/xxxx@PROD_DB as sysdba

sqlplus sys/xxxx@DUP_DB as sysdba

7. Connect to the auxiliary DB & check the space availability on the file system / ASM
Now we need to check whether there is enough free space available to accommodate the data of the new database.
export ORACLE_SID=+ASM

export ORACLE_HOME=

sqlplus / as sysasm

set linesize 200

column pct_free format 99.99

select name, total_mb/1024 TotalGB, free_mb/1024 FreeGB, (total_mb - free_mb)/1024 used_gb,

free_mb/total_mb * 100 pct_free from v$asm_diskgroup;


8. Run the RMAN duplicate command.
Once all the above steps are set, we are good to go with the actual RMAN Duplicate command.
rman TARGET sys/xxxx@PROD_DB AUXILIARY sys/xxxx@DUP_DB

RMAN> DUPLICATE TARGET DATABASE TO RQ1 FROM ACTIVE DATABASE;

As part of the duplicate operation, RMAN automates the following steps:
1) Create an spfile in memory

2) Shut down the instance started with the pfile

3) Start up with the newly created in-memory spfile

4) Restore the control file

5) Mount the database

6) Set newname for the datafiles

7) Restore the datafiles

8) Recover the newly restored database (until SCN)

9) Create a new controlfile with the new/duplicate DB name

10) Open the database in resetlogs mode.

Note: If there is any error, even at the last step of the duplicate command, we need to repeat all the steps from step 5 (starting the instance using the pfile), which is a very time-consuming process, so please monitor the duplicate operation closely for any errors.




Restore RMAN backup to New Host/Server



1) Copy the RMAN backup taken from source to destination server.

2) On new server set the environment variables as given below
export ORACLE_SID= <SID of Source Database>

export ORACLE_HOME=<Path of Oracle Home on the new server>
3) Invoke the RMAN prompt without connecting it to the recovery catalog.
rman target /

4) Startup the instance in nomount state with default parameters as below
RMAN> startup nomount force;

5) Restore the spfile from the RMAN backup.
RMAN> catalog start with '<PATH of the backup piece which contains the spfile>';

RMAN> restore spfile to '<New Oracle Home Path/dbs>'

from '<Name & absolute path of the backup piece which contains the spfile>';
6) Once the spfile has been restored to the desired path, create a pfile from the newly restored spfile.
export ORACLE_SID= <SID of Source Database>

export ORACLE_HOME=<Path of Oracle Home on the new server>
sqlplus / as sysdba
 
SQL> create pfile='<New Oracle Home Path/dbs>' from spfile='<Name & absolute path of the spfile>';
7) Now shut down the instance started at nomount state in step no 4.
SQL> shut immediate;

8) Now modify the pfile parameters such that it will point to the new path on new server.
*.audit_file_dest=

*.control_files=

*.db_recovery_file_dest_size=

*.db_recovery_file_dest=

*.diagnostic_dest=

*.log_archive_dest_1=

*.local_listener=

9) Now start the database with newly restored & modified pfile as shown below
export ORACLE_SID= <SID of Source Database>

export ORACLE_HOME=<Path of Oracle Home on the new server>

sqlplus / as sysdba
SQL> startup nomount pfile='<Absolute path of the pfile>';
10) Now restore the controlfile on the new server as given below.
rman target /
RMAN> catalog start with '<Absolute path of the backup piece containing the control file>';
RMAN> restore controlfile from '<Absolute path of the backup piece containing the control file>';
11) Once the controlfile is successfully restored, start the database in mount state.
RMAN> sql 'alter database mount'; OR RMAN> alter database mount;


12) Now register all the backup pieces with the newly restored controlfile.
RMAN> catalog start with '<Absolute path of the backup>';


13) Open a new sqlplus session.
export ORACLE_SID= <SID of Source Database>

export ORACLE_HOME=<Path of Oracle Home on the new server>

sqlplus / as sysdba

14) Now rename the redolog files

alter database rename file '/prod/oradata/log01.dbf' to '/test/oradata/log01.dbf';

alter database rename file '/prod/oradata/log02.dbf' to '/test/oradata/log02.dbf';

15) As we are restoring the backup on a new server where the paths of the datafiles may differ, we need to set a new name for all the datafiles; run the script below to generate these commands.
set head off pages 0 feed off echo off verify off

set lines 200

spool rename_datafiles.lst
select 'SET NEWNAME FOR DATAFILE ' || file# || ' TO ''' || '<NEW PATH>' ||

substr(name, instr(name, '/', -1) + 1) || ''';' from v$datafile;


spool off

exit;
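The spooled query above builds one SET NEWNAME line per datafile, keeping only the file name and prepending the new path. The same transformation, sketched in Python with illustrative file numbers and paths:

```python
# Sketch of the SET NEWNAME generation done by the spooled SQL above:
# take each datafile's old full path, keep only the file name, and prefix
# the new destination (file numbers and paths are illustrative).

def set_newname(file_no, old_path, new_dir):
    fname = old_path.rsplit("/", 1)[-1]  # keep only the file name
    return f"SET NEWNAME FOR DATAFILE {file_no} TO '{new_dir}/{fname}';"

datafiles = [
    (1, "/prod/oradata/system01.dbf"),
    (2, "/prod/oradata/sysaux01.dbf"),
]
for no, path in datafiles:
    print(set_newname(no, path, "+DATA"))
# SET NEWNAME FOR DATAFILE 1 TO '+DATA/system01.dbf';
# SET NEWNAME FOR DATAFILE 2 TO '+DATA/sysaux01.dbf';
```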

16) Now restore the database using below rman command.
run{

SET NEWNAME FOR DATAFILE 1 TO '+DATA/system01.dbf';

SET NEWNAME FOR DATAFILE 2 TO '+DATA/sysaux01.dbf';

SET NEWNAME FOR DATAFILE 3 TO '+DATA/undotbs01.dbf';

SET NEWNAME FOR DATAFILE 4 TO '+DATA/users01.dbf';

SET NEWNAME FOR DATAFILE 5 TO '+DATA/threatened_fauna_data.dbf';

SET NEWNAME FOR DATAFILE 6 TO '+DATA/example_temp01.dbf';

SET NEWNAME FOR DATAFILE 7 TO '+DATA/EXAMPLE_5';

SET NEWNAME FOR DATAFILE 8 TO '+DATA/INDEX_DATA_6';

restore database;

switch datafile all;

recover database;

}

17) At the end of this command, there will be an error as shown below.
RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of recover command at

RMAN-06054: media recovery requesting unknown archived log for thread X with sequence XX and starting

SCN of XXXX

RMAN>

18) Ignore this error; RMAN is simply looking for the next archived log. Exit from the RMAN prompt.
19) Now, if you DON'T WANT TO CHANGE THE NAME OF THE DATABASE, run the open resetlogs
command.
SQL> alter database open resetlogs;

Database altered.

20) If you WANT TO CHANGE THE DATABASE NAME, open a new session & complete the steps below.
 
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/tmp/control.trc';

21) Now open this trace file in the vi editor, change the name of the database to the desired one, and save the file with a .sql extension.

22) Now run the newly modified controlfile.sql on the newly restored database

SQL> @/tmp/control.sql

23) Now open the database with resetlog option as shown below.
SQL> alter database open resetlogs;

Resolving Gaps in Data Guard Apply Using Incremental RMAN Backup



If, for any of the reasons given below, you find that the standby database is lagging behind the primary database by a huge number of archives, there are two approaches to bring the standby back in sync with the primary:

1) Take an incremental backup from the primary & apply it on the standby.

2) Rebuild the standby database from the active primary database.
 

1) Take an incremental backup from the primary & apply it on the standby
Possible reasons for the standby lagging behind the primary:
a) An archive log at the primary database location is corrupted.

b) An archive log at the primary database location was removed accidentally.

c) All the archive logs are available at the standby location, but applying them would take a long time to sync with the primary database, which is not acceptable to the business.
What needs to be done in this case?
If you encounter such a situation, don't panic & follow the steps below.
1) Check & Cancel the Managed recovery process (MRP) on standby database.



SQL> Select PROCESS, STATUS, THREAD#, SEQUENCE# from v$managed_standby where process like 'MRP%';

SQL> alter database recover managed standby database cancel;
 
2) Shutdown the standby database.

SQL> shut immediate
 
3) Check the last applied archive on to the standby database.
select max(SEQUENCE#) from v$archived_log where APPLIED='YES' and THREAD#='1';

select max(SEQUENCE#) from v$archived_log where APPLIED='YES' and THREAD#='2';

select current_scn from v$database;
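A hedged sketch of how the from-SCN for step 4 can be chosen: take the standby's current SCN from the query above, optionally backed off by a small safety margin so the incremental is guaranteed to cover everything the standby still needs (the SCN value and margin here are illustrative):

```python
# Sketch: choose the SCN to feed "backup incremental from scn ... database".
# The incremental must start at or below the standby's current SCN; a small
# safety margin is a conservative choice (numbers are illustrative).

def incremental_from_scn(standby_current_scn, safety_margin=0):
    return standby_current_scn - safety_margin

scn = incremental_from_scn(4561198, safety_margin=100)
cmd = (f"backup as compressed backupset incremental from scn {scn} "
       "database format '<PATH>/FOR_STDBY_BKP_%U.bkp';")
print(cmd)
```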
 
4) Take the SCN based Incremental backup from production database.
RMAN> run { allocate channel c1 type disk;

allocate channel c2 type disk;

allocate channel c3 type disk;

allocate channel c4 type disk;

backup as compressed backupset incremental from scn <SCN NUMBER> database format '<PATH>/FOR_STDBY_BKP_%U.bkp';

}
 
5) Create a new standby controlfile from the production database.
SQL> alter database create standby controlfile as '<ABSOLUTE_PATH>/DEL1_standby.ctl';
OR
RMAN> backup current controlfile for standby format 'Z:\BACKUP\FORSTDBYCTRL.bkp';
 
6) Transfer/copy the incremental backup & the new standby control file to the standby site.
7) Startup the standby database in nomount state.
SQL> startup nomount;
8) Find the exact path of the standby control files.
 
SQL> show parameter control_files.
9) Invoke RMAN on the standby site & restore the new standby control file.
$ rman target=/

RMAN> RESTORE STANDBY CONTROLFILE FROM 'C:\TEMP\INCR_BACKUP\FORSTDBYCTRL.BKP';
10) Start-up the database in mount state.
SQL> alter database mount standby database;
11) Register the incremental backup on the newly restored standby control file.
RMAN> catalog start with 'C:\FRA\SATISTD\BACKUPSET';
After this, RMAN will ask for the confirmation to catalog/register the backup. Say YES.
Do you really want to catalog the above files (enter YES or NO)? yes
12) Restore the newly catalog/registered backup & recover the database.
RMAN> recover database;
13) At the completion of the recovery, RMAN will exit with an error/warning about a missing archived log; please ignore it.
14) Now start the Managed recovery process onto the standby database.
SQL> alter database recover managed standby database disconnect from session;
15) Open/tail the alert log file on the standby database to see the current status/errors.
SQL> show parameter dump

SQL> exit

$ cd <Absolute path of the standby alert logfile>

$ tail -40f <Name of alert log file>
 
16) On the primary database, check the sync status by running the script below.
set scan on
set feed off
set linesize 200
BREAK ON ROW SKIP 1
column thread format a6;
column "PR - Archived" format a13;
column "STBY - Archived" format a15;
column "STBY - Applied" format a14;
column "Shipping GAP (PR -> STBY)" format a25;
column "Applied GAP (STBY -> STBY)" format a26;
--ACCEPT DEST PROMPT 'Enter the Standby Archive Log Destination : '
SELECT *
  FROM (SELECT LPAD (t1, 4, ' ') "Thread",
               LPAD (pricre, 9, ' ') "PR - Archived",
               LPAD (stdcre, 10, ' ') "STBY - Archived",
               LPAD (stdnapp, 9, ' ') "STBY - Applied",
               LPAD (pricre - stdcre, 13, ' ') "Shipping GAP (PR -> STBY)",
               LPAD (stdcre - stdnapp, 15, ' ') "Applied GAP (STBY -> STBY)"
          FROM (  SELECT MAX (sequence#) stdcre, thread# t1
                    FROM v$archived_log
                   WHERE standby_dest = 'YES'
                     AND resetlogs_id IN (SELECT MAX (resetlogs_id) FROM v$archived_log)
                     AND thread# IN (1, 2, 3, 4)
                GROUP BY thread#) a,
               (  SELECT MAX (sequence#) stdnapp, thread# t2
                    FROM v$archived_log
                   WHERE standby_dest = 'YES'
                     AND resetlogs_id IN (SELECT MAX (resetlogs_id) FROM v$archived_log)
                     AND thread# IN (1, 2, 3, 4)
                     AND applied = 'YES'
                GROUP BY thread#) b,
               (  SELECT MAX (sequence#) pricre, thread# t3
                    FROM v$archived_log
                   WHERE standby_dest = 'NO'
                     AND resetlogs_id IN (SELECT MAX (resetlogs_id) FROM v$archived_log)
                     AND thread# IN (1, 2, 3, 4)
                GROUP BY thread#) c
         WHERE a.t1 = b.t2 AND b.t2 = c.t3 AND c.t3 = a.t1)
ORDER BY 1
/
set feed on
break on off
 
 


2) Rebuild the standby database from the active primary database.
 
 
1) Check & Cancel the Managed recovery process (MRP) on standby database.
SQL> Select PROCESS, STATUS, THREAD#, SEQUENCE# from v$managed_standby where process like 'MRP%';
 
SQL> alter database recover managed standby database cancel;
2) Shutdown the standby database.
SQL> shut immediate
 
3) Startup the database in no-mount state.
SQL> startup nomount
 
4) Delete all the existing archive logs from the STANDBY database so that the new archives will be shipped to the standby location & applied cleanly on it.
Source the environment for ASM, invoke the ASM command-line utility (asmcmd) & manually remove the archives.
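For example, assuming the standby archives sit in an ASM disk group such as +FRA (the disk group and path below are placeholders), the cleanup might look like this with asmcmd:

```
$ . oraenv                # select the ASM instance, e.g. +ASM1
$ asmcmd
ASMCMD> ls +FRA/<STDBY_DB_UNIQUE_NAME>/ARCHIVELOG/
ASMCMD> rm -rf +FRA/<STDBY_DB_UNIQUE_NAME>/ARCHIVELOG/*
```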
5) As we already have the p-file/sp-file on the standby database, we just need to run the below duplicate command, connecting RMAN to the primary as target and to the standby as auxiliary.
rman target <Username/Password>@<DB_NAME> auxiliary <Username/Password>@<STDBY_DB_NAME>

run {

allocate channel ch1 device type disk;

allocate channel ch2 device type disk;

allocate channel ch3 device type disk;

allocate channel ch4 device type disk;

allocate auxiliary channel ch5 device type disk;

allocate auxiliary channel ch6 device type disk;

allocate auxiliary channel ch7 device type disk;

allocate auxiliary channel ch8 device type disk;

duplicate target database for standby from active database nofilenamecheck dorecover;

}
 
6) Once the duplicate completes, mount the standby database & put it in MRP mode (the duplicate usually leaves the database mounted, in which case the mount command fails harmlessly).
SQL> alter database mount standby database;

SQL> alter database recover managed standby database disconnect from session;
7) Make a log switch on the primary database (on both nodes) & check whether those archives are getting shipped & applied successfully on the standby database.
 
On Node-1:     SQL> Alter system switch logfile;

On Node-2:     SQL> Alter system switch logfile;

On any of the running nodes, run the same archive shipping/apply GAP script shown in step 16 of the previous procedure.