This topic covers converting a single-instance database to a RAC database. I am demonstrating this on Oracle 12c. There are several ways to do this, but an RMAN restore is the preferred method for the least downtime on larger, terabyte-sized databases.
If you want to convert from Oracle 11g to Oracle 12c, I recommend upgrading the stand-alone database to Oracle 12c and completing the testing before you migrate to RAC. Doing one thing at a time makes it much easier to narrow down any issues.
I followed the Oracle documentation for this RAC migration; it is the reference document for this post. I also referred to MOS (Metalink) Doc ID 747457.1. Please review the Oracle documentation before you attempt this in your own environment.
Prerequisites:
- Install the clusterware and ASM on all nodes.
- Install the Oracle database software.
- The stand-alone database and the RAC database must be the same database version.
- The ASM disk groups must be large enough to accommodate the source database (see the checks after this list).
- The cluster must be up and running on the RAC nodes.
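To confirm the last two prerequisites, a quick check from one of the RAC nodes may help. This is only a sketch: +ASM1 is the typical ASM SID on the first node, and the disk group names will differ in your environment.

# verify the clusterware stack is healthy on every node
crsctl check cluster -all

# verify there is enough free space in the target ASM disk groups
export ORACLE_SID=+ASM1
sqlplus / as sysasm
SQL> select name, total_mb, free_mb from v$asm_diskgroup;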
Source DB info:

| Host Name | usbenhost |
| Instance name | usben |
| DB name | usben |
| Storage type | file system |
Target DB info:

| Host Name | ractest1, ractest2, ractest3 |
| Instance name | usben1, usben2, usben3 |
| DB name | usben |
| Storage type | ASM |
There are two major tasks in this migration. The first is to migrate the database from the stand-alone server (usbenhost) to the ractest1 database server; I am using an RMAN restore for this. The second is to convert the restored database into a RAC database.
There are multiple ways we can restore the database. As part of the migration, I am also moving the database to ASM.
- Option 1: Duplicate the database using a target database backup. Click here for details.
- Option 2: Duplicate the database using active database duplication. Click here for details.
- Option 3: Regular RMAN restore approach. Click here for details.
- Option 4: Set up a standby database. Click here for details.
Here are the high-level steps for each option.

Options 1 and 2:
- Shut down the application.
- Restore the database on the RAC host and open the database.
- Convert the DB to a RAC DB.
- Start the application.

Option 3:
- Restore the database on the RAC host.
- Apply the daily archive log files on the RAC host.
- On the cut-over date, apply all the archive log files.
- Shut down the application.
- Open the stand-alone database on the RAC node.
- Convert the DB to a RAC DB.
- Start the application.

Option 4:
- Set up the standby database on the RAC host.
- Shut down the application.
- Switch over the database or activate the database on the RAC host.
- Convert the DB to a RAC DB.
- Start the application.
Now let us start converting the DB to a RAC database!
My stand-alone database has already been migrated from the stand-alone server to the RAC node. Here is the database info:
select name from v$datafile
union
select name from v$controlfile
union
select member from v$logfile
union
select name from v$tempfile;
Step 1
Make sure the MAXINSTANCES and MAXLOGFILES settings in the control file are large enough for a RAC environment. I verified my control file and the values look good for me. If not, recreate the control file with appropriate values.
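One way to check the current values is to dump the control file to a trace script and look at the CREATE CONTROLFILE clause. A minimal sketch; the /tmp/usben_ctl.sql path is just an assumption:

sqlplus / as sysdba
SQL> alter database backup controlfile to trace as '/tmp/usben_ctl.sql';
SQL> exit

# the MAXINSTANCES / MAXLOGFILES clauses appear near the top of the trace script
grep -E 'MAXINSTANCES|MAXLOGFILES' /tmp/usben_ctl.sql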
Step 2
Create redo log groups and an undo tablespace for each instance. Redo and undo are handled on a per-instance basis. We need a minimum of two redo log groups for each instance, and they should be created on shared storage. In my case I keep two redo log groups per instance; choose the number of redo log groups according to your environment.
Each redo log group in a RAC database is assigned to a thread number. It is very important to specify the thread number when you add or create redo log files; this ensures the redo log group is assigned to the right instance.
Each instance is assigned a thread number starting at 1. The existing stand-alone redo logs already belong to thread 1, so I am creating redo log groups for thread 2 and thread 3.
I have a three-node RAC here, so I am creating redo and undo for the other two instances.
alter database add logfile thread 2 group 6 ('+DATA') size 50m reuse;
alter database add logfile thread 2 group 7 ('+DATA') size 50m reuse;
alter database add logfile thread 3 group 8 ('+DATA') size 50m reuse;
alter database add logfile thread 3 group 9 ('+DATA') size 50m reuse;
alter database enable public thread 2;
alter database enable public thread 3;
create undo tablespace UNDOTBS2 datafile '+DATA' size 50M;
create undo tablespace UNDOTBS3 datafile '+DATA' size 50M;
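To confirm the new groups landed in the right threads and the undo tablespaces exist, a quick check like the sketch below may help (group numbers and sizes will vary in your environment):

select thread#, group#, bytes/1024/1024 mb, status from v$log order by thread#, group#;
select tablespace_name, status from dba_tablespaces where contents = 'UNDO';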
Step 3
Create a pfile from the spfile.
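For example, connected to the usben instance on ractest1 (a sketch; with no path given, the pfile is written to the default $ORACLE_HOME/dbs location):

sqlplus / as sysdba
SQL> create pfile from spfile;
-- writes $ORACLE_HOME/dbs/initusben.ora by default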
Modify the pfile with the RAC-related parameters.
Add the parameters below to the pfile:
*.cluster_database_instances=3
*.cluster_database=true
usben1.instance_number=1
usben2.instance_number=2
usben3.instance_number=3
usben1.thread=1
usben2.thread=2
usben3.thread=3
usben1.undo_tablespace='UNDOTBS1'
usben2.undo_tablespace='UNDOTBS2'
usben3.undo_tablespace='UNDOTBS3'
Here is the updated pfile on the ractest1 host. I also changed the archive log destination to the ASM disk group.

usben.__data_transfer_cache_size=0
usben.__db_cache_size=536870912
usben.__java_pool_size=16777216
usben.__large_pool_size=150994944
usben.__oracle_base='/ora/app/oracle'#ORACLE_BASE set from environment
usben.__pga_aggregate_target=671088640
usben.__sga_target=989855744
usben.__shared_io_pool_size=50331648
usben.__shared_pool_size=218103808
usben.__streams_pool_size=0
*.compatible='12.1.0.2.0'
*.control_files='+DATA/USBEN/CONTROLFILE/current.294.904836161','+DATA/USBEN/CONTROLFILE/current.312.904836161'
*.db_block_size=8192
*.db_domain='localdomain'
*.db_name='usben'
*.db_create_file_dest='+DATA'
*.db_recovery_file_dest='+DATA'
*.db_recovery_file_dest_size=4560m
*.diagnostic_dest='/ora/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=usbenXDB)'
*.log_archive_dest_1='location=+DATA/USBEN/ARCH'
*.log_archive_dest_state_1='enable'
*.open_cursors=300
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.cluster_database_instances=3
*.cluster_database=true
usben1.instance_number=1
usben2.instance_number=2
usben3.instance_number=3
usben1.thread=1
usben2.thread=2
usben3.thread=3
usben1.undo_tablespace='UNDOTBS1'
usben2.undo_tablespace='UNDOTBS2'
usben3.undo_tablespace='UNDOTBS3'
Step 4
Drop the static listener if you created one during the database duplication process. Use NETCA to configure the listener in the RAC environment. Copy the password file from ractest1 to the other nodes.
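A minimal sketch of the password file copy; the orapwusben file name and the per-node renaming are assumptions, so adjust them to match your environment:

# assumes the stand-alone password file is $ORACLE_HOME/dbs/orapwusben on ractest1
cd $ORACLE_HOME/dbs
cp orapwusben orapwusben1                                    # the local instance becomes usben1
scp orapwusben oracle@ractest2:$ORACLE_HOME/dbs/orapwusben2  # instance usben2
scp orapwusben oracle@ractest3:$ORACLE_HOME/dbs/orapwusben3  # instance usben3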
Step 5
Start the RAC instance on all three nodes.

Add the entries in /etc/oratab:
ractest1 host : usben1:/ora/app/oracle/product/12.1.0.1/db_1:N
ractest2 host : usben2:/ora/app/oracle/product/12.1.0.1/db_1:N
ractest3 host : usben3:/ora/app/oracle/product/12.1.0.1/db_1:N
Rename the pfile as below:
mv $ORACLE_HOME/dbs/initusben.ora $ORACLE_HOME/dbs/initusben1.ora
Before you start the instance, make sure you set the new environment variable, since the instance name has changed from usben to usben1. You get the error below if you do not set ORACLE_SID to the new value usben1:
> startup nomount pfile=$ORACLE_HOME/dbs/initusben1.ora
ORA-29760: instance_number parameter not specified
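Setting the SID to the new value before starting avoids the error; for example, on ractest1:

export ORACLE_SID=usben1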
Shut down the current stand-alone instance on the ractest1 node, then start it nomount with the new pfile and create the spfile from the pfile.

sys@usben> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
sys@usben>
sys@usben> startup nomount pfile=$ORACLE_HOME/dbs/initusben1.ora
ORACLE instance started.

Total System Global Area  297795584 bytes
Fixed Size                  2923632 bytes
Variable Size             239076240 bytes
Database Buffers           50331648 bytes
Redo Buffers                5464064 bytes
sys@usben1> create spfile='+DATA/USBEN/PARAMETERFILE/spfileusben.ora' from PFILE;

File created.

sys@usben1>
Update the pfile on all three instances as below; a quick way to create these pointer files is sketched after the listings.
[oracle@RACTEST1 dbs]$ cat initusben1.ora
SPFILE='+DATA/USBEN/PARAMETERFILE/spfileusben.ora'
[oracle@RACTEST1 dbs]$
[oracle@RACTEST2 dbs]$ cat initusben2.ora
SPFILE='+DATA/USBEN/PARAMETERFILE/spfileusben.ora'
[oracle@RACTEST2 dbs]$
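One simple way to create those pointer files is shown below. It is only a sketch: run each line on the matching node, and it assumes $ORACLE_HOME/dbs is the pfile location on every node.

echo "SPFILE='+DATA/USBEN/PARAMETERFILE/spfileusben.ora'" > $ORACLE_HOME/dbs/initusben1.ora   # on ractest1
echo "SPFILE='+DATA/USBEN/PARAMETERFILE/spfileusben.ora'" > $ORACLE_HOME/dbs/initusben2.ora   # on ractest2
echo "SPFILE='+DATA/USBEN/PARAMETERFILE/spfileusben.ora'" > $ORACLE_HOME/dbs/initusben3.ora   # on ractest3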
Now start the instance on all three nodes with the new spfile.
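For example, on the second node (a sketch; use usben1 and usben3 on the other nodes):

# on ractest2
export ORACLE_SID=usben2
sqlplus / as sysdba
SQL> startup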
Step 6
Execute catclust.sql on the ractest1 node. This script creates the dictionary views needed for Oracle RAC databases. I run this script even before we configure the RAC instances in the CRS.
[oracle@RACTEST1 dbs]$ cd $ORACLE_HOME/rdbms/admin
[oracle@RACTEST1 admin]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sat Feb 27 05:34:27 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> !pwd
/ora/app/oracle/product/12.1.0.1/db_1/rdbms/admin

SQL> @catclust.sql
Session altered.
Package created.
Package body created.
PL/SQL procedure successfully completed.
View created.
Synonym created.
Grant succeeded.
View created.
Grant succeeded.
View created.
Grant succeeded.
Synonym created.
Synonym created.
Synonym created.
Synonym created.
Synonym created.
Synonym created.
Synonym created.
Synonym created.
Synonym created.
Synonym created.
Synonym created.
Synonym created.
PL/SQL procedure successfully completed.
Session altered.
SQL>
Step 7
Register the RAC instances in the CRS.
srvctl add database -d usben -o $ORACLE_HOME -p +DATA/USBEN/PARAMETERFILE/spfileusben.ora
srvctl add instance -d usben -i usben1 -n RACTEST1.localdomain
srvctl add instance -d usben -i usben2 -n RACTEST2.localdomain
srvctl add instance -d usben -i usben3 -n RACTEST3.localdomain
[oracle@RACTEST1 dbs]$ srvctl add database -d usben -o $ORACLE_HOME -p +DATA/USBEN/PARAMETERFILE/spfileusben.ora
[oracle@RACTEST1 dbs]$ srvctl add instance -d usben -i usben1 -n RACTEST1.localdomain
[oracle@RACTEST1 dbs]$ srvctl add instance -d usben -i usben2 -n RACTEST2.localdomain
[oracle@RACTEST1 dbs]$ srvctl add instance -d usben -i usben3 -n RACTEST3.localdomain
[oracle@RACTEST1 dbs]$
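You can confirm the registration with srvctl before restarting anything; the output will vary by environment:

srvctl config database -d usben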
Let us shut down the DB and start the database using the srvctl command.
[oracle@RACTEST1 dbs]$ srvctl start database -d usben
[oracle@RACTEST1 dbs]$ srvctl stop database -d usben
[oracle@RACTEST1 dbs]$ srvctl start database -d usben
[oracle@RACTEST1 dbs]$ srvctl status database -d usben
Instance usben1 is running on node ractest1
Instance usben2 is running on node ractest2
Instance usben3 is running on node ractest3
[oracle@RACTEST1 dbs]$
The conversion process is complete once we start the database using srvctl. We can run the following SQL statements to see the status of all the instances in the Oracle RAC database:
select * from v$active_instances;
column host_name format a50
select host_name,instance_number,instance_name,status,thread# from gv$instance;
Checking the global status of the RAC cluster resources:
[root@RACTEST1 ~]# crsctl stat resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
               ONLINE  ONLINE       ractest3                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
               OFFLINE OFFLINE      ractest3                 STABLE
ora.TEST.dg
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
               ONLINE  ONLINE       ractest3                 STABLE
ora.VOTE.dg
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
               ONLINE  ONLINE       ractest3                 STABLE
ora.VOTE1.dg
               OFFLINE OFFLINE      ractest1                 STABLE
               ONLINE  OFFLINE      ractest2                 STABLE
               ONLINE  OFFLINE      ractest3                 STABLE
ora.VOTE2.dg
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
               ONLINE  ONLINE       ractest3                 STABLE
ora.asm
               OFFLINE ONLINE       ractest1                 Started,STABLE
               ONLINE  ONLINE       ractest2                 Started,STABLE
               ONLINE  ONLINE       ractest3                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
               ONLINE  ONLINE       ractest3                 STABLE
ora.ons
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
               ONLINE  ONLINE       ractest3                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       ractest2                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       ractest3                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       ractest3                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       ractest3                 169.254.214.12,STABLE
ora.cvu
      1        ONLINE  ONLINE       ractest2                 STABLE
ora.govinddb.db
      1        OFFLINE OFFLINE                               Instance Shutdown,STABLE
      2        OFFLINE OFFLINE                               Instance Shutdown,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.mgmtdb
      1        OFFLINE OFFLINE                               STABLE
ora.oc4j
      1        ONLINE  ONLINE       ractest1                 STABLE
ora.ractest1.vip
      1        ONLINE  ONLINE       ractest1                 STABLE
ora.ractest2.vip
      1        ONLINE  ONLINE       ractest2                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       ractest2                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       ractest3                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       ractest3                 STABLE
ora.usben.db
      1        ONLINE  ONLINE       ractest1                 Open,STABLE
      2        ONLINE  ONLINE       ractest2                 Open,STABLE
      3        ONLINE  ONLINE       ractest3                 Open,STABLE
--------------------------------------------------------------------------------
[root@RACTEST1 ~]#
Step 8
Monitor the logs. Check the log files below and make sure there are no issues.
CRS_HOME/log/hostname/crsd/ – The log files for the CRS daemon
CRS_HOME/log/hostname/cssd/ – The log files for the CSS daemon
CRS_HOME/log/hostname/evmd/ – The log files for the EVM daemon
CRS_HOME/log/hostname/client/ – The log files for the Oracle Cluster Registry (OCR)
CRS_HOME/log/hostname/racg/ – The log files for the Oracle RAC high availability component
CRS_HOME/log/hostname/alert.log – The alert.log for Clusterware issues.
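It is also worth tailing the database alert logs on each node while the instances come up. A minimal sketch, assuming the default ADR layout under the diagnostic_dest shown in the pfile ('/ora/app/oracle'); adjust the instance name per node:

# database alert log for instance usben1 on ractest1
tail -f /ora/app/oracle/diag/rdbms/usben/usben1/trace/alert_usben1.log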