Automating R12.2 EBS Clones with Database 19c

The multitenant database architecture in 19c has changed the RapidClone process, which means that any existing methods of automating an EBS clone need to change as well. To think about automating an EBS clone, we first need to break down the steps.

  1. One Time Setup
  2. Prepare the source
  3. Target Database Home
  4. Clone from the Source
  5. Configure the Target Database
  6. Configure the Target Apps

Depending on how your environment is configured, you might complete the database steps before doing any apps tier steps. The clone of the apps tier should be taken as close as possible in time to the clone of the database tier, especially if you are preserving the contents of $APPLCSF, since you will want the sequence numbers in the database to match up with the log and output files.

  • To keep things simple for this post, I am assuming a Linux environment, file system storage of the database files, non-RAC database, a single applications tier, and the applications and database tiers are on different nodes.
  • In scripts, I will keep to the style of MOS notes for values that are specific to your environment, e.g. <ORACLE_HOME> means the path to your ORACLE_HOME.
  • Scripts are in a common script directory path, e.g. /opt/oracle/bin (this is not a shared filesystem).
  • The account on both the database node and apps node is oracle, and the database node can ssh to the apps node. If you use an account such as applmgr on the applications node, replace occurrences of ssh <apps node> with ssh applmgr@<apps node>

One Time Setup on both the Source and the Target nodes

Putting passwords in scripts is a very poor security practice. Create a file with the critical passwords that can only be read by the script owner, such as /etc/sysconfig/ENV (where ENV is your database connection, e.g. DEV). This file could contain the actual passwords (please use some form of obfuscation at least) or the values required to retrieve the passwords from a password manager. The point is that someone who gets a copy of the script should not also get your critical database and WebLogic passwords.

In other words, we do not want a script like:

#!/bin/bash
export APPS_PASSWORD=apps
CON=DEV

sqlplus /nolog <<EOF
connect apps/$APPS_PASSWORD@${CON}
update xx_custom_table
set clone_date=sysdate;
commit;
EOF

We want something like

#!/bin/bash
CON=DEV
if [ -f /etc/sysconfig/$CON ]; then
   . /etc/sysconfig/$CON
else
   echo "Cannot retrieve passwords"
fi

sqlplus /nolog <<EOF
connect apps/$APPS_PASSWORD@${CON}
update xx_custom_table
set clone_date=sysdate;
commit;
EOF

For simplicity, example scripts will assume the file in /etc/sysconfig sets environment variables with the passwords. You can go a step further and set other variables you may need by having the file configure the environment, that is, on the database tier source the CDB environment from $ORACLE_HOME, or on the apps tier set up for the run filesystem.
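As a concrete sketch of such a file (the base64 values and the chmod advice are my illustration, not from the post), /etc/sysconfig/DEV might look like:

```shell
#!/bin/bash
# Hypothetical /etc/sysconfig/DEV, readable only by the script owner (chmod 600).
# Values are base64-obfuscated; generate each one once with:
#   echo -n 'the_real_password' | base64
# base64 is obfuscation, not encryption -- prefer a password manager lookup.
export APPS_PASSWORD=$(echo 'YXBwcw==' | base64 -d)      # decodes to "apps"
export WLS_PASSWORD=$(echo 'd2VibG9naWM=' | base64 -d)   # decodes to "weblogic"
```

A script that sources this file gets $APPS_PASSWORD and $WLS_PASSWORD at run time without the plaintext ever appearing in the script itself.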

Prepare the Source

I use scripts to run adpreclone.pl. This avoids passing passwords and ensures that the environment is correctly configured.

Database Tier Script, preclone_db.sh

#!/bin/bash
. <ORACLE_HOME>/<PDB SID>_<hostname>.env
. /etc/sysconfig/$ORACLE_SID
cd $ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME
echo $APPS_PASSWORD | ./adpreclone.pl dbTier

Applications Tier Script, preclone_apps.sh

#!/bin/bash
. <APPLICATIONS_BASE>/EBSapps.env run
. /etc/sysconfig/$TWO_TASK
cd $ADMIN_SCRIPTS_HOME
(echo $APPS_PASSWORD;echo $WLS_PASSWORD) | ./adpreclone.pl appsTier

Master Script on Database Tier, preclone.sh

#!/bin/bash
. <ORACLE_HOME>/<PDB SID>_<hostname>.env
. /etc/sysconfig/$ORACLE_SID
<SCRIPT_LOC>/preclone_db.sh
ssh <apps node> <APPS SCRIPT LOC>/preclone_apps.sh

Schedule preclone.sh in cron so that it runs daily. For robustness, the scripts should check the return status of adpreclone.pl and pass it on; similarly, the master script should check the return status of each step, as well as verify that a patch cycle is not open before running preclone.
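For example, a hardened preclone.sh might wrap every step in a status-checking helper. The run_step helper and the demo commands below are my own sketch, not part of RapidClone:

```shell
#!/bin/bash
# Hedged sketch of a more defensive master script: run_step aborts the whole
# run on the first failing step and reports which step failed.
run_step () {
  "$@"
  local status=$?
  if [ "$status" -ne 0 ]; then
    echo "Step failed (status $status): $*" >&2
    exit "$status"
  fi
  echo "Step ok: $*"
}

# In the real script these would be the patch-cycle check, preclone_db.sh,
# and the ssh to the apps node, e.g.:
#   run_step <SCRIPT_LOC>/preclone_db.sh
#   run_step ssh <apps node> <APPS SCRIPT LOC>/preclone_apps.sh
run_step true
run_step echo "preclone dbTier"
```

Because run_step exits with the failing step's status, a cron wrapper (or the calling script) sees a non-zero exit and can alert instead of silently continuing.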

Target Database Home

Here is where the cloning process actually starts. If you are going to automate this process, I suggest running a master script from the source database server, similar to the example preclone.sh, to drive the process.

Oracle Home

The first step in cloning the database tier is creating the ORACLE_HOME. In my experience, this only needs to be done after patching the source. However, others (including the MOS note) prefer to do this every time you clone. If you choose to only refresh the ORACLE_HOME after a patch in the source, remember that you are doing so at your own risk, and the first step in resolving an issue will be to refresh the ORACLE_HOME. In any case, preserve files including your pairsfile.txt, init.ora, tnsnames.ora, etc. from the target system after the initial ORACLE_HOME creation.
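A small preserve step run once after the initial ORACLE_HOME creation can capture those files. The paths and file list below are my assumptions; adjust them to your layout:

```shell
#!/bin/bash
# Hedged sketch: snapshot target-specific files into a preserve/ directory
# next to this script so later clones can restore them.
# The default ORACLE_HOME and the pairsfile location are illustrative.
ORACLE_HOME=${ORACLE_HOME:-/u01/app/oracle/product/19.0.0/dbhome1}
PRESERVE=${PRESERVE:-$(dirname $0)/preserve}
mkdir -p "$PRESERVE"
for f in "$ORACLE_HOME"/dbs/init*.ora \
         "$ORACLE_HOME"/network/admin/tnsnames.ora \
         "$ORACLE_HOME"/appsutil/clone/pairsfile.txt; do
  if [ -f "$f" ]; then
    cp -p "$f" "$PRESERVE/"
  fi
done
```

The later scripts in this post then read from $SCRIPT_DIR/preserve rather than trusting whatever the refresh left behind.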

Create a pairsfile.txt for the target system.

s_undo_tablespace=<Source (PDB) system undo tablespace name>
s_db_oh=<Location of new ORACLE_HOME>
s_dbhost=<Target hostname>
s_dbSid=<Target PDB name>
s_pdb_name=<Target PDB name>
s_cdb_name=<Target CDB SID>
s_base=<Base directory for DB Oracle Home>
s_dbuser=<DB User>
s_dbgroup=<DB group> (Not applicable on Windows)
s_dbhome1=<Data directory>
s_display=<Display>
s_dbCluster=false
s_isDBCluster=n
s_dbport=<DB port>
s_port_pool=<Port pool number>

Copy the source ORACLE_HOME to the target location, then configure the new home on the target

#!/bin/bash
ORACLE_HOME=<TARGET ORACLE HOME>
cd $(dirname $0)
SCRIPT_DIR=$(pwd)
if [ ! -f $SCRIPT_DIR/preserve/pairsfile.txt ]; then
   echo "No pairsfile.txt"
   exit 1
fi
SID=$(grep s_dbSid $SCRIPT_DIR/preserve/pairsfile.txt | cut -f2 -d=)
PORT=$(grep s_dbport $SCRIPT_DIR/preserve/pairsfile.txt | cut -f2 -d=)
CONTEXT="${SID}_$(hostname | cut -f1 -d.)"
if [ -f /stage/clone/target_data/$SID/info ]; then
  . /stage/clone/target_data/$SID/info
  export APPS_DECRYPT=<Un-obfuscate the apps password>
else
  stty -echo 2>/dev/null
  read -p "Source APPS Password: " APPS_DECRYPT
  stty echo 2>/dev/null
fi
if [ -x $ORACLE_HOME/oui/bin/detachHome.sh ]; then
  $ORACLE_HOME/oui/bin/detachHome.sh
fi
cd $ORACLE_HOME/appsutil/clone/bin
rm -f ../../${CONTEXT}.xml
echo $APPS_DECRYPT | perl adclonectx.pl contextfile=../../<SOURCE CONTEXT>.xml template=../../template/adxdbctx.tmp pairsfile=$SCRIPT_DIR/preserve/pairsfile.txt
echo $APPS_DECRYPT | perl adcfgclone.pl dbTechStack ../../${CONTEXT}.xml
cd ../..
. ./txkSetCfgCDB.env -dboraclehome=$ORACLE_HOME
cd bin
perl txkGenCDBTnsAdmin.pl -dboraclehome=$ORACLE_HOME -cdbname=C$SID -cdbsid=C$SID -dbport=$PORT -outdir=../log -israc=no
if [ -f $SCRIPT_DIR/preserve/tnsnames.ora ]; then
  cp -p $SCRIPT_DIR/preserve/tnsnames.ora $ORACLE_HOME/network/admin
fi
cd ../scripts/$CONTEXT
./adcdblnctl.sh start C$SID -ignorePDBStatus
if [ -f $SCRIPT_DIR/preserve/initC${SID}.ora.122 ]; then
  cp -p $SCRIPT_DIR/preserve/initC${SID}.ora.122 $ORACLE_HOME/dbs
fi

If you refresh the ORACLE_HOME with every clone (this is the best method to remove human error), then this script will be part of your cloning process (I would move the call to detachHome.sh before actually refreshing the ORACLE_HOME). If you do not clone the ORACLE_HOME every time, have a policy for when you do refresh, for example any time a patch is applied in production, all ORACLE_HOMEs are refreshed from production. The problem with this is that it is easy to forget, and having different patch levels in the ORACLE_HOME and the database leaves you in a garbage state.
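If you do defer refreshes, a cheap guard is to compare patch inventories on every clone. The sketch below assumes each node has dumped its inventory to a text file beforehand with `$ORACLE_HOME/OPatch/opatch lspatches > patches_<node>.txt`; the function itself is my illustration:

```shell
#!/bin/bash
# Hedged sketch: refuse to proceed when the source and target ORACLE_HOME
# patch inventories differ, turning a forgotten refresh into an immediate
# failure instead of a garbage state discovered later.
check_patch_levels () {
  # $1 = source patch list, $2 = target patch list (from opatch lspatches)
  if ! diff -q "$1" "$2" >/dev/null 2>&1; then
    echo "ORACLE_HOME patch levels differ: refresh the target home" >&2
    return 2
  fi
  echo "Patch levels match"
}
```

Call it early in the target configuration script and abort on a non-zero return.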

Clone from the Source

I strongly suggest using rman duplicate from your production backup for at least one instance. This gives you a free restore test at regular intervals (assuming that if you are reading a blog on automating clones, you plan to schedule them). This clone should always include the ORACLE_HOME, since that will be required in a bare metal recovery.
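A backup-based duplication that never connects to the source can be as simple as the following RMAN sketch. The CDB name and backup location are placeholders, the auxiliary instance must already be started NOMOUNT with a minimal init.ora, and you should verify the clause syntax against your RMAN version:

```
rman auxiliary /

run {
  duplicate database to <TARGET CDB>
    backup location '<SHARED BACKUP LOCATION>'
    nofilenamecheck;
}
```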

If you are using a disk based utility or copying files from the source, then you will need to create a controlfile and do a recovery. I run a single script, bld.sql, to produce these files prior to initiating the copy, by calling sqlplus / as sysdba @bld.sql <target script location> <env>, where <env> is the name of the pluggable database, such as DEV.

This code assumes your production pluggable database is named PROD.

set heading off
set verify off
set feedback off
set lines 120
set pages 0
set trimspool on
column upcase new_value utarget
column lowcase new_value ltarget
select upper('&2') as upcase from dual;
select lower('&2') as lowcase from dual;
spool &1/Step-1_temp.lst
PROMPT shutdown immediate
PROMPT startup nomount
PROMPT CREATE CONTROLFILE REUSE SET DATABASE &utarget RESETLOGS ARCHIVELOG
PROMPT                MAXLOGFILES 192
PROMPT                MAXLOGMEMBERS 5
PROMPT                MAXDATAFILES 1024
PROMPT                MAXINSTANCES 1
PROMPT                MAXLOGHISTORY 38562
PROMPT                LOGFILE
select 'GROUP '||l.group#||' (' as g
,LISTAGG(''''||replace(replace(lf.member,'/prod/','/&ltarget/'),'/cprod/','/c&ltarget/')||'''',',')
        within GROUP (order by lf.member) as logs
,') SIZE '||l.bytes||' BLOCKSIZE '||l.blocksize||',' as s
from v$log l,v$logfile lf
where l.group#=lf.group#
and l.group# < (select max(group#) from v$log)
group by l.group#,l.bytes,l.blocksize
order by l.group#;
select 'GROUP '||l.group#||' (' as g
,LISTAGG(''''||replace(replace(lf.member,'/prod/','/&ltarget/'),'/cprod/','/c&ltarget/')||'''',',')
        within GROUP (order by lf.member) as logs
,') SIZE '||l.bytes||' BLOCKSIZE '||l.blocksize as s
from v$log l,v$logfile lf
where l.group#=lf.group#
and l.group# = (select max(group#) from v$log)
group by l.group#,l.bytes,l.blocksize;
PROMPT DATAFILE
select ''''||replace(replace(name,'/prod/','/&ltarget/'),'/cprod/','/c&ltarget/')||''',' as name
from v$datafile
where file# < (select max(file#) from v$datafile)
order by file#;
select ''''||replace(replace(name,'/prod/','/&ltarget/'),'/cprod/','/c&ltarget/')||'''' as name
from v$datafile
where file# = (select max(file#) from v$datafile)
order by file#;
PROMPT CHARACTER SET UTF8
PROMPT ;;
spool off
spool &1/Step-2_RecoverDatabase.rman
PROMPT run { allocate channel appsync_manager type disk;;
select 'catalog archivelog '''||replace(replace(lf.member,'/prod/','/&ltarget/'),'/cprod/','/c&ltarget/')||''';'
from v$logfile lf
order by lf.group#,lf.member;
PROMPT recover database; }
spool off
spool &1/Step-3_RecoverDatabase.sql
PROMPT recover database using backup controlfile until cancel;;
PROMPT CANCEL
spool off
spool &1/Step-4_OpenDatabase.sql
PROMPT alter database noarchivelog;;
PROMPT alter database open resetlogs;;
select 'ALTER TABLESPACE '||tablespace_name||' ADD TEMPFILE '''||replace(replace(file_name,'/prod/','/&ltarget/'),'/cprod/','/c&ltarget/')||''' SIZE '||bytes/(1024*1024)||'M REUSE;'
from dba_temp_files
order by tablespace_name,file_id;
spool off
spool &1/Step-5_renamePDB.sql
PROMPT ALTER PLUGGABLE DATABASE PROD CLOSE;;
PROMPT ALTER PLUGGABLE DATABASE PROD unplug into '/u01/app/oracle/product/19.0.0/dbhome1/dbs/PROD.xml';;
PROMPT DROP PLUGGABLE DATABASE PROD;;
PROMPT CREATE PLUGGABLE DATABASE &utarget using '/u01/app/oracle/product/19.0.0/dbhome1/dbs/PROD.xml' NOCOPY SERVICE_NAME_CONVERT=('ebs_PROD','ebs_&utarget','PROD_ebs_patch','&utarget._ebs_patch');;
PROMPT ALTER PLUGGABLE DATABASE &utarget open read write;;
PROMPT ALTER PLUGGABLE DATABASE &utarget save state;;
spool off
alter session set container=PROD;
spool &1/Step-6_tempPDB.sql
PROMPT ALTER SESSION SET CONTAINER=&utarget;;
select 'ALTER TABLESPACE '||tablespace_name||' ADD TEMPFILE '''||replace(replace(file_name,'/prod/','/&ltarget/'),'/cprod/','/c&ltarget/')||''' SIZE '||bytes/(1024*1024)||'M REUSE;'
from dba_temp_files
order by tablespace_name,file_id;
spool off
exit

There is extra garbage in the initial script that needs to be cleaned up before using it on the target. In addition, there are passwords and other information that need to be passed to the clone along with the sql scripts for recovery. My preferred method is a shared location between the source and the target (in my case, this is <mount>/clone/target_data/<target>). If the target database home needed to be updated, I have already completed that task. Others will wish to refresh the target Oracle Home every time; in that case, add it to this master script before the call to the target db host at the end.

Here is my sample driver script (a different script is used for each target environment):

#!/bin/bash
OUT_DIR=<SHARED_MOUNT>/clone
TARGET=<TARGET PDB>
SCRIPT_DIR=$OUT_DIR/target_data/$TARGET
DATAOUT=$SCRIPT_DIR/info
if [ -z "$SCRIPT_DIR" ] || [ "$SCRIPT_DIR" == '/' ]; then
   echo "Something is seriously wrong"
   exit 127
fi
if [ -d $SCRIPT_DIR ]; then
   echo "Cleaning out $SCRIPT_DIR"
   rm -rf $SCRIPT_DIR
fi
mkdir -p $SCRIPT_DIR
. <FULL PATH TO CONTAINER ENVIRONMENT FILE>
. <PASSWORD FILE>
# Make sure we will connect to the container not the pluggable
unset ORACLE_PDB_SID
RUN_BASE=$(sqlplus -s /nolog <<EOF | grep RUN_BASE | tail -1 | cut -f2 -d:
Connect apps/$APPS_PASSWORD@PROD
Select 'RUN_BASE:'||extractvalue(xmltype(text),'//CURRENT_BASE')
FROM fnd_oam_context_files where status='S'
and name not in ('TEMPLATE','METADATA')
and extractvalue(xmltype(text),'//file_edition_type')='run';
/
EOF
)
ACTIVE_FS=$(basename $RUN_BASE)
SRC_PASSWORD=<Obfuscate the source password>
echo "export RUN_BASE='$RUN_BASE'" > $DATAOUT
echo "export ACTIVE_FS='$ACTIVE_FS'" >> $DATAOUT
echo "export SRC_PASSWORD='$SRC_PASSWORD'" >> $DATAOUT
echo "export SRC_DB='$ORACLE_SID'" >> $DATAOUT
echo "export SRC_DATE='$(date +%d-%b-%y)'" >> $DATAOUT
# Any other information you need to pass
sqlplus -s / as sysdba @$(dirname $0)/bld_recovery.sql ${SCRIPT_DIR} $TARGET
cat ${SCRIPT_DIR}/Step-1_temp.lst | sed '/^$/d' > ${SCRIPT_DIR}/Step-1_DatabaseRename.sql
ssh $TARGET_DB_HOST <target_cleanup_script>
status=$?
if [ $status -ne 0 ]; then
  echo "$TARGET is not ready"
  exit $status
fi
## clone the database files to the target ##
## possibly clone the apps tier files, method may dictate this is on the target db ##
## call the target db host to continue the process ##

At this point, the rest of the scripts run on the target systems. My cleanup script looks like this:

#!/bin/bash
touch $(dirname $0)/$(basename $0).started
. <FULL PATH TO CONTAINER ENVIRONMENT FILE>
. <PASSWORD FILE>
ssh ebstestapp dba/bin/apps_shutdown abort
status=$?
if [ $status -ne 0 ]; then
  echo "Apps tier not ready"
  # Non zero exit status to abort the clone process is critical!!!
  exit 1
fi;
## stop any other processes on the database node such as tomcat for Oracle Apex ##
RC=0
sqlplus / as sysdba <<EOF
shutdown abort
EOF
status=$?
RC=$(expr $RC + $status)
## cleanup files that will be recreated during recovery, as well as files that
## are no longer valid, such as old archive logs
##
## example
rm -f $ORACLE_HOME/dbs/init<container>.ora
status=$?
RC=$(expr $RC + $status)
echo $RC > $(dirname $0)/$(basename $0).completed
exit $RC

Configure the Target Database

The assumption at this point is that control has been passed to a script running on the target database node after the database files have been copied. I keep certain files that I want preserved from one clone to the next in the preserve directory.

#!/bin/bash
export TARGET_PDB=UAT
export TARGET_CDB=CUAT
export LOWER_PDB=$(echo $TARGET_PDB | tr '[:upper:]' '[:lower:]')
export LOWER_CDB=$(echo $TARGET_CDB | tr '[:upper:]' '[:lower:]')
export SHORT_HOST=$(hostname | cut -f1 -d.)
. <Password File>
#One of the values in the password file is the full path of the container environment file
#verify the file exists
if [ ! -f $ENV_FILE ]; then
  echo "Cannot open $ENV_FILE"
  exit 127
fi
. $ENV_FILE
RC=0
SOURCE_DATA=<Shared file location>/$TARGET_PDB
. ${SOURCE_DATA}/info
## if this is a disk based utility, then names will be based on the source
## my files are in /u*/oradata/<lower case of sid>
for i in $(ls -d /u*/oradata/<LC SOURCE PLUGGABLE>); do  mv $i $(dirname $i)/$LOWER_PDB; done
for i in $(ls -d /u*/oradata/<LC SOURCE CONTAINER>); do  mv $i $(dirname $i)/$LOWER_CDB; done
rm -f <control files matching the init parameters script>
export SRC_DECRYPT=<un-obfuscate the source password in the info file>
cp -p $(dirname $0)/preserve/init${TARGET_CDB}.ora.122 $ORACLE_HOME/dbs/init${TARGET_CDB}.ora
rm -f $ORACLE_HOME/dbs/spfile${TARGET_CDB}.ora
rm -f $ORACLE_HOME/dbs/orapw${TARGET_CDB}
date
echo "Step 1"
if [ ! -f ${SOURCE_DATA}/Step-1_DatabaseRename.sql ]; then
  echo "No recovery files"
  exit 1
fi
cd ${SOURCE_DATA}
sqlplus / as sysdba  <<EOF
@Step-1_DatabaseRename.sql
EOF
status=$?
RC=$(expr $RC + $status)
echo "Step 2"
rman target=/ <<EOF
@Step-2_RecoverDatabase.rman
EOF
status=$?
RC=$(expr $RC + $status)
echo "Step 3"
sqlplus / as sysdba  <<EOF
@Step-3_RecoverDatabase.sql
EOF
status=$?
RC=$(expr $RC + $status)
echo "Step 4"
sqlplus / as sysdba  <<EOF
@Step-4_OpenDatabase.sql
create spfile from memory;
shutdown immediate
startup
EOF
status=$?
RC=$(expr $RC + $status)
echo "Step 5"
sqlplus / as sysdba  <<EOF
@Step-5_renamePDB.sql
EOF
status=$?
RC=$(expr $RC + $status)
echo "Step 6"
sqlplus / as sysdba  <<EOF
@Step-6_tempPDB.sql
EOF
status=$?
RC=$(expr $RC + $status)
orapwd file=$ORACLE_HOME/dbs/orapw${TARGET_CDB} password=$SYSTEM_PASSWORD
status=$?
RC=$(expr $RC + $status)
echo "Verify listener is up"
. $ORACLE_HOME/${TARGET_CDB}_${SHORT_HOST}.env
$ORACLE_HOME/appsutil/scripts/${TARGET_PDB}_${SHORT_HOST}/adcdblnctl.sh start ${TARGET_CDB}
$ORACLE_HOME/appsutil/scripts/${TARGET_PDB}_${SHORT_HOST}/adcdblnctl.sh status ${TARGET_CDB}
echo "EBS adupdlib"
sqlplus / as sysdba  <<EOF
@$ORACLE_HOME/appsutil/install/${TARGET_PDB}_${SHORT_HOST}/adupdlib so
EOF
status=$?
RC=$(expr $RC + $status)
cd $ORACLE_HOME/appsutil/clone/bin
echo $SRC_DECRYPT | ./adcfgclone.pl dbconfig ../../${TARGET_PDB}_${SHORT_HOST}.xml
status=$?
RC=$(expr $RC + $status)
echo "EBS Non Production"
cd /home/oracle/dba/clone_sql
export ORACLE_PDB_SID=${TARGET_PDB}
sqlplus apps/${SRC_DECRYPT}@${TARGET_PDB}  <<EOF
update fnd_concurrent_requests set
logfile_node_name=replace(upper('$SHORT_HOST'),'DB','APP')
where logfile_node_name IS NOT NULL;
update fnd_concurrent_requests set
outfile_node_name=replace(upper('$SHORT_HOST'),'DB','APP')
where outfile_node_name IS NOT NULL;
update fnd_concurrent_requests
set phase_code='P',
    status_code='I',
    actual_start_date=null
where phase_code='R'
  and actual_start_date > sysdate-30;
REM other scripts to do things like place requests on hold that you do not want running in non production
REM fix profile option values, etc.
connect / as sysdba
REM we are in the container because of ORACLE_PDB_SID
alter user ebs_system identified by $EBSSYS_PASSWORD;
REM fix any directories, etc.
EOF
status=$?
RC=$(expr $RC + $status)
unset ORACLE_PDB_SID
sqlplus / as sysdba <<EOF
alter user sys identified by $SYSTEM_PASSWORD;
alter user system identified by $SYSTEM_PASSWORD;
alter user dbsnmp identified by $DBSNMP_PASSWORD;
EOF
status=$?
RC=$(expr $RC + $status)
#If you are using Apex, start the application server for ORDS
echo "Completed with status of $RC"
## If the database script should chain to apps script, use the following block
## otherwise, replace the rest of the script with the line
## exit $RC
if [ "$RC" -ne 0 ]; then
   echo "Failed" >> /tmp/$(basename $0).completed
   exit $RC
else
   ssh <apps node or user@apps node> <apps node configuration script>
   status=$?
   RC=$(expr $RC + $status)
fi
exit $RC

If your cloning method requires doing the file clone for the apps node at this point, have the source call a wrapper script that runs the database configuration script, triggers the copies of the apps tier, and then calls the configuration of the apps node, rather than putting the call in the database configuration script.

#!/bin/bash
TARGET=<TARGET PDB>
LC_TARGET=$(echo $TARGET | tr '[:upper:]' '[:lower:]')
touch $(dirname $0)/$(basename $0).started
. <PATH TO SHARED SCRIPTS FROM SOURCE>/info
LOG=$(dirname $0)/post_mount.log
$(dirname $0)/configure_${LC_TARGET} > $LOG 2>&1
status=$?
cat $LOG
if [ $status -gt 0 ]; then
  echo "Failed"
  exit $status
fi
<Create apps node copy and call configuration of apps node>
touch $(dirname $0)/$(basename $0).completed

Configure the Target Apps

At this point there is no change from previous cloning methods. I maintain two pairsfiles, one for each filesystem. These are copied into my configuration script directory so that I have a single clone configuration script. Sample script (see my post ADOP, Rapidclone, and JVM size for an explanation of the CONFIG_JVM_ARGS variable):

#!/bin/bash
cd $(dirname $0)
SCRIPT_DIR=$(pwd)
LOG=${SCRIPT_DIR}/$(basename $0).log

export TARGET=<TARGET PDB>
export SHORT_HOST=$(hostname | cut -f1 -d.)
. /etc/sysconfig/$TARGET
. /stage/clone/target_data/$TARGET/info
#see https://ebs-dba.com/wp/blog/2022/06/28/adop-rapidclone-and-jvm-size/
#for an explanation of CONFIG_JVM_ARGS
export CONFIG_JVM_ARGS='-Xmx4096m -Xms1024m -XX:MaxPermSize=1024m -XX:-UseGCOverheadLimit'
export SRC_DECRYPT=<Unobfuscate SRC_PASSWORD>
SRC=<SOURCE OF CLONE>
DATE=$(date +'%d-%b-%Y %H:%M')
cdate=$DATE
PAIRS=$SCRIPT_DIR/${TARGET}_${SHORT_HOST}.txt.$ACTIVE_FS
DISPLAY=$(grep '^s_display' $PAIRS|cut -f2 -d=)
XPORT=$(echo $DISPLAY | cut -f2 -d:| cut -f1 -d.)
if ! /usr/bin/xdpyinfo &> /dev/null; then
   vncserver :$XPORT
fi
echo "VNC is running on :$XPORT" > $LOG
cd <Applications Base>
rm -rf fs_ne.old
mkdir sign
cp <Applications Base>/fs_ne/EBSapps/appl/ad/admin/* sign/
mkdir xdo
cp -rp <Applications Base>/fs_ne/xdo/* xdo/
mkdir zap
if [ "$?" -ne 0 ]; then
   echo "Problem creating zap.  Try again"
   rm -rf zap
   mkdir zap
fi
if [ $ACTIVE_FS = 'fs1' ]; then
  mv EBSapps.env fs2  zap
  mv fs1/FMW_Home fs1/inst zap
  mv ../oraInventory/* zap
elif [ $ACTIVE_FS = 'fs2' ]; then
  mv EBSapps.env fs1 zap
  mv fs2/FMW_Home fs2/inst zap
  mv ../oraInventory/* zap
else
  echo "Invalid ACTIVE_FS.  Should not be possible to get here"
  exit 1
fi
mv fs_ne fs_ne.old
date
echo "Removing files we don't need"
DIR=<Applications Base>
cd $DIR
CDIR=$(pwd)
if [ "$DIR" = "$CDIR" ]; then
  echo "Removing source patch fs, etc."
  rm -rf zap
else
  echo "MAY RUN OUT OF DISK SPACE: zap not removed"
fi
DIR=<APPLPTMP>
cd $DIR
CDIR=$(pwd)
if [ "$DIR" = "$CDIR" ]; then
  echo "Cleaning up $DIR"
  rm -f *
else
  echo "Failed to clean up $DIR"
fi
#Repeat the above block for any directories on the target apps node that should not carry over
#to the next instantiation of the target environment
date
cd <Applications Base>/$ACTIVE_FS/EBSapps/comn/java/classes/oracle/apps/media
#If you have custom logos or replace oracle_white_logo.png
#copy those files into place now
cp <TARGET LOGO>.png oracle_white_logo.png
echo "Old install is cleaned up; jar signing, xdo, and branding are preserved" >> $LOG
date
cd <Applications Base>/$ACTIVE_FS/EBSapps/comn/clone/bin
(echo $SRC_DECRYPT;echo $WLS_PASSWORD;echo $SRC_DECRYPT;echo 'n') | ./adcfgclone.pl component=appsTier pairsfile=$PAIRS dualfs=yes
status=$?
if [ "$status" -ne 0 ]; then
   echo "Something went wrong"
   date
   exit $status
fi
cd <Applications Base>
#if you are preserving access to output and log files from the source, omit the following mv
mv <Applications Base>/fs_ne.old/inst/* <Applications Base>/fs_ne/inst/
rm -rf <Applications Base>/fs_ne.old
date
. <Applications Base>/EBSapps.env run
tnsping ${TARGET}
status=$?
if [ "$status" -eq 1 ]; then
   echo "TNS NAMES is not configured properly"
   exit 404
elif [ "$status" -ne 0 ]; then
   echo "Something went wrong"
   exit $status
else
   echo "Ready to proceed"
fi
echo "Apps tier is configured" >> $LOG
cp <Applications Base>/sign/* <Applications Base>/fs_ne/EBSapps/appl/ad/admin/
cp -rp <Applications Base>/xdo <Applications Base>/fs_ne/
. /u01/app/oracle/apps/EBSapps.env run
cd /home/oracle/config
cp adopdefaults.txt $APPL_TOP_NE/ad/admin/
## Any other files you need to save between clones should be copied back at this point
echo "Preserved items are restored" >> $LOG
cd <Applications Base>/etcc
echo $SRC_DECRYPT | ./checkMTpatch.sh
cd $SCRIPT_DIR
#change_passwords should do the required steps to set the passwords 
if [ "$SRC_DECRYPT" != "$APPS_PASSWORD" ]; then
  ./change_passwords $SRC_DECRYPT $APPS_PASSWORD $WLS_PASSWORD $EBSSYS_PASSWORD
fi
echo "Apps Passwords are set" >> $LOG
#Rapidclone does not do everything required for workflow so finish the jobs
echo $APPS_PASSWORD | ./wf_config_workflow
#If you use ISG, call appropriate scripts to configure after the clone
cd $ADMIN_SCRIPTS_HOME
echo $WLS_PASSWORD | adstrtal.sh apps/$APPS_PASSWORD
echo "Apps are running" >> $LOG

Feel free to ask questions or make suggestions in the comments.
