Tuesday, 27 May 2025

RMAN Duplicate Failed with RMAN-06136: Oracle Error from Auxiliary Database (ORA-00344: unable to re-create online log)

Oracle RMAN Duplicate Fails at Open Resetlogs – Resolved

While duplicating a test database from a production backup using RMAN, the restore and recovery phases completed successfully.

However, the OPEN RESETLOGS step failed with the following error:

Executing: alter database enable block change tracking using file '/oradata01/db1/Block_change_tracking.dbf'


contents of Memory Script:

{

 Alter clone database open resetlogs;

}

executing Memory Script

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of Duplicate Db command at 05/27/2025 15:33:05

RMAN-05501: aborting duplication of target database

RMAN-03015: error occurred in stored script Memory Script

RMAN-06136: Oracle error from auxiliary database:

ORA-00344: unable to re-create online log '/oraredo02/db1/redo01.log'

ORA-27040: file create error, unable to create file

Linux-x86_64 Error: 2: No such file or directory


Root Cause: The error occurred because the directory /oraredo02/db1 did not exist on the test server.


[oracle@hostname db1]$ cd /oraredo02/db1

-bash: cd: /oraredo02/db1: No such file or directory


[oracle@hostname db1]$ mkdir -p /oraredo02/db1


Database status:


SQL> select name, open_mode from v$database;


NAME      OPEN_MODE
--------- --------------------
DB1       MOUNTED


SQL> select member from v$logfile;


MEMBER

---------------------------------------------------------------

/oraredo02/db1/redo04.log

/oraredo02/db1/redo03.log

/oraredo02/db1/redo02.log

/oraredo02/db1/redo01.log


As shown above, the controlfile restored from production still references redo log members under /oraredo02/db1, but those files did not exist physically on the test server because the directory was missing.


Adding new redo log files of the same size. The idea is to add four new logfiles and then drop the old ones from the controlfile (since they no longer exist physically):


SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo05.log' SIZE 100M;

SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo06.log' SIZE 100M;

SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo07.log' SIZE 100M;

SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo08.log' SIZE 100M;


Attempt to Drop Old Redo Logs


Dropping log 4 failed because it was the current log.


SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo04.log';

ERROR at line 1:

ORA-01623: log 4 is current log for instance db1 (thread 1) - cannot drop

ORA-00312: online log 4 thread 1: '/oraredo02/db1/redo04.log'


Check Redo Log Status


SQL> COL MEMBER FOR A50;

SQL> SET LINES 200;

SQL> SET PAGES 1000;

SQL> SELECT l.group#, l.thread#, f.member, l.archived, l.status, (bytes/1024/1024) fsize

     FROM v$log l, v$logfile f

     WHERE f.group# = l.group#;


GROUP# THREAD# MEMBER                    ARC STATUS            FSIZE
------ ------- ------------------------- --- ---------------- ------
     1       1 /oraredo02/db1/redo01.log YES CLEARING            100
     2       1 /oraredo02/db1/redo02.log YES CLEARING            100
     3       1 /oraredo02/db1/redo03.log YES CLEARING            100
     4       1 /oraredo02/db1/redo04.log YES CLEARING_CURRENT    100
     5       1 /oraredo02/db1/redo05.log YES UNUSED              100
     6       1 /oraredo02/db1/redo06.log YES UNUSED              100
     7       1 /oraredo02/db1/redo07.log YES UNUSED              100
     8       1 /oraredo02/db1/redo08.log YES UNUSED              100


Dropping Old Redo Logs


SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo01.log';

Database altered.


SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo02.log';

Database altered.


SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo03.log';

Database altered.


SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo04.log';

ERROR at line 1:

ORA-01623: log 4 is current log for instance db1 (thread 1) - cannot drop


Attempted Log Switch


SQL> ALTER SYSTEM SWITCH LOGFILE;

ERROR at line 1:

ORA-01109: database not open


Final Fix – Clear Unarchived Redo Log

Since the database was only mounted, a log switch could not move past group 4, so the unarchived current log had to be cleared instead:


SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 4;

Database altered.
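Optionally, re-check v$log after the clear as a quick sanity check (exact status values can vary by version and state):

SQL> SELECT group#, thread#, archived, status FROM v$log;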


Database Open Resetlogs Successful


SQL> ALTER DATABASE OPEN RESETLOGS;

Database altered.


Conclusion: Always ensure the redo log directories exist on the auxiliary server before starting an RMAN duplication. If the restored controlfile references redo logs that do not exist physically and you do not need to recover from them, add replacement logfiles, drop the stale ones, and use CLEAR UNARCHIVED LOGFILE for the current group, then proceed with OPEN RESETLOGS.
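As a pre-check, you can list every directory the source controlfile references for redo members and pre-create them on the auxiliary host. A minimal sketch (paths are the ones from this example; adjust to your environment):

-- Run against the source database, or against the restored controlfile in MOUNT state:
SQL> SELECT DISTINCT SUBSTR(member, 1, INSTR(member, '/', -1) - 1) AS redo_dir FROM v$logfile;

-- Then create each directory on the auxiliary server, e.g.:
[oracle@hostname ~]$ mkdir -p /oraredo02/db1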

Saturday, 10 May 2025

How to find the historical execution details of a query using SQL_ID

Sometimes a user or the application team asks what was running historically at a particular moment that slowed down their application. We have to use the AWR historical views to check these details. The script below can be used to find historical details such as SQL_ID, SQL_PROFILE, LIO, PIO, etc.

Adjust the time range in the script below to match the window you are interested in.

select trunc(b.end_interval_time) SNAP_DATE
      ,a.sql_id
      ,a.module
      ,a.plan_hash_value
      ,a.sql_profile
      --,sum(a.PX_SERVERS_EXECS_DELTA) PX_PER_EXECUTIONS
      ,sum(a.executions_delta) EXECUTIONS
      ,sum(a.disk_reads_delta) DISK_READS_PIO
      ,sum(a.buffer_gets_delta) BUFFER_GETS_LIO
      ,sum(a.direct_writes_delta) direct_writes
      ,sum(a.rows_processed_delta) rows_processed
      ,trunc(sum(a.elapsed_time_delta)/1000000/60/60,2) elap_hrs
      ,trunc(sum(a.cpu_time_delta)/1000000/60/60,2) cpu_hrs
      ,trunc(sum(a.clwait_delta)/1000000/60/60,2) clw_hrs
      ,trunc(sum(a.apwait_delta)/1000000/60/60,2) apw_hrs
      ,trunc((sum(a.buffer_gets_delta)-sum(a.disk_reads_delta))/(sum(decode(a.buffer_gets_delta,0,1,a.buffer_gets_delta))/100),2) chr
      ,trunc((sum(a.elapsed_time_delta)-sum(a.apwait_delta))/sum(decode(a.buffer_gets_delta,0,1,a.buffer_gets_delta))) elap_lio
      ,trunc(sum(a.elapsed_time_delta)/1000000/60/60,2)-trunc(sum(a.apwait_delta)/1000000/60/60,2) act_hrs
      ,trunc(sum(a.IOWAIT_DELTA)/1000000/60/60,2) iow_hrs
  from dba_hist_sqlstat a
      ,dba_hist_snapshot b
      ,dba_hist_sqltext c
 where 1=1
   and a.snap_id = b.snap_id
   and a.dbid = b.dbid
   and a.instance_number = b.instance_number
   and a.sql_id = c.sql_id
   and a.dbid = c.dbid
   and c.command_type != 47
   --and a.snap_id between 149529 and 149533
   and b.end_interval_time between to_date('27-JUL-2023 08:00:00','dd-mon-yyyy hh24:mi:ss')
                               and to_date('04-AUG-2023 09:00:00','dd-mon-yyyy hh24:mi:ss')
   and a.sql_id in ('34d70ygdz5csd')
   --and a.plan_hash_value in ('273738736')
 group by trunc(b.end_interval_time)
         ,a.sql_id
         ,a.module
         ,a.plan_hash_value
         ,a.sql_profile
 order by 1 desc;

## Replace the SQL_ID and the execution date range with your own values.
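If you also need the full statement text for a given SQL_ID, it is available in the same AWR repository (the SQL_ID below is the example one used above):

SQL> SET LONG 100000
SQL> SELECT sql_text FROM dba_hist_sqltext WHERE sql_id = '34d70ygdz5csd';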

Friday, 9 May 2025

How to create a standby database using the primary database backup

Creating a Standby Database Without Impacting Primary Performance


In many organizations, we often receive requests to create a standby database for the production environment. One of the common approaches is using Active Duplicate, but this method can affect the performance of the primary database, especially if the database is large.


My Preferred Strategy


To avoid any performance impact on the primary system, I generally create the standby database from an existing backup and then enable log shipping. This method allows us to restore massive amounts of data without touching the production system, and once restoration is complete, we configure log shipping as usual.



---


Step-by-Step Guide


1. Ensure You Have a Valid Backup


Make sure your daily RMAN backup is running (either on tape or disk).


If a full or incremental backup already exists, there’s no need to take another backup.


If not, take a full backup of the primary database (a minimal example follows).
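To check whether a usable backup already exists, and to take one if not (channel and format settings depend on your environment; this is a minimal sketch):

RMAN> LIST BACKUP SUMMARY;

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;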



2. Backup the Standby Control File from Primary


Use the following RMAN command on the primary database:

On Primary server:

RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/tmp/standby_DBNAME_1.bck';


3. Transfer Control File to Standby Server


Copy the file /tmp/standby_DBNAME_1.bck from the primary to the standby server.
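Any transfer method works; a typical scp example (hostnames are placeholders):

[oracle@primaryhost ~]$ scp /tmp/standby_DBNAME_1.bck oracle@standbyhost:/tmp/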



---


On the Standby Server


4. Restore the Control File


Create a pfile on the standby server (you can refer to the primary database’s pfile).


Start the standby instance in NOMOUNT state.
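For example (the pfile path is a placeholder; point it at the pfile you created):

SQL> STARTUP NOMOUNT PFILE='/tmp/initDBNAME.ora';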


Run the following RMAN command to restore the control file:



RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/standby_DBNAME_1.bck';



---


5. Mount the Standby Database


Once the control file is restored, mount the standby database:


SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
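You can confirm the role and mode before starting the restore:

SQL> SELECT name, database_role, open_mode FROM v$database;

This should report PHYSICAL STANDBY and MOUNTED.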



---


6. Restore and Recover the Standby Database


Use the following RMAN script to restore and recover the standby database from backup. This example assumes you're using tape backup (TSM):


RUN {

  ALLOCATE CHANNEL t2 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';

  ALLOCATE CHANNEL t3 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';

  ALLOCATE CHANNEL t4 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';

  ALLOCATE CHANNEL t5 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';

  ALLOCATE CHANNEL t6 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';

  ALLOCATE CHANNEL t7 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';

  

  RESTORE DATABASE;

  RECOVER DATABASE;

}
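If your backups are on disk instead of tape, the same structure works with disk channels; a minimal variant:

RUN {
  ALLOCATE CHANNEL d1 TYPE DISK;
  ALLOCATE CHANNEL d2 TYPE DISK;
  RESTORE DATABASE;
  RECOVER DATABASE;
}

Note that on a standby, RECOVER DATABASE will stop once it runs out of available archivelogs, which is normal at this stage; log shipping (next step) takes over from there.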


7. Enable the log shipping parameters on the primary and standby sides. I am not covering the full Data Guard parameter setup here, as it should already be familiar, but a few parameters need to be set on both sides; a sketch follows.
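For reference, a minimal sketch of the usual settings (DBNAME_STDBY is a hypothetical TNS alias / db_unique_name for the standby; adapt fal_server, standby_file_management, etc. to your configuration):

-- On the primary:
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=DBNAME_STDBY ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DBNAME_STDBY' SCOPE=BOTH;
SQL> ALTER SYSTEM SET log_archive_dest_state_2=ENABLE SCOPE=BOTH;

-- On the standby, start managed recovery:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;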

---


Final Thoughts


This method is especially useful for large databases, where active duplication would otherwise degrade the primary system’s performance. By leveraging existing backups and configuring the standby database properly, we can achieve a seamless setup with minimal impact on production.