Tuesday, 27 May 2025

RMAN Duplicate failed with RMAN-06136 Oracle error from auxiliary database: ORA-00344 unable to re-create online log

Oracle RMAN Duplicate Fails at Open Resetlogs – Resolved

While duplicating the test database from a production backup using RMAN, the restore and recovery phases completed successfully. However, the OPEN RESETLOGS step failed with the following error:

Executing: alter database enable block change tracking using file '/oradata01/db1/Block_change_tracking.dbf'


contents of Memory Script:

{

 Alter clone database open resetlogs;

}

executing Memory Script

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of Duplicate Db command at 05/27/2025 15:33:05

RMAN-05501: aborting duplication of target database

RMAN-03015: error occurred in stored script Memory Script

RMAN-06136: Oracle error from auxiliary database:

ORA-00344: unable to re-create online log '/oraredo02/db1/redo01.log'

ORA-27040: file create error, unable to create file

Linux-x86_64 Error: 2: No such file or directory


Root Cause: The error occurred because the directory /oraredo02/db1 did not exist on the test server.


[oracle@hostname db1]$ cd /oraredo02/db1

-bash: cd: /oraredo02/db1: No such file or directory


[oracle@hostname db1]$ mkdir -p /oraredo02/db1


Database status:


SQL> select name, open_mode from v$database;


NAME     OPEN_MODE
-------- --------------------
DB1      MOUNTED


SQL> select member from v$logfile;


MEMBER

---------------------------------------------------------------

/oraredo02/db1/redo04.log

/oraredo02/db1/redo03.log

/oraredo02/db1/redo02.log

/oraredo02/db1/redo01.log


As shown above, the controlfile still references redo logs that do not exist physically: the controlfile was restored from production and carries the production redo log entries, but the corresponding location was missing on the test server.


Add new redo log files of the same size. The idea is to add four new logfiles and then drop the old entries from the controlfile, since those files do not exist physically:


SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo05.log' SIZE 100M;

SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo06.log' SIZE 100M;

SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo07.log' SIZE 100M;

SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo08.log' SIZE 100M;


Attempt to Drop Old Redo Logs


Dropping log 4 failed because it was the current log.


SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo04.log';

ERROR at line 1:

ORA-01623: log 4 is current log for instance db1 (thread 1) - cannot drop

ORA-00312: online log 4 thread 1: '/oraredo02/db1/redo04.log'


Check Redo Log Status


SQL> COL MEMBER FOR A50;

SQL> SET LINES 200;

SQL> SET PAGES 1000;

SQL> SELECT l.group#, l.thread#, f.member, l.archived, l.status, (bytes/1024/1024) fsize

     FROM v$log l, v$logfile f

     WHERE f.group# = l.group#;


GROUP# THREAD# MEMBER                                         ARC STATUS           FSIZE
------ ------- ---------------------------------------------- --- ---------------- ------
     1       1 /oraredo02/db1/redo01.log                      YES CLEARING           100
     2       1 /oraredo02/db1/redo02.log                      YES CLEARING           100
     3       1 /oraredo02/db1/redo03.log                      YES CLEARING           100
     4       1 /oraredo02/db1/redo04.log                      YES CLEARING_CURRENT   100
     5       1 /oraredo02/db1/redo05.log                      YES UNUSED             100
     6       1 /oraredo02/db1/redo06.log                      YES UNUSED             100
     7       1 /oraredo02/db1/redo07.log                      YES UNUSED             100
     8       1 /oraredo02/db1/redo08.log                      YES UNUSED             100


Dropping Old Redo Logs


SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo01.log';

Database altered.


SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo02.log';

Database altered.


SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo03.log';

Database altered.


SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo04.log';

ERROR at line 1:

ORA-01623: log 4 is current log for instance db1 (thread 1) - cannot drop


Attempted Log Switch


SQL> ALTER SYSTEM SWITCH LOGFILE;

ERROR at line 1:

ORA-01109: database not open


Final Fix – Clear Unarchived Redo Log


SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 4;

Database altered.


Database Open Resetlogs Successful


SQL> ALTER DATABASE OPEN RESETLOGS;

Database altered.


Conclusion: Always ensure that the redo log directory paths exist on the auxiliary server before starting an RMAN duplication. If the controlfile references redo logs that are physically missing and you do not need to recover from them, use ALTER DATABASE CLEAR UNARCHIVED LOGFILE to clear them and then proceed with OPEN RESETLOGS.
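
A quick pre-flight check avoids this class of failure: list the redo log directories on production and pre-create them on the auxiliary host before running DUPLICATE. A minimal sketch, using the paths from this example:

SQL> SELECT DISTINCT SUBSTR(member, 1, INSTR(member, '/', -1) - 1) AS redo_dir FROM v$logfile;

[oracle@hostname ~]$ mkdir -p /oraredo02/db1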

Saturday, 10 May 2025

How to find the historical execution details of a query using SQL_ID

Sometimes a user or the application team asks us what was running historically at a particular moment that slowed down their application. We have to use the AWR historical views to check these details. The script below can be used to find historical details such as SQL_ID, SQL_PROFILE, logical I/O (LIO), physical I/O (PIO), and more.

Adjust the time window in the script to match the period you are investigating.

select trunc(b.end_interval_time) snap_date
      ,a.sql_id
      ,a.module
      ,a.plan_hash_value
      ,a.sql_profile
      --,sum(a.px_servers_execs_delta) px_per_executions
      ,sum(a.executions_delta) executions
      ,sum(a.disk_reads_delta) disk_reads_pio
      ,sum(a.buffer_gets_delta) buffer_gets_lio
      ,sum(a.direct_writes_delta) direct_writes
      ,sum(a.rows_processed_delta) rows_processed
      ,trunc(sum(a.elapsed_time_delta)/1000000/60/60,2) elap_hrs
      ,trunc(sum(a.cpu_time_delta)/1000000/60/60,2) cpu_hrs
      ,trunc(sum(a.clwait_delta)/1000000/60/60,2) clw_hrs
      ,trunc(sum(a.apwait_delta)/1000000/60/60,2) apw_hrs
      ,trunc(sum(a.iowait_delta)/1000000/60/60,2) iow_hrs
      ,trunc((sum(a.buffer_gets_delta)-sum(a.disk_reads_delta))
             /(sum(decode(a.buffer_gets_delta,0,1,a.buffer_gets_delta))/100),2) chr
      ,trunc((sum(a.elapsed_time_delta)-sum(a.apwait_delta))
             /sum(decode(a.buffer_gets_delta,0,1,a.buffer_gets_delta))) elap_lio
      ,trunc(sum(a.elapsed_time_delta)/1000000/60/60,2)
       - trunc(sum(a.apwait_delta)/1000000/60/60,2) act_hrs
from dba_hist_sqlstat a
    ,dba_hist_snapshot b
    ,dba_hist_sqltext c
where 1=1
  and a.snap_id = b.snap_id
  and a.dbid = b.dbid
  and a.instance_number = b.instance_number
  and a.sql_id = c.sql_id
  and a.dbid = c.dbid
  and c.command_type != 47   -- exclude anonymous PL/SQL blocks
  --and a.snap_id between 149529 and 149533
  and b.end_interval_time between to_date('27-JUL-2023 08:00:00','dd-mon-yyyy hh24:mi:ss')
                              and to_date('04-AUG-2023 09:00:00','dd-mon-yyyy hh24:mi:ss')
  and a.sql_id in ('34d70ygdz5csd')
  --and a.plan_hash_value in ('273738736')
group by trunc(b.end_interval_time)
        ,a.sql_id
        ,a.module
        ,a.plan_hash_value
        ,a.sql_profile
order by 1 desc;

-- Replace the SQL_ID and the execution date range before running the script.
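
Once you have a SQL_ID and PLAN_HASH_VALUE of interest from the output, the historical execution plan can also be pulled from AWR, for example:

select * from table(dbms_xplan.display_awr('34d70ygdz5csd', null, null, 'ALL'));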

Friday, 9 May 2025

How to create the standby database using the primary database backup

Creating a Standby Database Without Impacting Primary Performance


In many organizations, we often receive requests to create a standby database for the production environment. One of the common approaches is using Active Duplicate, but this method can affect the performance of the primary database, especially if the database is large.


My Preferred Strategy


To avoid any performance impact on the primary system, I generally create the standby database from an existing backup and then enable log shipping. This method allows us to restore massive amounts of data without touching the production system, and once restoration is complete, we configure log shipping as usual.



---


Step-by-Step Guide


1. Ensure You Have a Valid Backup


Make sure your daily RMAN backup is running (either on tape or disk).


If a full or incremental backup already exists, there’s no need to take another backup.


If not, take a full backup of the primary database.
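
If a fresh backup is needed, something like the following is enough for a disk backup (a minimal sketch; the format path is illustrative, and for tape you would allocate sbt_tape channels as in step 6 below):

RMAN> BACKUP DATABASE FORMAT '/backup/DBNAME_%U.bck' PLUS ARCHIVELOG;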



2. Backup the Standby Control File from Primary


Use the following RMAN command on the primary database:

On Primary server:

RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/tmp/standby_DBNAME_1.bck';


3. Transfer Control File to Standby Server


Copy the file /tmp/standby_DBNAME_1.bck from the primary to the standby server.



---


On the Standby Server


4. Restore the Control File


Create a pfile on the standby server (you can refer to the primary database’s pfile).


Start the standby instance in NOMOUNT state.


Run the following RMAN command to restore the control file:



RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/standby_DBNAME_1.bck';
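
For reference, the pfile and NOMOUNT bullets above look something like this (the SID and pfile path are illustrative):

$ export ORACLE_SID=DBNAME
$ sqlplus / as sysdba
SQL> STARTUP NOMOUNT PFILE='/tmp/initDBNAME.ora';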



---


5. Mount the Standby Database


Once the control file is restored, mount the standby database:


SQL> ALTER DATABASE MOUNT STANDBY DATABASE;



---


6. Restore and Recover the Standby Database


Use the following RMAN script to restore and recover the standby database from backup. This example assumes you're using tape backup (TSM):


RUN {

  ALLOCATE CHANNEL t2 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';

  ALLOCATE CHANNEL t3 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';

  ALLOCATE CHANNEL t4 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';

  ALLOCATE CHANNEL t5 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';

  ALLOCATE CHANNEL t6 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';

  ALLOCATE CHANNEL t7 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';

  

  RESTORE DATABASE;

  RECOVER DATABASE;

}
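
If your backups are on disk rather than tape, the channel allocation is simpler (a sketch):

RUN {
  ALLOCATE CHANNEL d1 TYPE DISK;
  ALLOCATE CHANNEL d2 TYPE DISK;
  RESTORE DATABASE;
  RECOVER DATABASE;
}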


7. Enable the log shipping parameters on the primary and standby sides. A few initialization parameters need to be set on both sides; a sketch of the typical Data Guard settings follows (assuming DB_UNIQUE_NAMEs of prim and stby; adjust the service names and values to your environment).
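
On the primary:

SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(prim,stby)' SCOPE=BOTH;
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby' SCOPE=BOTH;
SQL> ALTER SYSTEM SET standby_file_management=AUTO SCOPE=BOTH;

On the standby:

SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(prim,stby)' SCOPE=BOTH;
SQL> ALTER SYSTEM SET fal_server='prim' SCOPE=BOTH;
SQL> ALTER SYSTEM SET standby_file_management=AUTO SCOPE=BOTH;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;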

---


Final Thoughts


This method is especially useful for large databases, where active duplication would otherwise degrade the primary system’s performance. By leveraging existing backups and configuring the standby database properly, we can achieve a seamless setup with minimal impact on production.


Monday, 26 August 2024

How to remove all the objects of Schema before import in oracle database

Sometimes we get a request to export and import a schema from PROD to QA. Although we can use the TABLE_EXISTS_ACTION parameter to replace tables that already exist in QA, other objects such as sequences, procedures, and functions are skipped if they already exist in QA.

There are two ways now:

1. Capture the schema's DDL, including its grants and privileges, drop the user, recreate it from the generated DDL, and then import into the schema (see the sketch at the end of this post).

2. Delete all the objects under the schema, which clears it so we can import straight into it. The script below can be used to delete all the objects.


DECLARE
    PROCEDURE dropAll (inUser VARCHAR2)
    IS
        -- Foreign key (referential) constraints are dropped first so that
        -- the dependent tables can be dropped in any order afterwards.
        CURSOR constr_cur IS
            SELECT table_name, constraint_name
            FROM dba_constraints
            WHERE owner = inUser
              AND constraint_type = 'R';
        CURSOR obj_cur IS
            SELECT object_type, object_name
            FROM dba_objects
            WHERE owner = inUser;
        CURSOR pubsyn_cur IS
            SELECT synonym_name
            FROM dba_synonyms
            WHERE owner = 'PUBLIC'
              AND table_owner = inUser;
    BEGIN
        FOR l IN constr_cur LOOP
            EXECUTE IMMEDIATE 'ALTER TABLE ' || inUser || '.' || l.table_name
                           || ' DROP CONSTRAINT ' || l.constraint_name;
        END LOOP;
        FOR l IN obj_cur LOOP
            IF l.object_type = 'SYNONYM' THEN
                EXECUTE IMMEDIATE 'DROP SYNONYM ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'TABLE' THEN
                EXECUTE IMMEDIATE 'DROP TABLE ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'SEQUENCE' THEN
                EXECUTE IMMEDIATE 'DROP SEQUENCE ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'FUNCTION' THEN
                EXECUTE IMMEDIATE 'DROP FUNCTION ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'PROCEDURE' THEN
                EXECUTE IMMEDIATE 'DROP PROCEDURE ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'PACKAGE' THEN
                EXECUTE IMMEDIATE 'DROP PACKAGE ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'VIEW' THEN
                EXECUTE IMMEDIATE 'DROP VIEW ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'TYPE' THEN
                BEGIN
                    EXECUTE IMMEDIATE 'DROP TYPE ' || inUser || '.' || l.object_name;
                EXCEPTION
                    -- A type can fail to drop if another type still depends on it;
                    -- ignore and continue with the remaining objects.
                    WHEN OTHERS THEN NULL;
                END;
            END IF;
        END LOOP;
        -- Finally, remove any public synonyms that point at this schema.
        FOR l IN pubsyn_cur LOOP
            EXECUTE IMMEDIATE 'DROP PUBLIC SYNONYM ' || l.synonym_name;
        END LOOP;
    END;
BEGIN
    dropAll('AGENT_102');
    dropAll('CORE_102');
    dropAll('STG_102');
    dropAll('PUB_102');
    dropAll('SWITCH_A_102');
    dropAll('SWITCH_B_102');
END;
/

Note: provide the schema name in dropAll('put schema name here'); as shown above, dropAll can be called for multiple schemas if you want to drop the objects of several schemas in one go.
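
For option 1 above, a minimal sketch using DBMS_METADATA (the schema name is an example; the GET_GRANTED_DDL calls raise an error if no grants of that type exist):

SQL> set long 100000 pagesize 0
SQL> select dbms_metadata.get_ddl('USER','AGENT_102') from dual;
SQL> select dbms_metadata.get_granted_ddl('SYSTEM_GRANT','AGENT_102') from dual;
SQL> select dbms_metadata.get_granted_ddl('ROLE_GRANT','AGENT_102') from dual;
SQL> select dbms_metadata.get_granted_ddl('OBJECT_GRANT','AGENT_102') from dual;
SQL> drop user AGENT_102 cascade;
-- then recreate the user by running the captured DDL and import into the schema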




 

Wednesday, 5 June 2024

How to find fragmentation and remove it from the Oracle database

Fragmentation:

When rows are inserted into a table, the high water mark (the boundary of the used blocks) moves forward to accommodate the new rows. During a delete operation, however, Oracle does not shift the high water mark backward to shrink the table and release the free space. Ideally, once we delete data from a table, that free space should be released or reused; instead, Oracle acquires new blocks to accommodate newly inserted rows, which leaves holes in the table.

In other words, a table's data ranges from the low water mark to the high water mark (the used blocks), with free blocks beyond. On insert, Oracle moves the high water mark forward, marking free blocks as used; after a delete, it never moves the high water mark back to release the space.

How do we remove table fragmentation in Oracle? There are several options, such as table export/import, moving the table to the same or a different tablespace, and recreating the table. Here we will discuss the easiest and most common approach: moving the table.

Steps to Check and Remove Fragmentation:

STEP 1: First, gather table statistics in order to find the exact difference between the total size and the actual size of the table from the dba_segments and dba_tables views. You can also check the last-analyzed date of the table; if the table was analyzed recently and stale_stats is NO, you can skip this step.

select table_name, last_analyzed, stale_stats from user_tab_statistics where table_name='&TABLE_NAME';

EXEC dbms_stats.gather_table_stats(ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', method_opt => 'for all indexed columns size skewonly', granularity => 'ALL', degree => 8, cascade => true, estimate_percent => 15);

STEP 2: Check the table size from dba_segments.

select sum(bytes)/1024/1024/1024 from dba_segments where segment_name='&TABLE_NAME'; 

STEP 3: Check actual table size, fragmented size and percentage of fragmentation in a table. 

select table_name, avg_row_len,
       round(((blocks*16/1024)),2)||'MB' "TOTAL_SIZE",
       round((num_rows*avg_row_len/1024/1024),2)||'MB' "ACTUAL_SIZE",
       round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)||'MB' "FRAGMENTED_SPACE",
       (round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/round(((blocks*16/1024)),2))*100 "percentage"
from all_tables
where table_name='&TABLE_NAME';

Note: If you find more than 20% fragmentation, you can proceed with de-fragmentation. The query above assumes a 16k block size; use 8 in place of 16 for an 8k block size. You can also decide to de-fragment based on the amount of space the query reports you would reclaim.

STEP 4: Check the indexes on the table 

select index_name from dba_indexes where table_name='&TABLE_NAME'; 

STEP 5: Remove fragmentation by moving the table within the same tablespace. You can also move the table to a different tablespace and back.

alter table <table_name> move;

-> For moving the table to a different tablespace (create a separate tablespace, move the table into it, then move the table back to its original tablespace):

alter table <table_name> enable row movement;
alter table <table_name> move tablespace <new_tablespace>;
alter table <table_name> move tablespace <original_tablespace>;

STEP 6: Rebuild the indexes. 

alter index index_name rebuild online;
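
The MOVE in step 5 typically leaves the table's indexes in UNUSABLE state, which is why this rebuild is needed; you can confirm the status before and after with:

select index_name, status from dba_indexes where table_name='&TABLE_NAME';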

STEP 7: Gather the statistics:

 EXEC dbms_stats.gather_table_stats(ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', method_opt=> 'for all indexed columns size skewonly', granularity => 'ALL', degree => 8 ,cascade => true,estimate_percent => 15); 

STEP 8: Finally check the table size and percentage of fragmentation: 

select sum(bytes)/1024/1024/1024 from dba_segments where segment_name='&TABLE_NAME';

select table_name, avg_row_len,
       round(((blocks*16/1024)),2)||'MB' "TOTAL_SIZE",
       round((num_rows*avg_row_len/1024/1024),2)||'MB' "ACTUAL_SIZE",
       round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)||'MB' "FRAGMENTED_SPACE",
       (round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/round(((blocks*16/1024)),2))*100 "percentage"
from all_tables
where table_name='&TABLE_NAME';


Please note that the steps above apply only to non-partitioned tables. For partitioned tables you can move the table partition by partition, or, the easiest way, recreate the table.

Tuesday, 1 August 2023

How to enable trace for a session in the Oracle database

EXECUTE dbms_monitor.session_trace_enable(session_id => <sid>, serial_num => <serial#>, waits => true, binds => true);

---> Check the trace file that will be created:

SELECT s.sid, p.tracefile
FROM v$session s JOIN v$process p ON s.paddr = p.addr
WHERE s.sid in (<sid>);

---> Let it run for some time, and then disable the trace:

EXECUTE dbms_monitor.session_trace_disable(session_id => <sid>, serial_num => <serial#>);
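
Once the trace has been disabled, the raw trace file can be formatted with tkprof for easier analysis, for example (the file names are illustrative):

$ tkprof DBNAME_ora_12345.trc trace_report.txt sys=no sort=exeela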

How to find the RMAN backup run history or status

set linesize 500 pagesize 2000
col Hours format 9999.99
col STATUS format a10

select SESSION_KEY, INPUT_TYPE, STATUS,
       to_char(START_TIME,'mm-dd-yyyy hh24:mi:ss') as RMAN_Bkup_start_time,
       to_char(END_TIME,'mm-dd-yyyy hh24:mi:ss') as RMAN_Bkup_end_time,
       elapsed_seconds/3600 Hours
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key;