Sunday, 5 October 2025

Patching Eats Your SYSTEM Tablespace

 

Why Oracle Patching Can Fill Up SYSAUX and SYSTEM Tablespaces

When you apply Release Updates (RUs) using datapatch, Oracle doesn’t just update binaries. It also stores rollback and apply scripts inside the database itself. These are saved as LOBs in registry tables, which means the SYSAUX or SYSTEM tablespace may steadily grow over time. For DBAs managing long-lived environments or multiple PDBs, this hidden space consumption can become noticeable.


Where Are These Scripts Stored?

The metadata lives inside dictionary views such as:

  • DBA_REGISTRY_SQLPATCH
  • DBA_REGISTRY_SQLPATCH_RU_INFO

The column PATCH_DIRECTORY in particular holds rollback/apply scripts. Since these are LOBs, their size can reach hundreds of MB for each RU cycle.
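DBA_REGISTRY_SQLPATCH is a view over an underlying SYS registry table, so the space shows up as a LOB segment in the data dictionary. To see where that segment physically lives, a query along these lines should work (it assumes the standard layout, with SYS.REGISTRY$SQLPATCH as the base table):

SQL> select l.column_name, l.segment_name, s.tablespace_name,
            round(s.bytes/1024/1024,2) segment_mb
     from   dba_lobs l
     join   dba_segments s
            on  s.owner = l.owner
            and s.segment_name = l.segment_name
     where  l.owner = 'SYS'
     and    l.table_name = 'REGISTRY$SQLPATCH';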


Checking the Size of Datapatch Metadata

The following query lists the patch description and its LOB size in MB:

SQL> select description,
            round(dbms_lob.getlength(patch_directory)/1024/1024,2) size_mb
     from   dba_registry_sqlpatch
     order by size_mb desc;

DESCRIPTION                                               SIZE_MB
--------------------------------------------------------  -------
Release_Update 19.20.0.0.240716                           245.13
Release_Update 19.19.0.0.240420                           243.11
Release_Update 19.18.0.0.240116                           243.01
...

You can also view information for Release Updates specifically:

SQL> select 'RU ' || ru_version description,
            round(dbms_lob.getlength(patch_directory)/1024/1024,2) size_mb
     from   dba_registry_sqlpatch_ru_info;

DESCRIPTION         SIZE_MB
-----------------   -------
RU 19.20.0.0.240716 245.13
RU 19.19.0.0.240420 243.11
RU 19.18.0.0.240116 243.01
...

To check total usage across all entries:

SQL> select round(sum(dbms_lob.getlength(patch_directory))/1024/1024,2) total_mb
     from   dba_registry_sqlpatch;

TOTAL_MB
--------
974.23

In this example, nearly 1 GB of tablespace is tied up by stored patch scripts.


How to Reclaim Space

Oracle offers a cleanup feature built into datapatch. It removes rollback/apply scripts that are no longer needed:

$ORACLE_HOME/OPatch/datapatch -purge_old_metadata

Internally, the purge process works like this:

  • It copies required rows into a temporary holding table.
  • The registry table is truncated.
  • Valid rows are reinserted, discarding unnecessary metadata.

The result is that logical space is freed within the tablespace. Keep in mind that this does not shrink the physical datafile on disk — it only makes space available for reuse within the tablespace itself.


Example: Before and After Purge

Before purge:

DESCRIPTION                       SIZE_MB
--------------------------------- -------
Release_Update 19.20.0.0.240716   245.13
Release_Update 19.19.0.0.240420   243.11
Release_Update 19.18.0.0.240116   243.01

TOTAL_MB                          974.23

After purge:

DESCRIPTION                       SIZE_MB
--------------------------------- -------
Release_Update 19.20.0.0.240716   245.13

TOTAL_MB                          245.13

The cleanup removed older rollback metadata, leaving only the latest RU, and reduced usage by ~730 MB in this case.


Considerations for Multitenant

If you use a CDB with multiple PDBs, each PDB stores its own patch metadata. That means tablespace usage scales with the number of PDBs. You’ll need to run cleanup inside each PDB to reclaim space everywhere.
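To gauge the footprint per container, you can run a per-container variant of the size query from the CDB root, assuming your release exposes the CDB_ counterpart view:

SQL> select con_id,
            round(sum(dbms_lob.getlength(patch_directory))/1024/1024,2) total_mb
     from   cdb_registry_sqlpatch
     group  by con_id
     order  by con_id;

datapatch also accepts a -pdbs option to limit which containers it touches; check datapatch -help on your version before combining it with -purge_old_metadata.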


Important Notes

  • Safety: The purge only deletes metadata that is no longer required. Active rollback information is preserved.
  • Timing: Because the process uses a truncate and temporary table copy, schedule it in a maintenance window or low-activity period.
  • Datafiles: Even after purge, the physical size of SYSTEM or SYSAUX datafiles won’t shrink automatically. If you need to reduce actual file size, you must follow standard datafile resize/shrink procedures.
  • PDBs: Run the cleanup in each PDB individually to reclaim space there.
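
A minimal sketch of such a manual shrink, assuming you have already confirmed that the free space sits at the end of the file (the file path and target size below are illustrative):

SQL> select file_name, round(bytes/1024/1024) size_mb
     from   dba_data_files
     where  tablespace_name = 'SYSAUX';

SQL> alter database datafile '/oradata01/db1/sysaux01.dbf' resize 2048M;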

Monitoring Queries

To prevent surprises, you can regularly check patch metadata size. Here are some useful scripts:

Largest entries:

set pages 200 lines 150
col description format a60

select description,
       round(dbms_lob.getlength(patch_directory)/1024/1024,2) size_mb
from   dba_registry_sqlpatch
order  by size_mb desc;

Total usage:

select round(sum(dbms_lob.getlength(patch_directory))/1024/1024,2) total_mb
from   dba_registry_sqlpatch;

Tablespace usage:

select df.tablespace_name,
       df.bytes/1024/1024 total_mb,
       (df.bytes - nvl(fs.free_bytes,0))/1024/1024 used_mb,
       nvl(fs.free_bytes,0)/1024/1024 free_mb
from   (select tablespace_name, sum(bytes) bytes 
        from dba_data_files 
        group by tablespace_name) df
left join
       (select tablespace_name, sum(bytes) free_bytes 
        from dba_free_space 
        group by tablespace_name) fs
on df.tablespace_name = fs.tablespace_name;

Final Thoughts

Patching is essential, but hidden growth in SYSTEM and SYSAUX can create pressure on tablespaces if not monitored. By running the size queries before and after each patch cycle and using datapatch -purge_old_metadata, you can keep metadata overhead under control. Add these checks to your DBA toolkit to avoid sudden tablespace alerts after patching.


Reference: Inspired by the article at dohDatabase.com (Sept 2025).

Friday, 3 October 2025

Oracle Flashback Database Automatic Deactivation

Oracle Flashback Database: Why It Can Disable Itself Automatically

When Flashback Turns Itself Off — A Closer Look

Under certain conditions, Oracle can automatically disable Flashback Database. While this behavior is not entirely new (there are posts from 2010 on the topic), it's useful to revisit with respect to newer versions like 19c, which allow toggling Flashback on/off even in READ WRITE mode (no restart needed).

What Got Me Interested

I came across the following warning in an alert log:

ORA-38886: WARNING: Flashback database was disabled due to error when writing flashback database logs.

This prompted me to explore the scenarios under which Oracle might disable Flashback on its own.

Flashback Behavior: A Quick Recap

  • Flashback logs are generated as needed, provided there’s available space in the Fast Recovery Area (FRA).
  • Logs older than the retention target can be deleted or reused.
  • If the FRA is full, Oracle may reuse the oldest flashback logs.
  • If there’s absolutely no space or an I/O error occurs, Flashback can be disabled automatically.

Test Case 1: FRA Mount Point Smaller than db_recovery_file_dest_size

db_recovery_file_dest = /fra
db_recovery_file_dest_size = 10 GB
db_flashback_retention_target = 1440 (24 hrs)
FLASHBACK_ON = YES
LOG_MODE = ARCHIVELOG

The /fra mount point was limited to 2 GB, far below the 10 GB db_recovery_file_dest_size. As the disk filled, Oracle logged I/O errors such as:

ORA-27072: File I/O error

The database continued to run, deleting older flashback logs. The OLDEST_FLASHBACK_TIME in V$FLASHBACK_DATABASE_LOG advanced. When I forced the mount to zero free space, the alert log showed:

ORA-27044: unable to write the header block of file
Linux-x86_64 Error: 28: No space left on device
ORA-38886: WARNING: Flashback database was disabled …

Querying the status:

SELECT flashback_on FROM v$database;

Result:

FLASHBACK_ON
------------------
NO
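
Since 19c allows toggling Flashback while the database is open READ WRITE, re-enabling it once space is available again takes a single statement, with no restart needed:

SQL> ALTER DATABASE FLASHBACK ON;

SQL> SELECT flashback_on FROM v$database;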

Test Case 2: FRA Mount Point Larger than db_recovery_file_dest_size

db_recovery_file_dest = /fra
db_recovery_file_dest_size = 1 GB
Mount point size = 2 GB

Even after generating ~50 GB of redo, Oracle only issued warnings such as:

ORA-19815: db_recovery_file_dest_size is 93.69% used

Flashback remained enabled. This shows that space pressure alone won’t disable Flashback—it requires actual I/O write failures.

Key Observations & Best Practices

  • Allocate adequate space for the FRA, aligned with your recovery and flashback targets.
  • Use dedicated mount points or ASM disk groups to isolate the FRA.
  • Actively monitor FRA usage and flashback status (see the queries after this list).
  • Note: when Flashback is disabled, Oracle removes the .flb files, but the (empty) /flashback/ directory may remain.
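
For the monitoring point above, the standard FRA views give you both overall usage and a per-file-type breakdown:

SELECT name,
       round(space_limit/1024/1024/1024,2) limit_gb,
       round(space_used/1024/1024/1024,2) used_gb,
       round(space_reclaimable/1024/1024/1024,2) reclaimable_gb
FROM   v$recovery_file_dest;

SELECT file_type, percent_space_used, percent_space_reclaimable
FROM   v$recovery_area_usage;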

Final Thoughts

It would be ideal if Oracle treated Flashback like a mandatory archive destination—preventing the database from silently disabling it. As of version 19.28.0, no such feature exists. Until then, careful space planning, monitoring, and proactive alerting are the best defense against surprises.


Tuesday, 27 May 2025

RMAN Duplicate failed with RMAN-06136: Oracle error from auxiliary database, ORA-00344 unable to re-create online log

Oracle RMAN Duplicate Fails at Open Resetlogs – Resolved

While duplicating the test database from a production backup using RMAN, the restore and recovery completed successfully. 

However, the open resetlogs step failed with the following error:

Executing: alter database enable block change tracking using file '/oradata01/db1/Block_change_tracking.dbf'

contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/27/2025 15:33:05
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: Oracle error from auxiliary database:
ORA-00344: unable to re-create online log '/oraredo02/db1/redo01.log'
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 2: No such file or directory


Root Cause: The error occurred because the directory /oraredo02/db1 did not exist on the test server.


[oracle@hostname db1]$ cd /oraredo02/db1
-bash: cd: /oraredo02/db1: No such file or directory

[oracle@hostname db1]$ mkdir -p /oraredo02/db1


Database status:

SQL> select name, open_mode from v$database;

NAME     OPEN_MODE
-------- --------------------
DB1      MOUNTED

SQL> select member from v$logfile;

MEMBER
---------------------------------------------------------------
/oraredo02/db1/redo04.log
/oraredo02/db1/redo03.log
/oraredo02/db1/redo02.log
/oraredo02/db1/redo01.log

As shown above, the controlfile restored from production still references redo logs that do not physically exist, because their directory was missing on the test server.


Adding new redo log files of the same size. The idea is to add four new logfiles, then drop the old ones from the controlfile (since they don’t exist physically):


SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo05.log' SIZE 100M;
SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo06.log' SIZE 100M;
SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo07.log' SIZE 100M;
SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo08.log' SIZE 100M;


Attempt to Drop Old Redo Logs

Dropping log 4 failed because it was the current log:

SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo04.log';
ERROR at line 1:
ORA-01623: log 4 is current log for instance db1 (thread 1) - cannot drop
ORA-00312: online log 4 thread 1: '/oraredo02/db1/redo04.log'


Check Redo Log Status

SQL> COL MEMBER FOR A50
SQL> SET LINES 200
SQL> SET PAGES 1000
SQL> SELECT l.group#, l.thread#, f.member, l.archived, l.status, (l.bytes/1024/1024) fsize
     FROM   v$log l, v$logfile f
     WHERE  f.group# = l.group#;


GROUP# THREAD# MEMBER                                             ARC STATUS           FSIZE
------ ------- -------------------------------------------------- --- ---------------- ------
     1       1 /oraredo02/db1/redo01.log                          YES CLEARING            100
     2       1 /oraredo02/db1/redo02.log                          YES CLEARING            100
     3       1 /oraredo02/db1/redo03.log                          YES CLEARING            100
     4       1 /oraredo02/db1/redo04.log                          YES CLEARING_CURRENT    100
     5       1 /oraredo02/db1/redo05.log                          YES UNUSED              100
     6       1 /oraredo02/db1/redo06.log                          YES UNUSED              100
     7       1 /oraredo02/db1/redo07.log                          YES UNUSED              100
     8       1 /oraredo02/db1/redo08.log                          YES UNUSED              100


Dropping Old Redo Logs

SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo01.log';
Database altered.

SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo02.log';
Database altered.

SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo03.log';
Database altered.

SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo04.log';
ERROR at line 1:
ORA-01623: log 4 is current log for instance db1 (thread 1) - cannot drop


Attempted Log Switch

SQL> ALTER SYSTEM SWITCH LOGFILE;
ERROR at line 1:
ORA-01109: database not open


Final Fix – Clear Unarchived Redo Log

SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 4;
Database altered.


Database Open Resetlogs Successful

SQL> ALTER DATABASE OPEN RESETLOGS;
Database altered.


Conclusion: Always ensure that the redo log directories exist on the auxiliary server prior to RMAN duplication. If the controlfile contains references to missing redo logs and you do not need to recover from them, use CLEAR UNARCHIVED LOGFILE to clear them and proceed with OPEN RESETLOGS.
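
A quick pre-check you can run on the source database to list the redo directories the duplicate will expect; run a matching mkdir -p on the auxiliary host for each path returned:

SQL> select distinct substr(member, 1, instr(member, '/', -1) - 1) redo_dir
     from   v$logfile;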

Saturday, 10 May 2025

How to find the historical execution details of a query using SQL_ID

Sometimes a user or the application team asks what was running historically at a particular moment that slowed down their application. We have to use the AWR historical views to check these details. The script below reports historical details such as SQL_ID, SQL_PROFILE, LIO, PIO, etc.; adjust the time range to match your window.

select trunc(b.end_interval_time) snap_date
      ,a.sql_id
      ,a.module
      ,a.plan_hash_value
      ,a.sql_profile
      --,sum(a.px_servers_execs_delta) px_per_executions
      ,sum(a.executions_delta) executions
      ,sum(a.disk_reads_delta) disk_reads_pio
      ,sum(a.buffer_gets_delta) buffer_gets_lio
      ,sum(a.direct_writes_delta) direct_writes
      ,sum(a.rows_processed_delta) rows_processed
      ,trunc(sum(a.elapsed_time_delta)/1000000/60/60,2) elap_hrs
      ,trunc(sum(a.cpu_time_delta)/1000000/60/60,2) cpu_hrs
      ,trunc(sum(a.clwait_delta)/1000000/60/60,2) clw_hrs
      ,trunc(sum(a.apwait_delta)/1000000/60/60,2) apw_hrs
      ,trunc(sum(a.iowait_delta)/1000000/60/60,2) iow_hrs
      ,trunc((sum(a.buffer_gets_delta)-sum(a.disk_reads_delta))
             /(sum(decode(a.buffer_gets_delta,0,1,a.buffer_gets_delta))/100),2) chr
      ,trunc((sum(a.elapsed_time_delta)-sum(a.apwait_delta))
             /sum(decode(a.buffer_gets_delta,0,1,a.buffer_gets_delta))) elap_lio
      ,trunc(sum(a.elapsed_time_delta)/1000000/60/60,2)
       - trunc(sum(a.apwait_delta)/1000000/60/60,2) act_hrs
from   dba_hist_sqlstat a
      ,dba_hist_snapshot b
      ,dba_hist_sqltext c
where  1=1
and    a.snap_id = b.snap_id
and    a.dbid = b.dbid
and    a.instance_number = b.instance_number
and    a.sql_id = c.sql_id
and    a.dbid = c.dbid
and    c.command_type != 47   -- exclude anonymous PL/SQL blocks
--and  a.snap_id between 149529 and 149533
and    b.end_interval_time between to_date('27-JUL-2023 08:00:00','dd-mon-yyyy hh24:mi:ss')
                               and to_date('04-AUG-2023 09:00:00','dd-mon-yyyy hh24:mi:ss')
and    a.sql_id in ('34d70ygdz5csd')
--and  a.plan_hash_value in ('273738736')
group by trunc(b.end_interval_time), a.sql_id, a.module, a.plan_hash_value, a.sql_profile
order by 1 desc;

Note: replace the SQL_ID and the execution date range with your own values.
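
If you prefer filtering on SNAP_ID (as in the commented predicate above), you can look up the snapshot range for your window first; a small helper, assuming the snapshots are still within AWR retention:

select snap_id, begin_interval_time, end_interval_time
from   dba_hist_snapshot
where  end_interval_time between to_date('27-JUL-2023 08:00:00','dd-mon-yyyy hh24:mi:ss')
                             and to_date('04-AUG-2023 09:00:00','dd-mon-yyyy hh24:mi:ss')
order  by snap_id;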

Friday, 9 May 2025

How to create the standby database using the primary database backup

Creating a Standby Database Without Impacting Primary Performance


In many organizations, we often receive requests to create a standby database for the production environment. One of the common approaches is active duplication (RMAN DUPLICATE ... FROM ACTIVE DATABASE), but this method can affect the performance of the primary database, especially if the database is large.


My Preferred Strategy


To avoid any performance impact on the primary system, I generally create the standby database from an existing backup and then enable log shipping. This method allows us to restore massive amounts of data without touching the production system, and once restoration is complete, we configure log shipping as usual.



---


Step-by-Step Guide


1. Ensure You Have a Valid Backup

Make sure your daily RMAN backup is running (either to tape or to disk). If a full or incremental backup already exists, there is no need to take another one. If not, take a full backup of the primary database.


2. Backup the Standby Control File from the Primary

Use the following RMAN command on the primary server:

RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/tmp/standby_DBNAME_1.bck';

3. Transfer the Control File to the Standby Server

Copy the file /tmp/standby_DBNAME_1.bck from the primary to the standby server.



---


On the Standby Server

4. Restore the Control File

Create a pfile on the standby server (you can refer to the primary database’s pfile) and start the standby instance in NOMOUNT state. Then run the following RMAN command to restore the control file:

RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/standby_DBNAME_1.bck';



---


5. Mount the Standby Database

Once the control file is restored, mount the standby database:

SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
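
You can verify the role and state at this point:

SQL> select name, database_role, open_mode from v$database;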



---


6. Restore and Recover the Standby Database

Use the following RMAN script to restore and recover the standby database from backup. This example assumes you're using tape backup (TSM):

RUN {
  ALLOCATE CHANNEL t2 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';
  ALLOCATE CHANNEL t3 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';
  ALLOCATE CHANNEL t4 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';
  ALLOCATE CHANNEL t5 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';
  ALLOCATE CHANNEL t6 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';
  ALLOCATE CHANNEL t7 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';

  RESTORE DATABASE;
  RECOVER DATABASE;
}


7. Enable the log shipping parameters on the primary and standby sides. I am not listing the full setup here since it should be familiar, but a few parameters need to be set on both sides (see the sketch below).
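
For completeness, a minimal sketch of the kind of parameters usually involved; the names here (DB1 as the primary and DB1_STBY as the standby DB_UNIQUE_NAME/TNS alias) are illustrative, so adapt them to your environment:

On Primary:

SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(DB1,DB1_STBY)' SCOPE=BOTH;
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=DB1_STBY ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB1_STBY' SCOPE=BOTH;
SQL> ALTER SYSTEM SET standby_file_management='AUTO' SCOPE=BOTH;

On Standby:

SQL> ALTER SYSTEM SET fal_server='DB1' SCOPE=BOTH;
SQL> ALTER SYSTEM SET standby_file_management='AUTO' SCOPE=BOTH;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;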

---


Final Thoughts


This method is especially useful for large databases, where active duplication would otherwise degrade the primary system’s performance. By leveraging existing backups and configuring the standby database properly, we can achieve a seamless setup with minimal impact on production.