Sunday, 5 October 2025

Patching Eats Your SYSTEM Tablespace

 

Why Oracle Patching Can Fill Up SYSAUX and SYSTEM Tablespaces

When you apply Release Updates (RUs) using datapatch, Oracle doesn’t just update binaries. It also stores rollback and apply scripts inside the database itself. These are saved as LOBs in registry tables, which means the SYSAUX or SYSTEM tablespace may steadily grow over time. For DBAs managing long-lived environments or multiple PDBs, this hidden space consumption can become noticeable.


Where Are These Scripts Stored?

The metadata lives inside dictionary views such as:

  • DBA_REGISTRY_SQLPATCH
  • DBA_REGISTRY_SQLPATCH_RU_INFO

The column PATCH_DIRECTORY in particular holds rollback/apply scripts. Since these are LOBs, their size can reach hundreds of MB for each RU cycle.


Checking the Size of Datapatch Metadata

The following query lists the patch description and its LOB size in MB:

SQL> select description,
            round(dbms_lob.getlength(patch_directory)/1024/1024,2) size_mb
     from   dba_registry_sqlpatch
     order by size_mb desc;

DESCRIPTION                                               SIZE_MB
--------------------------------------------------------  -------
Release_Update 19.20.0.0.240716                           245.13
Release_Update 19.19.0.0.240420                           243.11
Release_Update 19.18.0.0.240116                           243.01
...

You can also view information for Release Updates specifically:

SQL> select 'RU ' || ru_version description,
            round(dbms_lob.getlength(patch_directory)/1024/1024,2) size_mb
     from   dba_registry_sqlpatch_ru_info;

DESCRIPTION         SIZE_MB
-----------------   -------
RU 19.20.0.0.240716 245.13
RU 19.19.0.0.240420 243.11
RU 19.18.0.0.240116 243.01
...

To check total usage across all entries:

SQL> select round(sum(dbms_lob.getlength(patch_directory))/1024/1024,2) total_mb
     from   dba_registry_sqlpatch;

TOTAL_MB
--------
974.23

In this example, nearly 1 GB of tablespace is tied up by stored patch scripts.


How to Reclaim Space

Oracle offers a cleanup feature built into datapatch. It removes rollback/apply scripts that are no longer needed:

$ORACLE_HOME/OPatch/datapatch -purge_old_metadata

Internally, the purge process works like this:

  • It copies required rows into a temporary holding table.
  • The registry table is truncated.
  • Valid rows are reinserted, discarding unnecessary metadata.

The result is that logical space is freed within the tablespace. Keep in mind that this does not shrink the physical datafile on disk — it only makes space available for reuse within the tablespace itself.


Example: Before and After Purge

Before purge:

DESCRIPTION                       SIZE_MB
--------------------------------- -------
Release_Update 19.20.0.0.240716   245.13
Release_Update 19.19.0.0.240420   243.11
Release_Update 19.18.0.0.240116   243.01

TOTAL_MB                          974.23

After purge:

DESCRIPTION                       SIZE_MB
--------------------------------- -------
Release_Update 19.20.0.0.240716   245.13

TOTAL_MB                          245.13

The cleanup removed older rollback metadata, leaving only the latest RU, and reduced usage by ~730 MB in this case.


Considerations for Multitenant

If you use a CDB with multiple PDBs, each PDB stores its own patch metadata. That means tablespace usage scales with the number of PDBs. You’ll need to run cleanup inside each PDB to reclaim space everywhere.
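As a quick check, the same size query can be run in each container before and after cleanup. Switching into a PDB (PDB1 below is just a placeholder name) and re-running it shows the per-PDB footprint:

SQL> alter session set container = PDB1;
SQL> select round(sum(dbms_lob.getlength(patch_directory))/1024/1024,2) total_mb
     from   dba_registry_sqlpatch;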


Important Notes

  • Safety: The purge only deletes metadata that is no longer required. Active rollback information is preserved.
  • Timing: Because the process uses a truncate and temporary table copy, schedule it in a maintenance window or low-activity period.
  • Datafiles: Even after the purge, the physical size of the SYSTEM or SYSAUX datafiles won’t shrink automatically. If you need to reduce the actual file size, you must follow standard datafile resize/shrink procedures (see the sketch after this list).
  • PDBs: Run the cleanup in each PDB individually to reclaim space there.
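
For the datafile point above, a minimal sketch of the standard approach (the file path and target size are placeholders, and the resize only succeeds down to the file's high-water mark — otherwise ORA-03297 is raised):

-- list the current SYSTEM/SYSAUX datafiles and their sizes
select file_name, tablespace_name, round(bytes/1024/1024) size_mb
from   dba_data_files
where  tablespace_name in ('SYSTEM','SYSAUX');

-- attempt a resize (placeholder path and size)
alter database datafile '/u01/oradata/DB1/sysaux01.dbf' resize 2G;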

Monitoring Queries

To prevent surprises, you can regularly check patch metadata size. Here are some useful scripts:

Largest entries:

set pages 200 lines 150
col description format a60

select description,
       round(dbms_lob.getlength(patch_directory)/1024/1024,2) size_mb
from   dba_registry_sqlpatch
order  by size_mb desc;

Total usage:

select round(sum(dbms_lob.getlength(patch_directory))/1024/1024,2) total_mb
from   dba_registry_sqlpatch;

Tablespace usage:

select df.tablespace_name,
       df.bytes/1024/1024 total_mb,
       (df.bytes - nvl(fs.free_bytes,0))/1024/1024 used_mb,
       nvl(fs.free_bytes,0)/1024/1024 free_mb
from   (select tablespace_name, sum(bytes) bytes 
        from dba_data_files 
        group by tablespace_name) df
left join
       (select tablespace_name, sum(bytes) free_bytes 
        from dba_free_space 
        group by tablespace_name) fs
on df.tablespace_name = fs.tablespace_name;

Final Thoughts

Patching is essential, but hidden growth in SYSTEM and SYSAUX can create pressure on tablespaces if not monitored. By running the size queries before and after each patch cycle and using datapatch -purge_old_metadata, you can keep metadata overhead under control. Add these checks to your DBA toolkit to avoid sudden tablespace alerts after patching.


Reference: Inspired by the article at dohDatabase.com (Sept 2025).

Friday, 3 October 2025

Oracle Flashback Database Automatic Deactivation

Oracle Flashback Database: Why It Can Disable Itself Automatically

When Flashback Turns Itself Off — A Closer Look

Under certain conditions, Oracle can automatically disable Flashback Database. While this behavior is not entirely new (there are posts from 2010 on the topic), it's useful to revisit with respect to newer versions like 19c, which allow toggling Flashback on/off even in READ WRITE mode (no restart needed).

What Got Me Interested

ORA-38886: WARNING: Flashback database was disabled due to error when writing flashback database logs.

This prompted me to explore the scenarios under which Oracle might disable Flashback on its own.

Flashback Behavior: A Quick Recap

  • Flashback logs are generated as needed, provided there’s available space in the Fast Recovery Area (FRA).
  • Logs older than the retention target can be deleted or reused.
  • If the FRA is full, Oracle may reuse the oldest flashback logs.
  • If there’s absolutely no space or an I/O error occurs, Flashback can be disabled automatically (the current state can be checked with the queries below).
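
A minimal status check, using standard dictionary views:

SELECT flashback_on FROM v$database;

SELECT oldest_flashback_scn,
       oldest_flashback_time,
       retention_target,
       ROUND(flashback_size/1024/1024) flashback_mb
FROM   v$flashback_database_log;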

Test Case 1: FRA Mount Point Smaller than db_recovery_file_dest_size

db_recovery_file_dest = /fra
db_recovery_file_dest_size = 10 GB
db_flashback_retention_target = 1440 (24 hrs)
FLASHBACK_ON = YES
LOG_MODE = ARCHIVELOG

Mount /fra was limited to 2 GB, far below the 10 GB target. As the disk filled, Oracle logged I/O errors such as:

ORA-27072: File I/O error

The database continued to run, deleting older flashback logs. The OLDEST_FLASHBACK_TIME in V$FLASHBACK_DATABASE_LOG advanced. When I forced the mount to zero free space, the alert log showed:

ORA-27044: unable to write the header block of file
Linux-x86_64 Error: 28: No space left on device
ORA-38886: WARNING: Flashback database was disabled …

Querying the status:

SELECT flashback_on FROM v$database;

Result:

FLASHBACK_ON
------------------
NO

Test Case 2: FRA Mount Point Larger than db_recovery_file_dest_size

db_recovery_file_dest = /fra
db_recovery_file_dest_size = 1 GB
Mount point size = 2 GB

Even after generating ~50 GB of redo, Oracle only issued warnings such as:

ORA-19815: db_recovery_file_dest_size is 93.69% used

Flashback remained enabled. This shows that space pressure alone won’t disable Flashback—it requires actual I/O write failures.

Key Observations & Best Practices

  • Allocate adequate space for FRA, aligned with recovery and flashback targets.
  • Use dedicated mount points or ASM disk groups to isolate FRA.
  • Actively monitor FRA usage and flashback status (see the queries after this list).
  • Note: when Flashback is disabled, Oracle removes .flb files, but the /flashback/ directory (empty) may remain.
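
For the monitoring point above, a small sketch using the standard FRA views (the percentages are relative to db_recovery_file_dest_size):

SELECT file_type,
       percent_space_used,
       percent_space_reclaimable,
       number_of_files
FROM   v$recovery_area_usage;

SELECT ROUND(space_limit/1024/1024)       limit_mb,
       ROUND(space_used/1024/1024)        used_mb,
       ROUND(space_reclaimable/1024/1024) reclaimable_mb
FROM   v$recovery_file_dest;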

Final Thoughts

It would be ideal if Oracle treated Flashback like a mandatory archive destination—preventing the database from silently disabling it. As of version 19.28.0, no such feature exists. Until then, careful space planning, monitoring, and proactive alerting are the best defense against surprises.

© 2025 - Oracle DBA Insights by Saurabh

Tuesday, 27 May 2025

RMAN Duplicate failed with RMAN-06136: ORACLE error from auxiliary database, ORA-00344 unable to re-create online log

Oracle RMAN Duplicate Fails at Open Resetlogs – Resolved

While duplicating the test database from a production backup using RMAN, the restore and recovery completed successfully. 

However, the open resetlogs step failed with the following error:

Executing: alter database enable block change tracking using file '/oradata01/db1/Block_change_tracking.dbf'

contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/27/2025 15:33:05
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: Oracle error from auxiliary database:
ORA-00344: unable to re-create online log '/oraredo02/db1/redo01.log'
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 2: No such file or directory


Root Cause: The error occurred because the directory /oraredo02/db1 did not exist on the test server.


[oracle@hostname db1]$ cd /oraredo02/db1
-bash: cd: /oraredo02/db1: No such file or directory

[oracle@hostname db1]$ mkdir -p /oraredo02/db1


Database status:


SQL> select name, open_mode from v$database;

NAME     OPEN_MODE
-------- --------------------
DB1      MOUNTED


SQL> select member from v$logfile;

MEMBER
---------------------------------------------------------------
/oraredo02/db1/redo04.log
/oraredo02/db1/redo03.log
/oraredo02/db1/redo02.log
/oraredo02/db1/redo01.log


As shown above, the controlfile restored from production still references redo logs that do not exist physically on the test server, because that location was missing there.


Adding new redo log files of the same size. The idea is to add four new log files and then drop the old ones from the controlfile (since they do not exist physically).


SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo05.log' SIZE 100M;
SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo06.log' SIZE 100M;
SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo07.log' SIZE 100M;
SQL> ALTER DATABASE ADD LOGFILE '/oraredo02/db1/redo08.log' SIZE 100M;


Attempt to Drop Old Redo Logs


Dropping log 4 failed because it was the current log.

SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo04.log';
ERROR at line 1:
ORA-01623: log 4 is current log for instance db1 (thread 1) - cannot drop
ORA-00312: online log 4 thread 1: '/oraredo02/db1/redo04.log'


Check Redo Log Status


SQL> COL MEMBER FOR A50;
SQL> SET LINES 200;
SQL> SET PAGES 1000;
SQL> SELECT l.group#, l.thread#, f.member, l.archived, l.status, (bytes/1024/1024) fsize
     FROM v$log l, v$logfile f
     WHERE f.group# = l.group#;

GROUP# THREAD# MEMBER                      ARC STATUS           FSIZE
------ ------- --------------------------- --- ---------------- -----
     1       1 /oraredo02/db1/redo01.log   YES CLEARING           100
     2       1 /oraredo02/db1/redo02.log   YES CLEARING           100
     3       1 /oraredo02/db1/redo03.log   YES CLEARING           100
     4       1 /oraredo02/db1/redo04.log   YES CLEARING_CURRENT   100
     5       1 /oraredo02/db1/redo05.log   YES UNUSED             100
     6       1 /oraredo02/db1/redo06.log   YES UNUSED             100
     7       1 /oraredo02/db1/redo07.log   YES UNUSED             100
     8       1 /oraredo02/db1/redo08.log   YES UNUSED             100


Dropping Old Redo Logs


SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo01.log';
Database altered.

SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo02.log';
Database altered.

SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo03.log';
Database altered.

SQL> ALTER DATABASE DROP LOGFILE '/oraredo02/db1/redo04.log';
ERROR at line 1:
ORA-01623: log 4 is current log for instance db1 (thread 1) - cannot drop


Attempted Log Switch


SQL> ALTER SYSTEM SWITCH LOGFILE;
ERROR at line 1:
ORA-01109: database not open


Final Fix – Clear Unarchived Redo Log


SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 4;
Database altered.


Database Open Resetlogs Successful


SQL> ALTER DATABASE OPEN RESETLOGS;
Database altered.


Conclusion: Always ensure that the redo log paths exist before running an RMAN duplicate. If the controlfile contains references to missing redo logs and you do not need to recover from them, use CLEAR UNARCHIVED LOGFILE to clear them and then proceed with OPEN RESETLOGS.
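
A small preventive check, run on the source database before starting the duplicate, lists the redo log directories that must exist on the auxiliary server (with the paths in this post it would return /oraredo02/db1):

SQL> select distinct substr(member, 1, instr(member, '/', -1) - 1) redo_dir
     from   v$logfile;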

Saturday, 10 May 2025

How to find the historical execution of the query using sql_id

Sometimes a user or the application team asks us what was running historically at a particular moment that slowed down their application. We have to use the AWR historical views to check these details. The script below can be used to find historical details such as SQL_ID, SQL_PROFILE, LIO, PIO, etc.

You can adjust the SQL_ID and the time range in the script below.

select trunc(b.end_interval_time) snap_date,
       a.sql_id,
       a.module,
       a.plan_hash_value,
       a.sql_profile,
       --sum(a.px_servers_execs_delta) px_per_executions,
       sum(a.executions_delta) executions,
       sum(a.disk_reads_delta) disk_reads_pio,
       sum(a.buffer_gets_delta) buffer_gets_lio,
       sum(a.direct_writes_delta) direct_writes,
       sum(a.rows_processed_delta) rows_processed,
       trunc(sum(a.elapsed_time_delta)/1000000/60/60,2) elap_hrs,
       trunc(sum(a.cpu_time_delta)/1000000/60/60,2) cpu_hrs,
       trunc(sum(a.clwait_delta)/1000000/60/60,2) clw_hrs,
       trunc(sum(a.apwait_delta)/1000000/60/60,2) apw_hrs,
       trunc((sum(a.buffer_gets_delta)-sum(a.disk_reads_delta))
             /(sum(decode(a.buffer_gets_delta,0,1,a.buffer_gets_delta))/100),2) chr,
       trunc((sum(a.elapsed_time_delta)-sum(a.apwait_delta))
             /sum(decode(a.buffer_gets_delta,0,1,a.buffer_gets_delta))) elap_lio,
       trunc(sum(a.elapsed_time_delta)/1000000/60/60,2)
         - trunc(sum(a.apwait_delta)/1000000/60/60,2) act_hrs,
       trunc(sum(a.iowait_delta)/1000000/60/60,2) iow_hrs
from   dba_hist_sqlstat  a,
       dba_hist_snapshot b,
       dba_hist_sqltext  c
where  1=1
and    a.snap_id = b.snap_id
and    a.dbid = b.dbid
and    a.instance_number = b.instance_number
and    a.sql_id = c.sql_id
and    a.dbid = c.dbid
and    c.command_type != 47
--and  a.snap_id between 149529 and 149533
and    b.end_interval_time between to_date('27-JUL-2023 08:00:00','dd-mon-yyyy hh24:mi:ss')
                               and to_date('04-AUG-2023 09:00:00','dd-mon-yyyy hh24:mi:ss')
and    a.sql_id in ('34d70ygdz5csd')
--and  a.plan_hash_value in ('273738736')
group by trunc(b.end_interval_time), a.sql_id, a.module, a.plan_hash_value, a.sql_profile
order by 1 desc;

Note: replace the SQL_ID and the dates of execution as needed before running the script.
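
Once the SQL_ID of interest is identified, the full statement text can be pulled from the AWR repository as well (same example SQL_ID as above):

set long 100000
select sql_text
from   dba_hist_sqltext
where  sql_id = '34d70ygdz5csd';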

Friday, 9 May 2025

How to create the standby database using the primary database backup

Creating a Standby Database Without Impacting Primary Performance


In many organizations, we often receive requests to create a standby database for the production environment. One of the common approaches is using Active Duplicate, but this method can affect the performance of the primary database, especially if the database is large.


My Preferred Strategy


To avoid any performance impact on the primary system, I generally create the standby database from an existing backup and then enable log shipping. This method allows us to restore massive amounts of data without touching the production system, and once restoration is complete, we configure log shipping as usual.



---


Step-by-Step Guide


1. Ensure You Have a Valid Backup


Make sure your daily RMAN backup is running (either on tape or disk).


If a full or incremental backup already exists, there’s no need to take another backup.


If not, take a full backup of the primary database.
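
If a fresh backup is needed, a minimal disk-based sketch is shown below; the channel type, format, and destination would normally follow your existing backup standard (tape channels like the TSM ones later in this post are equally valid):

RMAN> RUN {
        ALLOCATE CHANNEL d1 TYPE DISK;
        BACKUP DATABASE PLUS ARCHIVELOG;
        BACKUP CURRENT CONTROLFILE;
      }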



2. Backup the Standby Control File from Primary


Use the following RMAN command on the primary database:

On Primary server:

RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/tmp/standby_DBNAME_1.bck';


3. Transfer Control File to Standby Server


Copy the file /tmp/standby_DBNAME_1.bck from the primary to the standby server.



---


On the Standby Server


4. Restore the Control File


Create a pfile on the standby server (you can refer to the primary database’s pfile).


Start the standby instance in NOMOUNT state.


Run the following RMAN command to restore the control file:



RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/standby_DBNAME_1.bck';



---


5. Mount the Standby Database


Once the control file is restored, mount the standby database:


SQL> ALTER DATABASE MOUNT STANDBY DATABASE;



---


6. Restore and Recover the Standby Database


Use the following RMAN script to restore and recover the standby database from backup. This example assumes you're using tape backup (TSM):


RUN {
  ALLOCATE CHANNEL t2 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';
  ALLOCATE CHANNEL t3 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';
  ALLOCATE CHANNEL t4 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';
  ALLOCATE CHANNEL t5 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';
  ALLOCATE CHANNEL t6 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';
  ALLOCATE CHANNEL t7 TYPE 'sbt_tape' PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo_DBNAME.opt)';

  RESTORE DATABASE;
  RECOVER DATABASE;
}


7. Enable the log shipping parameters on the primary and standby side. I am not listing every parameter here, since this part of the Data Guard setup is generally well known, but a minimal sketch of the typical settings follows below.
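
A minimal sketch only; DB1_PRIM and DB1_STBY are placeholder db_unique_name/TNS values, and your protection mode, redo transport options, and standby redo log setup may differ:

On the primary:

SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(DB1_PRIM,DB1_STBY)' SCOPE=BOTH;
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=DB1_STBY ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB1_STBY' SCOPE=BOTH;
SQL> ALTER SYSTEM SET log_archive_dest_state_2=ENABLE SCOPE=BOTH;

On the standby:

SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(DB1_PRIM,DB1_STBY)' SCOPE=BOTH;
SQL> ALTER SYSTEM SET fal_server='DB1_PRIM' SCOPE=BOTH;
SQL> ALTER SYSTEM SET standby_file_management=AUTO SCOPE=BOTH;

Start redo apply on the standby:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;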

---


Final Thoughts


This method is especially useful for large databases, where active duplication would otherwise degrade the primary system’s performance. By leveraging existing backups and configuring the standby database properly, we can achieve a seamless setup with minimal impact on production.


Monday, 26 August 2024

How to remove all the objects of Schema before import in oracle database

Sometimes we get a request to export and import a schema from PROD to QA. Although we can use the table_exists_action parameter to replace tables that already exist in QA, other objects such as sequences, procedures, and functions are skipped if they already exist in QA.

There are two ways now:

1. Take the grants/privileges (DDL) assigned to the schema, drop the user, recreate it using the generated DDL, and then import into the schema.

 2. Delete all the objects under the schema; this clears the schema so we can import into it. We can use the script below to delete all the objects.


DECLARE
    PROCEDURE dropAll (inUser VARCHAR2)
    IS
        -- referential (FK) constraints must go first so tables can be dropped
        CURSOR constr_cur IS
            SELECT table_name, constraint_name FROM dba_constraints
            WHERE  owner = inUser
            AND    constraint_type = 'R';
        -- all remaining objects owned by the schema
        CURSOR obj_cur IS
            SELECT object_type, object_name FROM dba_objects
            WHERE  owner = inUser;
        -- public synonyms pointing at the schema's objects
        CURSOR pubsyn_cur IS
            SELECT synonym_name
            FROM   dba_synonyms
            WHERE  owner = 'PUBLIC' AND table_owner = inUser;
    BEGIN
        FOR l IN constr_cur LOOP
            EXECUTE IMMEDIATE 'ALTER TABLE ' || inUser || '.' || l.table_name
                           || ' DROP CONSTRAINT ' || l.constraint_name;
        END LOOP;

        FOR l IN obj_cur LOOP
            IF l.object_type = 'SYNONYM' THEN
                EXECUTE IMMEDIATE 'DROP SYNONYM ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'TABLE' THEN
                EXECUTE IMMEDIATE 'DROP TABLE ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'SEQUENCE' THEN
                EXECUTE IMMEDIATE 'DROP SEQUENCE ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'FUNCTION' THEN
                EXECUTE IMMEDIATE 'DROP FUNCTION ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'PROCEDURE' THEN
                EXECUTE IMMEDIATE 'DROP PROCEDURE ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'PACKAGE' THEN
                EXECUTE IMMEDIATE 'DROP PACKAGE ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'VIEW' THEN
                EXECUTE IMMEDIATE 'DROP VIEW ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'TYPE' THEN
                BEGIN
                    EXECUTE IMMEDIATE 'DROP TYPE ' || inUser || '.' || l.object_name;
                EXCEPTION
                    -- types may have dependants; ignore failures here
                    WHEN OTHERS THEN NULL;
                END;
            END IF;
        END LOOP;

        FOR l IN pubsyn_cur LOOP
            EXECUTE IMMEDIATE 'DROP PUBLIC SYNONYM ' || l.synonym_name;
        END LOOP;
    END;
BEGIN
    dropAll('AGENT_102');
    dropAll('CORE_102');
    dropAll('STG_102');
    dropAll('PUB_102');
    dropAll('SWITCH_A_102');
    dropAll('SWITCH_B_102');
END;
/

Note: you need to provide the schema name in dropAll('<schema name>'); multiple schemas can be listed, as above, to drop the objects of several schemas in one go.
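
After the block completes, a quick check (using one of the example schema names from above) confirms that nothing is left behind before running the import:

select object_type, count(*)
from   dba_objects
where  owner = 'AGENT_102'
group  by object_type;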




 

Wednesday, 5 June 2024

How to find fragmentation and remove it from the Oracle database

Fragmentation:

When rows are inserted into a table, the high water mark (the boundary of used blocks) moves forward to accommodate the new rows. During delete operations, however, Oracle does not move the high water mark backward to shrink the table and release the free space. Ideally, once we delete data from a table, that space should be released or reused, but instead Oracle keeps acquiring new blocks to accommodate new rows, which leaves holes in the table.

In other words, a table contains data from the low water mark up to the high water mark (the used blocks), with free blocks beyond it. When rows are inserted, Oracle moves the high water mark forward, marking free blocks as used; after a delete, the high water mark stays where it is and the space below it is not given back.

How do we remove table fragmentation in Oracle? There are different ways, such as table export/import, moving the table to the same or a different tablespace, and recreating the table. Here we will discuss the easiest and most common way: moving the table and rebuilding its indexes.
Steps To Check and Remove Fragmentation:

STEP 1: First, gather table statistics in order to find the exact difference between the total size and the actual size of the table from the dba_segments and dba_tables views. You can also check the last analyzed date of the table. If the table has been analyzed recently and stale_stats is NO, you can skip this step.

select table_name, last_analyzed, stale_stats from user_tab_statistics where table_name='&TABLE_NAME';

EXEC dbms_stats.gather_table_stats(ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', method_opt => 'for all indexed columns size skewonly', granularity => 'ALL', degree => 8, cascade => true, estimate_percent => 15);

Step 2: Check table size from dba_segments 

select sum(bytes)/1024/1024/1024 from dba_segments where segment_name='&TABLE_NAME'; 

STEP 3: Check actual table size, fragmented size and percentage of fragmentation in a table. 

select table_name,
       avg_row_len,
       round(((blocks*16/1024)),2)||'MB' "TOTAL_SIZE",
       round((num_rows*avg_row_len/1024/1024),2)||'MB' "ACTUAL_SIZE",
       round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)||'MB' "FRAGMENTED_SPACE",
       (round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/round(((blocks*16/1024)),2))*100 "PERCENTAGE"
from   all_tables
where  table_name='&TABLE_NAME';

Note: If you find more than 20% fragmentation, you can proceed with de-fragmentation. The query above assumes a 16k block size; use 8 in place of 16 for an 8k block size. You can also decide to de-fragment based on the amount of space the query shows you would reclaim.
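
To confirm which block size a tablespace actually uses (and therefore which multiplier to plug into the query above), read it from the dictionary:

select tablespace_name, block_size
from   dba_tablespaces;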

STEP 4: Check the indexes on the table 

select index_name from dba_indexes where table_name='&TABLE_NAME'; 

STEP 5: Remove fragmentation by moving the table within the same tablespace. You can also move the table to a different tablespace.

To move the table within its current tablespace:

alter table <owner>.<table_name> move;

To move the table to a different tablespace, create a separate tablespace, move the table into it, and then move it back to the original tablespace:

alter table <owner>.<table_name> enable row movement;
alter table <owner>.<table_name> move tablespace <new_tablespace>;
alter table <owner>.<table_name> move tablespace <original_tablespace>;
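
Moving a table marks its indexes UNUSABLE, which is why the next step rebuilds them. A quick check, using the same &TABLE_NAME substitution:

select index_name, status
from   dba_indexes
where  table_name='&TABLE_NAME';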

STEP 6: Rebuild the indexes. 

alter index index_name rebuild online;

STEP 7: Gather the statistics:

 EXEC dbms_stats.gather_table_stats(ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', method_opt=> 'for all indexed columns size skewonly', granularity => 'ALL', degree => 8 ,cascade => true,estimate_percent => 15); 

STEP 8: Finally check the table size and percentage of fragmentation: 

select sum(bytes)/1024/1024/1024 from dba_segments where segment_name='&TABLE_NAME';

select table_name,
       avg_row_len,
       round(((blocks*16/1024)),2)||'MB' "TOTAL_SIZE",
       round((num_rows*avg_row_len/1024/1024),2)||'MB' "ACTUAL_SIZE",
       round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)||'MB' "FRAGMENTED_SPACE",
       (round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/round(((blocks*16/1024)),2))*100 "PERCENTAGE"
from   all_tables
where  table_name='&TABLE_NAME';


 Please note that the steps above are only for non-partitioned tables. For partitioned tables you can move the table partition by partition, or the easiest way is to recreate the table.