Monday, 26 August 2024

How to remove all the objects of a schema before import in the Oracle database

Sometimes we get a request to export a schema from PROD and import it into QA. The TABLE_EXISTS_ACTION parameter can replace tables that already exist in QA, but other objects such as sequences, procedures, and functions are skipped if they already exist there.
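For reference, a typical Data Pump import with this parameter might look like the following sketch (the connect string, directory, dump file, and schema names are placeholders, not values from any real environment):

```shell
impdp system@QADB \
  directory=DATA_PUMP_DIR \
  dumpfile=prod_schema.dmp \
  logfile=imp_prod_schema.log \
  schemas=APP_SCHEMA \
  table_exists_action=replace
```

Even with table_exists_action=replace, pre-existing non-table objects in the target schema are skipped, which is what the two approaches below address.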

There are two ways now:

1. Capture the grants/privileges (DDL) assigned to the schema, drop the user, recreate it from the generated DDL, and then import into the schema.
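For this first approach, the user and grant DDL can be captured before the drop with DBMS_METADATA, for example (the schema name APP_SCHEMA is a placeholder; note that GET_GRANTED_DDL raises an error if no grants of that kind exist):

```sql
SET LONG 100000 PAGESIZE 0
SELECT DBMS_METADATA.GET_DDL('USER', 'APP_SCHEMA') FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', 'APP_SCHEMA') FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT', 'APP_SCHEMA') FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', 'APP_SCHEMA') FROM dual;
```

Spool the output to a file so the user can be recreated exactly after DROP USER ... CASCADE.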

2. Delete all the objects under the schema; this clears the schema so we can import into it. The script below deletes all the objects.


DECLARE
    PROCEDURE dropAll (inUser VARCHAR2) IS
        -- Referential (FK) constraints must be dropped first so the tables can be dropped
        CURSOR constr_cur IS
            SELECT table_name, constraint_name
            FROM dba_constraints
            WHERE owner = inUser
              AND constraint_type = 'R';
        CURSOR obj_cur IS
            SELECT object_type, object_name
            FROM dba_objects
            WHERE owner = inUser;
        CURSOR pubsyn_cur IS
            SELECT synonym_name
            FROM dba_synonyms
            WHERE owner = 'PUBLIC'
              AND table_owner = inUser;
    BEGIN
        FOR l IN constr_cur LOOP
            EXECUTE IMMEDIATE 'ALTER TABLE ' || inUser || '.' || l.table_name
                           || ' DROP CONSTRAINT ' || l.constraint_name;
        END LOOP;
        -- Object types not listed below (e.g. INDEX, TRIGGER, PACKAGE BODY) are
        -- dropped implicitly along with their parent objects
        FOR l IN obj_cur LOOP
            IF l.object_type = 'SYNONYM' THEN
                EXECUTE IMMEDIATE 'DROP SYNONYM ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'TABLE' THEN
                EXECUTE IMMEDIATE 'DROP TABLE ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'SEQUENCE' THEN
                EXECUTE IMMEDIATE 'DROP SEQUENCE ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'FUNCTION' THEN
                EXECUTE IMMEDIATE 'DROP FUNCTION ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'PROCEDURE' THEN
                EXECUTE IMMEDIATE 'DROP PROCEDURE ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'PACKAGE' THEN
                EXECUTE IMMEDIATE 'DROP PACKAGE ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'VIEW' THEN
                EXECUTE IMMEDIATE 'DROP VIEW ' || inUser || '.' || l.object_name;
            ELSIF l.object_type = 'TYPE' THEN
                BEGIN
                    EXECUTE IMMEDIATE 'DROP TYPE ' || inUser || '.' || l.object_name;
                EXCEPTION
                    -- Types may depend on each other; ignore failures and let
                    -- dependent types be dropped on a later pass if needed
                    WHEN OTHERS THEN NULL;
                END;
            END IF;
        END LOOP;
        FOR l IN pubsyn_cur LOOP
            EXECUTE IMMEDIATE 'DROP PUBLIC SYNONYM ' || l.synonym_name;
        END LOOP;
    END;
BEGIN
    dropAll('AGENT_102');
    dropAll('CORE_102');
    dropAll('STG_102');
    dropAll('PUB_102');
    dropAll('SWITCH_A_102');
    dropAll('SWITCH_B_102');
END;
/

Note: provide the schema name in dropAll('put schema name here'); as shown above, dropAll can be called once per schema if we want to drop the objects of multiple schemas in one go.
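After the script completes, the schema can be confirmed empty with a quick count (the owner name here is one of the example schemas above):

```sql
SELECT object_type, COUNT(*)
FROM dba_objects
WHERE owner = 'AGENT_102'
GROUP BY object_type;
```

Be aware that dropped tables may linger in the recycle bin as BIN$ objects; a DBA can clear them with PURGE DBA_RECYCLEBIN before the import.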

Wednesday, 5 June 2024

How to find fragmentation and remove it from the Oracle database

Defragmentation:

 

When rows are inserted into a table, the high water mark (the boundary of the used blocks) moves forward to accommodate the new rows. During a delete operation, however, Oracle does not shift the high water mark backward to shrink the table and release the free space. Ideally, once data is deleted the free space would be released or reused, but instead Oracle acquires new blocks for fresh inserts, which leaves holes in the table. A table thus contains data ranging from the low water mark to the high water mark (the used blocks), with free blocks beyond it: on insert, Oracle moves the high water mark forward and marks free blocks as used; after a delete, the high water mark stays where it is.

How do we remove table fragmentation in Oracle? There are different ways, such as table export/import, moving tables to the same or a different tablespace, and table recreation. Here we will discuss the easiest and most common ways of removing fragmentation.
Steps to Check and Remove Fragmentation:

STEP 1: First, gather table statistics in order to find the exact difference between the total size and the actual size of the table from the dba_segments and dba_tables views. You can also check the last-analyzed date of the table; if the table was analyzed recently and stale_stats is NO, you can skip this step.

select table_name, last_analyzed, stale_stats
from user_tab_statistics
where table_name = '&TABLE_NAME';

EXEC dbms_stats.gather_table_stats(ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', method_opt => 'for all indexed columns size skewonly', granularity => 'ALL', degree => 8, cascade => true, estimate_percent => 15);

Step 2: Check table size from dba_segments 

select sum(bytes)/1024/1024/1024 from dba_segments where segment_name='&TABLE_NAME'; 

STEP 3: Check actual table size, fragmented size and percentage of fragmentation in a table. 

select table_name,
       avg_row_len,
       round(((blocks*16/1024)),2)||'MB' "TOTAL_SIZE",
       round((num_rows*avg_row_len/1024/1024),2)||'MB' "ACTUAL_SIZE",
       round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)||'MB' "FRAGMENTED_SPACE",
       (round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/round(((blocks*16/1024)),2))*100 "PERCENTAGE"
from all_tables
where table_name='&TABLE_NAME';

Note: if you find more than 20% fragmentation, you can proceed with de-fragmentation. Please note that the above query is for a 16k block size; use 8 in place of 16 for an 8k block size. You can also decide whether to de-fragment based on the amount of space the query shows you would reclaim.
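To avoid hard-coding the block size, it can instead be read from dba_tablespaces and joined in; a sketch of that variant:

```sql
select t.table_name,
       round(t.blocks * ts.block_size / 1024 / 1024, 2) || 'MB' total_size,
       round(t.num_rows * t.avg_row_len / 1024 / 1024, 2) || 'MB' actual_size,
       round((t.blocks * ts.block_size - t.num_rows * t.avg_row_len) / 1024 / 1024, 2) || 'MB' fragmented_space
from   all_tables t
join   dba_tablespaces ts on ts.tablespace_name = t.tablespace_name
where  t.table_name = '&TABLE_NAME';
```

This produces the same figures as the query above but works regardless of the tablespace block size.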

STEP 4: Check the indexes on the table 

select index_name from dba_indexes where table_name='&TABLE_NAME'; 

STEP 5: Remove fragmentation by moving the table within the same tablespace. You can also move the table to a different tablespace.

alter table <table_name> move;

-> For moving the table to a different tablespace:

alter table <table_name> enable row movement;
alter table <table_name> move tablespace <new_tablespace>; ----> create a separate tablespace, provide its name in the command, and after that move the table back to its original tablespace

alter table <table_name> move tablespace <old_tablespace>;
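As an alternative to ALTER TABLE ... MOVE, a segment in an ASSM (automatic segment space management) tablespace can be shrunk in place, which keeps the table online and does not mark its indexes unusable; a sketch (the owner and table name are placeholders):

```sql
alter table app_owner.big_table enable row movement;
alter table app_owner.big_table shrink space cascade;
```

The CASCADE option also shrinks the dependent indexes, so the rebuild in the next step can be skipped when this method is used.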

STEP 6: Rebuild the indexes. 

alter index index_name rebuild online;
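Moving a table marks its indexes UNUSABLE, so it can be useful to list exactly which ones need rebuilding:

```sql
select owner, index_name, status
from   dba_indexes
where  table_name = '&TABLE_NAME'
  and  status = 'UNUSABLE';
```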

STEP 7: Gather the statistics:

 EXEC dbms_stats.gather_table_stats(ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', method_opt=> 'for all indexed columns size skewonly', granularity => 'ALL', degree => 8 ,cascade => true,estimate_percent => 15); 

STEP 8: Finally check the table size and percentage of fragmentation: 

select sum(bytes)/1024/1024/1024
from dba_segments
where segment_name='&TABLE_NAME';

select table_name,
       avg_row_len,
       round(((blocks*16/1024)),2)||'MB' "TOTAL_SIZE",
       round((num_rows*avg_row_len/1024/1024),2)||'MB' "ACTUAL_SIZE",
       round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)||'MB' "FRAGMENTED_SPACE",
       (round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/round(((blocks*16/1024)),2))*100 "PERCENTAGE"
from all_tables
where table_name='&TABLE_NAME';


Please note that the above steps are only for non-partitioned tables. For partitioned tables you can move the table partition by partition, or the easiest way is to recreate the table.

Tuesday, 1 August 2023

How to enable trace for a session in the Oracle database

-- Enable tracing for the target session (provide its SID and SERIAL#):
EXECUTE dbms_monitor.session_trace_enable(session_id => &SID, serial_num => &SERIAL, waits => true, binds => true);

-- Check the trace file that will be created:
SELECT s.sid, p.tracefile
FROM v$session s
JOIN v$process p ON s.paddr = p.addr
WHERE s.sid in (&SID);

-- Let it run for some time, and then disable the trace:
EXECUTE dbms_monitor.session_trace_disable(session_id => &SID, serial_num => &SERIAL);
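The raw trace file can then be summarized with tkprof; a sketch (the trace and output file names are placeholders):

```shell
tkprof ORCL_ora_12345.trc trace_report.txt sys=no sort=exeela
```

sort=exeela orders the statements by elapsed execution time, which usually surfaces the problem SQL first.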

How to find the RMAN backup run history or status

set linesize 500 pagesize 2000
col Hours format 9999.99
col STATUS format a10
select SESSION_KEY, INPUT_TYPE, STATUS,
       to_char(START_TIME,'mm-dd-yyyy hh24:mi:ss') as RMAN_Bkup_start_time,
       to_char(END_TIME,'mm-dd-yyyy hh24:mi:ss') as RMAN_Bkup_end_time,
       elapsed_seconds/3600 Hours
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key;

Thursday, 29 December 2022

How to find the high water mark of the table in oracle database

--
-- Show the High Water Mark for a given table, or all tables if ALL is specified for Table_Name.
--

SET LINESIZE 300
SET SERVEROUTPUT ON
SET VERIFY OFF

DECLARE
    CURSOR cu_tables IS
        SELECT a.owner, a.table_name
        FROM all_tables a
        WHERE a.table_name = Decode(Upper('&&Table_Name'),'ALL',a.table_name,Upper('&&Table_Name'))
          AND a.owner = Upper('&&Table_Owner')
          AND a.partitioned = 'NO'
          AND a.logging = 'YES'
        ORDER BY table_name;

    op1 NUMBER;  -- total blocks
    op2 NUMBER;  -- total bytes
    op3 NUMBER;  -- unused blocks
    op4 NUMBER;  -- unused bytes
    op5 NUMBER;  -- last used extent file id
    op6 NUMBER;  -- last used extent block id
    op7 NUMBER;  -- last used block
BEGIN
    Dbms_Output.Disable;
    Dbms_Output.Enable(1000000);
    Dbms_Output.Put_Line('TABLE                          UNUSED BLOCKS   TOTAL BLOCKS    HIGH WATER MARK');
    Dbms_Output.Put_Line('------------------------------ --------------- --------------- ---------------');
    FOR cur_rec IN cu_tables LOOP
        Dbms_Space.Unused_Space(cur_rec.owner, cur_rec.table_name, 'TABLE',
                                op1, op2, op3, op4, op5, op6, op7);
        Dbms_Output.Put_Line(RPad(cur_rec.table_name,30,' ') ||
                             LPad(op3,15,' ') ||
                             LPad(op1,15,' ') ||
                             LPad(Trunc(op1-op3-1),15,' '));
    END LOOP;
END;
/

How to find who locked the database account

 -- Return code 1017 ( INVALID LOGIN ATTEMPT)

 -- Return code 28000 ( ACCOUNT LOCKED)

set pagesize 1299
set lines 299
col username for a15
col userhost for a13
col timestamp for a39
col terminal for a23
SELECT username,userhost,terminal,timestamp,returncode
FROM dba_audit_session
WHERE username='&USER_NAME' and returncode in (1017,28000);
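Once the locking source (the userhost/terminal producing the failed logins) has been identified and fixed, the account can be unlocked; a minimal sketch (the username is a placeholder):

```sql
ALTER USER app_user ACCOUNT UNLOCK;
```

If the password has expired as well, ALTER USER app_user IDENTIFIED BY <new_password> can be issued in the same step.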

Wednesday, 28 December 2022

How to convert a non-CDB database into a CDB

Steps to Migrate Non-Container Database to Container Pluggable Database

Migrating to a CDB database needs downtime; in order to reduce the downtime, we can create the CDB database beforehand.

Details:

cdb database: TCDB011

non-cdb database: VSTORE

version: 19c

Step 1: Shutdown the VSTORE(non-cdb) database 

Step 2: Start the database in read only mode.

SQL> startup open read only
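The mode can be verified before generating the manifest:

```sql
SELECT name, open_mode FROM v$database;
-- open_mode should show READ ONLY
```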

Step 3 - Generate the XML manifest file for the VSTORE database.

SQL> exec DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/vstore_manifest.xml');

Step 4 - Shut down the VSTORE database

SQL> Shutdown immediate

Note - From here, all steps will be performed on the Container database TCDB011

Step 5 - Start the TCDB011 database (container database) if it is not running

Step 6 - Check vstore (non-container database) compatibility with TCDB011 (container database)

SET SERVEROUTPUT ON
DECLARE
compatible CONSTANT VARCHAR2(3) := CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(pdb_descr_file => '/tmp/vstore_manifest.xml')
WHEN TRUE THEN 'YES'
ELSE 'NO'
END;
BEGIN
DBMS_OUTPUT.PUT_LINE(compatible);
END;
/

Step 7 - Check for any errors resulting from the above step
col name for a10
col cause for a10
col type for a20
col message for a80
set lines 230
SELECT name, cause, type, message, status FROM PDB_PLUG_IN_VIOLATIONS WHERE name='VSTORE';

Note: if there are any errors, fix them before proceeding. Common errors include parameter mismatches, component mismatches between the non-CDB and the CDB, and patch-level differences between the non-CDB and the CDB.

Step 8 - Plug the vstore database into container TCDB011

create pluggable database VSTORE using '/tmp/vstore_manifest.xml' NOCOPY tempfile reuse;

COPY - VSTORE remains intact and its data files are untouched; the files for the new PDB are copied to the location given by the FILE_NAME_CONVERT parameter.
NOCOPY - The existing files are used, so after the operation VSTORE is no longer usable because the new PDB uses the same data files.
MOVE - The existing data files are moved to a new location given by FILE_NAME_CONVERT, so after the operation VSTORE is not usable.


Step 9 - Run the PDB conversion script provided by Oracle
sqlplus / as sysdba
alter session set container=VSTORE;
@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

Step 10 - Start PDB and verify the state
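A minimal sketch of this step (using the PDB name as referenced in Step 9):

```sql
ALTER SESSION SET CONTAINER = CDB$ROOT;
ALTER PLUGGABLE DATABASE VSTORE OPEN;
SELECT name, open_mode FROM v$pdbs WHERE name = 'VSTORE';
-- open_mode should show READ WRITE
```

Optionally, ALTER PLUGGABLE DATABASE VSTORE SAVE STATE makes the PDB open automatically whenever the CDB restarts.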

Ask the application team to verify the database by starting the application.

Make the necessary changes in configuration management and OEM, and initiate a fresh backup of the VSTORE database.