DBA Blogs

Sequence functionality without a real sequence

Tom Kyte - Mon, 2021-12-06 13:26
Dear Tom, I am looking for a way to create a function or procedure that works like a sequence. For background: we used a sequence to create a unique key in our application until the sequence reached its limit. Then we found that most of the keys created from the sequence were never used (they were only used temporarily and nothing was stored with them). So we built a table with all the "unused" keys and created a function to get one of these keys each time we need a key. But now we are facing massive row lock contention because of the very high call frequency against that function and table. I am really interested to know if there is a way to create a function or procedure that can cache 1000 entries for all processes. Do you have any idea how to create such functionality, or any other idea to fix our locking issue during access to the table with the keys? best regards Joachim
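One avenue worth testing (a sketch, not a definitive answer): Oracle's own CACHE clause already pre-allocates blocks of sequence values in memory for all processes with no row locking, so recycling "unused" keys through a table may be avoidable entirely. The object names below are hypothetical :

<code>
-- each instance caches 1000 values in the SGA; gaps from discarded keys are harmless
create sequence app_key_seq
  start with 1
  increment by 1
  cache 1000
  nocycle
  maxvalue 999999999999999999999999999;

-- callers simply take the next value
select app_key_seq.nextval from dual;
</code>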
Categories: DBA Blogs

Refreshable Clone PDB -- 3 The ArchiveLog and ArchParLog files

Hemant K Chitale - Sun, 2021-12-05 05:15

 In my previous blog post about Datafile Name mapping for a Refreshable Clone PDB, I had presented only a portion of the trace file for the ALTER PLUGGABLE DATABASE RO_PDB REFRESH command.

Here is the full listing of that trace file (I have already explained the AFN mapping of FILE#s) :


Trace file /opt/oracle/diag/rdbms/cdb2/CDB2/trace/CDB2_ora_6159.trc
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
Build label: RDBMS_19.12.0.0.0DBRU_LINUX.X64_210715
ORACLE_HOME: /opt/oracle/product/19c/dbhome_1
System name: Linux
Node name: ora19cs2
Release: 4.14.35-1902.10.4.1.el7uek.x86_64
Version: #2 SMP Mon Jan 27 14:13:38 PST 2020
Machine: x86_64
Instance name: CDB2
Redo thread mounted by this instance: 1
Oracle process number: 59
Unix process pid: 6159, image: oracle@ora19cs2


*** 2021-12-04T17:39:25.431585+08:00 (RO_PDB(5))
*** SESSION ID:(377.22911) 2021-12-04T17:39:25.431608+08:00
*** CLIENT ID:() 2021-12-04T17:39:25.431613+08:00
*** SERVICE NAME:(CDB2) 2021-12-04T17:39:25.431617+08:00
*** MODULE NAME:(sqlplus@ora19cs2 (TNS V1-V3)) 2021-12-04T17:39:25.431622+08:00
*** ACTION NAME:() 2021-12-04T17:39:25.431626+08:00
*** CLIENT DRIVER:(SQL*PLUS) 2021-12-04T17:39:25.431630+08:00
*** CONTAINER ID:(5) 2021-12-04T17:39:25.431634+08:00

JIT: pid 6159 requesting stop

*** 2021-12-04T17:39:48.052938+08:00 (RO_PDB(5))
Started Serial Media Recovery
This is Foreign Recovery:- HotClone:1 Pre-Plugin:0 HotClone_for_Refresh:1. The Redo going to be applied in Current PDB(Target) will be from Foreign PDB(Source) and there will be file# conversion. Source AFN is from Foreign DB and Target AFN is on Current(local) DB. Dumping Source PDB:3 to Target PDB:5 AFN mapping.
Source AFN:26 = Target AFN:47
Source AFN:32 = Target AFN:49
Source AFN:12 = Target AFN:46
Source AFN:31 = Target AFN:48
Source AFN:9 = Target AFN:43
Source AFN:11 = Target AFN:45
Source AFN:10 = Target AFN:44
Dumping database incarnation table:
Resetlogs 0 scn and time: 0x0000000000f22944 11/30/2021 22:58:02
Dumping PDB pathvec - index 0
0000 : pdb 5, dbinc 3, pdbinc 0
db rls 0x0000000000f22944 rlc 1090018682
incscn 0x0000000000000000 ts 0
br scn 0x0000000000000000 ts 0
er scn 0x0000000000000000 ts 0
0001 : pdb 5, dbinc 2, pdbinc 0
db rls 0x0000000000e88404 rlc 1084312848
incscn 0x0000000000000000 ts 0
br scn 0x0000000000000000 ts 0
er scn 0x0000000000000000 ts 0
Recovery target incarnation = 3, activation ID = 0
Influx buffer limit = 39160 min(50% x 78320, 100000)
Start recovery at thread 1 ckpt scn 16121330 logseq 37 block 41766

*** 2021-12-04T17:39:48.430170+08:00 (RO_PDB(5))
Media Recovery add redo thread 1

*** 2021-12-04T17:39:48.494955+08:00 (RO_PDB(5))
Media Recovery Log /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2021_12_04/o1_mf_1_37_jtpcbxyv_.arc

*** 2021-12-04T17:39:48.868915+08:00 (RO_PDB(5))
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

*** 2021-12-04T17:39:51.447683+08:00 (RO_PDB(5))
Media Recovery Log /opt/oracle/FRA/ORCLCDB/ORCLCDB/foreign_archivelog/ORCLPDB1/2021_12_04/o1_mf_1_38_jtpfttmz_.arc
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

*** 2021-12-04T17:40:04.041793+08:00 (RO_PDB(5))
==== Redo read statistics for thread 1 ====
Total physical reads (from disk and memory): 89402Kb
-- Redo read_disk statistics --
Read rate (SYNC): 89402Kb in 15.61s => 5.59 Mb/sec
Total redo bytes: 89402Kb Longest record: 24Kb, moves: 78/213168 moved: 0Mb (0%)
Longest LWN: 2052Kb, reads: 3089
Last redo scn: 0x0000000000f7e789 (16246665)
Change vector header moves = 31243/386616 (8%)
----------------------------------------------

*** 2021-12-04T17:40:04.041915+08:00 (RO_PDB(5))
Media Recovery drop redo thread 1

*** 2021-12-04T17:40:07.389420+08:00 (RO_PDB(5))
KCBR: Number of read descriptors = 1024
KCBR: Media recovery blocks read (ASYNC) = 650
KCBR: Influx buffers flushed = 40 times
KCBR: Reads = 30 reaps (5 null, 8 wait), 2 all
KCBR: Redo cache copies/changes = 15025/15025

*** 2021-12-04T17:40:07.389466+08:00 (RO_PDB(5))
Completed Media Recovery
----- Abridged Call Stack Trace -----
(deleted by me)


The interesting parts are :
"Start recovery at thread 1 ckpt scn 16121330 logseq 37 block 41766"
and
"Media Recovery Log /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2021_12_04/o1_mf_1_37_jtpcbxyv_.arc"
and
"Media Recovery Log /opt/oracle/FRA/ORCLCDB/ORCLCDB/foreign_archivelog/ORCLPDB1/2021_12_04/o1_mf_1_38_jtpfttmz_.arc"

It identifies the SCN from which it needs to recover transactions from the source CDB (ORCLCDB) and PDB (ORCLPDB1).  In a MultiTenant environment, all the PDBs share the same Redo Thread -- so any ArchiveLog may have transactions of multiple PDBs.
For the purpose of refreshing RO_PDB in CDB2, it identifies the relevant entries in the source CDB ArchiveLogs.  For Log Sequence#38, it explicitly extracts the ORCLPDB1 (not the whole CDB) entries as a "foreign archivelog".  However, this "foreign archivelog" is later automatically purged by ORCLCDB, as I can see in this ORCLCDB alert log entry :

2021-12-04T17:40:07.555215+08:00
Deleted Oracle managed file /opt/oracle/FRA/ORCLCDB/ORCLCDB/foreign_archivelog/ORCLPDB1/2021_12_04/o1_mf_1_38_jtpfttmz_.arc


However, sometimes the Refresh requires reading Redo entries from an ArchiveLog that has not yet been completely generated.  For example, a later Refresh at 22:41 required a partial ArchiveLog for Sequence#41 :

Start recovery at thread 1 ckpt scn 16246668 logseq 0 block 0

*** 2021-12-04T22:41:32.365818+08:00 (RO_PDB(5))
Media Recovery add redo thread 1

*** 2021-12-04T22:41:32.370629+08:00 (RO_PDB(5))
Media Recovery Log /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2021_12_04/o1_mf_1_38_jtpzhl2s_.arc

*** 2021-12-04T22:41:32.470154+08:00 (RO_PDB(5))
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

*** 2021-12-04T22:41:39.259689+08:00 (RO_PDB(5))
Media Recovery Log /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2021_12_04/o1_mf_1_39_jtpzhxck_.arc
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

*** 2021-12-04T22:41:42.187854+08:00 (RO_PDB(5))
Media Recovery Log /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2021_12_04/o1_mf_1_40_jtpzhy18_.arc
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

*** 2021-12-04T22:41:42.773949+08:00 (RO_PDB(5))
Media Recovery Log /opt/oracle/product/19c/dbhome_1/dbs/archparlog_1_41_f1dfc534_1084312848.arc
Log read is SYNCHRONOUS though disk_asynch_io is enabled!


Sequence#41 was still an Online Redo Log at ORCLCDB and had not yet been archived (in fact, the source ORCLCDB FRA was full and ORCLCDB was unable to archive log files).  So, it seems that for Sequence#41 a partial extract (the "archparlog" file above) was obtained in order to Refresh the Clone PDB.
This behaviour doesn't seem to be documented.
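As an aside : rather than digging through trace files, the refresh metadata for the clone is also visible in the data dictionary. A quick sketch (run from CDB$ROOT at CDB2) :

select pdb_name, refresh_mode, refresh_interval, last_refresh_scn
from   dba_pdbs
where  pdb_name = 'RO_PDB'
/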
Categories: DBA Blogs

Refreshable Clone PDB -- 2 Datafile Names

Hemant K Chitale - Sat, 2021-12-04 03:57

 In my previous post, I demonstrated creating a Refreshable Clone PDB.  How do you handle datafile names ?  You can use the FILE_NAME_CONVERT clause to change file names based on substring matching of the file names.  Or you could simply rely on the good-old (old nowadays !) feature / parameter DB_CREATE_FILE_DEST.
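For reference, a FILE_NAME_CONVERT version of the CREATE PLUGGABLE DATABASE command would look like this sketch (the paths are hypothetical; each source/target pair is matched by simple substring replacement) :

create pluggable database ro_pdb from orclpdb1@for_pdb_clones
  refresh mode manual
  file_name_convert=('/opt/oracle/oradata/ORCLCDB/ORCLPDB1/',
                     '/opt/oracle/oradata/CDB2/RO_PDB/')
/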

In the ORCLPDB1 Pluggable Database at ORCLDB (Source),  these are my datafiles :

SQL> connect sys/manager@orclcdb as sysdba
Connected.
SQL> show parameter db_create_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest string /opt/oracle/oradata
SQL> alter session set container=ORCLPDB1;

Session altered.

SQL> col file_name format a98
SQL> select file_id, file_name
2 from dba_data_files
3 order by file_id
4 /

FILE_ID FILE_NAME
---------- --------------------------------------------------------------------------------------------------
9 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf
10 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf
11 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf
12 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf
26 /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf
31 /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf
32 /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf

7 rows selected.

SQL>



Note that DB_CREATE_FILE_DEST can be modified at any time. Here you can see that it was modified before the last 3 tablespaces/datafiles were added to the database.  (The datafiles are automatically sized at an initial 100MB, autoextensible to 32GB).

So, what are the file names in the RO_PDB at CDB2 ?


SQL> connect sys/manager@cdb2 as sysdba
Connected.
SQL> alter pluggable database ro_pdb open read only;

Pluggable database altered.

SQL> show parameter db_create_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest string /opt/oracle/oradata
SQL> alter session set container=RO_PDB;

Session altered.

SQL> col file_name format a98
SQL> select file_id, file_name
2 from dba_data_files
3 order by file_id
4 /

no rows selected

SQL>


HEY ! Why can't I see the data files in RO_PDB ?  This is likely Oracle Bug#30536162.  See MOS Document "Some Data File Missing From DBA_DATA_FILES or CDB_DATA_FILES in Refreshable PDB (Doc ID 2610743.1)".   The workaround is to use V$DATAFILE at the Instance Level.
Let me check that in both instances.

At the source :

SQL> connect sys/manager@ORCLCDB as sysdba
Connected.
SQL> select file#, name
2 from v$datafile
3 where con_id =
4 (select con_id
5 from v$pdbs
6 where name = 'ORCLPDB1')
7 order by 1
8 /

FILE# NAME
------ --------------------------------------------------------------------------------------------------
9 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf
10 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf
11 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf
12 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf
26 /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf
31 /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf
32 /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf

7 rows selected.

SQL>


And, at the target :

SQL> connect sys/manager@cdb2 as sysdba
Connected.
SQL> show parameter db_create_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest string /opt/oracle/oradata
SQL>
SQL> select file#, name
2 from v$datafile
3 where con_id =
4 (select con_id
5 from v$pdbs
6 where name = 'RO_PDB')
7 order by 1
8 /

FILE# NAME
---------- --------------------------------------------------------------------------------------------------
43 /opt/oracle/oradata/CDB2/D240086D48C012D0E0550A00275FC834/datafile/o1_mf_system_jtncqq6j_.dbf
44 /opt/oracle/oradata/CDB2/D240086D48C012D0E0550A00275FC834/datafile/o1_mf_sysaux_jtncqq6n_.dbf
45 /opt/oracle/oradata/CDB2/D240086D48C012D0E0550A00275FC834/datafile/o1_mf_undotbs1_jtncqq6o_.dbf
46 /opt/oracle/oradata/CDB2/D240086D48C012D0E0550A00275FC834/datafile/o1_mf_users_jtncqq6r_.dbf
47 /opt/oracle/oradata/CDB2/D240086D48C012D0E0550A00275FC834/datafile/o1_mf_data_min_jtncqq6s_.dbf
48 /opt/oracle/oradata/CDB2/D240086D48C012D0E0550A00275FC834/datafile/o1_mf_hemant_d_jtncqq6t_.dbf
49 /opt/oracle/oradata/CDB2/D240086D48C012D0E0550A00275FC834/datafile/o1_mf_hemant_d_jtncqq6v_.dbf

7 rows selected.

SQL>


Unlike the source ORCLPDB1 in ORCLCDB, the datafiles in RO_PDB at CDB2 are all OMF files because I had configured DB_CREATE_FILE_DEST before creating RO_PDB.

Why are the FILE_IDs (FILE# in V$DATAFILE) different between the Source PDB and the Target PDB ?  This is because the Source Container Database and Target Container Database may have different numbers of Pluggable Databases, each with different numbers of datafiles.  So, when the Target Pluggable Database is created from the Source PDB, it is allocated FILE# values as they are available in the target Container Database.

For example, these are the numbers at the Source (4 CON_IDs, 18 Datafiles) :

SQL> select con_id, file#
2 from v$datafile
3 order by 1,2
4 /

CON_ID FILE#
---------- ------
1 1
1 3
1 4
1 7
2 5
2 6
2 8
3 9
3 10
3 11
3 12
3 26
3 31
3 32
4 27
4 28
4 29
4 30

18 rows selected.

SQL>
SQL> select con_id
2 from v$pdbs
3 where name = 'ORCLPDB1'
4 /

CON_ID
----------
3

SQL>


And these are at the Target (5 CON_IDs, 23 datafiles) :

SQL> select con_id, file#
2 from v$datafile
3 order by 1,2
4 /

CON_ID FILE#
---------- ----------
1 1
1 3
1 4
1 7
2 5
2 6
2 8
3 9
3 10
3 11
3 12
3 26
4 27
4 28
4 29
4 30
5 43
5 44
5 45
5 46
5 47
5 48
5 49

23 rows selected.

SQL> select con_id
2 from v$pdbs
3 where name = 'RO_PDB'
4 /

CON_ID
----------
5

SQL>


The Source ORCLPDB1 is CON_ID=3 at ORCLCDB and the Target RO_PDB is CON_ID=5 at CDB2.

If I refresh RO_PDB from the source

SQL> alter pluggable database ro_pdb close;

Pluggable database altered.

SQL> alter pluggable database ro_pdb refresh;

Pluggable database altered.

SQL> alter pluggable database ro_pdb open read only;

Pluggable database altered.

SQL>


I find this in the session's trace file at CDB2 (the Target doing the Refresh) :

Trace file /opt/oracle/diag/rdbms/cdb2/CDB2/trace/CDB2_ora_6159.trc
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
Build label: RDBMS_19.12.0.0.0DBRU_LINUX.X64_210715
ORACLE_HOME: /opt/oracle/product/19c/dbhome_1
System name: Linux
Node name: ora19cs2
Release: 4.14.35-1902.10.4.1.el7uek.x86_64
Version: #2 SMP Mon Jan 27 14:13:38 PST 2020
Machine: x86_64
Instance name: CDB2
Redo thread mounted by this instance: 1
Oracle process number: 59
Unix process pid: 6159, image: oracle@ora19cs2


*** 2021-12-04T17:39:25.431585+08:00 (RO_PDB(5))
*** SESSION ID:(377.22911) 2021-12-04T17:39:25.431608+08:00
*** CLIENT ID:() 2021-12-04T17:39:25.431613+08:00
*** SERVICE NAME:(CDB2) 2021-12-04T17:39:25.431617+08:00
*** MODULE NAME:(sqlplus@ora19cs2 (TNS V1-V3)) 2021-12-04T17:39:25.431622+08:00
*** ACTION NAME:() 2021-12-04T17:39:25.431626+08:00
*** CLIENT DRIVER:(SQL*PLUS) 2021-12-04T17:39:25.431630+08:00
*** CONTAINER ID:(5) 2021-12-04T17:39:25.431634+08:00

JIT: pid 6159 requesting stop

*** 2021-12-04T17:39:48.052938+08:00 (RO_PDB(5))
Started Serial Media Recovery
This is Foreign Recovery:- HotClone:1 Pre-Plugin:0 HotClone_for_Refresh:1. The Redo going to be applied in Current PDB(Target) will be from Foreign PDB(Source) and there will be file# conversion. Source AFN is from Foreign DB and Target AFN is on Current(local) DB. Dumping Source PDB:3 to Target PDB:5 AFN mapping.
Source AFN:26 = Target AFN:47
Source AFN:32 = Target AFN:49
Source AFN:12 = Target AFN:46
Source AFN:31 = Target AFN:48
Source AFN:9 = Target AFN:43
Source AFN:11 = Target AFN:45
Source AFN:10 = Target AFN:44
.... and there is more information in the trace file which I will cover in my next Blog Post


The Source FILE# values (from ORCLPDB1 at ORCLCDB) are : 26, 32, 12, 31, 9, 11, 10
The corresponding Target FILE# values (in RO_PDB at CDB2) are : 47, 49, 46, 48, 43, 45, 44

To match them, check that the single SYSTEM tablespace datafile is FILE#9 in ORCLPDB1 and FILE#43 in RO_PDB.
The single USERS tablespace datafile is FILE#12 in ORCLPDB1 and FILE#46 in RO_PDB.
The two HEMANT_DATA tablespace datafiles are FILE#s 31, 32 in ORCLPDB1 and FILE#s 48, 49 in RO_PDB.
Quite unfortunately, it seems that the mapping information in the Trace File is not ordered by FILE#.

The trace file for the ALTER PLUGGABLE DATABASE RO_PDB has more information which I will cover in the next Blog Post.

Categories: DBA Blogs

Transition from single tenant to Multitenant database

Tom Kyte - Fri, 2021-12-03 12:06
Hi TOM, I have been working with single tenant databases from the start, 8i up to Database version 11.2. Now I need to work with multitenant databases, 12c and above. I have installed a 19c database, but I am having difficulty moving around (login etc). Can you point me to some resources which will make the transition from single tenant to multitenant database clear? Thanks Fahd
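For anyone making the same transition, day-to-day navigation comes down to a handful of commands; a sketch (the PDB name ORCLPDB1 is just the default-install example, and the host/port are hypothetical) :

<code>
-- where am I ? (CDB$ROOT or a PDB name)
show con_name

-- which PDBs exist, and are they open ?
show pdbs

-- switch container within a privileged session
alter session set container = ORCLPDB1;

-- or connect directly to the PDB via its service name :
-- sqlplus scott/tiger@//dbhost:1521/orclpdb1
</code>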
Categories: DBA Blogs

Character set of processed file

Tom Kyte - Fri, 2021-12-03 12:06
Dear Asktom team, We are processing external tables, where it is agreed that the incoming files are in character set EE8MSWIN1250. Nevertheless, it may happen that an incoming file has a different character set. The file is processed, but the characters are scrambled, which leads to unwanted results. Is it possible to check the character set of the file from PL/SQL before processing? Thanks, Dusan
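There is no general-purpose character set detector in PL/SQL, but one partial check is possible: read the first bytes of the file and look for a Unicode byte-order mark, which at least catches files accidentally delivered as UTF-8 or UTF-16. A hedged sketch (the directory object DATA_DIR and the file name are hypothetical) :

<code>
declare
  l_file utl_file.file_type;
  l_raw  raw(3);
begin
  -- open in byte mode ('rb') so no character conversion happens
  l_file := utl_file.fopen('DATA_DIR', 'incoming.csv', 'rb');
  utl_file.get_raw(l_file, l_raw, 3);
  utl_file.fclose(l_file);

  if l_raw = hextoraw('EFBBBF') then
    dbms_output.put_line('UTF-8 BOM found - not EE8MSWIN1250');
  elsif utl_raw.substr(l_raw, 1, 2) in (hextoraw('FFFE'), hextoraw('FEFF')) then
    dbms_output.put_line('UTF-16 BOM found - not EE8MSWIN1250');
  else
    dbms_output.put_line('no BOM - character set cannot be proven from content');
  end if;
end;
/
</code>

A file without a BOM cannot be reliably identified from its bytes alone, so the stronger fix is usually to validate the data after load.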
Categories: DBA Blogs

Problem with the pivot and unpivot functions - is it possible to merge numbers with text?

Tom Kyte - Thu, 2021-12-02 17:46
Hello everyone, I am asking for help to solve a problem with the pivot and unpivot functions. Is it possible to merge numbers with text? In the result I would like to add the ects_cod row alongside the numeric columns. Thank you <code>
CDS_COD;VALORE;31;30;29;28;27;26;25;24;23;22;21;20;19;18
AG0060;vote;31;30;29;28;27;26;25;24;23;22;21;20;19;18
AG0060;tot;297;655;509;731;632;583;496;427;316;282;226;214;142;169
AG0060;%;5;12;9;11;11;10;9;8;6;5;4;4;3;3
AG0061;ects_cod;A;A-B;B;B-C;C;C;C-D;D;D;D;D;E;E;E
</code>
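The usual obstacle in such cases is that a pivoted column can hold only one datatype, so numeric rows and the text ects_cod row cannot share output columns until the numbers are converted to text. A minimal self-contained sketch of the idea (hypothetical rows, only one position column shown) :

<code>
with src (cds_cod, valore, pos, val) as (
  select 'AG0060', 'tot',      31, to_char(297) from dual union all
  select 'AG0060', '%',        31, to_char(5)   from dual union all
  select 'AG0061', 'ects_cod', 31, 'A'          from dual
)
select *
from   src
pivot  (max(val) for pos in (31 as c31))
order  by cds_cod, valore
/
</code>

Because val is TO_CHAR'd everywhere, the text grades and the numeric totals can occupy the same VARCHAR2 columns.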
Categories: DBA Blogs

sql tuning advisor

Tom Kyte - Wed, 2021-12-01 05:06
Hi Tom, I am not able to understand one thing. I have two test databases on separate hardware. Both are server-class installations of Enterprise Edition, with all default optimizer settings. In one database, say db1, when I run the DBMS_SQLTUNE advisor it says a better execution plan exists and suggests that I accept the new plan; when I accept the new plan, the response time is reduced by half. In the second database the advisor says there is no better plan for the same query, and the costs are different in the two databases. Could you please help. Regards.
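For readers wanting to reproduce the comparison, the advisor run being described is roughly the following (a sketch; the sql_id is hypothetical, and the task name is whatever CREATE_TUNING_TASK returns) :

<code>
declare
  l_task varchar2(128);
begin
  -- create and run a tuning task for one cursor in the shared pool
  l_task := dbms_sqltune.create_tuning_task(
              sql_id     => 'abcd1234efgh5',   -- hypothetical sql_id
              time_limit => 300);              -- seconds
  dbms_sqltune.execute_tuning_task(task_name => l_task);
  dbms_output.put_line('task: ' || l_task);
end;
/

-- then read the findings (profile recommendations, alternate plans, etc.)
select dbms_sqltune.report_tuning_task('&task_name') from dual;
</code>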
Categories: DBA Blogs

Refreshable Clone PDB -- 1

Hemant K Chitale - Tue, 2021-11-30 09:51

 The facility of creating Clone PDBs that are Refreshable, either "on-demand" or to a set schedule, was introduced in 12.2.  A Refreshable PDB can be opened only as a Read-Only database.

This is somewhat similar to having a Standby Database that can be opened Read-Only for queries and refreshed on demand (without using the Active Data Guard License).

Here is a quick demo in 19.12 on Linux.

First, on the Source database ORCLCDB I verify that I have the required configuration of Local Undo and ArchiveLog mode, and then set up a Common account with the requisite privileges:



oracle19c>sqlplus sys/manager@ORCLCDB as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Nov 30 23:01:41 2021
Version 19.12.0.0.0

Copyright (c) 1982, 2021, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0

SQL> col property_value format a16
SQL> select property_value, log_mode, force_logging
2 from database_properties, v$database
3 where property_name = 'LOCAL_UNDO_ENABLED'
4 /

PROPERTY_VALUE LOG_MODE FORCE_LOGGING
---------------- ------------ ---------------------------------------
TRUE ARCHIVELOG YES

SQL>
SQL> alter pluggable database orclpdb1 open -- verify or open the source pdb
2 /
alter pluggable database orclpdb1 open -- verify or open the source pdb
*
ERROR at line 1:
ORA-65019: pluggable database ORCLPDB1 already open


SQL>
SQL> create user c##for_pdb_clones identified by for_pdb_clones
2 default tablespace users temporary tablespace temp
3 container=ALL -- a common user for all PDBs
4 /

User created.

SQL> grant create session, create pluggable database, sysoper -- privileges that are required (not DBA Role)
2 to c##for_pdb_clones
3 container=ALL -- across all PDBs
4 /

Grant succeeded.

SQL>
SQL> select con_id, privilege, common
2 from cdb_sys_privs -- check across all PDBs
3 where grantee = 'C##FOR_PDB_CLONES'
4 order by 1,2
5 /

CON_ID PRIVILEGE COM
---------- ---------------------------------------- ---
1 CREATE PLUGGABLE DATABASE YES
1 CREATE SESSION YES
3 CREATE PLUGGABLE DATABASE YES
3 CREATE SESSION YES

SQL>


Next, on the Target Database CDB2 (where I want to create the Clone PDB), I set up a Database Link and then issue the CREATE PLUGGABLE DATABASE command



oracle19c>sqlplus sys/manager@CDB2 as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Nov 30 23:06:41 2021
Version 19.12.0.0.0

Copyright (c) 1982, 2021, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0

SQL> create database link for_pdb_clones -- create in the Root CDB
2 connect to c##for_pdb_clones identified by for_pdb_clones using 'ORCLCDB'
3 /

Database link created.

SQL> create pluggable database ro_pdb -- the cloned pdb will be Read-Only
2 from orclpdb1@for_pdb_clones -- create from pluggable database orclpd1
3 refresh mode manual -- not specifying an auto-refresh interval
4 -- file_name_convert=('ORCLCDB','CDB2') -- not required as I am using OMF with db_create_file_dest
5 /

Pluggable database created.

SQL>
SQL> alter pluggable database ro_pdb open -- this should be opened Read-Only
2 /
alter pluggable database ro_pdb open -- this should be opened Read-Only
*
ERROR at line 1:
ORA-65341: cannot open pluggable database in read/write mode


SQL>
SQL> alter pluggable database ro_pdb open read only -- this will succeed
2 /

Pluggable database altered.

SQL>


So, now the Source PDB ORCLPDB1 has been cloned to CDB2 as a new Pluggable Database RO_PDB.

How do I refresh it ?  I will update the Source and then query the RO_PDB at CDB2


SQL> connect hemant/hemant@ORCLPDB1
ERROR:
ORA-28002: the password will expire within 7 days


Connected.
SQL> create table list_of_objects
2 as
3 select * from dba_objects
4 /

Table created.

SQL> select count(*) from list_of_objects
2 /

COUNT(*)
----------
73645

SQL>
SQL> connect hemant/hemant@ro_pdb
ERROR:
ORA-28032: Your password has expired and the database is set to read-only


Warning: You are no longer connected to ORACLE.
SQL>
--- cannot connect to the Read Only database with an expired password
SQL> connect sys/manager@CDB2 as sysdba
Connected.
SQL> alter session set container=RO_PDB;

Session altered.

SQL> select count(*)
2 from dba_objects
3 where owner = 'HEMANT'
4 /

COUNT(*)
----------
32

SQL> select max(created)
2 from dba_objects
3 where owner = 'HEMANT'
4 /

MAX(CREAT
---------
04-SEP-21

SQL>
SQL> desc hemant.list_of_objects
ERROR:
ORA-04043: object hemant.list_of_objects does not exist


SQL>


So, the RO_PDB does not have the new table and does not even allow login with an Expired Password. I will first update my password at the source ORCLPDB1 and then refresh RO_PDB and query again.

SQL> connect hemant/hemant@ORCLPDB1
ERROR:
ORA-28002: the password will expire within 7 days


Connected.
SQL> password hemant
Changing password for hemant
Old password:
New password:
Retype new password:
Password changed
SQL>
SQL> connect sys/manager@CDB2 as sysdba
Connected.
SQL> alter pluggable database ro_pdb close; -- must CLOSE to allow a REFRESH

Pluggable database altered.

SQL> alter pluggable database ro_pdb refresh; -- REFRESH command

Pluggable database altered.

SQL> alter pluggable database ro_pdb open read only;

Pluggable database altered.

SQL>
SQL> connect hemant/newhemant@ro_pdb
Connected.
SQL> select count(*) from list_of_objects;

COUNT(*)
----------
73645

SQL>


The Refresh of the RO_PDB is a simple process. I simply CLOSE it, issue an ALTER ... REFRESH command and then OPEN READ ONLY again.
Now, RO_PDB allows connections with the new Password and has the updated data *as of the time the REFRESH command is issued*.   Any new data populated in the source, or any change made in the source, is not available in RO_PDB until a fresh REFRESH is executed.
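Since RO_PDB was created with REFRESH MODE MANUAL, every refresh needs this CLOSE / REFRESH / OPEN READ ONLY cycle. As a sketch of the alternative (the 60-minute interval here is arbitrary), the clone can also be switched to scheduled refreshes, which are applied while it is not open :

alter pluggable database ro_pdb close;
alter pluggable database ro_pdb refresh mode every 60 minutes;

-- and to revert to on-demand refreshes :
alter pluggable database ro_pdb refresh mode manual;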

Categories: DBA Blogs

While studying RedoLogs OP Code, I found uncommon OP Code

Tom Kyte - Sun, 2021-11-28 22:06
Hello, I am studying and analyzing Oracle Redo Logs. They have a pattern for each DML, such as OP 5.1 / OP 11.2 for an INSERT statement. A few days ago, I found new OP Codes that I had not seen before: OP 11.22 and 11.28. Unfortunately, the only information I have about the database producing that case is that it consists of ASM and a 3-node RAC. It would be appreciated if you have opinions on those OP Codes.
Categories: DBA Blogs

weather prediction

Tom Kyte - Sun, 2021-11-28 22:06
What info do weather forecasters have that makes them not agree with the computer models?
Categories: DBA Blogs

Checking maximum usage for SGA and PGA

Tom Kyte - Sun, 2021-11-28 22:06
Hello Tom, Is there a way to find out the maximum SGA and PGA ever used in an instance? We need to do a strict assessment to check whether the SGA and PGA assigned by us are being completely utilised or not. If not 100% utilised, how do we find that out, so that we can release the unused memory back to the OS? In a nutshell, how do we find the maximum SGA and PGA ever used by an instance? Similar to checking the maximum temp ever used, is it possible to get those details for SGA and PGA? Thanks, Vaibhav
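For PGA the instance tracks a high-water mark directly; for the SGA, the allocated size is what you configured, so the useful check is how much of it is actually free. A hedged starting point (the 'maximum PGA allocated' statistic covers the time since instance startup; for history across restarts you would need AWR tables such as DBA_HIST_PGASTAT) :

<code>
-- PGA: current and maximum ever allocated since startup
select name, round(value/1024/1024) mb
from   v$pgastat
where  name in ('total PGA allocated', 'maximum PGA allocated');

-- SGA: total size, and free memory within each pool
select round(sum(value)/1024/1024) sga_mb from v$sga;

select pool, round(bytes/1024/1024) free_mb
from   v$sgastat
where  name = 'free memory';
</code>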
Categories: DBA Blogs

Replacing BMC Fast Unload with another software

Tom Kyte - Sun, 2021-11-28 22:06
Hello, Any advice or suggestions on what tool or software (3rd party) I can use to replace BMC Fast Unload? I have several scripts running that use it, but with Oracle upgrades I sometimes have problems with the software (compatibility, I guess). I learned that BMC Fast Unload is no longer supported, so I am looking for a solution. I am new to the team and only familiar with running queries/SQL. Just excited and wanting to know if there is a better or best software to replace the one used in the scripts, hopefully something that won't take much effort or has minor impact on the scripts. Appreciate any advice/help you can provide.
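Before buying another 3rd-party unloader, one built-in option worth testing is SQL*Plus's CSV markup mode, available from 12.2 onward; a sketch with a hypothetical table and spool file :

<code>
set markup csv on delimiter , quote on
set feedback off termout off
spool /tmp/employee.csv
select * from employee;
spool off
set markup csv off
</code>

Data Pump (expdp/impdp) is the other stock answer when the target is another Oracle database rather than a flat file.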
Categories: DBA Blogs

How can a DBA excel in a Zero Trust environment?

Tom Kyte - Sun, 2021-11-28 22:06
Hi, I support a comparatively small project of around twenty OLTP instances on AWS. My customer expects both full auditing and least necessary privileges in all environments -- which hasn't been a problem for code development and artifact promotion. My team lead explains Zero Trust, however, that even with full-monty auditing of everything, all DBA activity outside of development is limited to pre-written scripts. No SQLcli, no TOAD, no SQL*Developer, no Putty. And because we're homed on AWS, SYS or SYSDBA commands are available only through the RDSADMIN account. I'm asking to learn from you experts whether this description is typical of a Zero Trust shop; and if not, point me toward sources that might help improve my options. TIA.
Categories: DBA Blogs

Question from USER399341 for Database In-Memory Office Hours - 2021-11-24

Tom Kyte - Wed, 2021-11-24 08:06
The following question was asked but got attached to the Nov 24th session, which was re-scheduled to this Nov 18th session: Question from USER399341 for Database In-Memory Office Hours - 2021-11-24. I have a table with all transaction amounts (both +ve and -ve amounts). I have to find a combination of these amounts that equates to a given sum transaction amount stored in another table. There will be tens of thousands of transactions in the first table. I am using Oracle 19c. Appreciate your help.
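This is the classic subset-sum problem, which a recursive WITH clause can express directly; note the search space grows exponentially, so on tens of thousands of rows it needs aggressive pre-filtering. A self-contained sketch with hypothetical amounts and a target of 13 :

<code>
with txns (id, amt) as (
  select 1, 10 from dual union all
  select 2, -4 from dual union all
  select 3,  7 from dual
),
combos (ids, last_id, total) as (
  -- anchor : each single transaction starts a combination
  select cast(id as varchar2(500)), id, amt from txns
  union all
  -- extend only with higher ids so each subset is generated once
  select c.ids || ',' || t.id, t.id, c.total + t.amt
  from   combos c
  join   txns t on t.id > c.last_id
)
select ids, total
from   combos
where  total = 13   -- in practice, the sum from the other table
/
</code>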
Categories: DBA Blogs

Unnest a nested table with the extracted data in single row

Tom Kyte - Tue, 2021-11-23 13:46
Hi, I have a nested table with three columns within the nested column, and multiple entries for the same ID within it. I want to unnest this table and get those entries as separate columns in a single row. How do I do it? Below is the code:
<code>
create or replace TYPE "TEST" AS OBJECT
( point     NUMBER(3),
  latitude  NUMBER(10),
  longitude NUMBER(10)
)
/

create or replace TYPE "TESTS" IS TABLE OF TEST
/

CREATE TABLE TEST_TABLE
( "ID"       NUMBER(3,0),
  "LOCATION" "SYS"."TESTS"
)
SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
        PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
        BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM"
NESTED TABLE "LOCATION" STORE AS "LOCATIONS"
( PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
          PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
          BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  NOCOMPRESS TABLESPACE "SYSTEM"
) RETURN AS VALUE;

Insert into TEST_TABLE (ID, LOCATION) values
(161, SYS.TESTS(SYS.TEST(0, 4009716, 50056416),
                SYS.TEST(1, 4324450, 51769233),
                SYS.TEST(2, 5570283, 51604983),
                SYS.TEST(3, 5845666, 49989300)));

Insert into TEST_TABLE (ID, LOCATION) values
(162, SYS.TESTS(SYS.TEST(0, 4862133, 43994149),
                SYS.TEST(1, 3183550, 43960533),
                SYS.TEST(2, 3970383, 45314300),
                SYS.TEST(3, 5032600, 44909200)));
</code>
Expected Output:
<code>
ID  POINT1 LATITUDE1 LONGITUDE1 POINT2 LATITUDE2 LONGITUDE2 POINT3 LATITUDE3 LONGITUDE3 POINT4 LATITUDE4 LONGITUDE4
--- ------ --------- ---------- ------ --------- ---------- ------ --------- ---------- ------ --------- ----------
161      0   4009716   50056416      1   4324450   51769233      2   5570283   51604983      3   5845666   49989300
162      0   4862133   43994149      1   3183550   43960533      2   3970383   45314300      3   5032600   44909200
</code>
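One way to get that shape (a sketch against the objects above, not a tested answer): unnest with the TABLE() operator, duplicate the point value so it survives the pivot, then PIVOT on the point number :

<code>
select *
from  (select tt.id, l.point, l.point as pt, l.latitude, l.longitude
       from   test_table tt,
              table(tt.location) l)
pivot (max(pt)        as point,
       max(latitude)  as latitude,
       max(longitude) as longitude
       for point in (0 as p1, 1 as p2, 2 as p3, 3 as p4))
order by id
/
</code>

The result has one row per ID with columns P1_POINT, P1_LATITUDE, P1_LONGITUDE, and so on, which can be aliased to the POINTn / LATITUDEn / LONGITUDEn names in the expected output.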
Categories: DBA Blogs

Json_Transform in Oracle 21c

Tom Kyte - Tue, 2021-11-23 13:46
Team, Given this JSON, how do I increment the quantity by two in each level? I tried the below using json_transform but ended up with an error. Is that possible using the json_transform function? Kindly help.
<code>
demo@XEPDB1> select json_serialize(y pretty) y
  2  from t
  3  where x = 2;

Y
----------------------------------------
{
  "produce" :
  [
    {
      "fruit" : "apple",
      "quantity" : 10
    },
    {
      "fruit" : "orange",
      "quantity" : 15
    }
  ]
}

demo@XEPDB1> select json_serialize(
  2    json_transform( y, set '$.produce[*].quantity' =
  3      '$.produce[*].quantity' + 2 ) pretty )
  4  from t
  5  where x = 2;
    '$.produce[*].quantity' + 2 ) pretty )
    *
ERROR at line 3:
ORA-01722: invalid number

demo@XEPDB1>
</code>
Was able to get this done using JSON object types from PL/SQL. Would like to know if the same can be done using the JSON_TRANSFORM function in SQL?
<code>
demo@XEPDB1> create or replace function update_json( p_input json )
  2  return json
  3  as
  4    l_data  json_object_t;
  5    l_size  number := 0;
  6    l_ele   json_element_t;
  7    l_array json_array_t;
  8    l_obj   json_object_t;
  9    l_qty   number := 0;
 10  begin
 11    l_data := json_object_t( p_input );
 12
 13    if l_data.has('produce') then
 14      l_ele := l_data.get('produce');
 15      if l_ele.is_array then
 16        l_size := l_ele.get_size() - 1;
 17        l_array := json_array_t( l_ele );
 18        for i in 0..l_size
 19        loop
 20          l_obj := treat( l_array.get(i) as json_object_t );
 21          l_qty := l_obj.get_number('quantity');
 22          l_obj.put( 'quantity', l_qty + 2 );
 23        end loop;
 24      end if;
 25    end if;
 26    return l_data.to_json;
 27  end;
 28  /

Function created.

demo@XEPDB1> select json_serialize(y)
  2  from t t1
  3  where x = 2;

JSON_SERIALIZE(Y)
--------------------------------------------------------------------------------------
{"produce":[{"fruit":"apple","quantity":10},{"fruit":"orange","quantity":15}]}

demo@XEPDB1> select update_json(y)
  2  from t t1
  3  where x = 2;

UPDATE_JSON(Y)
--------------------------------------------------------------------------------
{"produce":[{"fruit":"apple","quantity":12},{"fruit":"orange","quantity":17}]}

demo@XEPDB1>
</code>
Categories: DBA Blogs

SQL Server Migration Assistant for Oracle

Hemant K Chitale - Sat, 2021-11-20 08:08

 Here's a Video Demo of the SQL Server Migration Assistant for Oracle   https://youtu.be/zNTF1ncr45g  


The tool is available for download here

Categories: DBA Blogs

How to change the file name dynamically for an external table creation (ex ABC_YYYYMMDDHH24MISS)

Tom Kyte - Fri, 2021-11-19 18:06
Team, <i>The version of Oracle we use is "Oracle Database 19c Enterprise Edition Release 19.0.0.0.0".</i> Let's consider our basic scott.emp table for this scenario, limited to a few columns. The external table creation script goes something like this:
<code>
CREATE TABLE employee
( empno NUMBER(4,0),
  ename VARCHAR2(10),
  job   VARCHAR2(9),
  mgr   NUMBER(4,0)
)
ORGANIZATION EXTERNAL
( TYPE oracle_loader
  DEFAULT DIRECTORY scott
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    SKIP 1
    FIELDS TERMINATED BY ","
    MISSING FIELD VALUES ARE NULL
    REJECT ROWS WITH ALL NULL FIELDS
  )
  LOCATION ( 'EMPLOYEE_20211116203018.csv' )
)
REJECT LIMIT UNLIMITED;
</code>
The file name passed over here is 'EMPLOYEE_20211116203018.csv'. Its structure is EMPLOYEE_YYYYMMDDHH24MISS: YYYY = 2021, MM = 11, DD = 16, HH24MISS = 203018. If the file name had a suffix of only YYYYMMDD, this could have been handled with dynamic SQL (EXECUTE IMMEDIATE), building the name like:
<code>
select 'EMPLOYEE_' || to_char(sysdate,'YYYYMMDD') || '.csv' AS tablename from dual;
</code>
and substituting that value into the LOCATION clause of the CREATE TABLE statement above. But unfortunately the file name includes HH24MISS along with the YYYYMMDD part, and the timestamp is not a fixed value, as it may vary at least in the minutes and seconds. Any code snippet to handle the timestamp as well in the external table creation script is most welcome and appreciated. Regards, Satyam Reddy.
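Since the actual timestamp can't be recomputed from SYSDATE, the workable pattern is to keep one external table and repoint it at each arriving file by its exact name. A sketch (the procedure name is hypothetical; the caller -- e.g. the shell script that receives the file -- supplies the real name) :

<code>
create or replace procedure set_employee_file( p_filename in varchar2 )
as
begin
  -- repoint the existing external table at the newly arrived file
  execute immediate
    'alter table employee location (''' || p_filename || ''')';
end;
/

exec set_employee_file('EMPLOYEE_20211116203018.csv')

-- or, from 12.2 onward, override the location for a single query :
select * from employee
  external modify ( location ('EMPLOYEE_20211116203018.csv') );
</code>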
Categories: DBA Blogs

Move data to warehouse

Tom Kyte - Thu, 2021-11-18 23:46
Looking for the best way to move data from an Oracle 19c OLTP system (size 13TB) to an Oracle 19c warehouse DB. Currently using a standby database to fetch the modified rows and add them to the warehouse system. This process is taking too much time.
Categories: DBA Blogs
