Feed aggregator

Recover Dropped User Using Standby Database

Michael Dinh - Thu, 2021-10-21 19:18

A critical user was accidentally dropped around one week ago and needs to be recovered.

Here are semi-detailed steps used to flash back the standby database, export the user from the standby, and import it into the primary.

============================================================
### Recover user that was dropped about a week ago.
============================================================

Request Created: 14/Oct/2021 6:41 AM

DB version: 12.1.0.2.0 

==============================
### CURRENT FLASHBACK INFO:
==============================

SQL>

NAME  DB_UNIQUE_NAME OPEN_MODE DATABASE_ROLE
----- -------------- --------- ----------------
ORC1  STANDBY1       MOUNTED   PHYSICAL STANDBY

 INST OLDEST_FLASHBACK_SCN OLDEST_FLASHBACK_TIME RETENTION_TARGET FLASHBACK_MB EST_FLASHBACK_MB
----- -------------------- --------------------- ---------------- ------------ ----------------
    1 984681951010         04-OCT-2021 18:44:28  7200             1767000      895861

CURRENT_SCN
------------
985044762265

SQL>

==============================
### STOP STANDBY APPLY:
==============================

DGMGRL> edit database 'STANDBY1' set state='APPLY-OFF';
Succeeded.

DGMGRL>

========================================
### FLASHBACK USING TIMESTAMP FAILED:
========================================

SQL> shutdown abort;
SQL> startup mount restrict exclusive;

SQL> flashback database to timestamp TO_TIMESTAMP('2021-10-05 00:00:00','YYYY-MM-DD HH24:MI:SS');
flashback database to timestamp TO_TIMESTAMP('2021-10-05 00:00:00','YYYY-MM-DD HH24:MI:SS')
*
ERROR at line 1:
ORA-38729: Not enough flashback database log data to do FLASHBACK.
SQL>

========================================
### FLASHBACK USING SCN: 
to_timestamp cannot be converted to an SCN precisely, so use the SCN from the flashback info directly
========================================

SQL> flashback database to scn 984681951011;
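
For reference, the target SCN 984681951011 is simply OLDEST_FLASHBACK_SCN + 1 from the flashback info above. A quick sanity check (a sketch using the values shown, not part of the original steps) confirms the target lies inside the flashback window before attempting the flashback:

```shell
# Check whether a flashback target SCN lies inside the window reported by
# v$flashback_database_log (bounds taken from the query output above).
scn_in_window() {  # usage: scn_in_window TARGET_SCN OLDEST_SCN CURRENT_SCN
  [ "$1" -ge "$2" ] && [ "$1" -le "$3" ]
}

# Values from the standby above: oldest=984681951010, current=985044762265.
if scn_in_window 984681951011 984681951010 985044762265; then
  echo "target SCN 984681951011 is within the flashback window"
fi
```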

========================================
### FLASHBACK WILL EVENTUALLY FAIL:
========================================

SQL> flashback database to scn 984681951011;

flashback database to scn 984681951011
*
ERROR at line 1:
ORA-38861: flashback recovery stopped before reaching recovery target

SQL> 

========================================
### CHECK ALERT LOG:
========================================

Thu Oct 14 13:59:52 2021
Errors in file /u01/app/oracle/diag/rdbms/STANDBY1/ORC1/trace/ORC1_pr00_12838.trc:
ORA-00283: recovery session canceled due to errors
ORA-38861: flashback recovery stopped before reaching recovery target
ORA-16016: archived log for thread 1 sequence# 477183 unavailable
ORA-38861: signalled during: flashback database to scn 984681951011...

========================================
### RESTORE ARCHIVELOG FROM ALERT LOG:
========================================

RMAN> restore archivelog logseq 477183;

========================================
### CONTINUE FLASHBACK:
========================================

SQL> flashback database to scn 984681951011;

========================================
### DO NOT OPEN READ ONLY
WITHOUT CANCELING REDO APPLY, UNLESS LICENSED FOR ADG.
========================================

Active Data Guard enables read-only access to a physical standby database while Redo Apply is active.

SQL> recover managed standby database cancel;
SQL> alter database open read only;

========================================
### CHECK FOR DROPPED USER:
========================================

SQL> select created, username from dba_users where username='XXX';

CREATED                    USERNAME
-------------------------- --------
2008-SEP-18 20:19:33       XXX
SQL>

========================================
### USING DATAPUMP DID NOT WORK:
========================================

How To Use DataPump Export (EXPDP) To Export From Physical Standby Database (Doc ID 1356592.1)

--- For physical standby
We can execute exp in physical standby database when it is in read only
https://dbaminds.wordpress.com/2016/01/07/perform-export-expdp-from-physical-standby-and-logical-standby/

--- Use Snapshot Standby - did not test.
https://dohdatabase.com/2021/04/22/datapump-export-from-data-guard/

========================================
### EXP WORKED: PERFORMED BY CLIENT
========================================

### From STANDBY:
$ cat exp.par
file=exp.dmp
compress=n
grants=y
indexes=y
direct=y
log=exp.log
rows=y
consistent=y
owner=schema
triggers=y
constraints=y

exp userid=system parfile=exp.par
scp exp.dmp oracle@target:/home/oracle/

### From PRIMARY:
$ cat imp.par
file=exp.dmp
grants=y
indexes=y
rows=y
log=imp.log
fromuser=schema
commit=n
constraints=y
compile=y

imp userid=system parfile=imp.par

========================================
### RECOVER STANDBY TO CURRENT TIME:
========================================

SQL> recover managed standby database using current logfile parallel 4 disconnect;
Media recovery complete.
SQL>

========================================
### ARCHIVELOG RESTORED TO FRA IS BEING DELETED:
========================================

Example:
Thu Oct 14 18:02:30 2021
Deleted Oracle managed file +FRA/STANDBY1/ARCHIVELOG/2021_10_14/thread_1_seq_477816.752.1085939417

Thu Oct 14 18:02:31 2021
Deleted Oracle managed file +FRA/STANDBY1/ARCHIVELOG/2021_10_14/thread_1_seq_477794.1120.1085939341

========================================
### RESTORE ARCHIVELOG TO NEW DESTINATION:
========================================

mkdir -p /ubb1/rman/ORC1/archivelog

========================================
### CHECK ALERT LOG FOR GAP SEQUENCE:
========================================

grep 'Fetching gap sequence in thread' /u01/app/oracle/diag/rdbms/STANDBY1/ORC1/trace/alert_ORC1.log

========================================
### RESOLVE ARCHIVE GAP:
========================================

Fetching gap sequence in thread 1, gap sequence 477604-477604

RMAN> run {
set archivelog destination to '/ubb1/rman/ORC1/archivelog';
restore archivelog from sequence 477604 until sequence 477604;}

========================================
### FOR REFERENCE: 157 gap sequences
========================================

$ grep -c 'Fetching gap sequence in thread' /u01/app/oracle/diag/rdbms/STANDBY1/ORC1/trace/alert_ORC1.log
157
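
With 157 gaps to resolve one at a time, generating the RMAN restore commands straight from the alert log saves typing. A rough sketch (the log path and message format are taken from the alert log excerpts above; adjust for your environment):

```shell
# Turn "Fetching gap sequence in thread T, gap sequence A-B" lines from the
# alert log into one RMAN "restore archivelog" command per gap.
gen_gap_restores() {  # usage: gen_gap_restores /path/to/alert.log
  grep 'Fetching gap sequence in thread' "$1" |
  sed -n 's/.*gap sequence \([0-9]*\)-\([0-9]*\).*/restore archivelog from sequence \1 until sequence \2;/p' |
  sort -u
}

# Example with a fabricated two-line alert log excerpt:
printf '%s\n' \
  'Fetching gap sequence in thread 1, gap sequence 477604-477604' \
  'Fetching gap sequence in thread 1, gap sequence 477610-477612' > /tmp/alert_excerpt.log
gen_gap_restores /tmp/alert_excerpt.log
```

Feed the output into an RMAN `run { set archivelog destination to ...; }` block as shown above.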

========================================
### RECOVER ALL ARCHIVELOG TO SAVE TIME:
========================================
 
SQL> r
  1  select PID,inst_id inst,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
  2  from gv$managed_standby
  3  where BLOCK#>1
  4  and status not in ('CLOSING','IDLE')
  5  order by status desc, thread#, sequence#
  6*

                                        CLIENT                                                  DELAY
PID                       INST  THREAD# PROCESS      PROCESS   STATUS       SEQUENCE#   BLOCK#   MINS
------------------------ ----- -------- ------------ --------- ------------ --------- -------- ------
8723                         1        1 LGWR         RFS       RECEIVING       483532   119582      0
29047                        1        1 N/A          MRP0      APPLYING_LOG    477715  1545345      0

SQL>

========================================
### RESTORE ARCHIVELOG UNTIL SEQUENCE 483515:
========================================

RMAN> run {
set archivelog destination to '/ubb1/rman/ORC1/archivelog';
restore archivelog from sequence 477715 until sequence 483515;}

ALTERNATIVE: 
set archivelog destination to '/ubb1/rman/ORC1/archivelog';
restore archivelog from sequence 477715 until sequence 483515;
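
The range can be derived from the gv$managed_standby output above: start at the sequence MRP0 is applying (477715) and stop below the sequence RFS is receiving (483532). A sketch using receiving - 1 as the upper bound (an assumption; the author chose 483515, slightly below the receiving sequence):

```shell
# Derive a bulk archivelog restore range from gv$managed_standby:
# start at the sequence MRP is applying, stop just below the one RFS
# is currently receiving (which is still in flight).
restore_range() {  # usage: restore_range APPLYING_SEQ RECEIVING_SEQ
  echo "restore archivelog from sequence $1 until sequence $(( $2 - 1 ));"
}

restore_range 477715 483532
```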

========================================
### ENABLE REDO APPLY USING DGMGRL:
========================================

SQL> recover managed standby database cancel;
Media recovery complete.
SQL>

DGMGRL> show configuration

Configuration - linkdg

  Protection Mode: MaxPerformance
  Members:
  PRIMARY - Primary database
    STANDBY1 - Physical standby database 
      Error: ORA-16810: multiple errors or warnings detected for the database

    STANDBY2 - Physical standby database 
    SBY3     - Physical standby database 
      Warning: ORA-16532: Oracle Data Guard broker configuration does not exist

Fast-Start Failover: DISABLED

Configuration Status:
ERROR   (status updated 28 seconds ago)

DGMGRL> show database STANDBY1

Database - STANDBY1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-OFF
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          1 minute 6 seconds (computed 0 seconds ago)
  Average Apply Rate: (unknown)
  Real Time Query:    OFF
  Instance(s):
    ORC1

Database Status:
SUCCESS

DGMGRL> edit database 'STANDBY1' set state='APPLY-ON';
Succeeded.

DGMGRL> show database STANDBY1

Database - STANDBY1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 6.01 MByte/s
  Real Time Query:    OFF
  Instance(s):
    ORC1

Database Status:
SUCCESS

DGMGRL> validate database STANDBY1;

  Database Role:     Physical standby database
  Primary Database:  PRIMARY

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

DGMGRL> show configuration

Configuration - linkdg

  Protection Mode: MaxPerformance
  Members:
  PRIMARY - Primary database
    STANDBY1 - Physical standby database
    STANDBY2 - Physical standby database
    SBY3     - Physical standby database
      Warning: ORA-16532: Oracle Data Guard broker configuration does not exist

Fast-Start Failover: DISABLED

Configuration Status:
WARNING   (status updated 15 seconds ago)

DGMGRL>

Q.E.D.


Visit the link when the event is about to start.

Tom Kyte - Thu, 2021-10-21 18:26
Please can I get the zoom link for today's conference?
Categories: DBA Blogs

index

Tom Kyte - Thu, 2021-10-21 00:26
Tom: I have a question regarding index. can you explain me in detail what the following means. If the index is a concatenation of multiple columns and one of the columns contains a NULL value, the row will be in the index column containing the NULL value and will be left empty. +++++++++++++++++++++++++++++++++++++++++++++ above is exactly what it says in a book. I can't even understand it's english."the row will be in the index column containing the NULL value and will be left empty" what does that mean?
Categories: DBA Blogs

Request for script to import data

Tom Kyte - Thu, 2021-10-21 00:26
Dear Ask TOM Team, Thank you for your appreciated efforts and providing the guidance to all concerned clients. My question: I have running and in production oracle DB, 11g ... and would like to import and update the data from other DB with the same structure and version, Please: 1- provide me the script to do such required import data to the running DB. 2- provide me the script to do such required export data to the running DB. Regards, Reda Eltayef
Categories: DBA Blogs

i want to export only the objects owned by given schema using EXPDP

Tom Kyte - Thu, 2021-10-21 00:26
Hi, i want to export only the objects owned by given schema using EXPDP, but unfortunately it is exporting all the objects which schema has access to. i am using below syntax. <code>expdp school/school@orcl schemas=(demo) directory=DATA_PUMP_DIR dumpfile=demo.dmp logfile=expdp.log</code> Please help. Thanks Fahd
Categories: DBA Blogs

Autonomous transactions and commit point optimization

Tom Kyte - Thu, 2021-10-21 00:26
Hi We have an application written with PL/SQL and thus with commit point optimization. However AWR reports show a lot of log file sync waits. How do autonomous transactions, which we are using for logging purposes operate with pl/sql code. Is the commit point optimization utilized? If some procedure is called which is defined as an autonomous transaction and no actual dml operations are done (which could happen in logging operations), is there still some commit handling done ? If so, what ? lh
Categories: DBA Blogs

How to calculate timestamp from an old ora_rowsn taking as a reference a current ora_rowscn timestamp_to_scn(sysdate) from DUAL

Tom Kyte - Thu, 2021-10-21 00:26
As you know, the association between an SCN and a timestamp when the SCN is generated is remembered by the database for a limited time period and an error is returned if the SCN specified for the argument to SCN_TO_TIMESTAMP is too old. Example: <code>select scn_to_timestamp(max(ora_rowscn)) from MY_TABLE</code> <i><b>ORA-08181: el numero especificado no es un numero de cambio del sistema valido ORA-06512: en "SYS.SCN_TO_TIMESTAMP", linea 1 08181. 00000 - "specified number is not a valid system change number" *Cause: supplied scn was beyond the bounds of a valid scn. *Action: use a valid scn.</b></i> Nevertheless, I can obtain the max(ora_rowscn) from MY_TABLE without any kind of error: <code> select max(ora_rowscn) from MY_TABLE MAX(ORA_ROWSCN) --------------- 99464620 </code> I would like to obtain the corresponding timestamp of this ORA_ROWSCN approximately (I don't care if it may vary some hours), <b>without using any AUDIT tool but using PL/SQL statements</b>. Does exist any relationship between the above MAX(ORA_ROWSCN) from MY_TABLE and the current ORA_ROWSCN from sysdate? <code> select timestamp_to_scn(sysdate), TO_CHAR(SYSDATE,'dd-mm-yyyy hh24:mi:ss') from dual TIMESTAMP_TO_SCN(SYSDATE) TO_CHAR(SYSDATE,'dd-mm-yyyy hh24:mi:ss') ------------------------- ---------------------------------------- 116631465 13-10-2021 12:12:02 </code> Could I calculate, using any kind of algorithm, the corresponding timestamp to <b>99464620</b> value? Thanks in advance.
Categories: DBA Blogs

Different sql id/sql text showing up in v$session

Tom Kyte - Wed, 2021-10-20 06:06
Hello! I have a particular ETL job that fires below 4 select queries on a view definition to our Oracle database - <code>select * from view where mod(id,4) = 1; select * from view where mod(id,4) = 2; select * from view where mod(id,4) = 3; select * from view where mod(id,4) = 0; </code> However when the queries start executing and I check v$session for the 4 sessions running these 4 queries, I see a common sql id showing up for all these 4 queries and the sql text for this sql id seems completely unfamiliar/unrelated to the queries that are actually being executed. So I am a bit lost as I have not come across such scenario earlier. What could this sql id be and why would it be showing up in v$session for all 4 different queries?
Categories: DBA Blogs

Failed to start The nginx HTTP and reverse proxy server on RedHat EC2 Linux

Pakistan's First Oracle Blog - Tue, 2021-10-19 21:55

 I needed a simple reverse proxy to redirect connections to an RDS database in a private subnet, so I quickly created a RedHat Linux EC2 instance, installed NGINX, and set up the nginx.conf file for session redirection. My nginx.conf looked like the following:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {
    upstream target_server {
        server targetdb:1521;
    }

    server {
        listen 1521;
        proxy_pass target_server;
    }
}


But starting the nginx process was giving the following error:


[root@test nginx]# systemctl start nginx

Job for nginx.service failed because the control process exited with error code. See "systemctl status nginx.service" and "journalctl -xe" for details.

[root@test nginx]# systemctl status nginx.service
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2021-10-20 13:40:57 AEDT; 5s ago
  Process: 14702 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)
  Process: 14700 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)

Oct 20 13:40:57 test systemd[1]: Starting The nginx HTTP and reverse proxy server...
Oct 20 13:40:57 test nginx[14702]: nginx: [emerg] unknown directive "stream" in /etc/nginx/nginx.conf:9
Oct 20 13:40:57 test nginx[14702]: nginx: configuration file /etc/nginx/nginx.conf test failed
Oct 20 13:40:57 test systemd[1]: nginx.service: control process exited, code=exited status=1
Oct 20 13:40:57 test systemd[1]: Failed to start The nginx HTTP and reverse proxy server.
Oct 20 13:40:57 test systemd[1]: Unit nginx.service entered failed state.
Oct 20 13:40:57 test systemd[1]: nginx.service failed.

Solution: Just install nginx-mod-stream

[root@test nginx]# ls -ltr /usr/lib/nginx/modules/ngx_stream_module.so
ls: cannot access /usr/lib/nginx/modules/ngx_stream_module.so: No such file or directory
[root@ip-10-219-40-147 nginx]# yum install nginx-mod-stream

Now if you start the nginx service, it should work.
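
A small pre-flight check avoids the failed `systemctl start` in the first place. A sketch (the module path is the default RHEL packaging location assumed here):

```shell
# Before starting nginx with a "stream" block, confirm the dynamic stream
# module is installed; on RHEL it ships separately as nginx-mod-stream.
check_stream_module() {  # usage: check_stream_module [module_path]
  mod="${1:-/usr/lib/nginx/modules/ngx_stream_module.so}"
  if [ -f "$mod" ]; then
    echo "stream module present: $mod"
  else
    echo "stream module missing - install with: yum install nginx-mod-stream"
  fi
}

check_stream_module
```

Follow it with `nginx -t` to validate the full configuration before restarting the service.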
Categories: DBA Blogs

Out Parameter using the scheduler

Tom Kyte - Tue, 2021-10-19 11:46
Hi, There is no test case needed, it's more of a question. We have a stored proc which on success of the execution of the stored proc there is either a 1 or 0 returned. Can the 1 or 0 be captured by the oracle scheduler? Thanks Vic
Categories: DBA Blogs

Running Kubernetes on Oracle Cloud OCI

Andrejus Baranovski - Mon, 2021-10-18 03:42
Oracle Cloud OCI provides a good environment to run your Kubernetes workloads. In this video, I show how to access Kubernetes cluster in OCI, explain artifacts related to the cluster. I show how Skipper API runs on Kubernetes deployed on OCI. Cluster runtime is accessed through cloud shell.

 

The role of Data Lakes or Staging Areas for Data Warehouse ETL

Rittman Mead Consulting - Mon, 2021-10-18 03:21
The role of Data Lakes or Staging Areas for Data Warehouse ETL


We were asked recently by a client, about the role of a staging area for ETL processes when loading a Data Warehouse, specifically they wanted to know if this was still required if using a Data Lake.


TLDR: Data Lakes and Staging areas could be interchangeable in terms of ETL processes, the key consideration is who else and what else will make use of the data within a Data Lake and do you have the right policies and procedures in place to ensure good data governance.


As with so many things, people often see Data Lakes as a technology solution, but the reality is that a Data Lake is a service. Data Lakes provide a method of surfacing data in its raw/native form to a variety of users and downstream processes; they are intended to use relatively cheap storage and to help accelerate insights into business decisions. We see clients opting to implement Data Lakes on a variety of different technologies which have various individual benefits, drawbacks and considerations. However, the prevailing trend in terms of operating an effective Data Lake, and in terms of controlling cost, is the need for careful governance across various aspects of data quality and security, including items such as data retention and the data dictionary.

A staging area for a Data Warehouse serves a single focused purpose: holding raw data from source systems and providing a location for transient tables that are part of the transformation steps. Depending on the design methodology and ETL toolset the purpose of the staging area varies slightly, but the target audience is always simply the ETL process and the Data Engineers who are responsible for developing and maintaining the ETL. This doesn't negate the need for the data governance that is required in a Data Lake, but it does simplify it significantly when compared to the multitude of users and processes which may access a Data Lake.



[Figure: Traditional direct ETL from source system to Data Warehouse]
[Figure: Data Warehousing with the inclusion of a Data Lake]



CONCLUSIONS

  1. Depending on the toolset chosen for the Data Lake, ETL, and Data Warehouse, the location and method for performing transformations and storing transient intermediate tables could be either in the Data Lake or within a sub-schema of the Data Warehouse database.

  2. If your ETL and Data Warehouse is the only downstream user of a Data Lake is it even a Data Lake?

  3. Get your processes and policies right in terms of data governance, retention, and security.

Categories: BI & Warehousing

My Posts on Standby Database[s] -- Data Guard

Hemant K Chitale - Sun, 2021-10-17 04:10
Although I have, in previous posts, covered Data Guard in 12c (and 12cR1 RAC), this is a list of recent Blog Posts on Data Guard.

Categories: DBA Blogs

My Posts on Database Flashback

Hemant K Chitale - Sun, 2021-10-17 03:43

My Blog Posts on Database Flashback

(The first 5 demonstrations are in 11gR2. The last demonstration is in 19c)

(not in the series)

Categories: DBA Blogs

Blue-Green Deployment in Azure

Online Apps DBA - Sun, 2021-10-17 00:48

When you are deploying a new change into production, the associated deployment should be in a predictable manner. In simple terms, this means no disruption and zero downtime! The safe strategy can be achieved by working with two identical infrastructures – the “green” environment hosting the current production and the “blue” environment with the new […]

The post Blue-Green Deployment in Azure appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

OpenShift on Azure for Beginners

Online Apps DBA - Sun, 2021-10-17 00:30

 Architecture and Features Azure Red Hat OpenShift ➪ provides fully managed and highly available OpenShift clusters on-demand that are jointly monitored and operated by Microsoft and Red Hat.  Azure Red Hat OpenShift  ➪ follows a simple architecture that depends on smaller decoupled entities called Microservices. These decoupled entities work together and run on the Kubernetes cluster. All of […]

The post OpenShift on Azure for Beginners appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle Cloud Pay As You Go (PAYG) Subscription Model

Online Apps DBA - Sun, 2021-10-17 00:10

Pay As You Go (PAYG) model lets you quickly provision services with no commitment, and you're only charged for what service you use. There is no upfront commitment and no minimum service period. Any cloud infrastructure (IaaS) and platform (PaaS) services consumed are metered and billed based on that consumption. This model produces bills for outsourced […]

The post Oracle Cloud Pay As You Go (PAYG) Subscription Model appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[Recap] Day 3: Manage Identity and Access in Cloud, Part- II

Online Apps DBA - Sat, 2021-10-16 23:47

With each passing day, gigabytes of data are generated, and hackers have a banquet to choose from unless we as security specialists, intervene .One quick question before we get into the third session’s cloud security reviews! Are you going to let anyone inside your home?No, no, no!This demonstrates the importance of security and managing identity and access even in the […]

The post [Recap] Day 3: Manage Identity and Access in Cloud, Part- II appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[Recap] Day 4: Implement Platform Protection In Cloud

Online Apps DBA - Sat, 2021-10-16 23:36

If you work in IT, you've probably heard about cloud computing. If you work in the Cloud, you've probably heard of Cloud Security, and if you work in Cloud Security, you've probably heard of Platform Security. So it doesn't matter if you work in IT or Sales, Cloud or Operations, security or deployment. If you are […]

The post [Recap] Day 4: Implement Platform Protection In Cloud appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Subscribe to Oracle FAQ aggregator