select 'alter view '|| object_name||' compile;' from all_objects where owner='Owner id' and object_type='VIEW' and STATUS='INVALID';
select DBMS_METADATA.GET_DDL('TABLE','<table_name>') from DUAL;
set long 1000
select TEXT from DBA_VIEWS where OWNER = 'FS890' and VIEW_NAME = 'PERSON_VW';
1) Querying DBA_DATAPUMP_JOBS view:-
select * from dba_datapump_jobs;

select JOB_NAME, OPERATION, STATE from dba_datapump_jobs;
The STATE column of this view shows whether an EXPDP or IMPDP job is still running, or has terminated with either a success or failure status.
2) Querying V$SESSION_LONGOPS & V$SESSION views:-
SELECT b.username, a.sid, b.opname, b.target,
       round(b.sofar*100/b.totalwork,0) || '%' AS "%DONE",
       b.time_remaining,
       to_char(b.start_time,'YYYY/MM/DD HH24:MI:SS') start_time
FROM   v$session_longops b, v$session a
WHERE  a.sid = b.sid
ORDER BY 6;
3) Querying V$SESSION_LONGOPS & V$DATAPUMP_JOB views:-
SELECT sl.sid, sl.serial#, sl.sofar, sl.totalwork,
       dp.owner_name, dp.state, dp.job_mode
FROM   v$session_longops sl, v$datapump_job dp
WHERE  sl.opname = dp.job_name
AND    sl.sofar != sl.totalwork;
4) Querying all the related views with a single query:-
select x.job_name, b.state, b.job_mode, b.degree,
       x.owner_name, z.sql_text, p.message,
       p.totalwork, p.sofar,
       round((p.sofar/p.totalwork)*100,2) done,
       p.time_remaining
from   dba_datapump_jobs b
       left join dba_datapump_sessions x on (x.job_name = b.job_name)
       left join v$session y on (y.saddr = x.saddr)
       left join v$sql z on (y.sql_id = z.sql_id)
       left join v$session_longops p on (p.sql_id = y.sql_id)
where  y.module = 'Data Pump Worker'
and    p.time_remaining > 0;
5) Use the following procedure, replacing JOB_OWNER and JOB_NAME as per your environment; you can fetch them from import.log:-
;;; Import: Release 12.1.0.2.0 - Production on Thu Jun 29 00:29:09 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
;;; Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_04" successfully loaded/unloaded
Here the JOB_OWNER is SYSTEM and JOB_NAME is SYS_IMPORT_FULL_04.
And below is the procedure:-
SET SERVEROUTPUT ON
DECLARE
  ind          NUMBER;
  h1           NUMBER;
  percent_done NUMBER;
  job_state    VARCHAR2(30);
  js           ku$_JobStatus;
  ws           ku$_WorkerStatusList;
  sts          ku$_Status;
BEGIN
  h1 := DBMS_DATAPUMP.attach('JOB_NAME', 'JOB_OWNER');
  DBMS_DATAPUMP.get_status(h1,
                           DBMS_DATAPUMP.ku$_status_job_error +
                           DBMS_DATAPUMP.ku$_status_job_status +
                           DBMS_DATAPUMP.ku$_status_wip,
                           0, job_state, sts);
  js := sts.job_status;
  ws := js.worker_status_list;
  dbms_output.put_line('*** Job percent done = ' || to_char(js.percent_done));
  dbms_output.put_line('restarts - ' || js.restart_count);
  ind := ws.first;
  WHILE ind IS NOT NULL LOOP
    dbms_output.put_line('rows completed - ' || ws(ind).completed_rows);
    ind := ws.next(ind);
  END LOOP;
  DBMS_DATAPUMP.detach(h1);
END;
/
6) For any errors, you can also check the alert log and query the DBA_RESUMABLE view:-
select name, sql_text, error_msg from dba_resumable;
That's all I can think of at the moment; I will add more queries to this post if I find other views that can be used to get information about Data Pump jobs.
Single Client Access Name (SCAN) in Oracle RAC 11g and 12c
SCAN (Single Client Access Name) commands
cluvfy comp scan -> Use the cluvfy comp scan component verification command to check the Single Client Access Name (SCAN) configuration.
crsctl status resource -w "TYPE = ora.scan_listener.type"
READ ONLY Database
2- Will the following work?
SHUTDOWN IMMEDIATE;
...host out and remove all online redolog groups/members
...permanently.
STARTUP MOUNT
ALTER DATABASE OPEN READ ONLY;
Oracle – AWR, ASH and ADDM reports for RAC
Following are the scripts that can be executed as sysdba in order to get the AWR, ASH and ADDM reports on Oracle RAC:
SQL script for getting AWR Report on RAC database:
SQL> @$ORACLE_HOME/rdbms/admin/awrgrpt.sql
SQL script for getting AWR Report for single instance:
SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql
SQL script for getting ASH Report on RAC database:
SQL> @$ORACLE_HOME/rdbms/admin/ashrpti.sql
SQL script for getting ASH Report for single instance:
SQL> @$ORACLE_HOME/rdbms/admin/ashrpt.sql
SQL script for getting ADDM Report on RAC database:
SQL> @$ORACLE_HOME/rdbms/admin/addmrpti.sql
SQL script for getting ADDM Report for single instance:
SQL> @$ORACLE_HOME/rdbms/admin/addmrpt.sql
How to create an AWR snapshot manually
To view the current retention and interval settings of the AWR, use the DBA_HIST_WR_CONTROL view. Here is an example of how to use this view:

SELECT * FROM dba_hist_wr_control;
You can modify the snapshot retention and collection interval (both specified in minutes) using the dbms_workload_repository package:

EXEC dbms_workload_repository.modify_snapshot_settings (retention=>20160, interval=>15);
You can use the dbms_workload_repository package to create or remove snapshots.

EXEC dbms_workload_repository.create_snapshot;
You can see what snapshots are currently in the AWR by using:

SELECT snap_id, begin_interval_time, end_interval_time
FROM dba_hist_snapshot
ORDER BY 1;
Sometimes you might want to drop snapshots manually.
EXEC dbms_workload_repository.drop_snapshot_range (low_snap_id=>1107, high_snap_id=>1108);
AWR Baselines
It is frequently a good idea to create a baseline in the AWR. A baseline is defined as a range of snapshots that can be used to compare to other pairs of snapshots. The Oracle database server will exempt the snapshots assigned to a specific baseline from the automated purge routine. Thus, the main purpose of a baseline is to preserve typical runtime statistics in the AWR repository, allowing you to run the AWR snapshot reports on the preserved baseline snapshots at any time and compare them to recent snapshots contained in the AWR.
EXEC dbms_workload_repository.create_baseline (start_snap_id=>1109, end_snap_id=>1111, baseline_name=>'EOM Baseline');
Baselines can be seen using the DBA_HIST_BASELINE view, as seen in the following example:

SELECT baseline_id, baseline_name, start_snap_id, end_snap_id
FROM dba_hist_baseline;
Removing baselines
You can remove a baseline using the dbms_workload_repository.drop_baseline procedure, as seen in this example that drops the "EOM Baseline" we just created.

EXEC dbms_workload_repository.drop_baseline (baseline_name=>'EOM Baseline', cascade=>FALSE);
SQL Trace and RAC
A poorly tuned application may actually perform worse on Oracle RAC. Even if a poorly tuned application initially performs well on Oracle RAC, it may not scale well over the long term. It is so important that it must be stated again: you must tune an application well before deploying it on Oracle RAC. This includes making sure SQL statements use efficient execution plans and that queries can benefit from proper indexes. The application should also be able to leverage partitioning or materialized views to reduce processing time. These are just a few examples of application tuning that apply even if RAC is not used. This chapter helps tune applications for RAC-specific issues.
Armed with Oracle's SQL Trace facility (also called the 10046 trace), the database administrator can start a trace of a user's session and obtain a vast amount of information on the SQL statements executed by that session. This book assumes that most readers already have a rudimentary understanding of SQL Trace, so it covers SQL Trace only as it pertains to Oracle RAC databases.
Even with Oracle RAC databases, a trace is started at the session level. First, we need to identify the sid and serial# of the session. We also need to know which instance the session is connected to.
SQL> select
2 instance_number,
3 instance_name
4 from
5 v$instance;
INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
2 orcl2
SQL> select
2 inst_id,
3 sid,
4 serial#
5 from
6 gv$session
7 where
8 username='SCOTT';
INST_ID SID SERIAL#
---------- ---------- ----------
1 42 219
The first query shows that the current session is connected to the second instance, orcl2. We need to identify a session for the user SCOTT but do not know to which instance the user is connected. A query of the gv$session view can search to find all sessions across all instances. The example above gives us the instance id, session identifier, and serial number of the session.
Just like a single-instance database, the dbms_monitor supplied package is used to start a trace in a session. The only difference with Oracle RAC is that calls to dbms_monitor must be executed in the instance of the session that needs to be traced. In the example above, the current session was connected to instance orcl2 but the session to be traced is in orcl1. We need to connect to orcl1 to start the trace, similar to the following.
SQL> connect system@orcl1
Enter password:
Connected.
SQL> begin
2 dbms_monitor.session_trace_enable(42,219);
3 end;
4 /
PL/SQL procedure successfully completed.
To end the trace, the disable procedure of dbms_monitor is used.
SQL> begin
2 dbms_monitor.session_trace_disable(42,219);
3 end;
4 /
PL/SQL procedure successfully completed.
Another item to remember is that the trace file will be created on the node of the traced session's instance. The trace file will be found in the diag/rdbms/db_name/instance_name/trace subdirectory of the diagnostic_dest initialization parameter. For single-instance databases, the instance name will equal the database name. For Oracle RAC databases, the instance name will be the database name appended with the instance ID. Your life can be made easier if the diagnostic destination is on shared storage accessible by all instances. This way, all trace files will be on the same disk, no matter which RAC instance generated the file.
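As a small sketch of the path construction described above, the following shell snippet builds the trace directory for a RAC instance; the diagnostic base (/u01/app/oracle), database name (orcl), and instance name (orcl2) are example values, not from this environment -- substitute your own diagnostic_dest and names.

```shell
# Example values only: adjust to your diagnostic_dest, db_name, and instance
DIAG_DEST=/u01/app/oracle
DB_NAME=orcl
INST_NAME=orcl2   # RAC instance name = database name + instance ID

# Trace files for this instance land here
TRACE_DIR="$DIAG_DEST/diag/rdbms/$DB_NAME/$INST_NAME/trace"
echo "$TRACE_DIR"
```

Listing `$TRACE_DIR/*.trc` on the node that hosts the traced instance would then show the generated trace file.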
The dbms_monitor supplied package additionally provides the ability to start SQL traces for a specific service with the serv_mod_act_trace_enable procedure.
SQL> begin
2 dbms_monitor.serv_mod_act_trace_enable('hr_svc');
3 end;
4 /
PL/SQL procedure successfully completed.
With the command above, all connections to the hr_svc service will automatically be traced. Tracing can be turned off for the service with the serv_mod_act_trace_disable procedure:
SQL> begin
2 dbms_monitor.serv_mod_act_trace_disable('hr_svc');
3 end;
4 /
PL/SQL procedure successfully completed.
The serv_mod_act_trace_enable procedure has additional parameters for the module name, action name, and instance name. The module and action names are set by the dbms_application_info supplied package if the application is coded to set the module and action names. Tracing can be restricted to a specific RAC instance by specifying that instance name as a parameter. Any combination of these parameters can be used to refine the sessions to be traced. For example, all connections using a specific service on a specific instance can be traced, leaving sessions using that service on another instance untraced. Services are discussed in more detail later on in this chapter.
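As a hedged sketch of the parameter combinations described above, the call below restricts tracing to the hr_svc service on instance orcl1 for a module named 'HR_APP'; the module name is a hypothetical value for illustration, not one set by any application in this text.

begin
  -- Trace only sessions using service hr_svc on instance orcl1 whose
  -- application has set its module (via dbms_application_info) to 'HR_APP'
  dbms_monitor.serv_mod_act_trace_enable(
    service_name  => 'hr_svc',
    module_name   => 'HR_APP',     -- hypothetical module name
    instance_name => 'orcl1');
end;
/

A matching serv_mod_act_trace_disable call with the same arguments would turn this trace off again.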
When tracing a service, the end result will most likely have multiple trace files being generated. Most readers should be familiar with using the tkprof utility to turn a raw trace file into a more readable format. One of the problems with tkprof is that it will take one trace file as input and generate one output file. This limitation is perfectly acceptable when dealing with only one trace file but multiple trace files would need numerous runs of the tkprof utility. Furthermore, the results from all of those tkprof output files may have to be manually combined. Handling multiple trace files is where the trcsess utility can be used. An example can illustrate how to use the trcsess utility to combine multiple trace files. In one session, the following SQL statements are executed.
alter session set sql_trace=true;
select
first_name,
last_name
from
hr.employees
where
employee_id=100;
select
department_id,
department_name,
manager_id
from
hr.departments;
select
first_name,
last_name
from
hr.employees
where
employee_id=200;
In another session, additional SQL statements are executed.
alter session set sql_trace=true;
select
first_name,
last_name
from
hr.employees
where
employee_id=100;
select
city,
state_province,
country_id
from
locations
where
location_id=1700;
Notice that each session queried for the same employee but the rest of the SQL statements were different. The two resulting trace files will be combined into one file. The SQL statements were executed in SQL*Plus. As such, the trace file contains similar lines near the top.
*** 2015-07-22 23:40:05.205
*** SESSION ID:(31.459) 2015-07-22 23:40:05.205
*** CLIENT ID:() 2015-07-22 23:40:05.205
*** SERVICE NAME:(SYS$USERS) 2015-07-22 23:40:05.205
*** MODULE NAME:(SQL*Plus) 2015-07-22 23:40:05.205
*** ACTION NAME:() 2015-07-22 23:40:05.205
SQL*Plus automatically uses the dbms_application_info package to set the module name. Knowing this, we can instruct trcsess to look for any trace files that were generated with the SQL*Plus module.
trcsess output=trcsess.out module="SQL*Plus"
The trcsess utility will comb through the current directory. If there is a trace file with this defined module, trcsess will add its contents to the output file. Now that the trace files have been combined into a single file, the tkprof utility can be invoked to make the combined trace file more readable.
tkprof trcsess.out tkprof.out aggregate=yes
Examining the tkprof output file shows all of the SQL statements are executed once except for the one query that was executed twice.
***************************************************************************
select
first_name,
last_name
from
hr.employees
where
employee_id=100
call count cpu elapsed disk query current rows
------- ------ -------- -------- -------- ---------- -------- -------
Parse 2 0.01 0.04 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 4 0.00 0.00 1 4 0 2
------- ------ -------- ------- -------- ---------- -------- -------
total 8 0.01 0.04 1 4 0 2
The execute count is 2 as expected.
The trcsess utility has other useful parameters. If the dbms_application_info supplied package is used to define the module or action, the trcsess utility can combine trace files based on those values. The trcsess utility can also be given a specific service name, a very common way to use trcsess with Oracle RAC traces. We will discuss services in more detail later in this chapter. Note that the trcsess parameters accept the asterisk wildcard symbol. The following is a list of parameters for the trcsess utility.
output=<output file name>
session=<session ID>
clientid=<clientid>
service=<service name>
action=<action name>
module=<module name>
<trace file names>
As was discussed earlier in this section, the dbms_monitor.serv_mod_act_trace_enable procedure is used to start traces for sessions based on any combination of service, action, and/or module. Once those trace files have been generated, the database administrator will most likely want to use the trcsess utility to aggregate the trace files into one large output file with one simple, easy command.
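The end-to-end aggregation step might look like the following sketch, run in the instance's trace directory; the service name hr_svc comes from the earlier examples, while the output file names are illustrative values only.

trcsess output=hr_svc_combined.trc service=hr_svc *.trc
tkprof hr_svc_combined.trc hr_svc_report.out aggregate=yes

The first command combines every trace file in the directory that belongs to the hr_svc service; the second formats the combined file into a readable report.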
With Oracle RAC, each instance maintains its own diagnostic dump destination for its trace files. There is no requirement to have the diagnostic dump destination on shared storage. That being said, the database administrator's life will be easier if all trace files are available on all nodes; otherwise, trace files generated on one node will have to be moved to another node before they can be combined. Remember, the trcsess utility will not cross nodes in the cluster.