NoCOUG – 21.May.2015 Spring Conference – DBA / Developer


NoCOUG – 21.May.2015 Spring Conference – DBA / Developer Tips
Govindan Katteri , Database Administrator , Pearson
[email protected]
If the html / text files in the document do not open while in slideshow
mode, exit the slideshow → invoke PowerPoint → File → Open →
open the presentation → click on Home at the top left, navigate to the
individual slide, and double-click on the link
Programming and Administration since 1986
Oracle Ver 5.1 → 12c + SQL Server
Safe Harbor Statement:
The information presented in this document is for informational purposes only. The
opinions expressed are based on observations made in different environments. These
statements would need to be verified for your environment. Where possible the
author has provided the MOS (My Oracle Support) document id and/or documentation
URL for reference. The author does not hold Oracle Corporation responsible for the
opinions / suggestions indicated in this document.
Agenda
1. Tips
2. Q & A
Tip #01 – Caveat to be aware of with regard to
Disabling a Constraint
When creating a constraint based on an index, avoid creating the constraint and its
index in a single statement with the USING INDEX clause. This is because, when the
constraint is disabled, the corresponding index gets dropped.
For a suggestion on how to handle this, double-click on the icon shown in the
next slide, and see the sketch below.
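A minimal sketch of the workaround (table, column, and index names are illustrative): create the index in a separate statement first, then add the constraint that references it. An index created independently of the constraint survives a later DISABLE.

-- Index created on its own, not implicitly by the constraint DDL
CREATE UNIQUE INDEX employee_pk_idx ON employee (empid);

-- The constraint points at the pre-existing index
ALTER TABLE employee
  ADD CONSTRAINT employee_pk PRIMARY KEY (empid)
  USING INDEX employee_pk_idx;

-- Disabling the constraint now leaves employee_pk_idx in place
ALTER TABLE employee DISABLE CONSTRAINT employee_pk;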
Tip #02 – Make DDL Re-Runnable
• On occasion, implementations or modules overlap each other. This
invariably leads to situations where the DBA/Developer is confronted
with a table / index already exists message. Searching the log for ORA-
errors and skipping them makes the validation process look less
foolproof. We will see ways of avoiding such a situation.
-- The below code shows how to make a block of code re-runnable.
SET SERVEROUTPUT ON SIZE UNLIMITED;
DECLARE
   ctr NUMBER(6);
BEGIN
   SELECT COUNT(*) INTO ctr
     FROM user_tables
    WHERE table_name = 'EMPLOYEE';
   IF ctr = 0 THEN
      DBMS_OUTPUT.PUT_LINE(' CREATING TABLE EMPLOYEE .. ');
      EXECUTE IMMEDIATE 'CREATE TABLE EMPLOYEE (
           EMPID         NUMBER(10)   NOT NULL
         ...
         , WHOCREATED    VARCHAR2(30) DEFAULT USER    NOT NULL
         , WHENCREATED   DATE         DEFAULT SYSDATE NOT NULL
         , WHOMODIFIED   VARCHAR2(30)
         , WHENMODIFIED  DATE
         )
         TABLESPACE payroll';
   ELSE
      DBMS_OUTPUT.PUT_LINE(' SKIPPING CREATING TABLE EMPLOYEE ');
   END IF;
END;
/
- It is recommended to check the column list when checking for existing indexes, and not to go by the index name alone (see the sketch below).
- Take into account that one column may appear as the leading column of more than one index.
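A minimal sketch of such a check (the table name is illustrative; LISTAGG assumes 11gR2 or later):

-- List every index on the table together with its ordered column list
SELECT index_name
     , LISTAGG(column_name, ', ')
         WITHIN GROUP (ORDER BY column_position) AS column_list
  FROM user_ind_columns
 WHERE table_name = 'EMPLOYEE'
 GROUP BY index_name;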
Tip #03 – Tracing Session / Code Origin
• Code – Applications often involve code built on more than one underlying
framework
• Code – The same SQL might get generated from multiple frameworks
• Session – In an application involving many users running one part of
the application, it helps to know who initiated the DML, directly or
indirectly. In times of troubleshooting, it helps to know the source of
the session.
• Session – When the application allows the query to be constructed
dynamically at runtime.
SUGGESTION – Make sure to use DBMS_APPLICATION_INFO.set_client_info /
set_action / set_module in every logically different module.
It is recommended you pass UserLoginID:CodeLocation. UserLoginID is useful in
scenarios where the application allows the end user to construct the query
dynamically; in case of response-time problems it will allow us to communicate back to
the user. Examples of CodeLocation could be Java0010, PHP0010, Pro*C-Batch350,
etc. In scenarios where the framework dynamically constructs the code, it helps to
record the last location from where the SQL got constructed. v$session can be
referenced to retrieve these values at runtime (see the sketch below).
If the framework does not allow calls to DBMS_ packages but allows only SQL
statements, the location can be hardcoded into the DML as a comment. SELECT /*
Java9010 */ .. would be an example.
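A minimal sketch of the instrumentation (module, action, and client_info values are illustrative):

BEGIN
   -- Tag the session at the start of a logically different module
   DBMS_APPLICATION_INFO.set_module(module_name => 'PAYROLL_LOAD',
                                    action_name => 'Java0010');
   DBMS_APPLICATION_INFO.set_client_info('SCOTT:Java0010');
END;
/

-- Later, from a monitoring session, locate the tagged sessions
SELECT sid, serial#, module, action, client_info
  FROM v$session
 WHERE module = 'PAYROLL_LOAD';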
Tip #04 – Unseen area with regards to Object Status
• Prior to a migration / upgrade or any major implementation, it is
useful to take a snapshot of object status – and compare it with one
taken subsequent to the implementation.
SUGGESTION
1. Make it a point to look for UNUSABLE indexes when you look for INVALID objects in the
schema. In addition to looking at the DBA_OBJECTS view, also take a look at DBA_INDEXES. This is
because certain commands like ALTER TABLE MOVE PARTITION render the index unusable, since the
underlying ROWIDs change. The same applies to an ALTER TABLE MOVE tablespace operation.
2. An unusable index is not necessarily treated as an INVALID object. In the user/dba_objects views it
will show as VALID. A sketch of both checks follows.
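A minimal sketch of the combined snapshot (partitioned indexes are covered separately via dba_ind_partitions):

-- INVALID objects
SELECT owner, object_type, object_name, status
  FROM dba_objects
 WHERE status <> 'VALID';

-- UNUSABLE indexes and index partitions
SELECT owner, index_name, status
  FROM dba_indexes
 WHERE status NOT IN ('VALID', 'N/A')   -- 'N/A' = partitioned index
UNION ALL
SELECT index_owner, index_name, status
  FROM dba_ind_partitions
 WHERE status <> 'USABLE';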
Tip #05 – Where more than two people have access to the production
database
• During the system life cycle, with 24x7 support, on-call work, and fire
fighting, it is not uncommon for Database Administrators to issue ALTER
SYSTEM commands. The last thing you want is to see an ALTER SYSTEM in the
alert log without knowing which team member carried it out, OR whether
it was issued as part of an application implementation OR during
startup of the system by the application.
SUGGESTION
Always try to make use of the COMMENT option of the ALTER SYSTEM command. This is useful in
scenarios where more than one team has access to the production database. It is recommended
to include something like a Remedy or JIRA ticket number. This way, when you look at the database
alert log, you will know why the change was made, and whether it was made by the application or by
a human. In the case of a human, the comment should reflect the cause and who carried it out.
SYSTEM >ALTER SYSTEM SET OPTIMIZER_ADAPTIVE_FEATURES=FALSE COMMENT='TKT-1234' SCOPE=BOTH;
System altered.
SYSTEM >ALTER SYSTEM SET OPTIMIZER_ADAPTIVE_FEATURES=FALSE COMMENT='TKT-1234' SCOPE=SPFILE;
System altered.
SYSTEM >ALTER SYSTEM SET OPTIMIZER_ADAPTIVE_FEATURES=FALSE COMMENT='TKT-1234' SCOPE=MEMORY;
System altered.
SYSTEM >
Alert log will show entries as follows :
Tue Nov 11 17:22:43 2014
ALTER SYSTEM SET optimizer_adaptive_features=FALSE COMMENT='TKT-1234' SCOPE=BOTH;
Tue Nov 11 17:22:48 2014
ALTER SYSTEM SET optimizer_adaptive_features=FALSE COMMENT='TKT-1234' SCOPE=SPFILE;
Tue Nov 11 17:23:14 2014
ALTER SYSTEM SET optimizer_adaptive_features=FALSE COMMENT='TKT-1234' SCOPE=MEMORY;
Tip #06 – Prevent human error / runaway
code from consuming database processes
• It is not unusual for a connection pool leak to occur. When this
happens, existing sessions are not closed while new sessions are grabbed.
Depending on how much idle timeout is defined (if it is defined at all), the
connections will stay open and sooner or later you run into the ORA-00020:
maximum number of processes (%n) exceeded message. Once
this happens the DBA will have difficulty connecting to the database
to investigate.
SUGGESTION
Use the profile limit SESSIONS_PER_USER to avoid the ORA-00020: maximum number of processes (%n) exceeded
message; you should then get the ORA-02391: exceeded simultaneous SESSIONS_PER_USER limit
message instead, which affects only the specific user / application schema. This is useful in scenarios where
there is a bug in the code and a connection pool leak occurs. Otherwise you will have SYSTEM / SYSDBA /
RMAN connections being affected, and debugging the scenario becomes difficult for the DBA. Existing
connections will continue to work but new connections cannot be grabbed.
The profile you are setting only takes effect when the database parameter RESOURCE_LIMIT is set to
TRUE (see the sketch below).
If more than one application connects to a single database using different users (say
HR, ADMIN, ACCOUNTING), then each of these users should be assigned a profile, and the sum
of the SESSIONS_PER_USER limits should be less than PROCESSES by a margin covering the Oracle background processes
+ RMAN backup processes defined by parallelism + OEM + any adhoc / 3rd-party connections that
would be required.
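A minimal sketch of the setup (profile name and limit are illustrative):

-- Profile limits are only enforced when RESOURCE_LIMIT is TRUE
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE SCOPE=BOTH;

CREATE PROFILE app_pool_profile LIMIT SESSIONS_PER_USER 200;

-- Assign the profile to the application schema
ALTER USER hr PROFILE app_pool_profile;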
SUGGESTION (Continued)
For RAC, this cannot be directly followed, since connections can go to more than one node.
PROCESSES is set at the instance level, but connections can span more than one instance.
http://docs.oracle.com/cd/B19306_01/rac.102/b14197/dbinstmgt.htm#RACAD806
Each instance maintains its own SESSIONS_PER_USER count. If SESSIONS_PER_USER is set to 1 for a
user, the user can log on to the database more than once as long as each connection is from a
different instance.
RECOMMENDATION :
1. It is recommended to create a non-default RAC service as indicated in
http://docs.oracle.com/cd/E11882_01/rac.112/e41960/srvctladmin.htm#i1008562. You should use
the srvctl executable located in your database's ORACLE_HOME, not the CRS_HOME.
$ORACLE_HOME/bin/srvctl add service -d PRODDB -s app1srv -r PROD1 -P BASIC
2. Edit the client-side tnsnames.ora to include (CONNECT_DATA=(SERVICE_NAME=app1srv)) and test
connectivity. For a JDBC client, use the entire connect string instead of the tnsnames.ora entry.
Downside: the application will connect to only one node and scalability will be lost. But at least you
can be sure the session count will not be exceeded.
Tip #07 - Tracing Session(s)
• One of the DBMS packages introduced earlier and subsequently
omitted from the manual.
• DBMS_SYSTEM.set_sql_trace_in_session(sid, serial#, true/false) can be
used to trace any session causing a bottleneck. A trace file carrying the OS
PID in its name will be created in the dump dest (Diag Trace) location. By default the
SYSTEM user / DBA role does not have execute granted on this
package. Hence you would need to connect as SYSDBA once and issue
GRANT EXECUTE ON dbms_system TO system; Subsequently it can be
referenced as sys.dbms_system, which proves useful in tracing sessions.
From Session #1
SYSTEM@PSPRODDB >select s.sid , s.serial# , p.spid from v$session s , v$process p where s.sid=(select sid from v$mystat where rownum = 1)
  2  and p.addr = s.paddr;

       SID    SERIAL# SPID
---------- ---------- ------------------------
      1213       6173 4288

SYSTEM@PSPRODDB >
From Session #2
SYSTEM@PSPRODDB >EXEC sys.DBMS_SYSTEM.set_sql_trace_in_session(1213,6173,TRUE);
PL/SQL procedure successfully completed.
SYSTEM@PSPRODDB >
From Session #1
SYSTEM@PSPRODDB >select sysdate from dual;

SYSDATE
---------
19-MAY-15

SYSTEM@PSPRODDB >
From Session #2
SYSTEM@PSPRODDB >EXEC sys.DBMS_SYSTEM.set_sql_trace_in_session(1213,6173,FALSE);
PL/SQL procedure successfully completed.
SYSTEM@PSPRODDB >
Trace file C:\ORACLE\PRODUCT\12.1.0\diag\rdbms\testdb\testdb\trace\testdb_ora_4288.trc
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Windows NT Version V6.1 Service Pack 1
CPU                 : 8 - type 8664, 4 Physical Cores
Process Affinity    : 0x0x0000000000000000
Memory (Avail/Total): Ph:2063M/8148M, Ph+PgF:8755M/16295M
Instance name: testdb
Redo thread mounted by this instance: 1
Oracle process number: 71
Windows thread id: 4288, image: ORACLE.EXE (SHAD)
*** 2014-11-14 21:59:44.342
*** SESSION ID:(1213.6173) 2014-11-14 21:59:44.342
*** CLIENT ID:() 2014-11-14 21:59:44.342
*** SERVICE NAME:(SYS$USERS) 2014-11-14 21:59:44.342
*** MODULE NAME:(SQL*Plus) 2014-11-14 21:59:44.342
*** ACTION NAME:() 2014-11-14 21:59:44.342
CLOSE #515993544:c=0,e=28,dep=0,type=0,tim=734182738355
=====================
PARSING IN CURSOR #515993544 len=24 dep=0 uid=8 oct=3 lid=8 tim=734182773521 hv=2343063137 ad='7ff8c9114a0' sqlid='7h35uxf5uhmm1'
select sysdate from dual
END OF STMT
PARSE #515993544:c=0,e=7434,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=1388734953,tim=734182773518
EXEC #515993544:c=0,e=11441,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1388734953,tim=734182785353
FETCH #515993544:c=0,e=42,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,plh=1388734953,tim=734182785681
STAT #515993544 id=1 cnt=1 pid=0 pos=1 obj=0 op='FAST DUAL (cr=0 pr=0 pw=0 time=8 us cost=2 size=0 card=1)'
FETCH #515993544:c=0,e=5,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=1388734953,tim=734182797383
This package is not shown in the 12c manual (Database PL/SQL Packages and Types Reference) https://docs.oracle.com/database/121/ARPLS/toc.htm ;
not in 11g https://docs.oracle.com/cd/E11882_01/appdev.112/e40758/toc.htm ;
not in 10g https://docs.oracle.com/cd/B19306_01/appdev.102/b14258/toc.htm ;
not in 9i https://docs.oracle.com/cd/B10501_01/appdev.920/a96612/toc.htm
Tip #08 – Oracle Managed Files (OMF) and Event
based addition of Datafile
• If you need to work on deployed solutions (where you may not have
access to the production database), it helps to have OMF so that you
can just issue ALTER TABLESPACE {tablespace_name} ADD DATAFILE
SIZE {size}, with AUTOEXTEND ON and a MAXSIZE based on the OS
platform limit. This greatly helps if you have to add datafiles from a
SUSPEND-event-based trigger, as sketched below.
• Watch out for the file limit on the temporary tablespace, since a
runaway query can cause the addition of more than one datafile.
• The last of the datafile(s) in the tablespace needs to have
AUTOEXTEND set to ON.
• Hitting a quota limit, if any, on the user will not trigger the event.
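A minimal sketch of the event-based addition (the tablespace name and size are illustrative; assumes OMF is configured via DB_CREATE_FILE_DEST):

CREATE OR REPLACE TRIGGER trg_add_datafile
AFTER SUSPEND ON DATABASE
DECLARE
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   -- With OMF no file name is needed; the new file is created under
   -- DB_CREATE_FILE_DEST
   EXECUTE IMMEDIATE
      'ALTER TABLESPACE payroll ADD DATAFILE SIZE 1G AUTOEXTEND ON';
   COMMIT;
END;
/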
Tip #09 – Oracle Managed Files – Destination usage
• Usual practice is to use DB_CREATE_FILE_DEST = directory | disk
group for OMF.
• The problem comes when you try to move online log files around or allot
a different directory or disk group for them. So it is recommended to use
DB_CREATE_ONLINE_LOG_DEST_[1 | 2 | 3 | 4 | 5] = directory | disk
group along with DB_CREATE_FILE_DEST.
• If this parameter is not set in the database parameter file, you can use an
ALTER SESSION SET DB_CREATE_ONLINE_LOG_DEST_n statement to
set the value for the current session and create the log group, as sketched
below. In the absence of this parameter, log files will be created in
DB_CREATE_FILE_DEST.
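A minimal sketch (the directory path is illustrative):

-- Session-level override when the parameter is not in the spfile
ALTER SESSION SET db_create_online_log_dest_1 = '/u02/oradata/redo';

-- The new group's member is created under the destination above
ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 512M;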
Tip #10 – Oracle Managed Files – One aspect to be aware
of in Non-Production environment
• OMF quirk: if you are using Oracle Managed Files and restoring an RMAN backup of a database
with the same name but a different DBID, a different set of files gets created. This is how it happens:
• Install the Oracle software (say 11g or 12c)
• Create database TESTDB with OMF
• Back up using RMAN
• Run your application; for some reason you have to reinstall Oracle and recreate TESTDB using
dbca; this gets created with a different DBID
• Restore using RMAN from the earlier backup. Shutting down will "touch" the relevant files with the latest
timestamp; you can then remove the old ones. This is applicable to data files, online log files,
and temp files.
Tip #11 – Standby Related – Password file
• If the SYS password is changed, make it a point to copy the password
file to the Standby(s). Otherwise you are likely to encounter the ORA-16191:
Primary log shipping client not logged on standby error. This
can be avoided by creating another SQL user => granting it the relevant role
/ privilege => introducing the init.ora parameter
REDO_TRANSPORT_USER, as sketched below. Details are available in Doc ID 1416595.1 -
How to make log shipping to continue work without copying the
password file from primary to physical standby when changing the sys
password on primary?
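A minimal sketch of the approach (the user name and password are illustrative; verify the exact grants against Doc ID 1416595.1):

-- On the primary: a dedicated user for redo transport
CREATE USER redo_xport IDENTIFIED BY "SomePassword#1";
GRANT sysoper TO redo_xport;

-- Point redo transport at the new user on BOTH primary and standby
ALTER SYSTEM SET redo_transport_user = REDO_XPORT SCOPE=BOTH;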
Tip #12 – Standby Related – RMAN Backup
• It is recommended to set the RMAN deletion policy to 'applied on standby
database(s)', as sketched below. This way a backup delete will not remove archive logs
unless they have been applied on at least one of the mandatory standby
destinations.
• Test backing up the standby database using RMAN at least once. If you
need a consistent, self-contained (no redo) backup, then it is
recommended you use the standby database for this purpose.
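A minimal sketch of the deletion policy (11g+ syntax):

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;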
Tip #13 – Standby Related – Logging operations
• For Standby, FORCE LOGGING should be turned ON. Depending on how much
NOLOGGING the application is using, you can expect to encounter a performance
hit when you introduce a standby at a later time. This is something one needs to be
aware of.
• Not all operations in an application require logging. It is not unusual to create
tables for reporting purposes or for intermediate processing followed by data
retrieval. Many times a) these tables do not contain a high volume of rows, OR b)
if the volume is low, the frequency of creation of such tables is high.
• Such tables usually get dropped after processing and do not hold any rows that
would be required at the standby site in case a switchover or failover takes place.
• In the next slide (double-click on the html) we will see how to make use of a PL/SQL
object to store and retrieve rows and to project space usage (a sketch follows). This way logging
operations can be minimized.
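The referenced html is not reproduced here; a minimal sketch of the idea, using a PL/SQL collection in place of a redo-generating scratch table (the data source is illustrative):

SET SERVEROUTPUT ON;
DECLARE
   TYPE t_amount_tab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
   amounts t_amount_tab;
   total   NUMBER := 0;
BEGIN
   -- Rows that would otherwise land in a scratch table stay in memory,
   -- so no table DDL and no redo for the intermediate result
   SELECT LEVEL BULK COLLECT INTO amounts
     FROM dual CONNECT BY LEVEL <= 10;

   FOR i IN 1 .. amounts.COUNT LOOP
      total := total + amounts(i);
   END LOOP;
   DBMS_OUTPUT.PUT_LINE('Total: ' || total);
END;
/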
Tip #14(a) - Replication Related – Initialize at
the Destination
• Not all applications have the same requirements. On a few occasions it is
required that the value from the source either be blanked out or have some
other transformation carried out. This is true if the data at the
replicated site is being updated or populated locally.
1. The function DBMS_REPUTIL.FROM_REMOTE can be checked in the destination trigger to modify the incoming
value of a column from the source.
2. The same logic can be extended to make sure a trigger at the destination is not effective during
propagation.
SQL >CREATE OR REPLACE TRIGGER trg_bef_my_employee
  2  BEFORE INSERT OR UPDATE ON my_employee
  3  FOR EACH ROW
  4  BEGIN
  5     IF DBMS_REPUTIL.FROM_REMOTE = TRUE
  6     THEN
  7        :new.emp_ssn := NULL;
  8     ELSE
  9        --DO REQUIRED TASKS TO TREAT AS LOCALLY FIRED TRIGGER
 10        NULL;
 11     END IF;
 12  END;
 13  /

Trigger created.

SQL >show errors;
No errors.
SQL >
Tip #14(b) - Replication Related
• One of the mistakes sometimes made by DBAs is to disable the
constraints at the Replicated Site to make it run 'faster'.
• SUGGESTION: Do not disable constraints in the replicated
(destination) database, even though the source data might already have met
the referential integrity and check / not-null conditions. Doing so can result in
hanging when you try to quiesce a group after an operation like altering
a table. You can find this by looking at the trace file that gets created.
Tip #15 – Use of Flashback in combination with Database
Upgrade / Migration for databases having Physical Standby
• It helps to know in advance whether your database will run into any
unforeseen issues during an upgrade / migration. In case of errors, you
either restore or activate the physical standby. Restoring involves
downtime; activating involves creating a new standby.
• The usual practice is to stop propagation to the standby and upgrade the
database. MoS Document 805438.1 has the relevant details: How To
Open Physical Standby For Read Write Testing and Flashback
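A minimal sketch of the 11g+ snapshot standby shortcut for the same idea (the MOS note covers the flashback-based method in detail; managed recovery must be cancelled first):

-- On the standby, mounted, with managed recovery cancelled:
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;

-- ... run the upgrade / application tests read-write ...

-- Discard the test changes and resume the standby role
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;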
Tip #16 – Database Upgrade / Migration – New
parameters with default = TRUE to be aware of
• During a migration one is expected to come across deprecated
parameters. But with 12c at least, one new parameter has been
introduced and it has its default value set to
TRUE.
* Prior to migrating, check whether any new parameters are introduced in the higher version / release, and if so make sure the
default value will not affect the existing setup. (Example) 12c has a new parameter, OPTIMIZER_ADAPTIVE_FEATURES, which
enables or disables all of the adaptive optimizer features, including adaptive plans (adaptive join methods and bitmap plans),
automatic re-optimization, SQL plan directives, and adaptive distribution methods. The default value is TRUE. The recommendation is
to turn this parameter off in the 12c environment during testing, then turn it on and assess the plan changes and performance impact, if
any.
http://docs.oracle.com/database/121/TGSQL/tgsql_optcncpt.htm#TGSQL221
An adaptive plan enables the optimizer to defer the final plan decision for a statement until execution time. The ability of the
optimizer to adapt a plan, based on information learned during execution, can greatly improve query performance.
Adaptive plans are useful because the optimizer occasionally picks a suboptimal default plan because of a cardinality
misestimate. The ability to adapt the plan at run time based on actual execution statistics results in a more optimal final plan.
After choosing the final plan, the optimizer uses it for subsequent executions, thus ensuring that the suboptimal plan is not
reused.
SUGGESTION: If any particular query regresses with this parameter set to TRUE, it is recommended a hint be used:
SELECT /*+ OPT_PARAM('OPTIMIZER_ADAPTIVE_FEATURES' , 'FALSE') */ .. Though the documentation indicates OPT_PARAM works for
only selected parameters, it works for other parameters, including underscore parameters, as well.
https://docs.oracle.com/database/121/SQLRF/sql_elements006.htm#BABEBAID
The OPT_PARAM hint lets you set an initialization parameter for the duration of the current query only. This hint is valid only for the
following parameters: OPTIMIZER_DYNAMIC_SAMPLING, OPTIMIZER_INDEX_CACHING, OPTIMIZER_INDEX_COST_ADJ, OPTIMIZER_SECURE_VIEW_MERGING, and
STAR_TRANSFORMATION_ENABLED.
Tip #17 – DB Restore – NLS DATE format
• When restoring on Windows, the NLS_DATE_FORMAT requires a colon
and no blank space.
Unix:
export NLS_DATE_FORMAT='dd-mon-rr hh24:mi:ss'
rman target /
Windows:
set NLS_DATE_FORMAT=YYYY-MM-DD:HH24:MI:SS        <= Note the colon (:) in between
rman target /
rman cmdfile restore_020_database.rcv log=restore_020_database.log
connect target /;
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET PARALLELISM 24;        <= Note the parallelism; usually one forgets
run {
SET UNTIL TIME "2011-06-01:00:38:00";
SET NEWNAME FOR DATAFILE 'I:\ORADATA\TESTDB\DATAFILE\O1_MF_SYSTEM_86NCJPY7_.DBF' TO 'D:\oradata\RETAILDB\DATAFILE\O1_MF_SYSTEM_86NCJPY7_.DBF';
SET NEWNAME FOR DATAFILE 'I:\ORADATA\TESTDB\DATAFILE\O1_MF_SYSAUX_86NCJPY7_.DBF' TO 'D:\oradata\RETAILDB\DATAFILE\O1_MF_SYSAUX_86NCJPY7_.DBF';
...
...
RESTORE DATABASE;
SWITCH DATAFILE ALL;
#RECOVER DATABASE;
}
• Parallelism is also defined by the number of channels defined in the restore script.
• Verify that the checkpoint_time / checkpoint_change# is in line with your intended UNTIL TIME/SCN. If not, recover the database
further if you have more archived logs available.
Tip #18 – DB Restore – Role of SQL*Plus
• You might drive a car without ever making use of the spare tire. But that does
not mean the vehicle will be sold without one. In the same way, a DBA
may never run into a situation where he/she needs to
recover a production database in an emergency. If the need
arises, the DBA should have a reliable script which will indicate
what-is-what just before the open resetlogs is issued.
• Despite the pressure you might be facing when something goes wrong with
your production environment and you are trying to restore / recover
a crashed database, avoid issuing sql "alter database open resetlogs" as
part of the RMAN restore script. Instead, check that the datafiles
are in a consistent state after the RMAN RESTORE and RECOVER DATABASE
commands and prior to issuing the open resetlogs statement. If you have
the script handy, it takes only a few seconds to check and confirm.
a) Prior to open resetlogs check the checkpoint_scn for the datafiles; with the exception of READ ONLY tablespaces, it should
be consistent. If not, the below query will tell you which datafiles need media recovery.
b) Always back up the control file prior to open RESETLOGS. Take a binary / physical copy and not a backup to trace (ALTER
DATABASE BACKUP CONTROLFILE TO 'Location/myDatabase_B4ResetLogs.CTL'; ). If you run into any issues during the writing of
the control file, you might need to restore the control file again and bring it up to date by restoring the data again. This is
because the v$ entries in the control file get updated when the resetlogs is issued.
c) Do remember to make use of parallelism while restoring in case of a crash. Example → CONFIGURE DEVICE TYPE DISK
BACKUP TYPE TO BACKUPSET PARALLELISM 24;
d) The script I run prior to RESETLOGS is given below. In addition, two good Metalink references are provided in
this regard.
Double click on the html file in slide #40
set pagesize 045;
set linesize 180
set trimspool on
column checkpoint_scn  format 9999999999999999
column checkpoint_time format a25 wrap;
column name            format a90 wrap;
set echo off;
spool showdfscn.html;
PROMPT <PRE>;
SELECT name
     , TO_CHAR(checkpoint_time,'dd.mon.yyyy Dy hh24:mi:ss') checkpoint_time
     , checkpoint_change# checkpoint_scn
  FROM v$datafile
/
PROMPT XXXXXXXXXXXXXXXXXXXXXXXXXXX
PROMPT X DATAFILES IN FUZZY MODE X
PROMPT XXXXXXXXXXXXXXXXXXXXXXXXXXX
-- The view v$datafile_header will show 'YES' in the fuzzy column when the datafile is online and in read-write mode.
-- If you change the tablespace to read-only mode then the fuzzy value changes to NO.
-- Tablespace END backup will put this as YES in 9i+ (8i=NULL); it is PREFERRED that no rows are selected.
COLUMN error              FORMAT a50 WRAP;
COLUMN status             FORMAT a12 WRAP PRINT;
COLUMN checkpoint_change# FORMAT 9999999999999999;
COLUMN checkpoint_time_x  FORMAT a25 WRAP;
SELECT fuzzy, status, error, recover, checkpoint_change#
     , TO_CHAR(checkpoint_time,'DD.Mon.YYYY Dy HH24:MI:SS') checkpoint_time_x
     , COUNT(*)
  FROM v$datafile_header
 GROUP BY fuzzy, status, error, recover, checkpoint_change#
     , TO_CHAR(checkpoint_time,'DD.Mon.YYYY Dy HH24:MI:SS')
/
COLUMN tablespace_name          FORMAT a30 WRAP;
COLUMN name                     FORMAT a90 WRAP;
COLUMN undo_opt_current_change# FORMAT 9999999999999999;
SELECT file#, SUBSTR(tablespace_name, 1, 30) tablespace_name, SUBSTR(name, 1, 90) name
     , undo_opt_current_change#
  FROM v$datafile_header
 WHERE fuzzy='YES'
/
SELECT status, enabled, COUNT(*)
FROM v$datafile
GROUP BY status, enabled
/
PROMPT XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
PROMPT X DATAFILES IN RECOVER MODE X
PROMPT XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
SELECT file#, SUBSTR(name, 1, 90), status, error, recover
FROM v$datafile_header
WHERE status = 'RECOVER'
ORDER BY file#
/
Note: interestingly, the 12c documentation lists only ONLINE and OFFLINE as the two valid statuses of a datafile header and does
not list RECOVER as one such status (see the sample query output in the subsequent slide).
http://docs.oracle.com/database/121/REFRN/GUID-23BA7CDD-D642-4CE7-83E2-69B7CFC328A1.htm#REFRN30052
But the documentation does list RECOVER as a valid status for a datafile:
http://docs.oracle.com/database/121/REFRN/GUID-7BF7955C-9705-40F4-B2F6-5D7F3A32DD30.htm#REFRN30050
In the attached output, the datafile headers have consistent checkpoints, but the datafiles are not fully synched and
would require media recovery subsequent to the restore.
PROMPT XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
PROMPT X FUZZY MODE ADDITIONAL CHECK X
PROMPT XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
COLUMN name               FORMAT a90 WRAP;
COLUMN Absolute_Fuzzy_SCN FORMAT 9999999999999999;
COLUMN Min_PIT_SCN        FORMAT 9999999999999999;
SELECT hxfil file#
     , SUBSTR(hxfnm, 1, 90) name
     , fhscn checkpoint_change#
     , fhafs Absolute_Fuzzy_SCN
     , MAX(fhafs) OVER () Min_PIT_SCN
  FROM x$kcvfh
 WHERE fhafs != 0
/
PROMPT </PRE>;
spool off
The above SQL is for non-RAC only. For RAC you would need to refer to the MoS documents
listed below for this Tip.
How to quickly check that Database is consistent after incomplete recovery (Point in Time Recovery) before OPEN RESETLOGS (Doc ID 1354256.1)
How to resolve ORA-01547 warning: RECOVER succeeded but OPEN RESETLOGS would get error below (Doc ID 1684437.1)
Tip #20 – Maintain one script while working across
multiple versions of the database
• On many occasions DBAs have to work across databases of differing
versions (8i, 9i, 10g, 11g, 12c). With new releases, a few columns get
added to the v$ and dba_ views.
• 12c has a container id in almost all of the v$ views. So if a script is
changed to use it, it will not work against a version less than 12c.
• A single version-aware script can be achieved by declaring a variable as REFCURSOR at the
SQL*Plus prompt and by using a PL/SQL block, as sketched below.
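A minimal sketch of the technique (the version cutoff and column choice are illustrative):

VARIABLE rc REFCURSOR

DECLARE
   v_version NUMBER;
BEGIN
   -- Derive the major version at runtime
   SELECT TO_NUMBER(SUBSTR(version, 1, INSTR(version, '.') - 1))
     INTO v_version
     FROM v$instance;

   -- The query text is kept in a string, so a pre-12c database never
   -- has to parse a reference to con_id
   IF v_version >= 12 THEN
      OPEN :rc FOR 'SELECT con_id, sid, serial# FROM v$session';
   ELSE
      OPEN :rc FOR 'SELECT sid, serial# FROM v$session';
   END IF;
END;
/
PRINT rc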
Tip #21 : If Tip#20 cannot be implemented
immediately
• If you have hundreds of scripts, it will not be possible to modify them
within a short span of time to handle different versions of the database. In
such a scenario the OS environment variable SQLPATH comes to the rescue.
• SQLPATH is an environment variable (or Windows registry entry) that specifies the location of
SQL scripts. SQL*Plus searches for SQL scripts, including login.sql, in the
current directory, then in the directories specified by SQLPATH, and in
the subdirectories of the SQLPATH directories. SQLPATH is a colon-separated list
of directories. There is no default value set in UNIX installations.
• In Windows, SQLPATH is defined in a registry entry during installation. For
more information about the SQLPATH registry entry, see SQLPATH Registry
Entry in the documentation.
Linux:
TESTDB:/home/gkatteri>cat ./x/a.sql
prompt x/a
TESTDB:/home/gkatteri>cat ./y/a.sql
prompt y/a
TESTDB:/home/gkatteri>
TESTDB:/home/gkatteri>SQLPATH=/home/gkatteri/x:/home/gkatteri/y:
TESTDB:/home/gkatteri>
TESTDB:/home/gkatteri>
TESTDB:/home/gkatteri>
TESTDB:/home/gkatteri>export SQLPATH
TESTDB:/home/gkatteri>sqlplus sqluser1 @a.sql
SQL*Plus: Release 11.1.0.7.0 - Production on Tue Nov 11 21:36:00 2014
Copyright (c) 1982, 2008, Oracle.
All rights reserved.
Enter password:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
x/a
sqluser1@TESTDB> quit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
TESTDB:/home/gkatteri>
TESTDB:/home/gkatteri>SQLPATH=/home/gkatteri/y:/home/gkatteri/a:
TESTDB:/home/gkatteri>export SQLPATH
TESTDB:/home/gkatteri>sqlplus sqluser1 @a.sql
SQL*Plus: Release 11.1.0.7.0 - Production on Tue Nov 11 21:36:18 2014
Copyright (c) 1982, 2008, Oracle.
All rights reserved.
Enter password:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
y/a
sqluser1@TESTDB>
WINDOWS:
SET SQLPATH=c:\opt\oracle\12c\bin;c:\opt\oracle\11g\bin;        <= Note the semi-colon here
The Windows SQLPATH variable is separated by semi-colons and can be set at the command level / in a batch file.
Tip #22 – Database Time zone
Requirement:
• There should be no need to worry about daylight saving time with regard to incoming transactions
• Checking logs across multiple middle-tier servers should be easier, especially when viewed in conjunction
with database logs
Consider the following scenarios:
1. You have an internet-facing application
2. A non-internet database with customers connecting to the same database from different time zones, each
with their own set of data in separate schemas
3. Multiple databases on a single server [Linux / Solaris / Windows]
4. You have multiple 11g databases on different servers, each with its own time zone, and you want
to move them to a single 12c Container database as PDBs
For #1, #2 and #3 it is recommended to set the server time zone to GMT. The user time zone can
be obtained from the client machine or by associating a user login with a particular time zone. For
#3 you can make use of the TZ variable if you want the databases in differing time zones (not
applicable on Windows).
Oracle has a function called NEW_TIME which can be used to display the time in the user's time
zone. So upon login you associate the time zone, or it can be associated at the database
level.
SQL >SELECT TO_CHAR(TO_DATE(TRUNC(SYSDATE)||' 21:00','DD-MON-YY HH24:MI')
  2         ,'DD-MON-YYYY HH24:MI Dy') california_time
  3  from dual
  4  /

CALIFORNIA_TIME
---------------------
04-DEC-2014 21:00 Thu

SQL >SELECT TO_CHAR(NEW_TIME(TO_DATE(TRUNC(SYSDATE)||' 21:00','DD-MON-YY HH24:MI')
  2                  ,'PST','GMT'
  3                  )
  4         ,'DD-MON-YYYY HH24:MI Dy') GMT_Time_for_above_time
  5  from dual
  6  /

GMT_TIME_FOR_ABOVE_TIME
-----------------------
05-DEC-2014 05:00 Fri

SQL >
For #4 you can use the ALTER PLUGGABLE DATABASE SET TIME_ZONE clause.
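A minimal sketch (run while connected to the PDB; the container name and time zone value are illustrative; the change takes effect the next time the PDB is opened):

ALTER SESSION SET CONTAINER = pdb1;
ALTER PLUGGABLE DATABASE SET TIME_ZONE = 'US/Pacific';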
Tip #23 – Role Transition (Standby Database)
– Prior Verification
• Limitation: prior to 12c, it was difficult to know in advance whether a switchover
OR failover would succeed.
• 12c has a new VERIFY option that can be used prior to switching over to
the Standby.
• https://docs.oracle.com/database/121/SBYDB/role_management.htm#SBYDB5170
• The new switchover statement has a VERIFY option that results in checks
being performed of many conditions required for switchover. Some of the
items checked are: whether Redo Apply is running on the switchover
target; whether the release version of the switchover target is 12.1 or later;
whether the switchover target is synchronized; and whether it has MRP
running.
Suppose the primary database has a DB_UNIQUE_NAME of BOSTON and the switchover target standby
database has a DB_UNIQUE_NAME of CHICAGO. On the primary database BOSTON, issue the following SQL
statement to verify that the switchover target, CHICAGO, is ready for switchover:
SQL> ALTER DATABASE SWITCHOVER TO CHICAGO VERIFY;
ERROR at line 1:
ORA-16470: Redo Apply is not running on switchover target
If this operation had been successful, a Database altered message would have been returned, but in
this example an ORA-16470 error was returned. This error means that the switchover target CHICAGO is
not ready for switchover; Redo Apply must be started before the switchover operation.
Q&A
*End of Document