
Configuring Warehouse Builder in RAC Environments
Copyright © 2009, Oracle. All rights reserved.
Objectives
After reading this appendix, you should be familiar with:
• Devising a plan for installing and configuring OWB in your
RAC environment
• Using Oracle Universal Installer and the OWB Repository
Assistant to install the OWB repository and register it on all
cluster nodes
• Replicating files among nodes as necessary
• Changing a database configuration parameter to enable
nodes to synchronize more quickly
• Using Repository Browser to monitor node activity
• Locating log files and collecting other information to
troubleshoot node failures and monitor recovery
Scope of RAC Discussed in This Appendix
• It is assumed that the RAC architecture has already been
designed, installed, and configured in your workplace.
• Warehouse Builder will be installed and configured to fit the
given RAC architecture.
• Warehouse Builder will be installed by staff with some
degree of RAC experience and expertise.
• The Oracle Warehouse Builder 11g Release 2 Installation
Guide has a detailed section on RAC.
• There is detailed Oracle documentation on configuring Oracle for RAC.
Clusters
• An Oracle RAC system is composed of a group of independent servers, or nodes.
• Interconnected nodes act as a single server.
• Cluster software hides the structure.
• Disks are available for read and write by all nodes.
• The operating system is the same on each machine.
[Diagram: nodes joined by an interconnect, with Clusterware on each node and shared disks below.]
Oracle Real Application Clusters (RAC)
• Multiple instances accessing the same database
• Instances spread on each node
• Physical or logical access to each database file
• Software-controlled data access
[Diagram: instances spread across nodes share a cache over the interconnect and access the same database files.]
Benefits of RAC
• High availability: Survive node and instance failures.
• No scalability limits: Add more nodes as you need them tomorrow.
• Pay as you grow: Pay for just what you need today.
• Key grid computing features:
– Growth and shrinkage on demand
– Single-button addition and removal of servers
– Automatic workload management for services
• Goals of RAC
– High availability through:
— Load balancing
— Failover handling
OWB Certification on RAC
• OWB versions certified for RAC:
– OWB 10.1.0.4 is certified (both Database 10.1 and 10.2).
– OWB 10g R2 is certified.
– OWB 11g R1 is certified.
– OWB 11g R2 will be certified.
• OWB for RAC is certified on a listed set of database platforms.
• For the latest certification and porting information, check OTN and MetaLink.
Typical Connect Failover Scenarios
• Database instance dies (crashes or is brought down for maintenance).
– The Control Center Service dies if a control center is on a node where the database instance dies (the rest of the RAC is unaffected).
– Connections may fail if a target instance for a request is down.
• Listener dies.
– Control Center reconnections may fail if the Control Center resides on a node where the listener is down.
– Connections may fail if the target listener for a request is down.
• Node dies (crashes or is brought down for maintenance).
– The Control Center Service dies if the Control Center is on a dead node.
– Connections may fail if the target node for a request is down.
• Session death must be handled by the client.
Control Center Service Failover on RAC
1. Control Center Service (CCS) node fails.
2. CCS goes down.
3. CCS comes up on a different node of the RAC.
4. CCS resumes activities.
[Diagram: the Control Center Service fails over to another node of the single logical instance.]
Supported and Unsupported RAC Features
• Load balancing
– Connection load balancing based on server (supported)
– Client-side load balancing (not supported)
• Failover types
– Connect failover: node, listener, instance down (supported
with properly configured tnsnames.ora)
– Transparent Application Failover (not supported)
– Multiple concurrent Control Center Services (not supported)
Lesson Progress Report
Install OWB.
1. Decide whether to use OWB with shared storage or with nonshared local disks on each RAC node.
2. Select Cluster or Local Installation; run root.sh on each node.
Configure database and OWB.
3. Install the OWB repository only once, to one node. If using a shared disk, install the repository there.
4. Use the Runtime Assistant on other nodes to register the repository.
5. On the database, set MAX_COMMIT_PROPAGATION_DELAY to 0.
6. Replicate rtrepos.properties and tnsnames.ora on each node.
7. Define OWB locations with TNS names, not host:port:service.
Monitor nodes. Troubleshoot.
8. Use OWB Browser to monitor, enable, or disable the Control Center Service on a node.
9. Search logs on nodes, run helpful utilities, avoid common mistakes.
Single Shared Disk Versus Local Disk Per Node
Single shared disk or local disk on each node?
• This decision is usually made before OWB is considered; it is not an OWB decision.
• Usually, most of the RAC architecture has been decided; OWB only needs to fit.
• Advantages of a shared disk:
– Centralized product installation and maintenance
– No need to replicate files among nodes
– No need to find and collect log files among nodes
Extra Tasks Required of Non-Shared Local Disk Environments
• Warehouse Builder must be installed on every node.
• rtrepos.properties and tnsnames.ora must be copied to each node (tnsnames.ora must be placed in two places per node, in addition to the Oracle home).
• In a nonshared disk environment, you have a log directory on every nonshared disk.
– The log directory primarily contains Control Center Service log files, written to whichever node was active at the time.
– You must look in all nodes to find the most recent log file, possibly by using time stamps.
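The time-stamp search described above can be sketched in shell. The directory and file names below are demo assumptions, not paths from this appendix; on a real cluster you would point at each node's OWB log directory and repeat the check per node (for example, over ssh).

```shell
# Sketch only: LOGDIR and the file names are demo assumptions; on a real RAC
# node this would be the OWB log directory, and the same check would be
# repeated on every node (for example, via ssh).
LOGDIR=$(mktemp -d)
touch "$LOGDIR/older.log"
sleep 1
touch "$LOGDIR/newest.log"
# "ls -t" sorts by modification time, newest first, so the head of the
# listing is the most recently written log file.
newest=$(ls -t "$LOGDIR" | head -1)
echo "$newest"
```

Comparing the newest file on each node then tells you which node was active most recently.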
Installing OWB on Real Application Clusters
• OWB server software (the OWB run-time software) must be present on every node of the RAC cluster.
– The OWB Control Center Service requires this.
• The Control Center browser enables you to nominate one node and register all other nodes.
• You need to install OWB only on the nodes that you want OWB to run on.
• Only one Control Center Service is running at a time.
OWB Installation Decisions for RAC
Shared disk or local disk on each node?
• Shared disk: select "Local Installation" (install to the shared disk, not to the nodes).
• Local disks: choose Cluster or Local Installation.
– "Cluster Installation" installs OWB locally to all chosen nodes simultaneously.
– "Local Installation" installs OWB one node at a time.
This lesson shows a cluster installation to two nodes.
Installing OWB with Oracle Universal Installer
NOTE: These slides show OWB being installed to an Oracle 10g R2 database.
Specifying Oracle Home for All Nodes
Selecting Cluster or Local Installation
When installing to a local disk on each node, rather than to a shared disk, there are two choices:
• Cluster Installation: installs OWB locally to all chosen nodes simultaneously (install once).
• Local Installation: installs OWB one node at a time (install on each node).
Selecting Cluster Installation
OWB will be installed to both nodes.
A Secure Shell (SSH) environment is required for Cluster Installation (it allows connecting to each node without a password).
Installing to the Same Path on All Nodes
OWB will be installed to this home on both cluster nodes.
Installation Includes Copying to Remote Nodes
In this example, OWB is first installed locally to the OWB home of node stbde03. Because Cluster Installation was selected, that OWB home is then copied to node stbde04.
Executing the root.sh Configuration Script on Each Cluster Node
This dialog box is merely a reminder to run the root.sh script on each node as "root." You cannot use it to execute the script.
Installing the Repository to One Node and Registering It to Other Nodes
[Diagram: a shared disk (a disk drive on a network server) serves two nodes. Node 1 (host name STBDE03, DB instance OWBRAC1, net service name SN_OWBRAC1) has the OWB repository installed on it; node 2 (host name STBDE04, DB instance OWBRAC2, net service name SN_OWBRAC2) has the repository registered on it. The database, listener, or node might die.]
Installing the OWB Repository with the Repository Assistant
Install the OWB repository only once, to one node. Then register the repository on all other nodes.
Connection Information for the Node
Installing the OWB repository to node 1, STBDE03, requires:
• The physical machine name for the STBDE03 node
• The database instance name on the STBDE03 node
• The net service name (a net service name must be used for RAC; it is stored in tnsnames.ora)
OWB on RAC requires multiple service names: one for the cluster as a whole, plus a service name for each node on the cluster. The OWB 10g R2 Installation Guide has a RAC section detailing this.
Finishing Installation of the Repository to a Node
Use the Repository Assistant as you normally would in a non-RAC setup. Enter the Net Service Name of node 1.
Rerunning the Repository Assistant to Register the Repository on All Other Nodes

Finishing OWB Repository Registration
Only One Database Parameter Specifically Recommended for RAC
• MAX_COMMIT_PROPAGATION_DELAY
– Change the default of 99 to 0.
– A value of 0 aligns the nodes for faster synchronization.
• This change is recommended, not required.
• For details, see MetaLink note 341963.1, Part 1, Page 51, item 13.
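As a sketch, the parameter change might be made from SQL*Plus as SYSDBA. The ALTER SYSTEM syntax below is standard, but the SCOPE clause and the need to restart the instances are assumptions to verify against your database version (the parameter belongs to 10g-era releases):

```sql
-- Sketch only: verify against your database version before running.
ALTER SYSTEM SET max_commit_propagation_delay = 0 SCOPE=SPFILE SID='*';
-- Restart the instances, then confirm the new value:
SHOW PARAMETER max_commit_propagation_delay
```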
rtrepos.properties Must Be Replicated to All Nodes
The file resides in the owb/bin/admin path.
Moving a Copy of rtrepos.properties to Each Node
• Connecting from node 1 to node 2 to copy the file
• Copying the file to node 2
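The copy can be sketched in shell. The block below demonstrates it locally, with temporary directories standing in for the two nodes' owb/bin/admin paths; on a real cluster you would use scp to the second node instead of cp, and all names here are illustrative assumptions.

```shell
# Sketch only: SRC and DEST stand in for the owb/bin/admin directories of
# node 1 and node 2. On a real cluster the copy would be done with scp,
# e.g.: scp .../owb/bin/admin/rtrepos.properties oracle@node2:.../owb/bin/admin/
SRC=$(mktemp -d)
DEST=$(mktemp -d)
printf 'demo.property=1\n' > "$SRC/rtrepos.properties"
# Replicate the file to the "other node" at the same relative path.
cp "$SRC/rtrepos.properties" "$DEST/rtrepos.properties"
ls "$DEST"
```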
OWB RAC Locations Use Net Service Names
Log in using the Net Service Name for RAC protection if running jobs. Define the OWB location using the Net Service Name.
Sample TNSNAMES.ORA File

Place the file in these three folders:
• ORACLE_HOME/network/admin
• OWB_INSTALLED_HOME/network/admin (for OWB Import)
• OWB_INSTALLED_HOME/owb/network/admin

SN_OWBRAC2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stbde04-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = stbde03-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = owbrac.us.oracle.com)
      (INSTANCE_NAME = owbrac2)
    )
  )

SN_OWBRAC1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stbde03-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = stbde04-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = owbrac.us.oracle.com)
      (INSTANCE_NAME = owbrac1)
    )
  )
RAC Deployment
Locations are defined using net service names, not host:port:service.
[Diagram: the design repository and control center deployed against the single logical instance.]
RAC Deployment: Alternate Node
[Diagram: the control center running on an alternate node of the single logical instance.]
Logging In to OWB Browser
Select the Service Node Report
Service Node Report Shows the Status of Nodes
[Screenshot: status of node 1 and node 2]
Disabling a Node
Disable node 1 by clearing the Enabled check box and clicking the Update Node Details button.
[Screenshot: node 1 and node 2]
Enabling a Node
Click Refresh. Node 2 is now the active node.
[Screenshot: node 1 and node 2]
Useful Diagnostics for OWB RAC Problems
• The main diagnostic is service_doctor.sql.
• show_service.sql shows whether the Control Center Service is available (running) or not available.
– You can determine the same by using the Service Node Report; if the node is marked as enabled, you know that the service is available.
• In SQL*Plus, select from user_jobs.
– DBMS_JOB checks every six minutes on RAC.
– It identifies the node on which each job runs.
• owb_collect.sql collects system information on the database and repository.
• Check for errors in defining and replicating tnsnames.ora.
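The user_jobs check mentioned above might look like the following in SQL*Plus; the column list comes from the standard USER_JOBS data dictionary view, so treat this as a sketch to adapt:

```sql
-- Sketch only: run in SQL*Plus as the repository owner.
-- INSTANCE shows which RAC instance the job is tied to; BROKEN and
-- FAILURES help spot jobs that have stopped running.
SELECT job, instance, broken, failures, next_date, what
  FROM user_jobs;
```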
Using SQL*Plus Scripts to Test Availability of Control Center Service
[Screenshot: on node 2, stopping the service reports "not available"; starting the service reports "available".]
Result of Not Replicating rtrepos.properties to a Node
Hint: Remember to replicate the rtrepos.properties file to all RAC nodes every time you create a repository by using the Repository Assistant.
Using OWB With or Without a Control Center Service
• You can execute OWB mappings with or without a Control Center Service (CCS).
• An RDBMS-only installation on RAC, without a CCS, is possible.
– With a CCS, you can:
— Use Control Center Manager
— Use process flows
— Run PL/SQL or SQL*Loader mappings
— Have transparent failover and auditing support
– Without a CCS, you can:
— Run only PL/SQL mappings
— Just save generated mapping code to a file and execute it later
— Still have transparent failover and auditing support
Further Study of RAC
For in-depth study of RAC, consider these Oracle University
courses:
• Oracle Database 10g: Real Application Clusters
(D17276GC10)
• Oracle 10g Database: RAC Deployment Workshop
(D44424GC10)
• Oracle Database 10g: RAC Basic Concepts and
Architecture Seminar (D44422GC10), an in-class one-day
course
• Oracle Database 11g: RAC Administration (D50311GC11)
• Oracle Database 11g: RAC Overview and Architecture
Seminar (D53954GC10)
Summary
In this appendix, you should have become familiar with:
• Devising a plan for installing and configuring OWB in your
RAC environment
• Using Oracle Universal Installer and the OWB Repository
Assistant to install the OWB repository and register it on all
cluster nodes
• Replicating files among nodes as necessary
• Changing a database configuration parameter to enable
nodes to synchronize more quickly
• Using Repository Browser to monitor node activity and
enable or disable nodes
• Locating log files and collecting other information to
troubleshoot node failures and monitor recovery