
FUNDAMENTALS OF NETWORKING
FOR BUSINESS CONTINUANCE
Disaster Recovery Overview
In a Nutshell
Elements of the Solution
• Site Selection: routing end users to applications; different site selection mechanisms
• DR/BC Metrics: policies are adopted after a Business Risk Assessment to determine the tolerance for data loss and recovery time; the metrics that measure business impact are RTO, RPO, and RAO
• Data Center Interconnect: campus, metro, regional, and national
• Data Protection: continuous data protection; array-based data replication, synchronous and asynchronous
Enterprise RPO, RTO and RAO Policy
• Recovery Point Objective (RPO)
What is the cost and impact of data loss?
How much data loss is tolerable in event of disaster or failure?
• Recovery Time Objective (RTO)
What is the maximum tolerable outage?
When must operations resume after a disaster?
• Recovery Access Objective (RAO)
How long to access recovered data and applications?
RPO + RTO → measurable targets for BC/DR, and the underlying Data Center, Application, and Storage
RAO → measurable target for underlying Network Infrastructure convergence and client access to Applications in the Data Center
Recovery Time Objective and Recovery Point Objective

[Figure: timeline with a disaster event, a recovery point (time t0, critical data is recovered) behind it, and a recovery time (time t1 through t2, systems recovered and operational) after it.
Recovery point: how current or fresh is the data after recovery? From days of data loss to seconds: tape backup (days), periodic replication (hours to minutes), asynchronous replication (minutes to seconds), synchronous replication (seconds), with cost increasing as the recovery point approaches zero data loss.
Recovery time: how quickly can systems and data be recovered? From seconds to weeks: extended cluster (seconds to minutes), then manual migration and tape restore (hours, days, weeks), with cost increasing as the recovery time shrinks.]
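To make the cost/objective trade-off concrete, here is a minimal sketch of picking the cheapest point on this continuum that still meets given RPO/RTO targets. The option list, the second-granularity figures, and the cost ranks are illustrative assumptions, not vendor data.

```python
# Illustrative RPO/RTO continuum (values are rough assumptions, in seconds).
# relative_cost is an arbitrary ranking, not pricing data.
OPTIONS = [
    # (name, achievable RPO, achievable RTO, relative cost)
    ("tape backup + manual restore", 86400, 86400 * 7, 1),
    ("periodic replication", 3600, 86400, 2),
    ("asynchronous replication", 60, 3600, 3),
    ("synchronous replication + extended cluster", 0, 60, 4),
]

def cheapest_option(rpo_target_s: float, rto_target_s: float):
    """Return the lowest-cost option whose RPO and RTO meet the targets."""
    feasible = [o for o in OPTIONS if o[1] <= rpo_target_s and o[2] <= rto_target_s]
    return min(feasible, key=lambda o: o[3]) if feasible else None

# Example: tolerate at most 5 minutes of data loss and 1 hour of downtime.
print(cheapest_option(rpo_target_s=300, rto_target_s=3600))
# -> ('asynchronous replication', 60, 3600, 3)
```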
Recovery Access Objective (RAO)
User to Applications

[Figure: timeline, disaster strikes at t1; systems recovered and operational at t2; users accessing the recovered, operational systems at t3.]

• (t2) Recovery Time Objective
• (t3 – t2) Recovery Access Objective: the networks have converged to provide a path to the applications and data
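A minimal sketch of how these objectives fall out of the timeline above, treating t1 as the zero point; the timestamps are hypothetical.

```python
from datetime import datetime

# Hypothetical event timeline for one incident.
t1 = datetime(2005, 5, 11, 9, 0)    # disaster strikes
t2 = datetime(2005, 5, 11, 13, 0)   # systems recovered and operational
t3 = datetime(2005, 5, 11, 13, 20)  # users reach the recovered applications

recovery_time = t2 - t1    # measured against the Recovery Time Objective
recovery_access = t3 - t2  # measured against the Recovery Access Objective

print(f"recovery time:   {recovery_time}")    # 4:00:00
print(f"recovery access: {recovery_access}")  # 0:20:00
```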
DATA CENTER INTERCONNECT
Data Center Interconnect Options

[Figure: a data center with a campus core (GE), data center core, aggregation and access layers down to servers (including IBM mainframes), plus a storage core and access layer. Interconnect options toward the DC interconnect WAN: a Metro Ethernet network, a SONET/SDH network, and a DWDM network carrying 1/2 Gb FC/FICON and IBM GDPS traffic.]
Data Center Transport
Interconnect Technologies for IP Transport

By increasing distance (data center → campus → metro → regional → national):
• Dark fiber: GigE over optical
• CWDM: 2 Gbps
• DWDM: 2 Gbps per lambda
• SONET/SDH: multi-service
• FCIP over T1/E1, T3/E3, HSSI, ATM, PoS
• iSCSI over T1/E1, T3/E3, HSSI, ATM, PoS
Data Center Transport
Interconnect Technologies for SAN Extension

By increasing distance (data center → campus → metro → regional → national):
• Dark fiber: synchronous; distance dependent on available BB_Credits
• CWDM: synchronous (2 Gbps)
• DWDM: synchronous (2 Gbps per lambda); BB_Credit spoofing for extended distance
• SONET/SDH: synchronous, and asynchronous (< OC-12/STM4)
• SONET/GigE (Metro Ethernet): synchronous, and asynchronous (1 Gbps+)
• FCIP over various WAN transports: asynchronous (< OC-12/STM4 down to < DS3/E3)
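The BB_Credit dependence above can be made concrete: each buffer credit lets one frame be in flight, so the sustainable distance is roughly the number of credits times the length one frame occupies on the fiber, halved for the round trip of the credit return. A back-of-the-envelope sketch, assuming full-size 2148-byte frames, 8b/10b encoding, and ~5 µs/km propagation in fiber:

```python
# Rough maximum synchronous distance before BB_Credit starvation.
# Assumptions are illustrative: full-size 2148-byte frames (2112B payload
# plus headers/CRC/delimiters), 8b/10b encoding (10 bit-times per byte),
# and ~5 us/km propagation in fiber (~200 m/us).

def max_distance_km(bb_credits: int, line_rate_gbaud: float,
                    frame_bytes: int = 2148) -> float:
    """Distance at which bb_credits frames exactly fill the round trip."""
    frame_time_us = frame_bytes * 10 / (line_rate_gbaud * 1e3)
    fiber_km_per_us = 0.2
    frame_length_km = frame_time_us * fiber_km_per_us
    return bb_credits * frame_length_km / 2  # halve for the R_RDY return

print(f"{max_distance_km(255, 2.125):.0f} km")  # ~258 km at 2G with 255 credits
```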
CWDM: Coarse WDM

[Figure: eight CWDM wavelengths (1470, 1490, 1510, 1530, 1550, 1570, 1590, 1610 nm) carried between a mux/demux pair.]

• "Colored" CWDM SFPs (or GBICs) used in FC switches (no transponder)
• Optical multiplexing done in a CWDM OADM (optical add/drop multiplexer): a passive (unpowered) device, just mirrors and prisms
• Up to 30 dB power budget (36 dB typical) on SM fiber: ~100 km point-to-point or ~40 km ring
• Provides for client protection
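A quick sanity check on the reach figure, assuming ~0.25 dB/km fiber attenuation in the CWDM band and a fixed allowance for connector and OADM losses; both loss numbers are illustrative assumptions:

```python
# Estimate CWDM reach from the optical power budget.
# Assumed losses are illustrative: ~0.25 dB/km fiber, 4 dB total for
# connectors and the OADM pair.

def reach_km(power_budget_db: float, fixed_losses_db: float = 4.0,
             fiber_db_per_km: float = 0.25) -> float:
    return (power_budget_db - fixed_losses_db) / fiber_db_per_km

print(f"{reach_km(30):.0f} km")  # ~104 km, in line with ~100 km point-to-point
```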
DWDM: Dense WDM
• Higher density than CWDM
32 or more (protected) lambda channels in a narrow band
around 1550 nm at 100 GHz spacing (0.8 nm)
EDFA amplifiable → longer distances
Carries 1, 2, 4 Gbps FC, FICON, GigE, 10GigE, ESCON, IBM GDPS
• Data Center to Data Center
• Protection options: client, splitter, or linecard
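The 100 GHz ↔ 0.8 nm equivalence follows from Δλ ≈ λ²·Δf/c, as a one-line check shows:

```python
# Convert DWDM channel spacing from frequency to wavelength around 1550 nm.
c = 299_792_458.0  # speed of light, m/s
lam = 1550e-9      # center wavelength, m
df = 100e9         # channel spacing, Hz

dlam_nm = lam**2 * df / c * 1e9
print(f"{dlam_nm:.2f} nm")  # ~0.80 nm
```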
Coarse vs. Dense WDM

Characteristic        Coarse (CWDM)                    Dense (DWDM)
Wavelengths           Max 8                            >8 (32 or more protected)
Spacing               20 nm (1470nm–1610nm)            0.8 nm
Amplifiable           Not w/ conventional EDFA         Yes
                      (1550nm only)
Cost                  Low                              High
Application           Metro access; campus             Large enterprise /
                      and data center                  service provider
Protection available  No                               Yes
Metro Ethernet Option
What Does Ethernet as a LAN/MAN/WAN Transport Offer?
• Ethernet becomes the ubiquitous interface: single
technology for LAN, MAN and WAN
• Efficient frame-based infrastructure: IP friendly
• Cost-effective interface with flexible bandwidth
offerings: 10/100/1000/10000 Mbps
• Geographical independence: Ethernet over Optical,
IP or MPLS
Ethernet Wire Service (EWS)

[Figure: CPE devices attach to the SP IP/MPLS/SONET/SDH network through non-service-multiplexed UNIs; PE routers carry the service across pseudowires, using 802.1Q tunneling with all-to-one bundling.]
• Defines a point-to-point, port-based service
• No service multiplexing—“all-to-one” bundling
• Transparent to customer BPDUs
• Routers and switches can safely connect
Ethernet Relay Service (ERS)

[Figure: CPE devices attach to the SP IP/MPLS/SONET/SDH network through service-multiplexed UNIs (802.1Q trunks); PE routers map customer VLANs onto pseudowires.]
• Defines a VLAN-based point-to-point service
(analogous to Frame Relay using VLAN tags as VC IDs)
• Service multiplexed UNI (e.g. 802.1Q trunk)
• Opaque to customer PDUs (e.g. BPDUs)
• Router as CPE edge device
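The Frame Relay analogy is easy to picture in code: each customer VLAN tag at the service-multiplexed UNI plays the role of a VC ID and maps to its own point-to-point pseudowire. A minimal sketch with hypothetical identifiers:

```python
# ERS analogy: VLAN tags at a service-multiplexed UNI act like Frame Relay
# VC IDs, each mapped to its own point-to-point pseudowire. All identifiers
# below are hypothetical.

ers_uni = {
    # vlan_id: (remote_pe, pseudowire_id)
    100: ("pe-east-1", 5100),  # data center interconnect
    200: ("pe-west-2", 5200),  # branch A
    300: ("pe-west-3", 5300),  # branch B
}

def forward(vlan_id: int) -> tuple[str, int]:
    """Pick the pseudowire for a frame arriving with this VLAN tag."""
    return ers_uni[vlan_id]

print(forward(200))  # ('pe-west-2', 5200)
```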
SITE SELECTION
Site Selection Technologies
Recovering the Front End Network
• Front End Network: DNS, RHI, and BGP
• DNS (application aware), on a content router: used for load distribution and proximity; active/active sites; different load distribution algorithms
• Route Health Injection (application aware), on a content switch: active/standby sites
• BGP (application unaware): load distribution using IP routing
Overview
Site Selection

[Figure: employees and customers/partners reach the data centers through the Internet or the WAN/intranet, steered by site selection. The primary and secondary data centers each have an FC SAN with RAID storage; their storage networks replicate over FCIP.]
Active/Standby

[Figure: data centers DC1 and DC2, each reachable through the Internet (ISP A, ISP B) and the corporate WAN. Each application can have a unique IP address: DC1 is primary for application 1 and secondary for application 2; DC2 is primary for application 2 and secondary for application 1.]
Active/Standby (cont.)
• Advantages
Typical Phase I deployment
Can be implemented without the intelligent site selection front end (GSLB)
• Disadvantages
Delayed failover: switchover is manual without GSLB
Underutilization of resources, since there is no load sharing
Active/Active

[Figure: data centers DC1 and DC2, each reachable through the Internet (ISP A, ISP B) and the corporate WAN. Each application has two IP addresses: both DC1 and DC2 are active for application 1 and for application 2.]
Active/Active (cont.)
• Advantages
Better use of resources due to load sharing
Quick failover with no manual intervention
• Disadvantages
Data mirroring in both directions
Session persistence needs special care
Load Distribution

[Figure: clients on the Internet (via ISP A and ISP B) and the corporate WAN send DNS queries; content routers at DC1 and DC2 resolve names and distribute load across the two data centers, each of which hosts a DNS server.]
Load Distribution Considerations
• Is the application stateful or stateless?
Stateful applications need dns source-ip-hash methods or
ACLs for static DNS mappings
Stateless applications are easier to implement
• Are the clients coming from a mega-proxy (NAT’ed)
environment?
This might break the dns source-ip-hash methods
Consider static DNS mappings with ACLs
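As a sketch of why mega-proxies break source-IP hashing: the site a client lands on is just a hash of its source address, so every client behind one NAT address maps to the same site. The hashing scheme below is generic, not a specific product's algorithm:

```python
import hashlib

SITES = ["dc1", "dc2"]

def pick_site(client_ip: str) -> str:
    """Stick a client to one site by hashing its source IP."""
    h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return SITES[h % len(SITES)]

# Distinct clients spread across the sites...
print(pick_site("192.0.2.10"), pick_site("198.51.100.7"))
# ...but every client behind one mega-proxy address lands on one site:
print({pick_site("203.0.113.5") for _ in range(1000)})  # a one-element set
```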
DATA REPLICATION
Data Replication Objectives
• Get the data to a recovery site – RPO
• Enable rapid restoration – RTO
• Facilitated by the SAN extension transport
Replication and Mirroring Alternatives
• Continuous Data Protection (CDP)
e.g. SANTap
• Disk replication
Transparent to host
Managed by disk subsystem
e.g. EMC SRDF, HP CA EVA, HDS TrueCopy, IBM PPRC, and others
Array Based Replication: Concept

[Figure: (1) the host server writes to the local intelligent storage array; (2) storage array software replicates the changes (writes) to the remote storage array. Replication normally involves two round trips per write over Fibre Channel.]

• Two arrays located on an extended Fibre Channel fabric
• Reads are served from the local array; changes (writes) are replicated to the remote array
• Replication is managed by software in the storage arrays; the host server is unaware of replication
• Implementations are proprietary: EMC SRDF, HDS TrueCopy, HP CA EVA, IBM PPRC, and others
• Multiple modes of operation
Replication: Modes of Operation
• Synchronous—all data is written to the cache of both the local and remote arrays before the I/O is complete and acknowledged to the host
• Asynchronous—the write is acknowledged after it reaches the local array cache; changes (writes) are replicated to the remote array asynchronously
Synchronous Replication: I/O Detail

[Figure: sequence diagram between host server, local storage array, and remote storage array across the SAN extension network.
1. Host → local array: Write, LUN=5, LBA=12345, DL=8kB; the local array answers Transfer Ready and the host sends FCP data (2kB frames)
2. Local array → remote array: Write, LUN=5, LBA=12345, DL=8kB; the remote array answers Transfer Ready (one round trip)
3. Local array sends FCP data (2kB frames); the remote array returns SCSI Status = good (a second round trip)
4. Local array → host: SCSI Status = good
The write I/O is complete at this point, with the local and remote arrays identical; the host's I/O service time includes both round trips across the SAN extension network.]
Asynchronous Replication: I/O Detail

[Figure: sequence diagram between host server, local storage array, and remote storage array across the SAN extension network.
1. Host → local array: Write, LUN=5, LBA=12345, DL=8kB; the local array answers Transfer Ready and the host sends FCP data (2kB frames)
2. Local array → host: SCSI Status = good. The response from the local array is returned independently of the replication process: the I/O is complete, but the arrays are not identical
3. Later, local array → remote array: Write, Transfer Ready, FCP data, SCSI Status = good (two round trips, outside the host's I/O service time)
The replication process and protocol are proprietary; the example shows one implementation.]
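A rough model of what the two diagrams mean for host write latency, assuming ~5 µs/km one-way propagation in fiber, the two-round-trip synchronous sequence shown above, and a made-up constant for array processing:

```python
# Host write service time vs. distance for the two replication modes.
# Illustrative assumptions: 5 us/km one-way propagation, a fixed 0.2 ms
# of array processing, and two round trips per replicated write as in
# the synchronous diagram above.

def write_service_time_ms(distance_km: float, synchronous: bool) -> float:
    rtt_ms = 2 * distance_km * 5e-3   # one round trip over the extension
    local_ms = 0.2                    # local array processing (assumed)
    if synchronous:
        return local_ms + 2 * rtt_ms  # host waits on both round trips
    return local_ms                   # async: acknowledged from local cache

for km in (10, 100, 500):
    print(f"{km:>3} km: sync {write_service_time_ms(km, True):5.2f} ms,"
          f" async {write_service_time_ms(km, False):.2f} ms")
```

The asynchronous line stays flat because the host is acknowledged from the local cache; distance instead shows up as replication lag, i.e. RPO.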
SAN Extension for Data Replication

[Figure: two FC fabrics joined across a SAN extension network.]

• Extend the normal reach of a Fibre Channel fabric
Shared data clusters
Remote host access to storage
Replication
• Transport options
FC over SONET
FC over IP (FCIP)
Optical (DWDM, CWDM)
Fibre Channel Write Acceleration (FC-WA)

[Figure: primary data center and DR data center linked by FC-WA, with an SSM module at each end.]

• Problem
Performance of DR/BC applications is inhibited by distance
• Solution
FC write acceleration, with an SSM module on both ends, overcomes the limitations of SCSI writes and minimizes application latency
• Primary applications
Synchronous replication
• Benefits
Extends distances for DR/BC applications
Up to 25% increase in performance for synchronous applications
Fibre Channel Write Acceleration

[Figure: without FC write acceleration, the initiator's WRITE crosses the link and waits for the target's XFER_RDY before sending DATA, then waits for STATUS: two round trips. With FC write acceleration, the local SSM module answers XFER_RDY immediately, so DATA follows the WRITE across the link and only the STATUS round trip remains: a reduction in latency of at least one round trip per I/O.]

• Requirements for FC write acceleration
SSM module
Both initiator and target must be directly attached to the SSM module
• Benefits of FC write acceleration
Improves response time for the storage applications
Extends distance for BC/DR applications without performance impact
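Extending the same latency model: write acceleration removes the XFER_RDY round trip, leaving one round trip per remote write instead of two. The numbers remain illustrative; actual gains depend on I/O size and distance.

```python
# Illustrative effect of FC write acceleration: the local SSM answers
# XFER_RDY, so a remote write costs one round trip instead of two.
# Propagation and processing constants are assumptions, as above.

def write_latency_ms(distance_km: float, accelerated: bool) -> float:
    rtt_ms = 2 * distance_km * 5e-3      # 5 us/km propagation (assumed)
    round_trips = 1 if accelerated else 2
    return 0.2 + round_trips * rtt_ms    # 0.2 ms array processing (assumed)

d = 100
without, with_wa = write_latency_ms(d, False), write_latency_ms(d, True)
print(f"{(1 - with_wa / without):.0%} lower latency at {d} km")  # ~45% here
```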
SAN Extension Design: High Availability

[Figure: sites A and B joined by multiple FC links bundled into a port channel.]

• Port channels increase resilience for high availability with FC or FCIP links
Appear as a single logical link (up to 16 member links)
Protect the fabric from network failure: route port channel member links over diverse geographic paths
Load balancing on a SRCID/DESTID or SRCID/DESTID/OXID basis (unidirectional, per VSAN)
The SCSI exchange is the smallest atomic unit, so frame order is kept intact
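A toy sketch of that load-balancing rule: hashing on SRCID/DESTID/OXID keeps every frame of one SCSI exchange on one member link (preserving order within the exchange) while spreading different exchanges across links. The hash itself is illustrative, not the switch's actual algorithm:

```python
# Toy SRCID/DESTID/OXID load balancing across port channel members.
# Frames of one exchange always hash to the same link (order preserved);
# different exchanges spread across the members.

MEMBER_LINKS = ["link0", "link1", "link2", "link3"]

def pick_link(src_id: int, dest_id: int, ox_id: int) -> str:
    return MEMBER_LINKS[hash((src_id, dest_id, ox_id)) % len(MEMBER_LINKS)]

# All frames of exchange 0x10 between one pair use a single link...
assert len({pick_link(0x010100, 0x020200, 0x10) for _ in range(100)}) == 1
# ...while many exchanges fan out over several links.
print({pick_link(0x010100, 0x020200, ox) for ox in range(32)})
```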
Summary
• Determine the right RPO, RTO, and RAO for your business needs
• Mechanisms for recovering the front end:
BGP
RHI
DNS
• Recovering the back end:
Data Replication & SAN Extension
• Transport options between Data Centers
CWDM
DWDM
SONET/SDH
Pure IP (e.g. IP VPN)
Metro Ethernet (Ethernet/GigE/10GigE)