High Availability Architecture


Microsoft.com
Design for Resilience
The Infrastructure of www.microsoft.com, Microsoft Update, and the Download Center

Sunjeev Pandey
Senior Director, Microsoft.com Operations
[email protected]

Paul Wright
Technology Architect Manager, Microsoft.com Operations
[email protected]
Agenda
- Microsoft.com Introduction
- Size and Scale
- Network and System Architecture
- How Do We Do It?
- Questions
A Brief History Of Microsoft.com
- 1995 – Microsoft launches www.microsoft.com; information & support publishing; hosting (30K users/day)
- 2001 – Standardization effort begins; consolidation of hosted systems; Microsoft combines Web platform, ops, and content teams (4M unique users/day)
- 2003 – Single MSCOM group formed: brand, content, site standards, privacy, brand compliance; product info, support, dev/ITPro experience, customer intelligence, profile mgmt & enterprise downloads (6.5M unique users/day)
- 2006 – Focus on MSCOM network programming and campaign-to-Web integration; enable an innovative customer experience online & in-product (17.1M unique users/day)
History Of Microsoft.com for Geeks
[Timeline chart: OS milestones – Win2k SP2 (5/01), Win2k3 (4/02), Win2k3 SP1 (3/04), Win2k3 x64 (5/05), Longhorn (5/06) – plotted against major incidents and YTD site availability]

- 2001 (99.5% YTD availability): DNS Outage 1/01; Nisqually Earthquake 2/01
- 2002 (99.62%): Cisco CEF Bug 10/02; Network Failures 10/02, 11/02
- 2003 (99.70%): SQL Slammer 1/03; Network Disruption 3/03; DoS Attacks 6/03 (x2), 7/03, 8/03 (x2); Blaster 8/03; Network Failure 8/03
- 2004 (99.78%): Doomjuice/MyDoom 2/04; Application Failure 3/04; Major DNS Issue 3/04; Network Failure 4/04; CDN Disruption 6/04; DoS Attacks 7/04, 8/04; DC Power Issue 7/04; Regional Power Outage 7/04; Power Grid Failure 10/04; Network Failures 12/04 (x2); Hardware Failure 12/04
- 2005 (99.80%): DoS Attack 3/05; Network Failure 4/05; PDU Failure 4/05; DC Power Issue 5/05; International Network Disruptions 5/05, 9/05; DoS Attacks 6/05, 9/05, 10/05, 11/05; Network Failures 6/05, 9/05; CDN Disruptions 9/05, 10/05
- 2006 (99.87% YTD): CDN Disruption 2/06; Publishing Error 2/06; DoS Attacks 3/4/06, 9/06; Network Failures 5/06 (x2), 6/06, 8/06 (x3); Application Failure 9/06
Resiliency vs. Disaster Recovery

Characteristic      Disaster Recovery   Resiliency
Approach            Reactive            Proactive
Configuration       Static              Dynamic
Type of Failover    Manual              Automatic
Data Protection     Backup/Restore      Data Mirroring

Pros of resiliency: increased availability; improved performance
Cons: higher initial costs; more complexity
Microsoft.com Operations Team
[Diagram: the operations team at the center of its internal and external relationships]
- Operations: 24x7 tier 1–4 support, engineering & debug
- Internal customers: MXPS, MSDN, TechNet, Security, Windows, Vlabs, Microsoft Update, Media, ICPs
- External customers & partners: end users, vendors, business partners
- Evangelism: EBCs & MCS partners; publications & studies; share best practices
- Hosting services: partner with providers; drive standards; manage costs
- MS product teams: consulting & support; early adopter program; production behavior studies; bugs – find, report & run; ship partners
Microsoft.com Corporate Reach

Reach Overview – June 06
- #6 overall site in U.S.; 55.7M UU for 36% reach*
- #4 site worldwide; reaching 248.5M UU**
- Avg 280M UU/month, July 05 to June 06

Reach Surpasses All Corporate Sites
- Apple ranked #22: 17.8M UU, 11.5% reach
- Netscape ranked #67: 9.6M UU, 6.2% reach
- Sony ranked #217: 3.9M UU, 2.6% reach
- SUN ranked #307: 3.1M UU, 2.0% reach
- IBM ranked #485: 2.1M UU, 1.4% reach
(US data provided for relative comparison*)

*Nielsen/NetRatings June 2006 (unique users in millions)
**Worldwide data from comScore Media Metrix June 2006 (unique users in millions)
Microsoft.com – Quick Facts

Infrastructure and Application Footprint
- 6 Internet data centers & 3 CDN partnerships
- 120+ web sites, 1,000s of apps, and 2,138 databases
- 120+ Gigabit/sec bandwidth

Solutions at High Scale
- www.Microsoft.com
  - 17.1M unique users/day & 70M page views/day
  - 10K req/sec, 300K concurrent connections on 80 servers
  - 350 vroots, 190 IIS web apps & 12 app pools
- Microsoft Update
  - 250M unique scans/day, 18K ASP.NET req/sec, 1.1M concurrent
  - 28.2 billion downloads for CY 2005
  - Egress – MS, Akamai & Savvis (30–100+ Gbit/sec)
Web Site Availability
- Externally measured by Keynote Systems, Inc.
- Benchmark against other large sites
- Driving cross-team maturity – positive trend in availability:
  - 2003 – 99.70
  - 2004 – 99.78
  - 2005 – 99.83
  - 2006 – 99.87 YTD
[Screenshot: Operations Workbench Plugin]
[Chart: Keynote daily availability (%) and total errors per day for www.microsoft.com, Jan 1 – Aug 27, 2006]
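Those availability percentages map directly to allowed downtime. A quick sketch, in plain Python with the figures from the slides, of what each yearly number means in downtime hours:

```python
# Convert yearly availability percentages into downtime.
# Figures taken from the slides; 2006 is YTD.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

availability = {2003: 99.70, 2004: 99.78, 2005: 99.83, 2006: 99.87}

for year, pct in sorted(availability.items()):
    downtime_min = (1 - pct / 100) * MINUTES_PER_YEAR
    print(f"{year}: {pct}% -> {downtime_min:.0f} min/yr (~{downtime_min / 60:.1f} h)")
```

At 99.70% that is roughly 26 hours of downtime a year; at 99.87% it drops to about 11.4 hours, which is why each hundredth of a point is hard-won.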
Web Site Availability – Total Errors YTD
- Content Errors: 373
- Connection Timed Out: 81
- Page Timeout: 27
- Cannot Find Server or DNS: 24
- DNS Lookup Failure: 24
- Connection Reset: 3
- Service Unavailable: 3
- Server Error: 1
Web Site Availability
- Total errors and daily availability of www.microsoft.com – '06 YTD
  - Constantly monitored and analyzed
  - Corrective actions taken as needed
- Total errors '06 YTD, grouped per error type
  - Content errors – the #1 hit on availability
  - Only 1.3% of the total errors were due to server issues (Service Unavailable; Server Error; Connection Reset)
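The 1.3% figure can be reproduced from the YTD counts in the error-type chart; a small Python check (counts taken from the slide):

```python
# YTD error counts per type, from the Total Errors YTD chart.
errors = {
    "Content Errors": 373,
    "Connection Timed Out": 81,
    "Page Timeout": 27,
    "Cannot Find Server or DNS": 24,
    "DNS Lookup Failure": 24,
    "Connection Reset": 3,
    "Service Unavailable": 3,
    "Server Error": 1,
}

total = sum(errors.values())  # 536 errors YTD
server_issues = ["Service Unavailable", "Server Error", "Connection Reset"]
server_share = sum(errors[k] for k in server_issues) / total

print(f"total errors: {total}")
print(f"server-issue share: {server_share:.1%}")  # 7 of 536 -> 1.3%
```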
Resilient Against What?
[Diagram: threats mapped onto the end-to-end infrastructure]
- Threats: security (virus, DDoS attack, unauthorized access), power/cooling, ISP/telco, infrastructure, HW/SW failure, system/data corruption, application
- Infrastructure scope: multiple ISPs (ISP 1–6, x, y), backbone, data center, distribution networks, CDN partners (Savvis, Akamai), extranet dependencies
- Functions at risk: content creation, content rendering, content propagation, backups, SQL, admin & monitoring, auth, security scanning, application development, application testing, reporting

Provide Predictable Service
Balance three competing goals: cost, availability, and performance.
Infrastructure Architecture Technologies
[Diagram: technologies mapped onto the same end-to-end infrastructure]
- Internet edge: broad peering across multiple ISPs, BGP, DDoS protection, GLBS (global load balancing), DNS, caching, WALB
- Distribution network: HSRP, OSPF, spanning tree
- Front-end and back-end networks: clustering, WLB
- CorpNet: content creation, content rendering, content propagation, backups, SQL, admin & monitoring, auth, security scanning, application development, application testing, reporting, extranet dependencies
- CDN partners: Savvis, Akamai
High Availability Architecture
- Global Solutions & Networking
[Diagram: Internet front door across Data Center 1 and Data Center 2]
- CDN partnerships
  - Edge cache: Akamai & Savvis
  - Cluster load balancing
  - FirstPoint (Akamai)
  - ITM (Savvis)
- Cisco Guard devices (per data center)
  - Packet filtering
  - Anomaly tracking
- Access layer networking
  - Allow ports 80 & 443
  - Dedicated LANs for unique requirements
  - "Cookie cutter" design
High Availability Architecture
- Global Solutions & Networking
- Global Solutions
  - Content caching partners: Akamai & Savvis
  - Global load balancing via DNS – web cluster level mgmt
  - Health checking and automatic fail-over
- Security Infrastructure
  - Cisco Guards – anomaly detection & DoS filtering
  - Router ACLs allow HTTP/S only – exceptions require review
- Router Architecture – Cookie Cutter
  - Redundant router and switch pairs with VLAN segregation
  - Simple, scalable, manageable, repeatable
  - Agility – quickly repurpose VLANs as required
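The deck doesn't show how the DNS-level health checking works internally. As a minimal illustration (hypothetical VIPs and probe URL, not MSCOM's actual implementation), a global load balancer answers DNS queries only with clusters that pass an HTTP health probe:

```python
# Sketch: DNS-level global load balancing with health checks.
# Cluster VIPs and the probe path are made up for illustration.
import urllib.request

CLUSTERS = {
    "dc1-cluster": "203.0.113.10",  # TEST-NET placeholder addresses
    "dc2-cluster": "203.0.113.20",
}

def is_healthy(vip: str, timeout: float = 2.0) -> bool:
    """Probe the cluster's keep-alive page; HTTP 200 counts as healthy."""
    try:
        with urllib.request.urlopen(f"http://{vip}/keepalive.htm", timeout=timeout) as r:
            return r.status == 200
    except OSError:
        return False

def resolve(healthy_fn=is_healthy):
    """Return the VIPs a GLB DNS server would hand out: healthy clusters only."""
    alive = [vip for vip in CLUSTERS.values() if healthy_fn(vip)]
    # If every probe fails, fail open and return all VIPs rather than nothing.
    return alive or list(CLUSTERS.values())
```

A failed cluster simply disappears from DNS answers until its probe passes again, which is the "automatic fail-over" the slide refers to.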
Enhanced DDoS Protection
[Diagram: Cisco Guard scrubbing – SYN flood traffic is filtered while valid traffic passes through]
High Availability Architecture
- Web & Database Hosting
[Diagram: web and database clusters across Data Center 1 and Data Center 2]
- Web tier: ASP.NET, ASP, HTML; NLB load balancing; minimum 3 servers per cluster; synchronized code & content; read-only or read/write
- Data Center 1: Clusters 1–3 plus redundant web services on an NLB cluster; Data Center 2: Clusters 4–6, likewise fronted by redundant web services on an NLB cluster
- Databases: online read/write databases kept in sync across data centers via peer-to-peer replication; database mirroring (online principal, offline mirror); log shipping to a secondary cluster
High Availability Architecture
- Web & Database Hosting
- Standard hosting models
  - Agility – quickly reallocate from system to system
  - Efficiency – less staffing & equipment required
- Consistent configurations
  - Repeatable infrastructure architecture
High Availability Architecture
- Web & Database Hosting
- Server configurations
  - Standard server hardware – flexibility
  - Identical baseline OS, IIS, and ASP.NET configurations
  - Build scripts for consistent site builds
  - Application code & content unique per site
  - File, registry, service, and local security attributes collected for configuration auditing and reporting
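As a rough sketch of the auditing idea (the slide names file, registry, service, and local-security attributes; this hypothetical example covers only the file portion, not MSCOM's tooling), a baseline of file sizes and hashes can be snapshotted and later diffed to detect configuration drift:

```python
# Sketch: collect file attributes for configuration auditing.
# Paths and baseline format are illustrative.
import hashlib
import os

def snapshot(root: str) -> dict:
    """Map each file under root to (size, sha256) for later comparison."""
    baseline = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            baseline[os.path.relpath(path, root)] = (os.path.getsize(path), digest)
    return baseline

def drift(old: dict, new: dict) -> dict:
    """Report files added, removed, or changed between two snapshots."""
    return {
        "added": sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": sorted(k for k in old.keys() & new.keys() if old[k] != new[k]),
    }
```

Run `snapshot` after each site build, store the result, and diff against the next run; any unexpected entry in the report is a configuration change to investigate.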
High Availability Architecture
- Web & Database Hosting
- Network Load Balancing (NLB) clusters
  - Main load balancing solution today
  - Server cluster sizes: 3–8 servers/cluster
  - Positives:
    - Easy mgmt – knowledge within team
    - Free with Windows SKUs
  - Challenges:
    - Switch overhead
    - Connection affinity
    - Application layer switching
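The connection-affinity challenge is easiest to see with a toy model: every NLB host applies the same deterministic hash to the client's source address and serves only "its" clients. This is an illustration of the idea, not NLB's actual filter algorithm:

```python
# Toy model of NLB-style single affinity: all hosts run the same
# deterministic hash over the source IP; each serves only its share.
# Not the real NLB algorithm -- just the affinity concept.
import zlib

def owner(client_ip: str, hosts: list) -> str:
    """Pick the cluster host responsible for this client IP."""
    h = zlib.crc32(client_ip.encode())
    return hosts[h % len(hosts)]

hosts = ["web01", "web02", "web03"]

# The same client IP always maps to the same host (affinity holds)...
assert owner("198.51.100.7", hosts) == owner("198.51.100.7", hosts)
# ...but a client whose source IP changes mid-session (proxy farms,
# NAT pools) can land on a different host, breaking session state.
```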
High Availability Architecture
- Web & Database Hosting
- Hardware load balancing
  - Limited use today, for app layer load balancing
  - Future – greater adoption for non-NLB features
  - Positives:
    - App layer load balancing
    - Connection affinity
  - Challenges:
    - Added complexity/risks
    - Costs – hardware & people
High Availability Architecture
- Collecting, Monitoring, & Reporting
[Diagram: a tools services layer built on a MOM core, integrating SMTP, IMQ, IIS Log Monitor, GAL, Cluster Sentinel, SE Annotations, Perf, IAdmin, Keynote, AD, and Cisco Guard]
High Availability Architecture
- Remote Server Management
- Integrated Lights-Out (iLO) from HP
  - Cold reboot
  - Power on/off
  - Debugging over iLO – no more crash cart
  - Imaging for dogfood OS builds
  - RDP over iLO
  - Movement to a "lights out" datacenter
Global Load Balancing & Caching
- Health checking and fail-over
  - Automated pulling of clusters to watermark
  - Removal on demand for maintenance
- Load shaping & distribution
  - Control load percentages to specific clusters
  - Region-specific traffic distribution
- Distributing patches/files to 300M+ clients
  - Partnership with 3 providers: Akamai, Savvis, & MSN
  - Load distributed via load balancing
  - Functions via DNS resolution and custom logic from the CDNs
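"Control load percentages to specific clusters" amounts to weighted selection at DNS resolution time. A minimal weighted-choice sketch, with made-up cluster names and weights (the real system does this inside the GLB DNS service):

```python
# Sketch: shape load by handing out cluster VIPs according to weights.
# Cluster names and percentages are illustrative.
import random

def pick_cluster(weights: dict, rng=random) -> str:
    """Choose one cluster, proportionally to its configured percentage."""
    clusters = list(weights)
    return rng.choices(clusters, weights=[weights[c] for c in clusters], k=1)[0]

# e.g. drain dc2 for maintenance by dialing its weight toward zero:
weights = {"dc1": 70, "dc2": 20, "cdn": 10}
```

Lowering a cluster's weight to 0 drains new traffic away from it without touching the cluster itself, which is the on-demand removal the slide describes.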
Global Load Balancing & Caching
- Intelligent Load Balancing
[Diagram: GLB DNS directing clients to healthy clusters; a failed cluster is pulled from rotation while the remaining clusters continue serving at 100%]
Global Load Balancing & Caching
- Geo Targeting
- Load shaping based on client resolver location
  - Direct traffic to particular clusters or a caching provider as appropriate
  - Customer experience enhanced by improved local proximity
- Load shaping based on client location
  - CDN provider proxies requests – responds with a file based on the location of the client
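Geo targeting keys off the location of the client's DNS resolver. A simplified sketch of the idea, with a hypothetical prefix-to-region table (production systems use full GeoIP databases, not two prefixes):

```python
# Sketch: map a resolver's IP to a region, then to the nearest cluster.
# Prefix table and cluster names are made up for illustration.
import ipaddress

REGION_PREFIXES = {
    "198.51.100.0/24": "us-west",
    "203.0.113.0/24": "europe",
}
REGION_CLUSTER = {"us-west": "tukwila", "europe": "frankfurt-cdn"}

def region_of(resolver_ip: str, default: str = "us-west") -> str:
    """Longest-table lookup of the resolver's region; fall back to default."""
    addr = ipaddress.ip_address(resolver_ip)
    for prefix, region in REGION_PREFIXES.items():
        if addr in ipaddress.ip_network(prefix):
            return region
    return default

def target_cluster(resolver_ip: str) -> str:
    return REGION_CLUSTER[region_of(resolver_ip)]
```

One caveat the slide hints at: the *resolver's* location is a proxy for the client's, so clients using a distant resolver can be misrouted; the second bullet (CDN proxying by actual client location) avoids that.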
SQL Server 2005
Peer-To-Peer Replication
[Diagram: SQL A–D in peer-to-peer replication across Data Center A (Clusters A & B, NLB 1) and Data Center B (Clusters C & D, NLB 2), with DNS in front]
- Redundancy – each server hosts a copy of the database
- Availability – individual servers can be patched/upgraded without causing database availability issues
- Performance – application calls are load balanced between nodes of the cluster for improved scale-out
- Zero perceived app downtime
- Eliminates the single point of failure for R/W databases
- Considerations:
  - Object names, object schema, and publication names should be identical
  - Publications must allow schema changes to be replicated
  - Updates for a given row should be made only at one database until it has synchronized with its peers
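The last consideration – update a given row at only one node until it has synchronized – is typically enforced in the application by partitioning writes, e.g. giving each key a "home" node. A hypothetical sketch of that routing discipline (this is application-side convention around peer-to-peer replication, not SQL Server itself):

```python
# Sketch: avoid peer-to-peer update conflicts by giving every row key a
# single "home" node for writes; reads can go to any peer.
# Node names are illustrative.
import hashlib

NODES = ["SQL_A", "SQL_B", "SQL_C", "SQL_D"]

def home_node(row_key: str) -> str:
    """Deterministically assign writes for this key to one peer."""
    digest = hashlib.md5(row_key.encode()).digest()
    return NODES[digest[0] % len(NODES)]

def route(op: str, row_key: str, preferred_reader: str = "SQL_A") -> str:
    # Writes converge on one node until replication catches up;
    # reads may use any peer (a fixed preference here, for simplicity).
    return home_node(row_key) if op == "write" else preferred_reader
```

Because the hash is deterministic, concurrent writers to the same row always land on the same peer, so the replication stream never has to reconcile conflicting updates to one row.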
Scaling Out – Real World Implementation
[Diagram: a publishing web server writes to the R/W DB, which log-ships to a secondary and replicates over the WAN to read-only NLB SQL clusters A, B, and C]
- Data center and geo redundancy
- Scalable units
- Content publishing
- WAN replication
- End-to-end monitoring
Comparative Study: x86 vs. x64
CPU Utilization Per Platform

Platform   HTTP Req/Sec   CPU %
x86        222            65%
x64        216            35%

Key Takeaways
- Huge gains due to 64-bit H/W & Windows platforms
- Seamless migration provided by WoW64
- Enabled www.Microsoft.com to use the saved infrastructure to enable data center redundancy
- App pool recycles eliminated – enjoying the new 4GB VM address space running under WoW64
- Enabled more app pools, driving further isolation of code & content in shared hosting models
Windows 32-bit vs. 64-bit Comparison
Comparative Study Results – Windows Update Download System Perf
Test case: 64-bit hardware running 32-bit vs. 64-bit Windows

Metric                           W2K3 Enterprise SP1 (32-bit)   W2K3 Enterprise x64
Mbits/Sec Avg                    784                            976
Concurrent Connections Avg       15,746                         13,600
Get Req/Sec Avg                  2,000                          3,400
Get Req/Sec Max                  2,200                          6,800
CPU Avg                          32%                            60%
Application Process (VM Usage)   2GB                            3.2GB
HTTP 500 Errors                  2%                             0%

Scenario
- Stress generated by live HTTP traffic from Windows Update downloads
- 32-bit application processes were bottlenecked by the 2GB virtual memory limit, vs. the 4GB available to the process on the 64-bit operating system, enabling max Mbits/sec
- Improved compute times on 64-bit increased req/sec while lowering concurrent connections (i.e., improved HTTP request processing times)
Windows 64-bit Analysis
Comparative Study Results: www.Microsoft.com Perf
Objective: stress a live production server to identify the max ability to serve HTTP traffic from www.Microsoft.com client requests
Test case: 64-bit hardware running 64-bit Windows (Windows Server 2003 Enterprise x64 Edition)

Results
Concurrent Connections Avg     11,697
Connection Attempts/Sec Avg    430
Connection Attempts/Sec Max    577
Get Req/Sec Avg                778
Get Req/Sec Max                956
CPU Avg                        96%
Questions?

Resources
- http://blogs.technet.com/mscom
- http://blogs.msdn.com/mscomts

Appendix
R/O NLB SQL Cluster
- Redundancy – each server hosts a copy of the database
  - SQL1 – Read/Write
  - SQL2 & SQL3 – Read-Only
- Availability
  - Individual servers can be patched/upgraded without causing database availability issues
- Performance
  - Application calls are load balanced between nodes of the cluster for improved scale-out
[Diagram: Web Cluster A reads from R/O SQL NLB Cluster A (SQL2, SQL3); 1-way replication from SQL1 (R/W)]
R/W NLB SQL Cluster
- Redundancy – each server hosts a copy of the database
  - SQL1 – Read/Write – consolidator
  - SQL2 – primary Read/Write (active)
  - SQL3 – log-shipping secondary (standby)
- Availability
  - Single point of failure
  - Manual failover – takes minutes to complete
- Performance
  - Application calls to a database are not load balanced between the nodes of the cluster
[Diagram: Web Cluster A writes to R/W SQL NLB Cluster A; SQL2 (active) in 2-way replication with SQL1; log shipping to SQL3 (secondary)]
Mirroring (SQL 2005 SP1)
- Database mirroring
  - Highest availability writes
  - Log shipping for DC redundancy
  - Reduced failover downtime from a 10 min avg to <1 min (planned)
- Considerations:
  - Works on a per-database basis, for DBs in the full recovery model
  - Only one database is available to clients at any time
  - Supports two partners and an optional "witness" server for automated failover
[Diagram: SQL Principal and SQL Mirror behind NLB 1; Clusters A & B read/write; P2P replication; legend: sync/async transactions]
TCP Window Size – How it Works
[Diagram: water-bucket analogy comparing how quickly each approach fills the client's bucket – classic TCP, classic TCP + CDN, and Receive Window Auto-Tuning]
TCP Improvements – Client Testing
What Exactly Changed?
- Compound TCP (CTCP) – controls the TCP sending window size; interesting when Longhorn is the server
- Receive Window Auto-Tuning – controls the TCP receive window size; interesting when Vista is the client
Test Scenario
- Clients: dual-boot client (XP SP2 & Vista 5308)
- Test: download (EN W2K SP4, ~135MB) from 4 locations (Tukwila, Bay, Florida & Frankfurt)
Results
- Corporate network environment – direct Internet connectivity (high speed, low packet loss)
  - 5–7% relative speed gain in low-latency scenarios (2–20ms RTT)
  - >150% relative speed gain in mid-to-high-latency scenarios (80–180ms RTT)
- Home network environment (Comcast cable modem)
  - ~40% relative speed gain (16–330ms RTT)
[Chart: XP vs. Vista download times (seconds) from Tukwila (16ms RTT), Bay (30ms RTT), Tampa, FL (100ms RTT), and Frankfurt, Germany (160ms RTT)]
TCP/IP Throughput Improvements
- Server-to-server transfer over a 20ms RTT link:
  - W2K3 → W2K3: 10–12 Mbps
  - Longhorn → Longhorn: >300 Mbps
- Vista client Internet download speeds at 160ms RTT: >2x
[Chart: Frankfurt download BytesPerSec over time – Vista vs. XP]
[Chart: Frankfurt download RcvdWindowSize over time – Vista vs. XP]
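These numbers follow from the bandwidth-delay product: sustained TCP throughput is capped at window size divided by round-trip time. A quick check of why a classic fixed ~64KB receive window cannot go much beyond ~26 Mbps over a 20ms link, and how large a window the >300 Mbps Longhorn result implies:

```python
# Bandwidth-delay product: max TCP throughput = window_bytes / rtt_seconds.
def max_mbps(window_bytes: int, rtt_ms: float) -> float:
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# Classic fixed 64 KB window over a 20 ms RTT link:
classic = max_mbps(64 * 1024, 20)   # ~26 Mbps theoretical ceiling; the
                                    # observed 10-12 Mbps sits below it
                                    # once loss and overhead bite.

# Window implied by the >300 Mbps Longhorn-to-Longhorn result:
needed = 300e6 / 8 * 0.020          # ~750,000 bytes (~730 KB), far above
                                    # 64 KB -- hence Auto-Tuning.
print(f"64KB/20ms cap: {classic:.1f} Mbps; window for 300 Mbps: {needed / 1024:.0f} KB")
```

The same arithmetic explains the client results: at 160ms RTT the required window is 8x larger still, which is why Vista's Receive Window Auto-Tuning shows its biggest gains on high-latency paths like Frankfurt.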