
Proactive Surge Protection:
A Defense Mechanism
for Bandwidth-Based Attacks
Jerry Chou, Bill Lin
University of California, San Diego
Subhabrata Sen, Oliver Spatscheck
AT&T Labs-Research
USENIX Security Symposium, San Jose, USA, July 30, 2008
Outline
• Problem
• Approach
• Experimental Results
• Summary
Motivation
[Figure: US backbone map with nodes at Seattle, Chicago, Sunnyvale, Los Angeles, Denver, Kansas City, Houston, Indianapolis, Atlanta, Washington, and New York]
• Large-scale bandwidth-based DDoS attacks can quickly knock out substantial parts of a network before reactive defenses can respond
• All traffic that shares route links with attack traffic suffers collateral damage, even if it is not under direct attack
Motivation
• The potential for large-scale bandwidth-based DDoS attacks exists
• e.g. large botnets with more than 100,000 bots exist today that, combined with the prevalence of high-speed Internet access, can give attackers multiple tens of Gb/s of attack capacity
• Moreover, core networks are oversubscribed (e.g. some core routers in Abilene have more than 30 Gb/s of incoming traffic from access networks, but only 20 Gb/s of outgoing capacity to the core)
Example Scenario
[Figure: backbone topology with 10G links connecting Seattle, Sunnyvale, Kansas City, Indianapolis, Houston, Atlanta, and New York; Seattle/NY carries 3 Gb/s and Sunnyvale/NY carries 3 Gb/s]
• Suppose that under normal conditions:
 Combined Seattle/NY + Sunnyvale/NY traffic is under 10 Gb/s
Example Scenario
[Figure: same topology; Houston/Atlanta attack traffic of 10 Gb/s joins the Seattle/NY and Sunnyvale/NY flows (3 Gb/s each) on shared links]
• Suppose a sudden attack between Houston/Atlanta:
 Congested links suffer a high rate of packet loss
 Serious collateral damage on crossfire OD pairs
Impact on Collateral Damage
[Figure: collateral damage on the US and Europe backbones]
• OD pairs are classified into 3 types with respect to the attack traffic:
 Attacked: OD pairs with attack traffic
 Crossfire: OD pairs sharing route links with attack traffic
 Non-crossfire: OD pairs not sharing route links with attack traffic
• Collateral damage occurs on crossfire OD pairs
• Even a small percentage of attack flows can affect
substantial parts of the network
Related Work
• Most existing DDoS defense solutions are reactive in
nature
• However, large-scale bandwidth-based DDoS attacks
can quickly knock out substantial parts of a network
before reactive defenses can respond
• Therefore, we need a proactive defense mechanism that
works immediately when an attack occurs
Related Work (cont’d)
• Router-based defenses like Random Early Drop (RED, RED-PD, etc.) can prevent congestion by dropping packets early, before congestion sets in
 But may drop normal traffic indiscriminately, causing
responsive TCP flows to severely degrade
• Approximate fair dropping schemes aim to provide fair
sharing between flows
 But attackers can launch many seemingly legitimate
TCP connections with spoofed IP addresses and port
numbers
• Both aggregate-based and flow-based router defense
mechanisms can be defeated
Previous Solutions (cont’d)
• Router-based defenses like Random Early Drop (RED, RED-PD, etc.) can prevent congestion by dropping packets early, before congestion sets in
 But they may drop normal traffic indiscriminately, causing responsive TCP flows to severely degrade
• Approximate fair dropping schemes aim to provide fair sharing between flows
 But attackers can launch many seemingly legitimate TCP connections with spoofed IP addresses and port numbers
• Both aggregate-based and flow-based router defense mechanisms can be defeated
 In general, defenses based on unauthenticated header information such as IP addresses and port numbers may not be reliable
Outline
• Problem
• Approach
• Experimental Results
• Summary
Our Solution
• Provide bandwidth isolation between OD pairs, independent of IP spoofing or the number of TCP/UDP connections
• We call this method Proactive Surge Protection (PSP)
as it aims to proactively limit the damage that can be
caused by sudden demand surges, e.g. sudden
bandwidth-based DDoS attacks
Basic Idea: Bandwidth Isolation
[Figure: bandwidth isolation on the example topology. Seattle/NY: limit 3.5 Gb/s, actual 3 Gb/s, all admitted as HIGH. Sunnyvale/NY: limit 3.5 Gb/s, actual 3 Gb/s, all admitted as HIGH. Houston/Atlanta: limit 3 Gb/s; normally 2 Gb/s, all admitted as HIGH; under a 10 Gb/s attack, 3 Gb/s is tagged HIGH and 7 Gb/s is tagged LOW. New York still receives 3 Gb/s each from Seattle and Sunnyvale: the proposed mechanism proactively drops attack traffic immediately when attacks occur]
• Meter and tag packets on ingress as HIGH or LOW priority
 Based on historical traffic demands and network capacity
• Drop LOW packets under congestion inside the network (sketched below)
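To make the metering-and-tagging step concrete, here is a minimal Python sketch of per-OD-pair metering with a token bucket. The class, names, and parameters are illustrative assumptions, not the paper's implementation; a real deployment would use the metering features of edge routers.

```python
# Hypothetical sketch: meter one OD pair against its allocated rate
# from the bandwidth allocation matrix; conforming packets are tagged
# HIGH, excess packets (e.g. an attack surge) are tagged LOW.
import time

class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps        # allocated rate for this OD pair
        self.burst = burst_bits     # headroom for normal burstiness
        self.tokens = burst_bits
        self.last = time.monotonic()

    def conforms(self, pkt_bits):
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bits:
            self.tokens -= pkt_bits
            return True
        return False

def tag(pkt_bits, bucket):
    return "HIGH" if bucket.conforms(pkt_bits) else "LOW"
```

Under this scheme, a Houston/Atlanta surge beyond its 3 Gb/s allocation would be tagged LOW at ingress and dropped first on congested interior links, leaving the HIGH-tagged Seattle/NY and Sunnyvale/NY traffic intact.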
Architecture
[Figure: PSP architecture.
Policy plane: Traffic Measurement feeds the Traffic Data Collector; the Bandwidth Allocator computes the Bandwidth Allocation Matrix from the collected data.
Data plane: Differential Tagging, deployed at the network perimeter, tags arriving packets as high or low priority; Preferential Dropping, deployed at network routers, forwards or drops tagged packets.
The proposed mechanisms are readily available in modern routers]
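The dropping stage can be pictured with a toy model, assuming only that routers drop LOW-priority packets before HIGH-priority ones when an output queue fills. This is a behavioral sketch, not router code; the names are hypothetical.

```python
# Toy model of preferential dropping at an interior router: a bounded
# output queue that evicts LOW packets to admit HIGH packets under
# congestion, so losses fall on LOW-tagged (excess) traffic first.
from collections import deque

class PreferentialDropQueue:
    def __init__(self, capacity_pkts):
        self.capacity = capacity_pkts
        self.high = deque()
        self.low = deque()

    def enqueue(self, pkt, priority):
        if len(self.high) + len(self.low) >= self.capacity:
            if priority == "HIGH" and self.low:
                self.low.pop()       # evict a LOW packet to make room
            else:
                return False         # queue full: drop arriving packet
        (self.high if priority == "HIGH" else self.low).append(pkt)
        return True

    def dequeue(self):
        # Serving HIGH first keeps the sketch simple; the essential PSP
        # behavior is the drop preference, not the service order.
        if self.high:
            return self.high.popleft()
        return self.low.popleft() if self.low else None
```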
Allocation Algorithms
• Aggregate traffic at the core is very smooth and variations
are predictable
• Compute a bandwidth allocation matrix for each hour
based on historical traffic measurements
 e.g. the allocation for 3pm is computed from traffic measurements taken during 3-4pm over the past 2 months (see the sketch below)
[Figure source: Roughan’03 on a Tier-1 US Backbone]
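As a hedged sketch of how such hourly demand histories might be assembled (field names and data layout are illustrative; the paper does not prescribe one):

```python
# Group historical measurements by hour of day, so the allocation for
# 3pm is computed from all 3-4pm samples in the training window
# (e.g. the past 2 months).
from collections import defaultdict

def hourly_demands(measurements):
    """measurements: iterable of (od_pair, timestamp, rate_gbps),
    with timestamp a datetime. Returns {hour: {od_pair: [rates]}},
    one demand-sample list per OD pair per hour of day."""
    hist = defaultdict(lambda: defaultdict(list))
    for od, ts, rate in measurements:
        hist[ts.hour][od].append(rate)
    return hist
```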
Allocation Algorithms
• To account for measurement inaccuracies and provide headroom for traffic burstiness, we fully allocate the entire network capacity as a utility max-min fair allocation problem
 Mean-PSP: based on the mean of the traffic demands
 CDF-PSP: based on the Cumulative Distribution Function (CDF) of the traffic demands
• Utility max-min fair allocation (see the sketch after this list):
 Iteratively allocate bandwidth in a “water-filling” manner
 Each iteration maximizes the common utility of all flows
 Remove flows with no residual capacity after each iteration
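A minimal sketch of the water-filling loop, assuming utilities are normalized to [0, 1] and each flow's utility function is supplied through its inverse `bw_for_utility(f, u)` (the bandwidth flow f needs to reach utility level u). All names are illustrative, not from the paper.

```python
def utility_max_min(flows, routes, link_cap, bw_for_utility, eps=1e-4):
    """Raise the common utility of all active flows until a link
    saturates, freeze flows crossing saturated links, and repeat."""
    alloc = {f: 0.0 for f in flows}
    residual = dict(link_cap)
    active, level = set(flows), 0.0
    while active and level < 1.0 - eps:
        lo, hi = level, 1.0
        while hi - lo > eps:            # binary search for the highest
            mid = (lo + hi) / 2         # feasible common utility
            need = {l: 0.0 for l in residual}
            for f in active:
                inc = bw_for_utility(f, mid) - alloc[f]
                for l in routes[f]:
                    need[l] += inc
            if all(need[l] <= residual[l] for l in residual):
                lo = mid
            else:
                hi = mid
        level = lo
        for f in active:                # commit this round's allocation
            inc = bw_for_utility(f, level) - alloc[f]
            alloc[f] += inc
            for l in routes[f]:
                residual[l] -= inc
        saturated = {l for l, r in residual.items() if r <= eps}
        if not saturated:               # converged within tolerance
            break
        active = {f for f in active if not set(routes[f]) & saturated}
    return alloc
```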
Utility Max-min Fair Bandwidth Allocation
[Figure: utility max-min fair allocation example. Top: utility functions (utility % vs. bandwidth, 1-5 units) for flows AB, BC, and AC on a three-node network A-B-C whose links each have capacity 5. Bottom: bandwidth allocated on links AB and BC after the 1st and 2nd water-filling rounds]
Mean-PSP (Mean-based Max-min)
• Use the mean traffic demand as the utility function:

  $f_{ij}(B_{ij}) = \dfrac{B_{ij}}{\sum d_{ij} \,/\, \#\text{measurements}}$

• Iteratively allocate bandwidth in a “water-filling” manner (a sketch of this utility follows the figure below)
[Figure: Mean-PSP example on a three-node network A-B-C with 10G links; the mean-demand matrix over OD pairs and the resulting allocation matrix $B_{ij}$, with bar charts of the bandwidth on links AB, BC, CB, and BA after the 1st and 2nd water-filling rounds]
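A hedged sketch of the Mean-PSP utility and its inverse, which would plug into the water-filling sketch above (e.g. via a per-OD-pair lambda). Capping the utility at 100% is a simplification of this sketch; the paper fully allocates all remaining capacity.

```python
# Mean-PSP utility for one OD pair: allocated bandwidth B relative to
# the historical mean demand (capped at 1.0 for the normalized
# water-filling sketch above).
def mean_utility(B, demand_samples):
    mean_d = sum(demand_samples) / len(demand_samples)
    return min(1.0, B / mean_d)

def mean_bw_for_utility(u, demand_samples):
    # Inverse: bandwidth needed to reach utility level u.
    return u * (sum(demand_samples) / len(demand_samples))
```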
CDF-PSP (CDF-based Max-min)
• Explicitly capture the traffic variance by using a Cumulative Distribution Function (CDF) model as the utility function:

  $f_{ij}(B_{ij}) = \Pr[d_{ij} \le B_{ij}]$

• Maximizing utility is then equivalent to minimizing the drop probabilities for all flows in a max-min fair manner (sketched below)
E.g. $d_{ij} = (1, 1, 1, 3, 5)$
[Figure: empirical CDF of $d_{ij}$ (utility % vs. bandwidth); when 3 units of bandwidth are allocated, the utility is 80% and the drop probability is 20%]
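The CDF utility falls out directly from the empirical demand samples; the short sketch below reproduces the slide's example (the helper name is illustrative).

```python
# CDF-PSP utility: the empirical Pr[d <= B], so maximizing utility
# minimizes the probability that demand exceeds the allocation.
def cdf_utility(B, demand_samples):
    return sum(1 for d in demand_samples if d <= B) / len(demand_samples)

d_ij = (1, 1, 1, 3, 5)                 # the slide's example
assert cdf_utility(3, d_ij) == 0.8     # utility 80% -> drop prob. 20%
```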
Outline
• Problem
• Approach
• Experimental Results
• Summary
Networks
• US Backbone
 Large Tier-1 backbone network in the US
 ~700 nodes, ~2000 links (1.5 Mb/s - 10 Gb/s)
 1-minute traffic traces: 07/01/07 - 09/03/07
• Europe Backbone
 Large Tier-1 backbone network in Europe
 ~900 nodes, ~3000 links (1.5 Mb/s - 10 Gb/s)
 1-minute traffic traces: 07/01/07 - 09/03/07
Evaluation Methodology
• NS2 simulation
• Normal traffic: based on actual traffic demands over a 24-hour period for each backbone
• Attack traffic:
 US Backbone: highly distributed attack scenario
• Based on commercial anomaly detection systems
• From 40% of ingress routers to 25% of egress routers
 Europe Backbone: targeted attack scenario
• Created by a synthetic attack flow generator
• From 40% of ingress routers to only 2% of egress routers
Packet Loss Rate Comparison
[Figure: packet loss rates over the day for the US and Europe backbones, with and without PSP]
• Both PSP schemes greatly reduced packet loss rates
• Peak hours have higher packet loss rates
Relative Loss Rate Comparison
[Figure: relative packet loss rate reduction for the US and Europe backbones]
• PSP reduced packet loss rates by more than 75%
Behavior Under Scaled Attacks
• Packet drop rate with the attack demand scaled by a factor of up to 3x
[Figure: packet drop rate vs. attack scaling factor for the US and Europe backbones]
• Under PSP, the loss remains small throughout the range!
Summary of Contributions
• Proactive solution for protecting networks that provides a
first line of defense when sudden DDoS attacks occur
• Very effective in protecting network traffic from collateral
damage
• Not dependent on unauthenticated header information, thus
robust to IP spoofing
• Readily deployable using existing router mechanisms
Questions?