EMIST DDoS September 2005 Experiment Plans
DETER/EMIST
DDoS Defense Experimental Methodology
Impact of Traffic Generation Selection on Precision of Detection and
Response Characterization
Stephen Schwab
September 28, 2005
SPARTA Team Participants
• DETER
– Steve Schwab, Ron Ostrenga, Brad Harris, David Balenson, Dan Sterne
• EMIST DDoS
– Steve Schwab, Brett Wilson, Ron Ostrenga, Roshan Thomas, Alefiya Hussain, Brad Harris, Dan Sterne
Overview
• EMIST DDoS Experimental Methodology
• FloodWatch and CloudShield Experiment Design
• Testbed Methodology Automation
• Process for Incorporating Hardware and Appliances
• Traffic Generation
• Visualization and Metrics
• Future Plans
Objectives for EMIST
• Create reusable library of test technology for conducting realistic,
rigorous, reproducible, impartial tests
– For assessing attack impact and defense effectiveness
– Test data, test configurations, analysis software, and experiment
automation tools
• Provide usage examples and methodological guidance
– Recommendations for selecting (or developing) tests and
interpreting results
– Test cases and results, possibly including benchmarks
• Facilitate testing of prototypes during development and commercial
products during evaluation
Canonical DDoS Experiment
• DDoS experimentation involves a vast multidimensional space of experimental variables.
• The canonical experiment form is intended to organize the experimental space and facilitate navigation through it.
• A canonical experiment consists of:
1. Attack Mechanisms
2. Background Traffic
3. Network Topology
4. Defense Mechanisms
5. Measurements and Metrics
6. Network Services Infrastructure
7. Risk
Typical DDoS Toolkit Architecture
[Diagram: Hacker → Console → Master → Zombie → Victim]
Overview
• EMIST DDoS Experimental Methodology
• FloodWatch and CloudShield Experiment Design
• Testbed Methodology Automation
• Process for Incorporating Hardware and Appliances
• Traffic Generation
• Visualization and Metrics
• Future Plans
SPARTA DDoS Experiment
September 2005
Background Traffic: Replay | NTCG | Harpoon (high-fidelity traffic)
Topology: core AS-11357 building blocks | Juniper router core (realistic connectivity and scale-down)
Attack Traffic: DETER-integrated attack scripting (automation of the variety of scenarios under study)
Instrumentation: packet and host statistics capture | spectral analysis | metrics calculation | integrated visualization (a toolbox for rigorous investigation of results)
[Diagram: attack traffic and background traffic converging on the experiment topology]
SPARTA DDoS Experiment
September 2005
Defenses:
• FloodWatch: statistical DDoS detection
• McAfee IntruShield: commercial IDP appliance
• CloudShield: network processor appliance platform
• Juniper IDP-200: commercial IDP appliance
Goal: study and quantify how the defense deployment point (placement within the network) impacts the effectiveness of the defense.
FloodWatch Experiment – Example Topology
• Root and leaf networks transmit/receive replayed packets
• Packet source and destination addresses are randomly remapped to the experiment topology
• Each end network (leaf and root) is both a source and sink for background traffic
[Diagram: FloodWatch deployed where an edge network meets the core network, with an example traffic flow from one edge network]
Schematic of Network Connectivity
[Schematic: hosts attach through 2nd-tier routers to a Juniper M7i core router pair; the FloodWatch DDoS statistical detector and the CloudShield 2200 DDoS entropy detector sit on the core path between the attack, replay-traffic, and TCP-traffic sources and the target (attack victim)]
Experiment Goal
• Validate fidelity of attack and background traffic in
reproducing characteristics of real DDoS experimental
scenarios.
• Case #1: Attack Traffic
– Recreate an attack captured in a real-world network
– Compare spectral analysis of real-world and testbed network
to assess fidelity of phenomena reproduction.
• Case #2: Background Traffic
– Use a TCP analytical model of throughput as a reference
– Compare theoretical throughput with observed throughput for 100 Mb/s and 1 Gb/s networks
– Compare the model-vs-testbed discrepancy between 100 Mb/s and 1 Gb/s to gauge the ability to preserve phenomena while scaling up traffic.
Overview
• EMIST DDoS Experimental Methodology
• FloodWatch and CloudShield Experiment Design
• Testbed Methodology Automation
• Process for Incorporating Hardware and Appliances
• Traffic Generation
• Visualization and Metrics
• Future Plans
Automation
• DDoS Requirements generalize to broader Security
Experiment Requirements:
• Automate whenever possible for:
– Repeatability, efficiency, ease-of-use
• Experiments described in Emulab ns2 format may
include primitive events
– Under base Emulab system, provides control for a very
limited number of operations
Event System Integration
• Testbed Automation Agents are controlled by events
• Events can be created in several ways:
– Specified in the NS file and played at runtime
– Played from an external list of events generated by
hand or script
– Dynamically generated by a GUI or script
• Current Agents
– Injector: traffic replay agent
– Flooder: attack agent
– Collector: packet trace (tcpdump) control and per-node filter counters
– Harpoon: traffic generation agent for the Harpoon TG
– FloodWatch: controls FloodWatch DDoS defense
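The second option above, an external event list generated by script, can be illustrated with a minimal Python sketch. The three-field "time agent action" file format and the timing values are assumptions for illustration, not the actual DETER/Emulab event syntax; only the agent roles (Injector, Flooder, Collector) come from the slide.

    # Sketch: generate an external event list driving the agents above.
    # The "time agent action" line format and the timings are assumed,
    # not actual DETER/Emulab event syntax.
    events = [
        (0, "collector0", "start"),   # begin packet capture first
        (10, "injector0", "start"),   # start replayed background traffic
        (60, "flooder0", "start"),    # launch the DDoS attack mid-run
        (120, "flooder0", "stop"),
        (150, "injector0", "stop"),
        (160, "collector0", "stop"),  # stop capture last
    ]

    with open("attack_run.events", "w") as f:
        for t, agent, action in sorted(events):
            f.write("%d %s %s\n" % (t, agent, action))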
Automation Dashboard
Overview
• EMIST DDoS Experimental Methodology
• FloodWatch and CloudShield Experiment Design
• Testbed Methodology Automation
• Process for Incorporating Hardware and Appliances
• Traffic Generation
• Visualization and Metrics
• Future Plans
Appliances
• DETER requirement: support the experimental test and
evaluation of appliances
– Commercial products often packaged as appliances,
critical future user segment
• EMIST requirement: high-speed appliances stress the
testbed and the tools supporting our methodology
– Requirements:
– Provide the ability to seamlessly integrate appliances as
nodes in testbed experiments
– Stress all aspects of our methodology at line-rate
» Topology – Gigabit forwarding routers (Juniper)
» Traffic – Greater aggregation
» Data Capture – vanilla TCPDUMP inadequate
DDoS Network Appliance Defense
Evaluation Scenarios
• Introduce scenarios in which technology developers
evaluate network appliances
– CloudShield 2200
» IXP2850 network processor + high-level application
development environment
» Evaluate prototype statistical DDoS defenses
– McAfee IntruShield 2600
» Commercial product, focus on DDoS capabilities
– Juniper IDP-200
» Commercial product with 8 Gigabit ports enabling study
of placement and connectivity
– Push envelope of DDoS defense evaluation methodology to
test Gigabit Ethernet rates
DETER Hardware and Appliances
Process
• Develop a systematic process for integrating hardware
devices and appliances in the DETER testbed and
within EMIST experiments:
– Hardware connection
– Control plane ns topology
– Control plane manual configuration
– Data plane manual configuration
– Control and Data plane semi-automatic configuration
(scripting)
– Control and Data plane automation
» Integrate generalized scripts behind the scenes into
DETER and EMIST tools
Juniper Routers
• DETER has 5 Juniper M7i routers
– 4 Gigabit ports per router
• The Juniper routers in DETER are almost first-class DETER experimental devices.
– Can be allocated into an experiment by Assign
– Can be assigned IP addresses within the ns topology
– Assign cannot YET configure the router to use the IP addresses that were allocated
– Must manually map the MAC and IP addresses into a router configuration
– Plan to use the Juniper-supported XML API to automatically configure the router (see the sketch below)
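A minimal sketch of that planned step: generate per-interface address assignments in Junos-style XML. The element nesting follows Junos configuration conventions, but the interface names and addresses are placeholders, and the exact schema accepted by the router's XML API should be treated as an assumption.

    # Sketch: emit Junos-style XML assigning IP addresses to router
    # interfaces. Interface names and addresses are placeholders; a real
    # tool would derive them from the experiment's ns topology and the
    # MAC/IP mapping that is currently maintained by hand.
    from xml.etree.ElementTree import Element, SubElement, tostring

    assignments = {"ge-0/0/0": "10.1.1.1/24", "ge-0/0/1": "10.1.2.1/24"}

    config = Element("configuration")
    interfaces = SubElement(config, "interfaces")
    for ifname, addr in assignments.items():
        iface = SubElement(interfaces, "interface")
        SubElement(iface, "name").text = ifname
        unit = SubElement(iface, "unit")
        SubElement(unit, "name").text = "0"
        inet = SubElement(SubElement(unit, "family"), "inet")
        SubElement(SubElement(inet, "address"), "name").text = addr

    print(tostring(config, encoding="unicode"))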
CloudShield Appliance
• A CloudShield Appliance with 4 Gigabit interfaces has
been added to DETER as an experimental device.
• Can be allocated into an experiment by Assign
• Must be configured manually
• Mapping of interfaces into an experiment is difficult
since there are no exposed MAC or IP addresses
• Usage is complicated by the transparent bridging
function that causes the DETER switches to go into
layer 2 loops.
• Spanning Tree Protocol (STP) is disabled on
DETER
CloudShield CA2200
• 2 Intel IXP2800 network processors running CPOS
– 16 microengines each
– 1 StrongARM processor each
• Dual Pentium-III management processor running Red
Hat Linux
• 4 GigE or 4 Copper 10/100/1000 network interfaces
Entropy Based Detection
• Runs on CloudShield CA2200 network appliance
• Data plane
– Detectors: source IP, destination IP, packet size, source port, destination port
– Create a histogram for each detector over the packets in a fixed-size window
– Calculate the entropy of each detector's histogram
– Optionally filter packets
• Control plane
– Every N seconds, read the entropy values
– Compare with high/low thresholds for normal operation
– When a threshold is crossed, create a packet filter using the max item from each detector's histogram (see the sketch below)
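To make the data-plane/control-plane split concrete, here is a minimal Python sketch of the entropy detection loop described above. The field names, threshold values, and window contents are illustrative assumptions, not the CloudShield implementation.

    # Sketch: per-detector histograms + Shannon entropy over one window
    # of packets. Fields, thresholds, and window contents are assumed;
    # packets are modeled as dicts keyed by header field.
    import math
    from collections import Counter

    DETECTORS = ["src_ip", "dst_ip", "size", "src_port", "dst_port"]

    def entropy(hist):
        total = sum(hist.values())
        return -sum((c / total) * math.log2(c / total) for c in hist.values())

    def check_window(packets, lo=1.0, hi=6.0):
        """Data plane: build histograms. Control plane: on a threshold
        crossing, propose a filter on the detector's most frequent item."""
        filters = []
        for d in DETECTORS:
            hist = Counter(pkt[d] for pkt in packets)
            h = entropy(hist)
            if h < lo or h > hi:
                filters.append((d, hist.most_common(1)[0][0]))
        return filters

    # Usage: a window dominated by one destination and size, like a flood.
    pkts = [{"src_ip": "10.0.0.%d" % (i % 3), "dst_ip": "10.1.1.2",
             "size": 40, "src_port": 1024 + i, "dst_port": 80}
            for i in range(1000)]
    print(check_window(pkts))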
Entropy Detection Issues
• Unable to run completely in the data plane
– Packet processing runs in parallel
– Entropy algorithm requires synchronization to avoid
race conditions
– CPOS (CloudShield Packet OS) provides no support for
“manager threads” or mutexes
Chi-Square Algorithm
• The top bin contains the item with the highest count
• The next bin contains the four next-highest counts, and so on
• Items are swapped to different bins as their relative counts change
• The chi-square statistic captures the "shape" of the bin heights (see the sketch below)
[Figure: bins of size 1, 4, 8, 16, and "other"]
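A simplified Python sketch of the binned chi-square idea: rank items by count, group the ranks into bins of size 1, 4, 8, and 16 plus an "other" bin, and compare the current bin totals against a baseline profile. The baseline learned from clean traffic is an assumption; the real algorithm maintains the bins incrementally, swapping items as counts change, which is exactly where the synchronization problem below arises.

    # Sketch: bucket ranked item counts into bins of 1/4/8/16 plus
    # "other", then compare the bin "shape" against an assumed baseline
    # profile with a chi-square statistic.
    BIN_SIZES = [1, 4, 8, 16]

    def bin_totals(counts):
        ranked = sorted(counts.values(), reverse=True)
        totals, start = [], 0
        for size in BIN_SIZES:
            totals.append(sum(ranked[start:start + size]))
            start += size
        totals.append(sum(ranked[start:]))  # the "other" bin
        return totals

    def chi_square(observed, expected):
        return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

    baseline = bin_totals({"a": 900, "b": 300, "c": 250, "d": 200, "e": 150})
    current = bin_totals({"a": 5000, "b": 280, "c": 240, "d": 190, "e": 140})
    print(chi_square(current, baseline))  # large value: top item ballooned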
Problems with Chi-Square
• When a packet is received on the data plane, the item
counter is incremented
• If the new count causes the item to be moved to
another bin (because it has a higher count than one of
the other items), the bins must be rebalanced
• Since packets are received in parallel, it is necessary to
lock the data structure so that only one thread can
modify it during rebalancing
• Without rebalancing, the algorithm can’t do detection
• Without synchronization primitives, data plane can
only collect data and detection must be done at the
control plane
Experiment process using Juniper
routers and CloudShield appliance
1. Allocated via Assign into an experiment that put the CloudShield sensor on a gigabit link between two Juniper routers
2. Router interfaces had to be configured manually
3. Routes had to be configured manually
4. Configuration scripts were created to help speed
up the experiment setup.
5. Routes needed to be verified after setup
completed
6. There were issues in determining which physical
CloudShield interfaces were actually being used
IntruShield Sensors
• Two McAfee IntruShield sensors, each with 2 unidirectional Gigabit interfaces and 6 unidirectional 100 Mb/s IPS interfaces, have been added to DETER as experimental devices.
• Can be allocated into an experiment by Assign
• Usage is complicated by the transparent bridging
function that causes the DETER switches to go into
layer 2 loops. (STP issue revisited)
• May require physical attachment to a permanent
anchor PC node.
• Requires a dedicated Windows2000 Manager node
Overview
• EMIST DDoS Experimental Methodology
• FloodWatch and CloudShield Experiment Design
• Testbed Methodology Automation
• Process for Incorporating Hardware and Appliances
• Traffic Generation
• Visualization and Metrics
• Future Plans
Background Traffic Generation
• DDoS Specific Requirements:
• Detection and mitigation of attacks need to be
evaluated against a statistically valid mix of
background traffic.
• Interactive TCP traffic exhibits very different
characteristics due to congestion control and recovery
• Two traffic generation systems under active
investigation and development:
– NTCG (UC Davis)
– Harpoon (U. Wisconsin)
NTCG:
Network Traffic Compiler Generator
• Interactive Traffic: Paradigm is to create controlled mix of traffic
produced by an array of traffic generators (TG, etc.).
[Pipeline: packet trace → trace analyzer → traffic specifications; traffic specifications + topology description → spec compiler → TG, etc. commands]
Designed and developed by Allen Ting and colleagues/U.C. Davis
Harpoon Traffic Generator
• Provides extreme configurability
– specify precisely ordered temporal distributions of connection
duration or use Netflow traces
– accepts multiple subnets as destination, allowing a single
Harpoon client to generate traffic across all networks
– client and server accept either 0.0.0.0 (default routes) or the
IP of a specific interface
– Can connect to and listen on all TCP and UDP ports
• Multithreaded implementation
– Allows one agent to handle fifty or more connections
(governed by the OS threads per process limit and CPU)
• Each thread can connect to any destination, rather than
using a single source / single sink paradigm
• Plug-in architecture allows for extensibility
• Well documented, clear and precise
Developed by Joel Sommers and Paul Barford, U. Wisconsin
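To illustrate the flow-level generation paradigm (a toy sketch only, not Harpoon's actual code or configuration), the following Python draws heavy-tailed transfer sizes and exponential inter-connection gaps and spreads destinations across several subnets; all parameter values are assumptions.

    # Toy illustration of flow-level generation in the Harpoon style:
    # heavy-tailed transfer sizes, exponential inter-connection gaps,
    # destinations spread across multiple subnets. Parameters assumed.
    import random

    SUBNETS = ["10.1.%d" % i for i in range(1, 5)]

    def next_connection(rng, mean_gap=0.5, size_alpha=1.2, size_min=4096):
        gap = rng.expovariate(1.0 / mean_gap)  # seconds until next flow
        size = int(size_min * rng.paretovariate(size_alpha))  # bytes to send
        dst = "%s.%d" % (rng.choice(SUBNETS), rng.randint(2, 254))
        return gap, size, dst

    rng = random.Random(42)
    for _ in range(5):
        print(next_connection(rng))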
IP Address Diversity
• Background traffic should involve a wide variety of IP
addresses
– There are usually not enough nodes to obtain enough IP
diversity
• Solution: Network Address Translation (NAT)
– Using Linux IPTables, one machine can serve large blocks of
IP addresses
• Real machine 10.1.1.2 has a single IP address
– Use IPTables to add a pre-routing entry to the NAT table:
– iptables -t nat -A PREROUTING -d 3.0.0.0/8 -j DNAT --to 10.1.1.2
• Packets destined for any address in 3.0.0.0/8 are translated into packets destined for 10.1.1.2
• NAT is opaque to 10.1.1.2, allowing full TCP stacks to reply,
including proper TCP backoff, etc.
Demo Video Clip
• An attack run against the FloodWatch DDoS defense, as performed and recorded by Brett Wilson on DETER.
Requirements for NAT (Routed)
• Traditional network with routes in and out
– Start NATs on each of the nodes:
– iptables -t nat -A PREROUTING -d 2.0.0.0/8 -j DNAT --to 10.1.2.3
• Configure routers so that there are valid routes for the
NAT networks.
• At the final hop, include a route to the NAT network
(10.1.2.3) through the next hop
– This prevents the router from ARPing for a NAT
address, as it believes 10.1.2.3 is another route
– On 10.1.2.3, IPTables catches the packet first
(PREROUTING), translates it, and the connection
completes
Requirements for NAT (LAN)
• Isolated LAN configuration
– More complex configuration
• This scenario has no routers; all nodes are on a single broadcast domain
• Configure each node so that the physical interface has a /32 netmask
– ifconfig eth0 netmask 255.255.255.255
– eth0 will no longer answer ARP requests itself
• Add an explicit route for the experiment network via eth0
– Use the broadest net possible
– route add -net 10.0.0.0 netmask 255.0.0.0 dev eth0
• Add routes for all the NAT subnets on the other machines
– route add -net 3.0.0.0 netmask 255.0.0.0 dev eth0
• Proxy ARP for all subnets this node is NATting
– ./tarpd eth0 2.0.0.0 255.0.0.0
• Add the NAT via iptables
– iptables -t nat -A PREROUTING -d 2.0.0.0/8 -j DNAT --to 10.1.2.3
Overview
• EMIST DDoS Experimental Methodology
• FloodWatch and CloudShield Experiment Design
• Testbed Methodology Automation
• Process for Incorporating Hardware and Appliances
• Traffic Generation
• Visualization and Metrics
• Future Plans
Data Capture and Analysis
• DDoS Specific Requirements:
• Automate collection of both packet traces and host-based statistics
– to calculate metrics
– to analyze traffic behavior
• Support visualization and interaction between
experimenter and potentially large, unwieldy data
harvested from the testbed
DDoS:
Instrumentation and Visualization
• Tracefiles produced, collected and extracted from DETER
for post-mortem analysis.
• ESVT analysis and experimental data graphical browser:
– Navigate within experiment topology.
– Drill down on link or node traffic/metrics: visualize and
compare one or more statistics between different links or
time intervals.
• Key Milestone: Integrated support for visualization and
analysis of DDoS experiments across multiple links,
topologies, attack patterns, and defense configurations.
• ESVT integration in collaboration with Jason Hart / Penn
State
ESVT Experiment Topology
ESVT Visualization
ESVT Visualization: Connectivity
Spectral Analysis Rationale
• Can we create high fidelity testbed experiments?
– Gross parameters such as rate, packet type, etc.
– Small time scale behavior captured by spectral analysis
• High fidelity experiments help capture accurate
interaction with cross traffic.
Spectral Analysis
• Periodic behaviors encoded into the attack stream
– Host, Network, aggregation effect, ….
• Leverage experience from signal processing
– FFTs, wavelets, detection theory, etc.
08:14:33.495046 2005host1.starwoodbroadband.com.domain > 2005host74.starwoodbroadband.com.32894: 40672 2/1/1 CNAME[|domain]
08:14:33.495924 2005host74.starwoodbroadband.com.33350 > camembear.fogcreek.net.http: S 1688588990:1688588990(0) win 5840 <mss 1460,sackOK,timestamp 27125396 0,nop,wscale 0> (DF)
08:14:33.496502 2005host74.starwoodbroadband.com.32894 > 2005host1.starwoodbroadband.com.domain: 19420+ PTR? 219.120.243.64.in-addr.arpa. (45) (DF)
08:14:33.496956 london-bar3.ja.net > 12.160.3.255: icmp: echo request
08:14:33.497347 2005host74.starwoodbroadband.com > london-bar3.ja.net: icmp: echo reply
Characterize attack behavior
Fourier Analysis
[Packet trace (as above) → time series → FFT → frequency domain]
• FFT summarizes the frequency content
• Used to characterize spatial effects (see the sketch below)
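The pipeline above (packet trace to time series to FFT) can be approximated in a few lines of Python. The bin widths and the use of a packets-per-bin series are assumptions about the analysis setup.

    # Sketch: packet timestamps -> fixed-bin packet-rate series -> FFT
    # magnitude spectrum. Bin widths below are assumed parameters.
    import numpy as np

    def spectrum(timestamps, bin_width=0.001):
        t = np.asarray(timestamps, dtype=float)
        t = t - t.min()
        nbins = int(np.ceil(t.max() / bin_width)) + 1
        series, _ = np.histogram(t, bins=nbins, range=(0, nbins * bin_width))
        mags = np.abs(np.fft.rfft(series - series.mean()))
        freqs = np.fft.rfftfreq(nbins, d=bin_width)
        return freqs, mags

    # Usage: a synthetic 1100 packets/s periodic source should show a
    # line at the 1100 Hz fundamental or one of its harmonics.
    ts = np.arange(0, 1.0, 1 / 1100.0)
    freqs, mags = spectrum(ts, bin_width=0.0001)
    print(freqs[1 + mags[1:].argmax()])  # strongest non-DC component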
Wavelet Analysis
[Long packet trace (as above) → time series → wavelet → frequency and time domain]
• Wavelets summarize both time and frequency information
• Used to characterize temporal effects (see the sketch below)
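Similarly, a pure-NumPy Haar wavelet sketch that reports detail-coefficient energy per time scale, one way to localize when, not just at what frequency, periodic attack behavior appears. The 8-level depth is an assumed parameter.

    # Sketch: Haar decomposition of a packet-rate series, reporting
    # detail-coefficient energy per scale; depth of 8 levels assumed.
    import numpy as np

    def haar_scale_energies(series, levels=8):
        x = np.asarray(series, dtype=float)
        energies = []
        for _ in range(levels):
            if len(x) < 2:
                break
            x = x[: len(x) - len(x) % 2]  # trim to even length for pairing
            detail = (x[0::2] - x[1::2]) / np.sqrt(2)
            x = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
            energies.append(float(np.sum(detail ** 2)))
        return energies  # energies[j]: energy at time scale 2**(j+1) bins

    print(haar_scale_energies(np.random.poisson(5, 1024)))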
Case #1: Analysis technique
• Validate fidelity of experiments by comparing spectral behavior
of real-world traces to testbed experiments.
Experiment 1: Lander Attack
TCP NULL attack from a single source sending
40B packets at 1100 packets/s
[Plots: packet rate, bit rate, and number of flows; trace spectrum vs. testbed spectrum]
Experiment 1: Lander Attack
TCP NULL attack from a single source sending
40B packets at 1100 packets/s
[Plot: testbed spectrum]
Experiment 2: Lander Attack
Invalid IP-proto 255 attack from three sources
sending 40B packets at 60 Kpackets/s
[Plots: packet rate, bit rate, and number of flows; trace spectrum vs. testbed spectrum]
Experiment 2: Lander Attack
Invalid IP-proto 255 attack from three sources sending 40B packets at 60 Kpackets/s
[Plot: trace spectrum]
Case #2: Background Traffic Analysis
• CloudShield Entropy detection behavior
• CloudShield requires filter to be installed from control
plane with minimum and maximum thresholds
– Packets that push the entropy above [below] a
maximum [minimum] threshold are dropped
• What if the entropy's standard deviation is 23%?
– Such a non-stationary statistic makes threshold-based filtering impossible
Entropy During Attack Run
[Chart: entropy values for five detectors (Series1 through Series5) sampled over 65 intervals during an attack run]
Overview
• EMIST DDoS Experimental Methodology
• FloodWatch and CloudShield Experiment Design
• Testbed Methodology Automation
• Process for Incorporating Hardware and Appliances
• Traffic Generation
• Visualization and Metrics
• Future Plans
Introduction to our DDoS metric framework
• Distinguish between extrinsic and intrinsic metrics
– Extrinsic: Measures that can be computed and observed by external
parties
– Intrinsic: Measures that can only be computed by the object being measured, by analyzing its internal algorithms and data structures
• Analyze metrics at different levels of abstraction
– packet, flow, aggregate, service/application layers
• Metrics from different vantage points
– Client-side, server-side (end point), link-level, end-to-end
• Focus on metrics for
– Traffic and service characterization
– Measuring DDoS impact
– Measuring DDoS effectiveness
Examples of Metrics for Characterizing Traffic
(base traffic metrics)
Metrics span client-observed, intermediary-observed (link-level), and server-observed vantage points; (E) marks extrinsic and (I) marks intrinsic metrics.
• Application-level
– streaming video: mean opinion score (MOS) (E)
– VoIP: round-trip delay (E)
– VoIP: percentage of packets discarded by the jitter buffer (I)
– VoIP: mean length of bursts (I)
– number-of-flows-per-application
– client-request-rate
– client-service-rate
• Aggregate-level
– per-aggregate-packet-rate (E)
– per-aggregate-arrival-rate (E)
• Flow-level
– server-connection-completion-time (E)
– server-connection-completion-rate (E)
– rate of failed connections (E)
– per-flow-loss-rate
– per-flow-packet-rate (E)
– per-client-connection-request-rate (E)
– per-client-goodput (E)
• Packet-level
– server-response-rate (E)
– server-response-time (E)
– goodput (E)
– ratio of attack traffic to goodput
– per-client-packet-rate (E)
– packet-drop-rate (I)
– per-packet-processing overhead (I)
The Server Service and Link Service Indices:
Higher Level Posture Indicators
• Server Service Index (SSI)
– a function of common service attributes such as:
» average server response time
» rate of failed connections
» max number of concurrent clients serviced
» server throughput
• Link Service Index (LSI)
– a function of common link attributes such as:
» goodput
» link utilization
» loss rate
• The index value is a composite weighted measure of deviations from desired thresholds for the constituent attributes
• Normalized to lie within [0, 1]
– An SSI/LSI value of 0.5 means the server/link is meeting prespecified objectives for its attributes; deviations from 0.5 indicate a deficiency (if < 0.5) or a surplus (if > 0.5)
• Changes in the SSI/LSI provide an early warning that the server or link is degrading or improving
Formulating the Server/Link Index
• The SSI or LSI may be defined as:

    I = (1 + sum_m w_m * delta_m(x_m, t_m)) / 2

where for each metric m with weight w_m we define a deviation function delta_m(x, t_m) that measures how the value x of metric m deviates from the threshold t_m.
• The deviation function should have the following properties:
– it should have no units
– it should be zero when x = t_m
– it should be in the interval [-1, 0) if the value of the metric has not achieved the threshold, and in the interval (0, 1] if it has exceeded the threshold
– it should reflect the effort needed to take the metric m from the value x to t_m
• Sample linear deviation function:

    delta(x, t_m) = sign(x - t_m) * |x - t_m| / max(x, t_m)

where sign(y) is defined as:

    Value of y        | Threshold is | sign(y)
    Positive or zero  | Upper bound  | -1
    Negative          | Upper bound  | +1
    Positive or zero  | Lower bound  | +1
    Negative          | Lower bound  | -1
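A direct Python transcription of the index and the sample linear deviation function above; the metric values, thresholds, bound types, and weights in the usage example are illustrative assumptions.

    # Sketch: Service Index I = (1 + sum_m w_m * delta_m) / 2 with the
    # sample linear deviation function. Example metrics, thresholds,
    # bound types, and weights are assumed; weights should sum to 1.

    def deviation(x, t, upper_bound=True):
        """Linear deviation delta(x, t) in [-1, 1]; zero when x == t."""
        if max(x, t) == 0:
            return 0.0
        sign = -1.0 if ((x - t) >= 0) == upper_bound else 1.0
        return sign * abs(x - t) / max(x, t)

    def service_index(metrics):
        s = sum(w * deviation(x, t, ub) for (x, t, ub, w) in metrics)
        return (1.0 + s) / 2.0  # 0.5 = meeting objectives exactly

    # (value, threshold, is_upper_bound, weight)
    ssi = service_index([
        (0.30, 0.25, True, 0.4),   # avg response time (s): above its cap
        (0.02, 0.05, True, 0.3),   # rate of failed connections: under cap
        (90.0, 80.0, False, 0.3),  # throughput (Mb/s): above its floor
    ])
    print(round(ssi, 3))  # < 0.5 indicates deficiency, > 0.5 surplus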
Metrics for Characterizing Traffic in Our Topology
[Annotated topology: end-to-end (TCP) flows run from clients to the victim/server, with SSI and LSI gauges from 0 to 1]
• End-to-end (TCP) flow: average connection completion time; rate of failed connections; average throughput per flow; average loss per flow
• Link: goodput; ratio of attack to background traffic; average link utilization; aggregate attack rate
• Server: average server response time; average server-side application throughput
• Server Service Index (SSI): overall service posture
• Link Service Index (LSI): link posture
Metrics for Measuring DDoS Impact
[Annotated topology: same vantage points as above, with metrics recast as attack-induced changes]
• End-to-end (TCP) flow: increased connection completion time; increased rate of failed connections; increased loss per flow; decrease in goodput
• Link: increased aggregate attack rate; decrease in the Link Service Index (LSI)
• Server: degraded server response time; decreased server-side application throughput; decrease in the Server Service Index (SSI) (overall service posture)
Metrics for Measuring DDoS Defense
Effectiveness
• % improvement in base and composite
metrics
• Time taken to achieve a % improvement
• Metrics to characterize the breaking points for
a defense
– e.g., a filter can remove bad traffic at rates up to 200 Mb/s
• A high level effectiveness score (index) for a
defense mechanism
Metrics for Measuring Defense Effectiveness
[Annotated topology: same vantage points as above, with metrics recast as defense-induced improvements]
• End-to-end (TCP) flow: decreased connection completion time; decreased rate of failed connections; decreased loss per flow; increase in goodput
• Link: decreased aggregate attack rate; increase in the Link Service Index (LSI)
• Server: improved server response time; increased server-side application throughput; increase in the Server Service Index (SSI) (overall service posture)
Measuring Defense Effectiveness
Against TCP Flows Under Attack
• Goal:
– Measure the effectiveness of FloodWatch defense
mechanisms in isolating the effects of flooding attacks
on TCP flows
– Formulate a metric tied to a measure of the spread of the throughput of TCP flows when under attack and protected by FloodWatch, relative to an ideal (theoretic) notion of what the throughput should be if the flows were unaffected by the attack.
» Ideal throughput:

    f(r, p) = sqrt(1.5) / (r * sqrt(p)) .......... eq (1)

where r is the round-trip time and p is the loss rate
Towards a Defense Effectiveness Measure
• Induce loss rates of 1, 2, 3, and 4% and subject a sample of n flows to each loss rate p_k, k = 1, 2, 3, 4.
• Calculate the standard deviation (sd_k) of the observed throughputs of the flows in each sample with respect to the theoretical throughput f_k given by eq (1).
• Derive an effectiveness score for each loss rate as a function of sd_k:

    e_k = function(sd_k)

• Derive an effectiveness score for the defense as the mean of all e_k over k = 1, 2, 3, 4:

    E = (1/4) * sum_k e_k

• The lower the E value, the more effective the defense (see the sketch below)
• Can also normalize E as an index to lie within [0, 1]
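A minimal sketch of this procedure: compute the ideal throughput f(r, p) from eq (1), take the standard deviation of the observed throughputs around it for each induced loss rate, and average the per-rate scores. Taking e_k = sd_k directly is an assumption, since the slides leave the exact function open; the RTT and throughput samples are made up.

    # Sketch: defense effectiveness score E as the mean of per-loss-rate
    # scores e_k, with e_k = sd_k taken as an assumed choice of function.
    # The RTT and the observed throughput samples are made-up data.
    import math

    def ideal_throughput(rtt, p):
        """Eq (1): f(r, p) = sqrt(1.5) / (r * sqrt(p)), in segments/s."""
        return math.sqrt(1.5) / (rtt * math.sqrt(p))

    def sd_about(values, reference):
        """Spread of observed values around the theoretical throughput."""
        return math.sqrt(sum((v - reference) ** 2 for v in values) / len(values))

    rtt = 0.05  # 50 ms round-trip time (assumed)
    samples = {  # induced loss rate p_k -> observed per-flow throughputs
        0.01: [240, 250, 238, 252],
        0.02: [170, 176, 168, 180],
        0.03: [138, 144, 140, 146],
        0.04: [118, 125, 121, 127],
    }

    e = [sd_about(obs, ideal_throughput(rtt, p)) for p, obs in samples.items()]
    E = sum(e) / len(e)  # lower E: flows stay closer to ideal, better defense
    print(round(E, 1))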
Potential Future Directions in DDoS Metrics
• Mapping of attacks to extrinsic metrics
– Development of a classification of attacks and analyzing what the
externally perceived effect of an attack category is (in terms of the
extrinsic metrics that will be perturbed)
• Mapping of extrinsic to intrinsic metrics
– Goal is to understand why a specific type of defense technology is limited, or in some cases completely fails, in mitigating certain attack classes.
• Towards the development of a DDoS Outage Index
– Metrics to measure DDoS at Internet scales.
– The long-term goal is the development of the equivalent of the
outage index used by telecommunications and cable TV providers
to report outages to the FCC.
– Index will be a function of:
» Extent of an attack (how global, how many tier-X ISPs affected, etc.)
» Spread rate
» Duration of the attack
» Impact: how many critical services were affected?
» Recovery time etc.
Future Experiments and
Experimental Methodology
• Communicating DDoS defenses
– Stress the experimental methodology by requiring network
communication during DDoS attack
• Juniper routers
– Support for routing protocols (BGP, OSPF) within experiment
automation framework
– Leverage Chris Alfeld’s WAIL router building blocks
• CloudShield Appliance
– As programmable instrument
– As an emulation of a router line card, e.g. RED queue variants
– As development prototyping environment for various defenses