Programming the IBM Power3 SP
Introduction
Jiří Navrátil
SLAC
Project Partners and Researchers
INCITE: Edge-based Traffic Processing and Service Inference for High-Performance Networks
Richard Baraniuk, Rice University; Les Cottrell, SLAC; Wu-chun Feng, LANL
Rice University
Richard Baraniuk, Edward Knightly, Robert Nowak, Rudolf Riedi
Xin Wang, Yolanda Tsang, Shriram Sarvotham, Vinay Ribeiro
Los Alamos National Lab (LANL)
Wu-chun Feng, Mark Gardner, Eric Weigle
Stanford Linear Accelerator Center (SLAC)
Les Cottrell, Warren Matthews, Jiri Navratil
Project Goals
• Objectives
– scalable, edge-based tools for on-line network
analysis, modeling, and measurement
• Based on
– advanced mathematical theory and methods
• Designed to
– support high-performance computing infrastructures such as computational grids, ESnet, Internet2, and other high-performance networking projects
Project Elements
• Advanced techniques
– from networking, supercomputing, statistical signal
processing, applied mathematics
• Multiscale analysis and modeling
– understand causes of burstiness in network traffic
– realistic, yet analytically tractable, statistically robust, and
computationally efficient modeling
• On-line inference algorithms
– characterize and map network performance as a function of
space, time, application, and protocol
• Data collection tools and validation experiments
Scheduled Accomplishments
• Multiscale traffic models and analysis techniques
– based on multifractals, cascades, wavelets
– study how large flows interact and cause bursts
– study adverse modulation of application-level traffic by
TCP/IP
• Inference algorithms for paths, links, and routers
– multiscale end-to-end path modeling and probing
– network tomography (active and passive)
• Data collection tools
– add multiscale path, link inference to PingER suite
– integrate into ESnet NIMI infrastructure
– MAGNeT – Monitor for Application-Generated Network Traffic
– TICKET – Traffic Information-Collecting Kernel with Exact
Timing
Future Research Plans
• New, high-performance traffic models
– guide R&D of next-generation protocols
• Application-generated network traffic repository
– enable grid and network researchers to test and evaluate
new protocols with actual traffic demands of applications
rather than modulated demands
• Multiclass service inference
– enable network clients to assess a system's multi-class
mechanisms and parameters using only passive, external
observations
• Predictable QoS via end-point control
– ensure minimum QoS levels to traffic flows
– exploit path and link inferences in real-time end-point
admission control
(From Papers to Practice)
[Diagram: MWFS, TOMO, TOPO processing pipeline; indicative timings of 20 ms, ~300 ms, and 40 T for a new set of values (12 sec)]
First results
What has been done
• Phase 1 - Remodeling
- Code separation (BW and CT)
- Find out how to call MATLAB from another program (see the sketch after this list)
- Analyze results and data
- Find optimal parameters for the model
• Phase 2
- Web publication of the BW estimate
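As one hedged illustration of the "call MATLAB from another program" step: a minimal sketch using the MATLAB Engine API for Python (the original work may well have used the C Engine interface instead). The path, the script name bw_estimate.m, and the sample data are hypothetical.

```python
# Minimal sketch: driving MATLAB from Python via the MATLAB Engine API.
# The path, the function bw_estimate.m, and the data below are hypothetical.
import matlab.engine

eng = matlab.engine.start_matlab()             # launch a MATLAB session
eng.addpath('/path/to/mfct/code', nargout=0)   # hypothetical location of the .m files

# Run a (hypothetical) bandwidth-estimation routine on a vector of
# packet-pair dispersion samples and fetch the result back into Python.
dispersions = matlab.double([0.8, 1.1, 0.9, 1.3])   # dispersions in ms, example data
bw_mbps = eng.bw_estimate(dispersions)               # calls bw_estimate.m

print('estimated BW ~ %.1f Mbps' % bw_mbps)
eng.quit()
```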
[Plots: data dispersions from sunstats.cern.ch; hosts shown include ccnsn07.in2p3.fr, sunstats.cern.ch, pcgiga.cern.ch, and plato.cacr.caltech.edu; pcgiga.cern.ch with the default WS gives BW ~ 70 Mbps, with WS 512K BW ~ 100 Mbps]
Reaction to the network problems
After tuning
MF-CT Features and benefits
• No need for access to routers!
– Current monitoring systems for traffic load are based on SNMP or flow data (both need access to routers)
• Low cost:
– Allows permanent monitoring (20 pkts/sec ~ overhead
10 Kbytes/sec)
– Can be used as data provider for ABW prediction (ABW = BW - CT); see the sketch after this slide
• Weak point for common use:
– MATLAB code
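The two figures above fit together arithmetically: 20 probe packets/sec at roughly 500 bytes each is about 10 Kbytes/sec of overhead, and the available bandwidth follows from ABW = BW - CT. The sketch below only spells out that arithmetic; the constant and function names are illustrative, not from the project code.

```python
# Sketch of the ABW bookkeeping described above (names are illustrative).
PROBE_RATE_PPS = 20        # probe packets per second
PROBE_SIZE_BYTES = 500     # ~500-byte probes -> ~10 Kbytes/sec overhead (assumed size)

overhead_kbytes_per_s = PROBE_RATE_PPS * PROBE_SIZE_BYTES / 1000.0   # ~10

def available_bandwidth(bw_mbps: float, ct_mbps: float) -> float:
    """ABW = BW - CT: bottleneck bandwidth estimate minus measured cross-traffic."""
    return max(bw_mbps - ct_mbps, 0.0)

# Example: 100 Mbps bottleneck with 30 Mbps of cross-traffic -> 70 Mbps available.
print(available_bandwidth(100.0, 30.0))
print(overhead_kbytes_per_s, "Kbytes/sec probing overhead")
```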
Future work on CT
• Verification model
– Define and setup verification model (S+R)
– Measurements (S)
– Analyze results (S+R)
• On-line running on selected sites
– Prepare code for automation and web publication (S)
– CT code modification? (R)
[Diagram: MF-CT Simulator verification setup; UDP echo probes with SNMP counters at CERN, SLAC, and IN2P3]
CT RE-ENGINEERING
For practical monitoring it would be necessary to modify the code to run in different modes:
– Continuous mode for monitoring one site on a large time scale (hours)
– Accumulation mode (1 min, 5 min, ?) for running on several sites in parallel (see the sketch below)
– ? A solution without MATLAB ?
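A minimal sketch of what the two run modes could look like, assuming one measurement round is wrapped in a function; measure_ct(), the timing constants, and the site list are hypothetical placeholders, not the project's design.

```python
# Sketch of the two proposed run modes (all names are hypothetical;
# measure_ct() stands in for one MF-CT cross-traffic measurement round).
import time

def measure_ct(site: str) -> float:
    """Placeholder for one cross-traffic estimate toward `site` (Mbps)."""
    return 0.0  # hypothetical stub

def continuous_mode(site: str, hours: float = 1.0, period_s: float = 1.0):
    """Continuous mode: monitor a single site back-to-back over hours."""
    end = time.time() + hours * 3600
    while time.time() < end:
        ct = measure_ct(site)
        # ... publish ct (e.g. to a web page) here ...
        time.sleep(period_s)

def accumulation_mode(sites, window_s: float = 60.0, rounds: int = 10):
    """Accumulation mode: rotate over several sites, one 1-5 min window each."""
    for _ in range(rounds):
        for site in sites:
            start, samples = time.time(), []
            while time.time() - start < window_s:
                samples.append(measure_ct(site))
                time.sleep(1.0)
            print(site, sum(samples) / len(samples))  # one aggregated value per window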
Rob Nowak (and CAIDA people) say:
www.caida.org
Network Topology Identification
Ratnasamy & McCanne (99)
Duffield et al. (00, 01, 02)
Bestavros et al. (01)
Coates et al. (01)
Pairwise delay measurements reveal topology
Network Tomography
[Diagram: tree network with a source, internal routers/nodes, links, and receivers]
Measure end-to-end (from source to receiver) losses/delays
Infer link-level (at internal routers) loss rates and delay distributions
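One hedged way to make the measure/infer step concrete: on the smallest tree (one shared link feeding two receivers), per-link pass rates follow in closed form from end-to-end success frequencies, provided the two packets of each probe share the fate of the common link (the packet-pair assumption used later). The sketch below is illustrative Python, not the project's code.

```python
# Minimal sketch: closed-form link loss inference on a two-receiver tree.
# Assumes each probe's two packets see identical fate on the shared link.
import random

def infer_two_leaf_tree(outcomes):
    """
    outcomes: list of (y1, y2), 1 = packet reached that receiver, 0 = lost.
    Returns estimated pass rates (shared link, leaf link 1, leaf link 2).
    """
    n = len(outcomes)
    p1 = sum(y1 for y1, _ in outcomes) / n           # P(receiver 1 gets it) = a*b
    p2 = sum(y2 for _, y2 in outcomes) / n           # P(receiver 2 gets it) = a*c
    p12 = sum(y1 and y2 for y1, y2 in outcomes) / n  # P(both get it)        = a*b*c
    a = p1 * p2 / p12   # shared link:  (ab)(ac)/(abc) = a
    b = p12 / p2        # leaf link to receiver 1
    c = p12 / p1        # leaf link to receiver 2
    return a, b, c

# Synthetic check: shared link drops ~10%, leaf 1 drops ~5%, leaf 2 is lossless.
random.seed(0)
data = []
for _ in range(10000):
    shared = random.random() < 0.9
    data.append((int(shared and random.random() < 0.95), int(shared)))
print(infer_two_leaf_tree(data))   # roughly (0.9, 0.95, 1.0)
```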
Unicast Network Tomography
Measure end-to-end losses of packets
'0' = loss, '1' = success (recorded at each receiver)
Cannot isolate where losses occur!
Packet Pair Measurements
[Diagram: two back-to-back probe packets (1) and (2) traversing a path with cross-traffic; delay measurement on the packet pair]
Packet (1) and packet (2) experience nearly identical losses and/or delays
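A hedged sketch of the probing primitive itself: send two back-to-back UDP packets toward an echo service and timestamp the replies. The host, port, and payload size are hypothetical, and user-space timestamps are far coarser than the exact kernel-level timing the project's tools (e.g. TICKET) aim for.

```python
# Minimal user-space sketch of a packet-pair probe over UDP (illustrative only;
# host, port, and sizes are hypothetical placeholders).
import socket
import time

ECHO_HOST, ECHO_PORT = "echo.example.org", 7   # hypothetical UDP echo service
PROBE_SIZE = 40                                 # 40-byte probes, as in the ns runs below

def packet_pair_probe():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(2.0)
    payload = b"\x00" * PROBE_SIZE
    send_t = time.perf_counter()
    s.sendto(payload, (ECHO_HOST, ECHO_PORT))   # packet (1)
    s.sendto(payload, (ECHO_HOST, ECHO_PORT))   # packet (2), back-to-back
    delays = []
    for _ in range(2):
        try:
            s.recvfrom(2048)
            delays.append(time.perf_counter() - send_t)   # round-trip delay
        except socket.timeout:
            delays.append(None)                            # '0' = loss
    s.close()
    return delays   # [d1, d2]; the dispersion d2 - d1 reflects cross-traffic queuing

print(packet_pair_probe())
```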
Delay Estimation
Measure end-to-end delays of packet-pairs
Packets experience the same delay on link 1 (the shared link)
d2 - dmin = 0: no extra delay on link 2
d3 - dmin > 0: extra delay on link 3
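To make the d - dmin idea concrete: a small sketch that converts raw per-path delays into "extra delay" by subtracting each path's minimum (taken as the no-queuing baseline); a path showing no extra delay exonerates its unshared links. Purely illustrative, with made-up sample values.

```python
# Sketch: extra (queuing) delay per path = measured delay minus the path minimum.
def extra_delays(delay_samples):
    """delay_samples: dict mapping path name -> list of end-to-end delays (ms)."""
    return {path: [d - min(ds) for d in ds] for path, ds in delay_samples.items()}

# Hypothetical example: the path through link 2 shows d - dmin ~ 0, the path through
# link 3 shows clear extra delay, so the queuing is attributed to link 3.
samples = {
    "path_via_link2": [10.0, 10.1, 10.0, 10.2],
    "path_via_link3": [10.0, 14.5, 12.8, 15.1],
}
for path, extras in extra_delays(samples).items():
    print(path, "max extra delay:", max(extras), "ms")
```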
Packet-pair measurements
Key assumptions:
• fixed routes
• i.i.d. pair measurements
• losses & delays on each link are mutually independent
• packet-pair losses & delays on shared links are nearly identical
For probe n, packet p ∈ {1, 2} yields a measurement y^(p)(n): either 0 "loss" / 1 "success", or 0, 1, ..., K "delay units".
The data y = { (y^(1)(n), y^(2)(n)) : n = 1, ..., N } record the occurrences of losses and delays.
ns Simulation
• 40-byte packet-pair probes every 50 ms
• competing traffic composed of:
– on-off exponential sources (500-byte packets)
– TCP connections (1000-byte packets)
[Plots: cross-traffic on link 9 (Kbytes/s vs. time (s)); test network showing link bandwidths (Mb/s)]
Future work on TM and TP
• Model in the frame of the Internet (~100 sites)
– Define verification model (S+R)
– Deploy and install code on sites (S)
– First measurements (S+R)
– Analyze results (form, speed, quantity) (S+R)
– ? Code modification (R)
• Production model?
– Compete with PingER, RIPE, Surveyor, NIMI?
– How to unify the VIRTUAL structure with the real one?