
OSMOSIS Final Presentation
Introduction
Osmosis System
• Scalable, distributed system.
• Many-to-many publisher-subscriber real-time sensor data streams, with QoS-constrained routing.
• Ability to perform distributed processing on stream data.
• Processing threads can migrate between hosts.
Introduction
Osmosis System (cont.)
• Distributed Resource Management.
• Maintain balanced load.
• Maximize number of QoS constraints met.
• Cross-platform implementation.
Motivation
Possible Systems
• A distributed video delivery system.
Multiple subscribers with different bandwidth requirements. The stream is compressed within the Pastry network en route, for lower-bandwidth subscribers.
• Car traffic management system.
Cameras at each traffic light, connected in a large distributed network. Different systems can subscribe to different streams to determine traffic in specific areas, allowing for re-routing of traffic, statistics gathering, etc.
Motivation
Possible Systems
• A generalized SETI@home-style distributed system.
Clients can join and leave the Osmosis network. Once part of the network, they can receive content and participate in the processing of jobs within the system.
Related Work
• Jessica Project
A distributed system with thread migration, but it uses a centralized server for load balancing, which limits scalability.
• End System Multicast
End systems implement all multicast-related functionality, including membership management and packet replication. Builds a mesh of all nodes in the network to construct the tree topology; not scalable.
• Pastry/Scribe
Application-level multicast and anycast on a generic, scalable, self-organizing substrate for peer-to-peer applications. Extremely scalable, but pays no attention to QoS.
Related Work
• Osmosis Goal:
Find a middle ground between the optimal, yet non-scalable, performance of End System Multicast and the scalable, yet suboptimal, performance of Pastry/Scribe.
Network of Nodes
[Figure: a network of eight nodes, each identified by its IP address.]
Measure Resources
[Figure: the same network, with each node annotated with its measured CPU utilization (0% to 95%) and bandwidth (5 kbps to 700 kbps).]
Measure Resources
[Figure: the network again, now annotated with per-link bandwidth measurements (1 kbps to 400 kbps) in addition to each node's CPU utilization.]
Build Overlays
[Figure: a logical overlay built over the same nodes, each labeled with its CPU utilization.]
Construct Streams
[Figure: a "Back To The Future" stream routed from a Producer to two Consumers through a Pass-through node and a Processor.]
System Overview
[Figure: the four components and their interactions. Resource Management supplies network utilization to Transport, and network & CPU utilization to the Migration Policy; Transport supplies routing information to the Migration Policy; the Migration Policy tells Thread Migration where and when to migrate.]
System Overview
Thread Migration
• Provides a means of transporting a thread from one machine to another.
• It has no knowledge of either the current resource state or the overlay network.
System Overview
Resource Management
• API that provides network and CPU utilization information (see the sketch below).
• Used by Transport to create and maintain the logical overlay.
• Used by the Migration Policy to decide when and where to migrate.
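A minimal sketch of what this API might expose; the interface and method names below are illustrative assumptions, not the project's actual API:

    // Hypothetical Resource Management API; names are illustrative
    // assumptions, not the actual OSMOSIS interfaces.
    public interface ResourceMonitor {
        // Fraction of CPU in use on this host, in [0.0, 1.0].
        double cpuUtilization();

        // Estimated available bandwidth to a peer, in kbit/s:
        // measured passively when a stream to the peer exists,
        // otherwise probed actively.
        double availableBandwidthKbps(java.net.InetAddress peer);
    }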
System Overview
Transport
• Creates overlay network based on resource
management information.
• Provides communications infrastructure.
• Provides API to Migration Policy allowing
access to routing table information.
System Overview
Migration Policy
• Decides when and where to migrate threads based on a pluggable policy (see the sketch below).
• Leverages resource metrics and the routing table of the logical overlay in its decisions.
• Calls the thread migration API when it is time to migrate, passing the address of the destination node.
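Since the policy is pluggable, it could be captured by an interface like the following minimal sketch; every name in it is an assumption for illustration:

    // Hypothetical pluggable migration policy; illustrative only.
    // ResourceMonitor is the sketched interface from the Resource
    // Management slide above.
    public interface MigrationPolicy {
        // Inspect local resource metrics and the overlay routing table;
        // return the node to migrate to, or null to stay put.
        java.net.InetAddress chooseDestination(
                ResourceMonitor monitor,
                java.util.List<java.net.InetAddress> routingTable);
    }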
Resource Monitoring
In order to provide basic tools for scalability and QoS-constrained routing, it is necessary to monitor system resource availability.
• Measurements
• Network Characteristics (Bandwidth/Latency)
• CPU Characteristics (Utilization/Queue Length)
Resource Monitoring
Bandwidth Measurement
• When a stream exists between hosts, passive measurement is performed.
• Otherwise, active measurements are carried out using the packet-train technique (see the sketch below).
• The averaging function can be user-defined.
Implementation
• Uses the pcap library on Linux and Windows.
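As a rough illustration of the packet-train idea (not the project's code), a receiver can estimate available bandwidth from the dispersion of a burst of back-to-back UDP packets; the port, train length, and packet size below are arbitrary assumptions:

    // Hypothetical receiver-side packet-train estimator; illustrative only.
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;

    public class PacketTrainReceiver {
        public static void main(String[] args) throws Exception {
            final int trainLength = 32;   // packets per train (assumed)
            final int packetSize = 1024;  // bytes per packet (assumed)
            try (DatagramSocket socket = new DatagramSocket(9000)) {
                byte[] buf = new byte[packetSize];
                long first = 0, last = 0;
                for (int i = 0; i < trainLength; i++) {
                    socket.receive(new DatagramPacket(buf, buf.length));
                    long now = System.nanoTime();
                    if (i == 0) first = now;
                    last = now;
                }
                // The train's dispersion approximates the bottleneck rate:
                // (trainLength - 1) packet gaps arrived in (last - first) ns.
                double seconds = (last - first) / 1e9;
                double kbps = (trainLength - 1) * packetSize * 8 / seconds / 1000;
                System.out.printf("Estimated bandwidth: %.1f kbit/s%n", kbps);
            }
        }
    }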
Resource Monitoring
CPU Measures
• Statistics collected at user-defined intervals.
Implementation
• Linux
Kernel level: a module collects data every jiffy.
User level: reads the loadavg & uptime /proc files (see the sketch below).
• Windows
Built-in performance counters.
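For the user-level Linux path, a small sketch of reading /proc/loadavg (the file's first three fields are the standard 1-, 5-, and 15-minute load averages; the wrapper class itself is an illustrative assumption):

    // Hypothetical user-level load reader; illustrative only.
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class LoadAverage {
        // Returns the 1-minute load average from /proc/loadavg.
        public static double oneMinute() throws java.io.IOException {
            String line = Files.readAllLines(Paths.get("/proc/loadavg")).get(0);
            return Double.parseDouble(line.split("\\s+")[0]);
        }
    }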
Resource Monitoring
Evaluation of techniques
• System/network-wide overhead of running the measurement code.
• How different levels of system/network load affect the measurement techniques.
Resource Monitoring
Evaluation of work
• Linux functionality implemented
• CPU measures evaluated
In progress
• Bandwidth measurement evaluation
• Windows implementation
Transport Overview
• Distributed, scalable, and widely deployable routing infrastructure.
• Creates a logical space correlated with the physical space.
• Distributed routing table construction and maintenance.
• Multicast transmission of data with the ability to meet QoS.
Routing Infrastructure
Logical Space
• Assume IP addresses provide an approximation of the physical topology.
• 1:1 mapping of logical to physical.
Routing Tables
• Maximum size of 1K entries.
• Obtained incrementally during joining.
• Progressively closer routing, à la Pastry (see the sketch below).
Multicast Tree Growing
• QoS considered during the join/build phase.
• Localized, secondary rendezvous points.
• Next-hop session information maintained by all nodes in the multicast tree.
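A minimal sketch of the progressively closer routing step, in the style of Pastry: at each hop, forward toward the known node whose ID shares the longest prefix with the destination key. The key representation and table layout are illustrative assumptions:

    // Hypothetical prefix-routing step, Pastry-style; illustrative only.
    import java.util.List;

    public class PrefixRouter {
        // Length of the common prefix of two node IDs (hex strings).
        static int sharedPrefix(String a, String b) {
            int i = 0;
            while (i < a.length() && i < b.length() && a.charAt(i) == b.charAt(i)) {
                i++;
            }
            return i;
        }

        // Pick the routing-table entry whose ID shares the longest
        // prefix with the key, so every hop gets strictly closer.
        static String nextHop(String key, String selfId, List<String> table) {
            String best = selfId;
            int bestLen = sharedPrefix(key, selfId);
            for (String node : table) {
                int len = sharedPrefix(key, node);
                if (len > bestLen) {
                    best = node;
                    bestLen = len;
                }
            }
            return best; // selfId means this node is the closest known
        }
    }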
Multicast Group Organizational Diagram
[Figure: a multicast group with its rendezvous point RP(G), secondary rendezvous points SP1-SP3, and participants P1-P3.]
Transport Evaluation
Planned
• Test the network stress and QoS of our system
compared to IP Multicast, Pastry, and End-System
Multicast.
Transport Future Work
• User- and kernel-space implementations.
• Integrate XTP to utilize the system.
Thread Migration
[Figure: a stream running from a Server down to a Client through a Processor; the Processor maintains a buffer, while the hops above and below it are unbuffered.]
Migration Overview
Both user- and kernel-level implementations:
• Change node state via an associated API (pass-through, processing, corked, and uncorked); see the sketch below.
• Migrate nodes while maintaining stream integrity.
Kernel/C: fewer protection-domain switches, fewer copies, kernel threading, and scalability. Faster.
User/Java: can run on any Java platform. Friendlier.
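A sketch of what the state-change and migration API might look like; the type and method names are illustrative assumptions, not the actual OSMOSIS interfaces:

    // Hypothetical node-state API; illustrative only.
    public interface MigratableNode {
        enum State { PASS_THROUGH, PROCESSING, CORKED, UNCORKED }

        // Change how this node handles the stream.
        void setState(State state);

        // Cork the stream, ship the thread and its state to the
        // destination, then uncork there, preserving stream integrity.
        void migrateTo(java.net.InetAddress destination);
    }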
Migration Accomplishments
Kernel:
• IOCTL /dev interface.
• Design and code for the different node states.
• Streaming handled by kernel threads in the keventd process.
• Test and API interfaces.
Migration Accomplishments
Java:
• Command-line OR socket-based API.
• Dynamic binding of the processor object, which must be derived from a provided abstract class (see the sketch below).
• Works with any socket producer/consumer pair.
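The provided abstract class might look roughly like the following sketch (the class and method names are assumptions for illustration). A concrete subclass could, for example, recompress video for lower-bandwidth subscribers, matching the video-delivery scenario above:

    // Hypothetical abstract processor base class; illustrative only.
    import java.io.InputStream;
    import java.io.OutputStream;

    public abstract class StreamProcessor {
        // Subclasses transform the stream as it flows through the node;
        // the runtime binds a concrete subclass dynamically.
        public abstract void process(InputStream in, OutputStream out)
                throws java.io.IOException;
    }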
Migration Integration
Kernel:
• Non-OSMOSIS-specific C/C++ API.
• Socket-based API.
Java:
• Java command line API.
• Provides abstract classes for processors.
• Socket-based API.
Migration Evaluation
Comparison with:
• Standardized methods for data pass-through.
• Existing non-real-time streaming systems.
• Existing thread migration systems.
Comparison and integration between the Java and
Kernel Loadable Module implementations.
Migration Future Work
Kernel:
• Implement zero-copy for the processing state.
• Heterogeneous thread migration.
Java:
• Increased performance.
Both:
• Support for alternate network protocols.
• Testing and evaluation.
Conclusions
The systems and algorithms developed are significant initial steps toward a final OSMOSIS system.
They have been designed to be modular and easily integrated with one another.
The research and mechanisms developed during this project are not bound to the OSMOSIS system.