SAHARA: A Revolutionary
Service Architecture for Future
Telecommunications Systems
Randy H. Katz, Anthony Joseph
Computer Science Division
Electrical Engineering and Computer Science Department
University of California, Berkeley
Berkeley, CA 94720-1776
Project Goals
• Delivery of end-to-end services with
desirable properties (e.g., performance,
reliability, “qualities”), provided by multiple
potentially distrusting service providers
• Architectural framework for
– Economics-based resource allocation
– Third-party mediators, such as Clearinghouses
– Dynamic formation of service confederations
– Support for diverse business models
Presentation Outline
• Motivation
• Project SAHARA
• Initial Investigations
• Testbeds
• Summary and Conclusions
The Huge Expense of New
Telecomms Infrastructures
• European auctions for 3G spectrum: 50
billion ECU and counting
• Capital outlays likely to match spectrum
expenses, all before the first ECU of
revenue!
• Compelling motivation for collaborative
deployment of wireless infrastructure
Any Way to Build
a Network?
• Partitioning of frequencies independent of
actual subscriber density
– Successful operators oversubscribe resources, while
less popular providers retain excess capacity
– Different flavor of roaming: among
collocated/competing service providers
• Duplicate antenna sites
– Serious problem given community resistance
• Redundant backhaul networks
– Limited economies of scale
The Case for Horizontal
Architectures
“The new rules for success will be to provide one
part of the puzzle and to cooperate with other
suppliers to create the complete solutions that
customers require. ... [V]ertical integration breaks
down when innovation speeds up. The big telecoms
firms that will win back investor confidence
soonest will be those with the courage to rip apart
their monolithic structure along functional layers,
to swap size for speed and to embrace rather than
fear disruptive technologies.”
The Economist Magazine, 16 December 2000
Horizontal Internet Service
Business Model
[Layered diagram, top to bottom, with provider roles alongside
(AIP, ISV, ASP, ISP, CLEC):]
• Applications (Portals, E-Commerce, E-Tainment, Media)
• Appl Infrastructure Services (Distribution, Caching, Searching, Hosting)
• Application-specific Servers (Streaming Media, Transformation)
• Internet Data Centers
• Application-specific Overlay Networks (Multicast Tunnels, Mgmt Svcs)
• Global Packet Network Internetworking (Connectivity)
Feasible Alternative: Horizontal
Competition vs. Vertical Integration
• Service Operators “own” the customer,
provide “brand”, issue/collect the bills
• Independent Backhaul Operators
• Independent Antenna Site Operators
• Independent Owners of the Spectrum
• Microscale auctions/leases of network
resources
• Emerging concept of Virtual Operators
Virtual
Operator
• Local premise owner deploys own access infrastructure
– Better coverage/more rapid build out of network
– Deployments in airports, hotels, conference centers,
office buildings, campuses, …
• Overlay service provider (e.g., PBMS) vs.
organizational service provider (e.g., UCB IS&T)
– Single bill/settle with service participants
• Support for confederated/virtual devices
– Mini-BS for cellular/data + WLAN for high rate data
Presentation Outline
• Motivation
• Project SAHARA
• Initial Investigations
• Testbeds
• Summary and Conclusions
The “Sahara” Project
• Service
• Architecture for
• Heterogeneous
• Access,
• Resources, and
• Applications
SAHARA Assumptions
• Dynamic confederations to better share resources &
deploy access/achieve regional coverage more rapidly
• Scarce resources efficiently allocated using dynamic
“market-driven” mechanisms
• Trusted third parties manage the resource marketplace
on a fair, unbiased, audited and verifiable basis
• Vertical stovepipe replaced by horizontally organized
“multi-providers,” open to increased competition and
more efficient allocation of resources
Architectural Elements
• “Open” service/resource allocation model
– Independent service creation, establishment,
placement, in overlapping domains
– Resources, capabilities, status described/exchanged
amongst confederates, via enhanced capability
negotiation
– Allocation based on economic methods, such as
congestion pricing, dynamic marketplaces/auctions
– Trust management among participants, based on
trusted third party monitors
Architectural Elements
• Forming dynamic confederations
– Discovering potential confederates
– Establishing trust relationships
– Managing transitive trust relationships & levels
of transparency
– Not all confederates need be competitors: heterogeneous,
collocated access networks to better support applications
Architectural Elements
• Alternative View: Service Brokering
– Dynamically construct overlays on component
services provided by underlying service
providers
• E.g., overlay network segments with desirable
performance attributes
• E.g., construct end-to-end multicast trees from
subtrees in different service provider clouds
– Redirect to alternative service instances
• E.g., choose instance based on distance, network load,
server load, trust relationships, resilience to network
failure, …
Deliverables
• Architecture and Mechanisms for
– Fine grain market-driven resource allocation
– Application awareness in decision making
• Confederations and Trust Management
– Dynamic marshalling, observation/verification of
participant behaviors, dissolution of confederations
– Mechanisms to “audit” third party resource allocations,
ensuring fairness and freedom from bias in operation
• New Handoff Concepts Based on Redirection
– Not just network handoff for lower cost access
– Also alternative service provider to balance loads
Research Methodology
[Cycle diagram: Evaluate → Analyze & Design → Prototype → repeat]
• Evaluate existing system to discover bottlenecks
• Analyze alternatives to select among approaches
• Prototype selected alternatives to understand
implementation complexities
• Repeat
Presentation Outline
• Motivation
• Project SAHARA
• Initial Investigations
• Testbeds
• Summary and Conclusions
Initial Investigations
• Congestion-Based Pricing
– Economics-based resource allocation
• Clearinghouse Architecture
– Trusted Resource Mediators
– Measurement-based Admission Control with
traffic policing
• Service Composition
– Achieving performance, reliability from multiple
placed service instances
Congestion-Based Pricing
• Hypothesis: Dynamic pricing influences
user behavior
– E.g., shorten/defer call sessions;
accept lower audio/video QoS
• Critical resource reaches congestion levels,
modify prices to drive utilization back to
“acceptable” levels
– E.g., available bandwidth, time slots, number of
simultaneous sessions
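The pricing rule sketched on this slide can be written as a simple utilization-to-rate mapping. A minimal sketch, assuming an illustrative target utilization and sensitivity (neither value comes from the project):

```python
def congestion_price(base_rate: float, utilization: float,
                     target: float = 0.8, sensitivity: float = 4.0) -> float:
    """Return the current rate for a critical resource.

    Below the target utilization the base rate applies; above it the
    rate climbs steeply to entice users to shorten/defer sessions or
    accept lower quality, driving utilization back to "acceptable"
    levels. Both thresholds are assumed for illustration.
    """
    if utilization <= target:
        return base_rate
    overload = (utilization - target) / (1.0 - target)  # 0..1 above target
    return base_rate * (1.0 + sensitivity * overload)
```

With these assumed parameters, anything at or below 80% utilization costs the base rate, while a resource at 95% utilization is priced at four times the base rate.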
Computer Telephony Services
(CTS) Testbed
[Diagram: Internet connected to the PSTN via Internet-to-PSTN gateways]
• E.g., Dialpad.com & Net-to-Phone
• Gateways as bottlenecks (limited PSTN access lines)
• Use congestion pricing (CP) to entice users to
– Talk shorter
– Talk later
– Accept lower quality
Berkeley User Study
• Goal: determine effectiveness of CP
• Figures of merit
– Maximize utilization (service not idling)
– Reduce provisioning
– Reduce congestion (reduced blocking probability)
• User acceptance/reactions to CP
– Talk shorter
– Wait
– Defer talk to another time
– Use alternative access device
– Use reduced connection qualities
Experiments
• Vary Price, Quality, Interval of Price Changes
• Experiments
– Congestion pricing: rate depends on current load
– Flat rate pricing: same rate all the time
– Time-of-day pricing: higher rate during peak hours
– Call-duration pricing: higher rate for long-duration calls
– Access-device pricing: higher rate for using a phone
instead of a computer
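A hypothetical sketch of how the experimental policies might compute a call's charge in tokens. Every rate and threshold below is invented for illustration; the study's actual rates are not stated on these slides:

```python
def charge(policy: str, minutes: float, start_hour: int = 12,
           device: str = "computer", load: float = 0.0) -> float:
    """Tokens charged for one call under each experimental policy."""
    if policy == "flat":                 # same rate all the time
        return 1.0 * minutes
    if policy == "time-of-day":          # higher rate during 7-11pm peak
        rate = 2.0 if 19 <= start_hour < 23 else 1.0
        return rate * minutes
    if policy == "call-duration":        # higher rate for long calls
        rate = 2.0 if minutes > 10 else 1.0
        return rate * minutes
    if policy == "access-device":        # phone costs more than computer
        rate = 2.0 if device == "phone" else 1.0
        return rate * minutes
    if policy == "congestion":           # rate depends on current load
        return (1.0 + 2.0 * load) * minutes
    raise ValueError(f"unknown policy: {policy}")
```

The point of the comparison is that only the congestion policy ties price to current load; the other four depend on properties of the call itself.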
Experimental Setup & Limitations
• Computers vs. phones to make/receive free phone calls
• Different pricing policies: 1000 tokens/week
• RT pricing, connection quality & accounting information
Flat Rate Versus Time-of-day
Peak hours from 7-11pm
[Chart: Flat Rate Pricing, calling pattern in minutes by time of
day (hours 0-23)]
[Chart: Time of Day Pricing, calling pattern in minutes by time of
day (hours 0-23); peak shifted, with high bursts right before &
right after peak hours]
Initial Results
• Call-duration pricing
– Hypothesis: fewer long-duration calls & more short-duration calls
– Result: fewer long-duration calls, but no increase in
short-duration calls
• Congestion pricing
– Congestion: two or more simultaneous users
– Hypothesis: users talk less when they encounter CP
– Result: each user used the service 8.44 minutes more overall
(standard error 11.3); observed reduction in call session length
when CP was encountered: 2.31 minutes (2.68)
– Not statistically significant (t-test)
– Not enough users to cause much congestion
Preliminary Findings
• Feasible to implement/use CP in real system
• Pricing better utilizes existing resources, reduces
congestion
• CP is better than other pricing policies
• Based on surveys, users prefer CP to flat rate
pricing if its average rate is lower
– Service providers can better utilize existing resources
by providing users with incentives to use CP
• Limitations
– Too few users
– Applies only to telecommunication services
Clearinghouse
[Diagram: an H.323 gateway to the PSTN, GSM wireless phones, and
applications (video conferencing, distance learning; web surfing,
emails, TCP connections; VoIP, e.g. NetMeeting) converging on an
IP-based core]
Vision: data, multimedia (video, voice, etc.) and
mobile applications over one IP network
Question: How to regulate resource allocation within
and across multiple domains in a scalable
manner to achieve end-to-end QoS?
Clearinghouse Goals
• Design/build distributed control architecture for
scalable resource provisioning
– Predictive reservations across multiple domains
– Admission control & traffic policing at edge
• Demonstrate architecture’s properties and performance
– Achieve adequate performance w/o edge per-flow state
– Robust against traffic fluctuations and misbehaving flows
• Prototype proposed mechanisms
– Min edge router overhead for scalability/ease of deployment
Clearinghouse Architecture
• Clearinghouse is a distributed architecture: each CH-node serves as a resource manager
• Functionalities
– Monitors network performance on ingress &
egress links
– Estimates traffic demand distributions
– Adapts trunk/aggregate reservations within &
across domains based on traffic statistics
– Performs admission control based on estimated
traffic matrix
– Coordinates traffic policing at ingress & egress
points for detecting misbehaving flows
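One way a CH-node could adapt a trunk reservation from its traffic statistics is to keep a window of measured rates for an ingress-egress pair and reserve at a high percentile plus headroom, re-signalling only when the target drifts outside a hysteresis band. This is a sketch under those assumptions, not the project's implementation; all parameters are invented:

```python
from collections import deque

class TrunkReservation:
    """Adapt an aggregate trunk reservation from measured demand."""

    def __init__(self, window: int = 100, percentile: float = 0.95,
                 headroom: float = 1.1, hysteresis: float = 0.05):
        self.samples = deque(maxlen=window)   # sliding window of rate samples
        self.percentile = percentile
        self.headroom = headroom
        self.hysteresis = hysteresis
        self.reserved = 0.0

    def observe(self, measured_rate: float) -> float:
        """Record one rate sample; return the (possibly updated) reservation."""
        self.samples.append(measured_rate)
        ordered = sorted(self.samples)
        idx = min(len(ordered) - 1, int(self.percentile * len(ordered)))
        target = ordered[idx] * self.headroom
        # Re-signal across domains only when the target moves outside
        # the hysteresis band, to avoid reacting to every fluctuation.
        if self.reserved == 0 or abs(target - self.reserved) > self.hysteresis * self.reserved:
            self.reserved = target
        return self.reserved
```

The hysteresis band reflects the slide's point that reservations should track traffic fluctuations without constant inter-domain signalling.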
Multiple-ISP Scenario
[Diagram: hosts attach via ingress routers (IR) and egress routers
(ER) at the edges of ISP 1, ISP 2, ... ISP m, ISP n]
• Hybrid of flat and hierarchical structures
– Local hierarchy within large ISPs
• Distributes network state to various CH-nodes and reduces
the amount of state information maintained
– Flat structure for peer-to-peer relationships across
independent ISPs
Illustration
[Diagram: within ISP 1, a host attaches through an edge router to
logical domains LD0, each with its own CH-node (CH0); a parent
CH-node (CH1) covers LD1]
• A hierarchy of logical domains (LDs)
– e.g., LD0 can be a POP or a group of neighboring POPs
• A CH-node is associated with each LD
– Maintains resource allocations between ingress-egress pairs
– Estimates traffic demand distributions & updates parent CH-nodes
Illustration
[Diagram: the CH1 nodes of ISP 1, ISP m, and ISP n peer with one
another; within each ISP, CH0 nodes cover the LD0 domains under LD1]
• Parent CH-node
– Adapts trunk reservations across LDs for aggregate traffic
within ISP
• Appears flat at the top level
– Coordinates peer-to-peer trunk reservations across multiple ISPs
Key Design Decisions
• Service model: ingress/egress routers as endpoints
– IE-Pipe(s,d) = aggregate traffic entering an ISP domain at
IR-s and exiting at ER-d
• Reservations set up for aggregated flows on intra- and
inter-domain links
– Adapt dynamically to track traffic fluctuations
– Core routers stateless; edge routers maintain aggregate state
• Traffic monitoring, admission control, traffic
policing for individual flows performed at the edge
– Access routers have smaller routing tables; experience
lower aggregation of traffic relative to backbone routers
– Most congestion (packet loss/delay) happens at edges
Traffic-Matrix Admission Control
[Diagram: a new flow of rate Rnew from host network A enters at
IR-s (POP 1) and exits at ER-d (POP 2) toward host network B; IR-s
relays the request to the CH, which accepts or rejects; a traffic
monitor sits at each edge router]
• Mods to edge routers
– Traffic monitors passively measure aggregate rate of
existing flows, M(s,d)
– IR-s forwards control messages (Request/Accept/Reject)
between CH and host/proxy
– Estimate traffic demand distributions, D(s,:), and
report to the CH
• CH
– Leverages knowledge of topology and traffic matrix to
make admission decisions
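The admission decision above can be approximated as follows. The link-by-link bookkeeping is an invented simplification (a real CH-node would work from estimated demand distributions, not just point measurements), and all names are illustrative:

```python
def admit(Rnew, s, d, measured, capacity, route):
    """Admit a flow of rate Rnew from ingress s to egress d?

    measured: dict (s, d) -> current aggregate rate M(s, d)
    capacity: dict link -> provisioned bandwidth on that link
    route:    dict (s, d) -> list of links used by the IE-pipe
    """
    for link in route[(s, d)]:
        # Sum the measured load of every IE-pipe sharing this link:
        # the CH's traffic-matrix view of current usage.
        load = sum(rate for pair, rate in measured.items()
                   if link in route[pair])
        if load + Rnew > capacity[link]:
            return False  # Reject: some link on the pipe would overflow
    # Accept: fold the new flow into the CH's traffic matrix.
    measured[(s, d)] = measured.get((s, d), 0.0) + Rnew
    return True
```

Because the CH decides from aggregate measurements and topology, the edge routers never need per-flow reservation state, matching the stateless-core design decision on the previous slide.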
Group Policing for Malicious
Flow Detection
[Diagram: on an Accept, the CH assigns the flow an id (Fid) and
updates the token bucket filters (TBFs) at IR-s (POP 1) and ER-d
(POP 2) on the path from host network A to host network B]
• CH assigns Fid if the flow is admitted
– Let FidIn = x, FidEg = y
– Traffic Policer at IR-s aggregates flows based on FidIn
for group policing (one TBF for group x)
– Traffic Policer at ER-d aggregates flows based on FidEg
for group policing (one TBF for group y)
* Traffic Policer at IR or ER only maintains
total allocated bandwidth to the group
(aggregate state) and not per-flow
reservation status
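A minimal sketch of the per-group token bucket filter described above, assuming a standard rate/burst TBF; the parameters and class name are illustrative, not the project's:

```python
class GroupTokenBucket:
    """One TBF per flow group (FidIn or FidEg), holding only the
    group's total allocated bandwidth -- aggregate state, never
    per-flow reservation status."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate      # total allocated bandwidth for the group
        self.burst = burst    # bucket depth (max token accumulation)
        self.tokens = burst
        self.last = 0.0       # time of the previous packet

    def conforms(self, now: float, packet_bytes: float) -> bool:
        """Refill tokens for the elapsed time; pass the packet if
        enough tokens remain. Sustained excess by any flow in the
        group drains the shared bucket, which is what flags the
        group for closer inspection."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False
```

Policing the group rather than each flow keeps edge-router state proportional to the number of groups, at the cost of only detecting that *some* flow in a group is misbehaving.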
Service Composition
• Assumptions
– Providers deploy services throughout network
– Portals constructed via service composition
• Quickly enable new functionality on new devices
• Possibly through SLAs
– Code is initially non-mobile
• Service placement managed: fixed locations, evolves slowly
– New services created via composition
• Across service providers in wide-area: service-level path
Service Composition
[Diagram: two composed service-level paths across providers, each
with replicated service instances: a video-on-demand server
(Provider A) through a transcoder (Provider B) to a thin client,
and an email repository (Provider Q) through text-to-speech
(Provider R) to a cellular phone]
Architecture for Service
Composition and Management
[Layered diagram:]
• Application plane: composed services
• Logical platform: peering relations, overlay network
– Service-level path creation (service location, network performance)
– Handling failures (detection, recovery)
• Hardware platform: service clusters
Architecture
[Diagram: a source reaches a destination across the Internet
through an overlay of service clusters; peering provides monitoring
& cascading; planes as before: application (composed services),
logical platform (peering relations, overlay network), hardware
platform (service clusters)]
Service cluster: compute cluster capable of running services
• Overlay nodes are clusters
– Compute platform
– Hierarchical monitoring
– Overlay network provides context for service-level path
creation & failure handling
Service-Level Path Creation
• Connection-oriented network
– Explicit session setup plus state at intermediate nodes
– Connection-less protocol for connection setup
• Three levels of information exchange
– Network path liveness
• Low overhead, but very frequent
– Performance Metrics: latency/bandwidth
• Higher overhead, not so frequent
• Bandwidth changes only once in several minutes
• Latency changes appreciably only once an hour
– Information about service location in clusters
• Bulky, but does not change very often
• Also use independent service location mechanism
Service-Level Path Creation
• Link-state algorithm for info exchange
– Reduced measurement overhead: finer time-scales
– Service-level path created at entry node
– Allows all-pairs-shortest-path calculation in the graph
– Path caching
• Remember what previous clients used
• Another use of clusters
– Dynamic path optimization
• Since session-transfer is a first-order feature
• First path created need not be optimal
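The entry node's path creation can be approximated as a shortest-path search over the overlay graph that also tracks how many of the required services have been applied so far. A sketch under assumed inputs (the function and all names are invented; a real entry node would use the cached link-state latency/bandwidth metrics and the service-location information exchanged above):

```python
import heapq

def service_level_path(src, dst, services, hosts, latency):
    """Lowest-latency chain of clusters applying `services` in order.

    services: ordered list of required services
    hosts:    dict service -> set of clusters hosting an instance
    latency:  dict (cluster, cluster) -> measured overlay-link latency
    Returns (cost, path) or None if no valid path exists.
    """
    best = {}  # (cluster, services-applied) -> best known cost
    heap = [(0.0, src, 0, [src])]
    while heap:
        cost, node, done, path = heapq.heappop(heap)
        if (node, done) in best and best[(node, done)] <= cost:
            continue
        best[(node, done)] = cost
        if node == dst and done == len(services):
            return cost, path
        for (a, b), lat in latency.items():
            if a != node:
                continue
            # Advance the service count when the next hop hosts the
            # next service still needed along this path.
            nxt = done + (done < len(services) and b in hosts[services[done]])
            heapq.heappush(heap, (cost + lat, b, nxt, path + [b]))
    return None
```

Treating (cluster, services-applied) as the search state is what lets the same Dijkstra-style machinery choose *which* replicated service instance to route through, not just which overlay links.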
Session Recovery: Design Tradeoffs
• End-to-end:
– Pre-establishment possible
– But, failure information has to propagate
– Performance of alternate path could have changed
• Local-link:
– No need for information to propagate
– But, additional overhead
The Overlay Topology:
Design Factors
• How many nodes?
– Large number of nodes implies reduced latency overhead
– But scaling concerns
• Where to place nodes?
– Close to edges so that hosts have points of entry and
exit close to them
– Close to backbone to take advantage of good connectivity
• Whom to peer with?
– Nature of connectivity
– Least sharing of physical links among overlay links
Presentation Outline
• Motivation
• Project SAHARA
• Initial Investigations
• Testbeds
• Summary and Conclusions
Testbeds at Different Scale
• Room-scale
– Bluetooth devices working as ensembles,
cooperatively sharing bandwidth within microcell
– Inherent trust, but finer-grained, intelligent and
active allocation as opposed to etiquette rules
– How lightweight? Too heavyweight for Bluetooth?
• Building-scale
– Multiple wireless LAN “operators” in building
– Experiment with “evil operators”; third party audit
mechanisms to determine offender
– GoN offers alternative telephony, dynamic
allocation of frequencies/time slots to
competing/confederating providers
Testbeds at Different Scale
• Campus-scale
– Departmental WLAN service providers with
overlapping coverage out of doors
• Regional-scale
– Possible collaborations with AT&T Wireless
(NTTDoCoMo), PBMS, Sprint?
Presentation Outline
• Motivation
• Project SAHARA
• Initial Investigations
• Testbeds
• Summary and Conclusions
Summary
• Congestion Pricing, Clearinghouse, Service
Composition first attempts at service architecture
components
• Next steps
– Generalization to multiple service providers
– Introduction of market-based mechanisms: congestion
pricing, auctions
– Composition across confederated service providers
– Trust management infrastructure
– Understand peer-to-peer confederation formation vs.
hierarchical overlay brokering
Conclusions
• Support for multiple service providers must
be retrofitted onto the original Internet
architecture
• Telephony architecture is a better-developed
model of multiple service providers &
peering, but with longer-lived agreements
and fewer providers
• Need for support in a more dynamic
environment, with larger numbers of
service providers and/or service instances