
Budget-Based QoS
Management Architecture
Advisor: Prof. Lien, Yao-Nan (連耀南 教授)
Students: Chen, Chien-Tung (陳建同), 李宗勳, 陳明志, 陳逸民

Thesis Topics

Chen, Chien-Tung (陳建同)
Budget-Based End-to-End QoS Management for
All-IP Networks
李宗勳
Path Planning in Budget-Based QoS Management
for All-IP Core Networks
陳明志
Distributed Resource Management and Admission Control
in Budget-Based QoS Management for All-IP Core Networks
陳逸民
A Resource Allocation Method Compensating for Forecast
Errors in Budget-Based QoS Management for All-IP Core Networks
Session 1
Introduction
Introduction





Network convergence
All-IP Network
Quality of Service
UMTS Application Classes
QoS Architecture for IP-Based Network


Integrated Service
Differentiated Service
Network Convergence


Packet switching & circuit switching.
All-IP Networks.
Network Convergence &
All-IP Network
All-IP Network


A globally integrated network based on IP
technology
Strength



Low construction & management cost.
A platform for cross-network applications.
Problem


Heterogeneous networks (Impedance Mismatch)
Complicated quality of service
Heterogeneous Networks
Impedance Mismatch

Horizontal impedance mismatch


e.g. 3G – IP Core Network – WLAN
Vertical impedance mismatch

e.g. DiffServ – ATM
Quality of Service

QoS aspects





Long delay time
Jitter
Packet loss
More complicated QoS management
Diversified QoS Expectations
Diversified QoS Expectations

User





Lowest price, best service
Lowest price, accommodative service
Acceptable price, best service
Lowest price, tolerable service
System Provider



Acceptable price, best service
Highest price, acceptable service
Lowest price, tolerable service
QoS Flexibility

System providers can adjust QoS
management depending on different
policies.
UMTS Application Classes




Conversational class
Streaming class
Interactive class
Background class
QoS Sensitivity
of UMTS classes
QoS Architecture for IP-Based
Networks



IETF working group
Integrated Service
Differentiated Service
Challenges


Network convergence brings new QoS
problems.
How to provide per-flow end-to-end
QoS with limited resources and
maximum satisfaction?
Session 2
Related Work
Related Work

Related Technology



Trunk
MPLS (Multi-Protocol Label Switching)
IP Network QoS architecture



QoS Management Architecture




IntServ
DiffServ
TEQUILA
Victor O.K. Li’s System
AQUILA (Adaptive Resource Control for QoS Using
an IP-Based Layered Architecture)
Summary
Related Work
Related Technology
Trunk



A trunk is defined as a
unit consisting of a
number of flows.
A link is a unit
consisting of a number
of trunks.
Intermediate routers
recognize only trunks.
Trunk (2)


Available bandwidth is fixed.
Suffers a higher blocking rate compared
to per-flow management.
MPLS (1)


The use of labels is to explicitly identify a
common group of packets rather than to
match variable parts of the packet header.
MPLS enables fast switching by replacing the
route lookup on a variable-length IP
destination address with an exact match on a
fixed, predefined number of bits.
MPLS (2)


MPLS provides the ability to forward
packets over arbitrary non-shortest
paths
For non-IP-based networks such as ATM
or frame relay, it provides an IP-based
control plane (routing, path selection,
reservation)
Integrated Service

IntServ Architecture



Virtual circuit
Long-lived unicast or multicast flows
RSVP (Resource Reservation Protocol)


PATH.
RESV.
IntServ Strength & Weakness

Strength




Absolute service guarantees
Traffic flows can be monitored
Using existing routing protocols
Weakness


Enormous processing overhead & state
Scalability problem
Differentiated Service

DiffServ


Providing service by classes
DSCP (DiffServ Code Point) & PHB (Per-Hop Behavior)
DiffServ PHBs

Expedited Forwarding (EF) - to minimize
any delays that may occur when congestion
exists

Assured Forwarding (AF) – to specify a
packet's drop precedence under congestion

Best Effort (BE) – for traffic having no
special forwarding requirement, for backward
compatibility
DiffServ PHB Application Examples

PHB  Examples
BE   E-mail, FTP
EF   Voice over IP (VoIP), Video on Demand (VoD)
AF   Web Browsing, Telnet
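To make the PHB table concrete, the sketch below maps a few standard DSCP code points to PHB names. The DSCP values (EF = 46, AF11 = 10, BE = 0) are the standard IETF assignments; the helper names are illustrative and not part of the BBQ system.

```python
# Minimal sketch: classify packets into PHBs by DSCP value.
# DSCP values follow the standard IETF assignments (RFC 2474/2597/3246);
# the helper names are illustrative, not the thesis's code.
DSCP_TO_PHB = {
    0: "BE",     # Best Effort (default), e.g. e-mail, FTP
    10: "AF11",  # Assured Forwarding, class 1, low drop precedence
    46: "EF",    # Expedited Forwarding, e.g. VoIP, VoD
}

def classify(dscp: int) -> str:
    """Return the PHB name for a DSCP value; unknown values get Best Effort."""
    return DSCP_TO_PHB.get(dscp, "BE")

print(classify(46))  # EF
print(classify(10))  # AF11
print(classify(7))   # BE: unknown code points default to Best Effort
```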
DiffServ Domain (1)

Different DiffServ domains

DSCP values are not compatible across domains
Hosts may arbitrarily mark the DSCP
A non-DiffServ host can be part of a
DiffServ domain
DiffServ Domain (2)
DiffServ Management Function

Classifying – based on IP, port # and
network protocol

Policing



Metering – flow rate, burst size (not only
for keeping the quality, but also for billing)
Shaping – holding bursts and pacing the
traffic
Dropping
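The metering and shaping functions above are commonly realized with a token bucket. The sketch below is a generic illustration of that mechanism, not the mechanism the slides describe; the rate and burst parameters are assumed to be operator-chosen.

```python
class TokenBucket:
    """Generic token-bucket meter: conforms a flow to `rate` bytes/s
    with bursts up to `burst` bytes. Illustrative only."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate      # token fill rate (bytes per second)
        self.burst = burst    # bucket depth (maximum burst size in bytes)
        self.tokens = burst   # start with a full bucket
        self.last = 0.0       # timestamp of the last update

    def conforms(self, now: float, packet_size: int) -> bool:
        """Refill tokens for elapsed time, then test whether the packet
        conforms (enough tokens) or should be shaped/dropped."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_size <= self.tokens:
            self.tokens -= packet_size
            return True   # in-profile: forward
        return False      # out-of-profile: shape (delay) or drop

bucket = TokenBucket(rate=1000.0, burst=1500.0)
print(bucket.conforms(0.0, 1500))  # True: a full bucket absorbs the burst
print(bucket.conforms(0.0, 100))   # False: bucket empty, no time elapsed
print(bucket.conforms(1.0, 900))   # True: one second refills 1000 tokens
```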
Distinction between Edge &
Core

Edge Router

Ingress Router



Egress Router


Classifying
Policing
Shaping
Core Router

Forwarding
DiffServ Strength & Weakness

Strength



Low overhead
No scalability problem & easy to implement
Weakness

DiffServ does not provide per-flow end-to-end QoS.
Related Work
QoS Management Architecture
TEQUILA


The Traffic Engineering for Quality of
Service in the Internet at Large Scale
(TEQUILA)
TEQUILA Architecture



Service Level Specifications Management
(SLS Management)
Traffic Engineering
Data plane
TEQUILA Architecture
TEQUILA System Components

SLS





Subscription
Invocation
Forecast
Inter-domain SLS
Traffic Engineering



Network dimensioning
Dynamic route management
Dynamic resource management
TEQUILA System


Dynamic algorithms
Route Management


Load balance
Resource Management


Link bandwidth
Buffer
Victor O.K. Li’s Architecture

Efficient Resource Management for End-to-End QoS Guarantees in DiffServ Networks
Admission Control



RSVP-like
Inter-domain
Intra-domain
Victor O.K. Li’s System

Centralized Routing & Resource
Allocation



Pre-calculated paths
Periodically Reallocation
Distributed Admission Control
Summary




Scalability problem (IntServ)
No per-flow end-to-end QoS
(DiffServ)
Too many real-time processes (TEQUILA)
Low performance on bulky
burst traffic (Victor O.K. Li's System)
Research Objective

Propose a flexible QoS management
system and associated tools for All-IP
Networks.
QoS Management for large-scale networks





Simple QoS management architecture
Less real-time computation
Capability of providing per-flow end-to-end QoS
Flexibility for operators' different
expectations
Adaptability to different network
technologies.
Budget-Based QoS
System Features





The BBQ system is a pre-planned, distributed
system.
A simplified architecture providing per-flow
end-to-end QoS with high processing
efficiency and low management complexity.
The proposed architecture is easy to deploy.
Flexibility for operators to adjust their QoS
policy.
Adaptability to different layer-2 or layer-3
technologies.
Session 3
Budget-Based End-to-End QoS
Management for All-IP Networks
全IP網路中
以預算為基礎之端對端服務品質管理
Student: Chen, Chien-Tung
Advisor: Lien, Yao-Nan
Lab. of Mobile Communication, Dept. of Computer Science,
National ChengChi Univ.
September, 2003
Outline




Budget-Based QoS, BBQ
Budget-Based End-to-End QoS
Management for All-IP Networks
Performance Evaluation
Conclusion and Future Work
Budget-Based QoS, BBQ











BBQ Philosophy
A Simplified All-IP Network Architecture
Budget-Based Management
Paths Definitions
Bearer Service Architecture
Quality Entropy
On Demand Allocation vs. Pre-Planning
Centralized Planning vs. Distributed Planning
Management System Architecture for BBQ
A Simplified End-to-End Path Setup Procedure
BBQ System Assumptions
BBQ Philosophy

Objective


Provide operator an adaptable QoS management
infrastructure for All-IP networks.
Principles





Simple and easy to deploy.
Reduce real-time, on-demand resource
management.
Scalability
Adaptability
Provide an architecture and management tools for
operators to tune their networks to meet their own
objectives.
A Simplified All-IP Network
Architecture
A Simplified All-IP Network
Architecture (cont’d)
Budget-Based Management


According to the capability of each
network component, we allocate users'
bandwidth requirements and quality
requirements to each network
component.
We allocate QoS management
responsibilities across the network via budgets.
Budget-Based
Management(cont’d)


BBQ is designed to reduce management
complexity.
Global resource optimization is a
secondary concern.
Path Definitions




The sequence of nodes and links a packet
travels.
In general, a packet in a packet-switching
network does not follow a predefined path.
The process of finding a path for a packet is
called routing.
It is much easier to assure QoS if packets
travel along designated paths.
Path Definitions (cont’d)


BBQ accomplishes QoS by deploying
paths with QoS.
Paths with QoS are designed using the
Budget-Based Management.
Path Definitions (cont’d)
Planned Path     Management Component          Abilities
Short Path       PPAs in Core Networks         Paths crossing a Core Network with certain quality guarantees
Long Path        LPPAs in Core Networks        Paths crossing the backbone with certain quality guarantees
End-to-End Path  Global ACAs in Stub Networks  A path composed of a long path and paths in Stub Networks, with certain quality guarantees
Path Definitions (cont’d)
Bearer Service Architecture


When a packet needs to be delivered to
a destination, each sub-network that the
packet might travel through must offer
its bearer service to the packet.
In general, three basic bearer
services are needed, one each for the
Entrance Stub Network, the Exit Stub
Network, and the Backbone.
Bearer Service Architecture
(cont’d)
Bearer Service Architecture
(cont’d)



End-to-End service is supported by underlying
bearer services.
Backbone bearer service is supported by Core
Network bearer services.
To simplify, we define

Long paths providing backbone bearer services,
and
Short paths providing Core Network bearer
services
Quality Entropy




The lower, the better.
For simplicity, we use quality entropy, an
inverted quality level.
A single value computed by an
operator-defined function.
In our research, we assume quality
entropy is additive.
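Under the additivity assumption, a path's quality entropy is simply the sum of its links' entropies, so checking a path against a request's quality budget is a one-line fold. A minimal sketch (function names are illustrative):

```python
# Minimal sketch of the additivity assumption: the quality entropy of a
# path is the sum of its links' entropies (lower is better), so a path
# meets a request's quality budget iff the sum stays within it.

def path_entropy(link_entropies):
    """Quality entropy of a path, assuming entropy is additive."""
    return sum(link_entropies)

def meets_budget(link_entropies, quality_budget):
    """True iff the path's total entropy stays within the request's budget."""
    return path_entropy(link_entropies) <= quality_budget

# A 3-hop path with per-link entropies 5, 8, and 4:
print(path_entropy([5, 8, 4]))       # 17
print(meets_budget([5, 8, 4], 20))   # True: 17 <= 20
print(meets_budget([5, 8, 4], 15))   # False: 17 > 15
```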
Quality Entropy (cont’d)
Quality Entropy (cont’d)



Users' quality definitions may not be the
same as the networks'.
Quality entropy acts as a mediator.
Quality entropy can accommodate
different quality definitions.
On Demand Resource
Allocation vs. Pre-Planning

On demand resource allocation :





Admission controllers control only a minimum of resources.
Request more resources when needed
Low scalability
Difficult to achieve optimal resource utilization
(FCFS).
Slower response time

Not good for time-sensitive applications
On Demand Resource
Allocation vs. Pre-Planning
(cont’d)

Pre-Planning :




Allow long computation time for better
algorithms
Can utilize historical data for forecasting
Assign a pre-calculated path to a request
immediately: fast!
Good for stable networks with periodic
traffic patterns
Centralized vs. Distributed

Centralized





Easy to implement.
Lacks flexibility to deal with forecast errors.
Heavy load on the planner
Fairness problem

Distributed

Reduces load on the planner
Fair resource allocation
More precise forecasts
Management System
Architecture for BBQ


Management System Hierarchy
Management System Software
Architecture
Management System
Hierarchy




BBQ uses a hierarchical management system to
reduce management complexity.
Components in different layers plan different
resources respectively.
Lower-layer components provide planned
resources to upper-layer components.
Each layer is managed independently.
Management System
Hierarchy (cont’d)
Layer                                       Function
End-to-end Network QoS Coordination         End-to-end QoS control, end-to-end resource/path planning
Core Network Resource Management            Allocate resources of a Core Network to the QoS Managers of all resource mediators
Core Network QoS Control (one per Ingress)  Implement the QoS policy of Core Networks, e.g., admission control, load control, routing and path selection, packet scheduling, etc.
DiffServ                                    Execute the QoS policy designated by the upper-layer manager
Management System Software
Architecture
A Simplified End-to-End Path
Setup Procedure
BBQ System Assumptions




Quality entropy is a single value metric
Quality entropy is additive
Class by class planning
Periodical traffic pattern
Session 4
Budget-Based End-to-End QoS
Management for All-IP Networks







Resource Reservation/Allocation
Quality Entropy Definition
Long Path Planning
End-to-End QoS Component, LPPA
End-to-End QoS Component, Global ACA
End-to-End QoS Procedure
LPPA Optimization Model
Resource
Reservation/Allocation

Reservation Time Frame




Long Term
Short Term
Real Time
Automatic Revoke Period
Resource
Reservation/Allocation (cont’d)

Reservation Certainty


Hard: Cannot be revoked without negotiation within the
revoke time period.
Soft: Can be revoked without negotiation within the revoke
time period.
Resource
Reservation/Allocation (cont’d)

Reservation Quantity

Batch Order



Pre-reservation
On demand reservation
Per flow

On demand reservation
Resource
Reservation/Allocation (cont’d)
Resource Reservation Scheme

Scheme                               Long Term           Short Term          Real Time
1. Simple Long Term Reservation      Hard (Batch Order)  –                   –
2. Simple Short Term Reservation     –                   Hard (Batch Order)  –
3. Real-time Batch Reservation       –                   –                   Hard (Batch Order)
4. Real-time Reservation (RSVP)      –                   –                   Hard (Per Flow)
5. Progressive Reservation           Soft (Batch Order)  Soft (Batch Order)  Hard (Batch Order)
6. Proposed Approach for End-to-End  Soft (Batch Order)  Soft (Batch Order)  Hard (Per Flow)
Long Path Planning

Long Path Planning

Choose short paths across different Core
Networks for a service request
Per-flow on-demand planning is impractical

Pre-planning reduces management complexity

Adopt Pre-Planning

We use a pre-planning approach,
not on-demand allocation

No reservation is made at planning time

Real-time on-demand reservation is used instead,
to avoid wasting resources
Long Path Planning (cont’d)

Due to the management system
hierarchy,

long path planning composes short
paths into long paths;
it does not determine short path internals.
More efficient than virtual circuits
Long Path Planning (cont’d)
Long Path Planning (cont’d)



LPPA is a component of the End-to-End Network
QoS Coordination Layer in every Core
Network.
Responsible for long path planning
Input

Traffic Aggregate Forecast

Each border gateway monitors the traffic pattern.
(Source, Destination, Bandwidth, Quality Entropy)

Available Short Paths

Planned by Core Networks
(Source, Destination, Bandwidth, Quality Entropy)
Long Path Planning (cont’d)

Output
Long Path Planning (cont’d)
Long Path Planning (cont’d)

Global planning LPPA

Centralized planning
Better utilization
No reservation mechanism needed
Hard to realize
Cannot match individual operators'
policies
Long Path Planning (cont’d)

Local planning LPPA

Distributed planning
Easy to deploy
Individual operators can pursue
their own operating objectives
Lower network utilization
Resource reservation
overhead
Uses our reservation
scheme for resource
reservation
End-to-End Admission Control



Global ACA is a component of the End-to-End
Network QoS Coordination Layer.
Global ACA resides in the Network Access Server.
Responsible for

End-to-End path setup for incoming requests

End-to-End path selection
Real-time hard reservation
End-to-End QoS admission control
End-to-End QoS Procedure


The local planning LPPA is adopted as our
approach.
Three phases

Long term path planning plus soft reservation

LPPA establishes cooperative relationships between
operators (seldom).

Short term soft reservation

LPPA revises the long paths to adapt to large traffic
fluctuations (seldom).

Real time hard reservation

Global ACA reserves resources for each incoming request
End-to-End QoS
Procedure(cont’d)

Long term soft reservation
End-to-End QoS
Procedure(cont’d)

Short term soft reservation

Same as long term soft reservation
Triggered whenever traffic fluctuation
occurs.
End-to-End QoS
Procedure(cont’d)

Real time hard reservation
LPPA Optimization Model
LPPA Optimization Model
(cont’d)
LPPA Optimization
Model(cont’d)

Nonlinear Goal Mixed Integer
Programming


Solve this model with linear relaxation
Minimize

  Σ_{S=1}^{N_BG} Σ_{D=1}^{N_BG} Σ_{m=1}^{4} p_m · S_SD^(m)  +  Σ_{i=1}^{N_NodePair} B_i · C_i

where S_SD^(m) is the unsatisfied class-m demand between border
gateways S and D, p_m its penalty weight, and B_i and C_i the
reserved bandwidth and unit cost of node pair i.
Performance Evaluation




Performance Evaluation Metrics
Experiment Design
Experiments and Results
Summary
Performance Evaluation
Metrics

Long term soft reservation

Resource Reservation Cost

Cost of long term soft reservation

Satisfaction Ratio

The degree to which we can satisfy the traffic aggregate forecast
Performance Evaluation
Metrics (cont’d)

Real time hard reservation

Blocking Ratio

The ratio of blocked flows to total traffic flow requests
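As a concrete reading of this metric, a minimal sketch (the function name and sample numbers are illustrative):

```python
def blocking_ratio(blocked_flows: int, total_requests: int) -> float:
    """Blocking ratio = blocked flows / total traffic flow requests."""
    if total_requests == 0:
        return 0.0  # no requests, nothing can be blocked
    return blocked_flows / total_requests

# Example: 37 of 500 flow requests were rejected by real-time hard reservation.
print(blocking_ratio(37, 500))  # 0.074, i.e. a 7.4% blocking ratio
```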
Experiment Design

Experiment Environment

Optimization Model


ILOG on Windows 2000 platform
Simulation

BBQ Computational Simulator on Unix
Experiment Design (cont’d)

Test Instance Generation

Network Topology Generation




Adopt a basic topology
Change the network connectivity
Randomize the quality entropy and cost of each short
path
Traffic Request Generation


Change the traffic aggregates forecast requirement
Change the traffic flows sequence
Experiment Design (cont’d)
Experiment Design (cont’d)

Experiments

Experiment 1 & 2




Sensitivity to total traffic aggregate requirements
Sensitivity to network connectivity
Observing the performance using two metrics: resource
reservation cost and satisfaction ratio (penalty)
Experiment 3

Sensitivity to forecast error
Observing the performance using one metric: blocking
ratio
Results of Exp 1:
Sensitivity to Total Traffic Aggregate
requirements

Parameters Setup:

 Total traffic aggregate requirements: 10, 30, …, 150 Gbps

Objective:

 Observation of metrics:

  1-A: Resource reservation cost
  1-B: Satisfaction ratio, Penalty

Boundary

 1-A(1): Satisfaction Ratio = 100%
 1-A(2): Network Topology
 1-B(1): Resource Reservation Cost = 3000k
 1-B(2): Network Topology
Results of Experiment 1-A
Satisfaction Ratio = 100%
Network Topology
Sensitivity to total traffic aggregate requirement on resource reservation cost
(1) Different network connectivity and (2) different satisfaction ratio.
Resource reservation cost increases whenever resource need increases.
Results of Experiment 1-B
Resource Reservation Cost = 3000k
Network Topology
Sensitivity to total traffic aggregate requirement on satisfaction ratio
(1) Different network connectivity and (2) different cost limits.
Satisfaction ratio decreases whenever resource need increases.
Results of Experiment 1-B(cont’d)
Resource Reservation Cost = 3000k
Network Topology
Sensitivity to total traffic aggregate requirement on penalty
(1) Different network connectivity and (2) different cost limits.
Penalty increases whenever resource need increases.
Results of Exp 2:
Sensitivity to Network Connectivity

Parameters Setup:

 Network connectivity: 30, 40, …, 70%

Objective:

 Observation of metrics:

  Resource reservation cost
  Satisfaction ratio, Penalty

Boundary

 2-A(1): Satisfaction ratio = 100%
 2-A(2): Total traffic aggregate requirement = 70 Gbps
 2-B(1): Total traffic aggregate requirement = 70 Gbps
 2-B(2): Resource Reservation Cost = 3000k
Results of Experiment 2-A
Satisfaction Ratio = 100%
Total traffic aggregate requirement = 70Gbps
Sensitivity to network connectivity on resource reservation cost
(1) Different total traffic aggregate requirements and (2) different satisfaction ratios.
Resource reservation cost decreases whenever network connectivity increases.
Results of Experiment 2-B
Total traffic aggregate requirement = 70Gbps
Resource Reservation Cost = 3000k
Sensitivity to network connectivity on satisfaction ratio
(1) Different cost limits and (2) different total traffic aggregate requirements.
Satisfaction ratio increases whenever network connectivity increases.
Results of Experiment 2-B(cont’d)
Total traffic aggregate requirement = 70Gbps
Resource Reservation Cost = 3000k
Sensitivity to network connectivity on penalty
(1) Different cost limits and (2) different total traffic aggregate requirements.
Penalty decreases whenever network connectivity increases.
Results of Exp 3:
Sensitivity to Forecast Error

Parameters Setup:

 Forecast Error: 0, 10, …, 100%

Objective:

 Observation of metrics:

  Blocking Ratio

Test Cases

 5 test cases with different traffic aggregate
 bandwidth requirement S.D.s
Results of Exp 3:
Sensitivity to Forecast Error (cont’d)
Results of Exp 3:
Sensitivity to Forecast Error (cont’d)
Results of Experiment 3
S.D.=48.7
S.D.=160.75
Sensitivity to forecast error on blocking ratio (1) Different S.D.
1. Blocking ratio increases whenever forecast error increases.
2. Compared to Trunk, the influence on LPPA and OSPF is
slight.
Summary



Resource reservation cost increases whenever
total traffic aggregate requirement increases.
Resource reservation cost decreases
whenever network connectivity increases.
Our approach is more tolerant towards
forecast error if the total traffic aggregate
requirement remains the same.
Conclusion and Future Work





Long path planning can achieve per-flow
end-to-end QoS across multiple Core
Networks in All-IP networks.
Our approach can use resources more
efficiently.
Long term soft reservation broker
A more delicate pricing mechanism
Design a heuristic algorithm.
Session 5
Outline







Core Network Architecture
Resource Allocation in Core Network under
BBQ
Resource Allocation Procedure in Core
Network
Path-planning Problem
Proposed Solution
Performance Evaluation
Conclusion and Future Work
Core Network Architecture


Several core networks are included in
one backbone.
We assume each core network is owned by
one operator with one management
policy.
Core Network Architecture
(cont’d)

Assume our management system is on
top of DiffServ Network



Bandwidth Broker – bandwidth
management.
Edge Router – traffic classification,
admission control.
Core Router – traffic forwarding.
DiffServ Architecture
Traffic Procedure
Resource Allocation Time in
Core Network

Real-time on demand





Resource is allocated in real-time.
Flexible
High Overhead
Greedy (FCFS)
Preplan


Resource is allocated before request arrival.
Global consideration
Resource Allocation Approach
- Centralized



All resources in a core network are
controlled by the Bandwidth Broker.
The Bandwidth Broker performs path
planning.
Edge routers receive the planned paths.
Centralized Approach:
Pros and Cons

Pros :


Easy to implement.
Cons :



Lack of flexibility.
Heavy load in Bandwidth Broker.
Fairness Problem
Resource Allocation Approach
- Distributed




Edge routers submit resource requests.
The Bandwidth Broker responds to edge router
requests.
Path planning is done by the edge routers.
Pros

Distributes load to different components
Easy to implement resource re-allocation

Cons

It needs coordination.
Resource Allocation Approach
- Hybrid


Use the centralized approach initially.
Use the distributed approach under large
traffic fluctuations.
Resource Waste Situation
Resource Allocation Approach
in BBQ



Preplan
Distributed
Forecast errors are compensated for by
more refined methods.
Software Components in Core
Network(I)
Software Components
Architecture in Core Network(II)

Core Network Coordinator

Bandwidth Broker – resource manager.
Long Path Planning Agent – negotiates long
paths with LPPAs in other core networks.
Short Path Planning Agent – plans short
paths within the core network.
Software Components
Architecture in Core Network(III)

Ingress Router

Bandwidth Order Agent, BOA – orders
bandwidth from the BB.
Local Short Path Planning Agent, LSPPA –
finds a set of planned paths for this ingress.
Admission Control Agent, ACA – admission
control function.

Historical request data

Terms

Length of Time Period – operator-defined
Concerned Time Period (CTP)

The time period concerned by the current resource
planning process

Reference Time Period (RTP)
Resource Allocation Procedure
Resource Allocation Procedure
Step 1

Step 1:

Execute path planning and transform path-based
demand into link-based demand for the BOA.
Executed by the SPPA in the CNC
Input: historical traffic pattern in the last R.T.P. of the
core network; characteristics (bandwidth, quality
entropy) of each link in the core network.
Output: a set of planned paths.
Path Demand to Link Demand
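Step 1's transformation of path-based demand into link-based demand can be sketched as follows: each planned path's forecast bandwidth is charged to every link the path traverses. The names and data structures are illustrative, not the thesis's code.

```python
from collections import defaultdict

def path_demand_to_link_demand(planned_paths):
    """Aggregate per-path bandwidth demand into per-link demand.

    `planned_paths` is a list of (links, bandwidth) pairs, where `links`
    is the sequence of link ids the path traverses. Each path's demand
    is added to every link on it.
    """
    link_demand = defaultdict(float)
    for links, bandwidth in planned_paths:
        for link in links:
            link_demand[link] += bandwidth
    return dict(link_demand)

# Two planned paths sharing link "e2":
paths = [
    (["e1", "e2"], 10.0),   # 10 Mbps over links e1, e2
    (["e2", "e3"], 5.0),    # 5 Mbps over links e2, e3
]
print(path_demand_to_link_demand(paths))
# {'e1': 10.0, 'e2': 15.0, 'e3': 5.0}
```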
Resource Allocation Procedure
Step 2

Step 2:

The BOA orders bandwidth from the BB.
Input: link-based bandwidth demand.
Output: an optimal resource demand.
Resource Allocation Procedure
Step 3

Step 3:

The BB commits the bandwidth resources requested
by each ingress.
Input: link-based bandwidth demand from the
BOA in each Ingress.
Output: an optimal committed bandwidth.
Resource Allocation Procedure
Step 4

Step 4:

The BOA forwards the committed bandwidth to the
LSPPA.
Resource Allocation Procedure
Step 5

Step 5:

The LSPPA executes path planning with the
resources acquired by the BOA.
Input: historical traffic pattern in the last R.T.P.
of this ingress; resources acquired by the BOA
of this ingress.
Output: a set of planned paths, ready for
use by the ACA at run time.
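At run time the ACA then only needs to look up a planned path with enough residual bandwidth for each incoming flow. A minimal sketch of that admission check; the table layout and names are illustrative assumptions, not the thesis's code:

```python
def admit(planned_paths, src, dst, bandwidth):
    """Run-time admission check against pre-planned paths.

    `planned_paths` maps (src, dst) to a list of {"path": ..., "residual": ...}
    entries produced by planning. The first path with enough residual
    bandwidth admits the flow; otherwise the flow is blocked.
    """
    for entry in planned_paths.get((src, dst), []):
        if entry["residual"] >= bandwidth:
            entry["residual"] -= bandwidth   # hard-reserve on the chosen path
            return entry["path"]
    return None                              # no planned path can carry it: block

table = {("A", "B"): [{"path": ["A", "x", "B"], "residual": 8.0}]}
print(admit(table, "A", "B", 5.0))  # ['A', 'x', 'B']: admitted, 3.0 left
print(admit(table, "A", "B", 5.0))  # None: only 3.0 residual remains
```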
Runtime Traffic Processing
Session 6
Path Planning in Budget-Based QoS
Management for All-IP Core Networks
Path Planning in Core Network


Packet forwarding must follow the
assigned path to ensure the required
quality.
The objective of path planning is to
maximize the profit defined by the
network operator.
Path Planning Agent in BBQ

Long Path Planning Agent

Located in the CNC; coordinates long paths with LPPAs in
other core networks.

Short Path Planning Agent

Located in the CNC; plans the short paths for the entire
core network.

Local Short Path Planning Agent

Located in each Ingress Router; plans the short
paths starting from that ingress.
Related Work of Path Planning

Traditional routing algorithms:

RIP, OSPF, etc.
Consider only a single weight.

Quality-related routing algorithms

Multi-Constrained Path Problem (MCP
Problem)
NP-complete
Assumption of Path Planning



The profit is a function of bandwidth and
quality entropy.
We assume the quality entropy is
additive.
For a link i, the quality of the link is
guaranteed as long as the bandwidth used
does not exceed the link's capacity.
Problem of Path Planning



Given a topology with the capacity and
quality of each link,
find a planned path set with maximal
profit.
Each path in the planned path set must
satisfy certain constraints.
Path Planning Symbol Table (I)

G(V,E)   A directed graph G containing |V| nodes and |E|
         directed links; V denotes the set of all nodes, and E
         denotes the set of all links.
vi       A node; vi ∈ V
ek       A directed link ek = (vx, vy) ∈ E; vx is the start node,
         vy is the end node of link ek; also denoted as exy
R        Incoming traffic set.
ri       A request i, consisting of (si, di, bi, qi); ri ∈ R

Path Planning Symbol Table (II)

si       Source node of a request i
di       Destination node of a request i
bi       Bandwidth requirement of a request i
qi       Quality entropy requirement of a request i
mi       Profit earned from an admitted request i
Pi       All possible paths satisfying the constraints of request i

Path Planning Symbol Table (III)

Pij      A path; Pij ∈ Pi
πi       The path set satisfying the constraints of request i and
         selected by our algorithm
Π        The final path set selected by our algorithm
be(Pij)  = be(Pij), if the links of Pij contain link e
         = 0, otherwise
L(Pij)   The satisfied ratio of request i with Pij
q(e)     Quality entropy of link e
Equations of Path Planning

Profit of request i

  mi = bi / qi

Objective Model

  max Σ_{i ∈ R} mi · Σ_{Pij ∈ πi} L(Pij)
    = max Σ_{i ∈ R} (bi / qi) · Σ_{Pij ∈ πi} L(Pij)

Constraints

Quality Constraint

  Σ_{e ∈ Pij} q(e) ≤ qi , for all Pij ∈ Π

Bandwidth Constraint

  Σ_{Pij ∈ Π} be(Pij) ≤ Be , for all e ∈ E
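Both constraints can be checked directly from the symbols above. The sketch below tests a candidate path set against the quality and bandwidth constraints; the data structures are illustrative, not the thesis's code:

```python
def quality_ok(path_links, q_i, q):
    """Quality constraint: the sum of link entropies q(e) along the path
    must not exceed the request's entropy budget q_i."""
    return sum(q[e] for e in path_links) <= q_i

def bandwidth_ok(selected_paths, B):
    """Bandwidth constraint: the total bandwidth loaded on each link e
    by all selected paths must not exceed its capacity B[e]."""
    load = {}
    for path_links, b in selected_paths:
        for e in path_links:
            load[e] = load.get(e, 0) + b
    return all(load[e] <= B[e] for e in load)

q = {"e1": 5, "e2": 8}        # per-link quality entropy q(e)
B = {"e1": 100, "e2": 100}    # per-link capacity B_e
print(quality_ok(["e1", "e2"], 20, q))                      # True: 5 + 8 <= 20
print(bandwidth_ok([(["e1"], 60), (["e1", "e2"], 50)], B))  # False: e1 carries 110
```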
Proposed Solution

A path must satisfy two constraints:
bandwidth and quality entropy.
Multi-Constrained Path Problem

NP-complete

We propose a heuristic algorithm for
path planning: the Greedy Path Planning
Algorithm (G.P.P.A.).
A greedy method.
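The slides name the G.P.P.A. without spelling out its steps here, so the sketch below is only one plausible greedy reading, not the thesis's algorithm: serve requests in decreasing profit density b_i/q_i, routing each on its minimum-entropy path over links that still have enough residual bandwidth, and admit it only if the path meets the quality constraint.

```python
import heapq

def min_entropy_path(adj, src, dst, b):
    """Dijkstra over quality entropy, using only links with residual >= b.
    adj: {u: [(v, entropy, link_id)]}; returns (total_entropy, [link_ids]) or None."""
    heap, seen = [(0, src, [])], set()
    while heap:
        ent, u, links = heapq.heappop(heap)
        if u == dst:
            return ent, links
        if u in seen:
            continue
        seen.add(u)
        for v, e_ent, link in adj.get(u, []):
            if residual[link] >= b and v not in seen:
                heapq.heappush(heap, (ent + e_ent, v, links + [link]))
    return None

# Greedy pass: serve requests in decreasing profit density b/q,
# admitting each on its minimum-entropy feasible path.
adj = {"s": [("m", 5, "e1")], "m": [("d", 8, "e2")]}
residual = {"e1": 100, "e2": 100}
requests = [("s", "d", 40, 20), ("s", "d", 80, 15)]   # (src, dst, b, q)

planned = []
for s, d, b, q in sorted(requests, key=lambda r: r[2] / r[3], reverse=True):
    found = min_entropy_path(adj, s, d, b)
    if found and found[0] <= q:        # quality constraint: entropy <= q
        for link in found[1]:
            residual[link] -= b        # bandwidth constraint bookkeeping
        planned.append((s, d, found[1]))
print(planned)  # only the high-density request fits: the other is blocked
```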
Path Planning Algorithm
Workflow
Performance Evaluation



Evaluate the performance of the path planning
algorithm.
Compare to the traditional OSPF algorithm.
Performance Metrics




Profit
Link Utilization S.D.
P.U. Ratio – total profit / average link utilization
Sensitivity to



Number of Nodes
Connectivity
Forecast error
Experiment Environment and
tools



FreeBSD
GNU C++ 3.2
BBQ Computation Simulator

Simulates the process of path planning and
admission control at run time.
Simplified admission control function
Test Instances Generation

Topology

Number of nodes: 10–50
Link capacity: 1000 Mbps
Connectivity: 20%–100%

Definition of Connectivity

In a topology with n nodes, a node connected
to the other nodes by n−1 links has 100%
connectivity.
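Under this definition, connectivity is the fraction of the n−1 possible links each node actually has; a minimal sketch (the helper name is illustrative):

```python
def links_per_node(n_nodes: int, connectivity: float) -> int:
    """Number of links per node at a given connectivity level:
    100% connectivity means n-1 links (a full mesh)."""
    return round((n_nodes - 1) * connectivity)

# In a 10-node topology:
print(links_per_node(10, 1.0))   # 9 links per node: 100% connectivity
print(links_per_node(10, 0.2))   # 2 links per node: ~20% connectivity
```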
Test Instances
Generation(cont’d)

Request

The bandwidth demand of a request is
1–100 Mbps.
The quality entropy of a request is 20–50.
The number of requests is about 100–
200 per node.
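The request distributions above can be sampled as follows; uniform sampling and the helper name are assumptions of this sketch, since the slides do not state the distribution:

```python
import random

def generate_requests(nodes, per_node=200, seed=42):
    """Generate random test requests matching the slide's ranges:
    bandwidth 1-100 Mbps, quality entropy 20-50, `per_node` requests
    per node. Uniform sampling is an assumption of this sketch."""
    rng = random.Random(seed)
    requests = []
    for _ in range(per_node * len(nodes)):
        src, dst = rng.sample(nodes, 2)      # distinct source and destination
        bandwidth = rng.randint(1, 100)      # Mbps
        quality_entropy = rng.randint(20, 50)
        requests.append((src, dst, bandwidth, quality_entropy))
    return requests

reqs = generate_requests(["v1", "v2", "v3"], per_node=2)
print(len(reqs))  # 6 requests: 2 per node over 3 nodes
```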
The workflow of experiments
Experiments

Experiment #1:


Performance evaluation by computation.
Performance metrics


Profit, and P.U. Ratio.
Experiment #2, #3 and #4:


Performance evaluation by simulation.
Performance metrics




Profit, Link Utilization S.D., and P.U. Ratio.
Exp #2 : Number of nodes test
Exp #3 : Connectivity test
Exp #4 : Forecast error test
Result of Experiment #1:

Parameters Setup:
 200 requests per node
 10–30 nodes
 Connectivity 20%–100%
Objective:
 Observation of metrics:

Profit
P.U. Ratio
Exp #1:
Preplan Test – Profit
Sensitivity to number of nodes:
Profit increases as the number of nodes increases.
[Chart: pre-planned profit trend; x-axis: number of nodes (10–30); y-axis: profit; one curve per connectivity level (20%–100%)]
Exp #1:
Preplan Test – P.U. Ratio
Sensitivity to number of nodes:
The P.U. ratio increases as the number of nodes increases.
[Chart: pre-planned profit density trend; x-axis: number of nodes (10–30); y-axis: profit density; one curve per connectivity level (20%–100%)]
Result of Experiment #2:

Parameters Setup:
 200 requests per node
 10–50 nodes
 Connectivity 20%, 40%, 60%, 80%
Objective:
 Observation of metrics:

Profit
Link Utilization S.D.
P.U. Ratio
Exp #2: Sensitivity to Number of Nodes – Profit
Sensitivity to number of nodes on profit:
Profit increases whenever the number of nodes increases.
[Charts: profit vs. number of nodes (10–30) for GPPA and OSPF at 40% and 60% connectivity; 200 requests/node]
Exp #2: Sensitivity to Number of Nodes – Profit (cont'd)
Sensitivity to number of nodes on profit:
Profit increases whenever the number of nodes increases.
[Charts: profit vs. number of nodes (10–30) for GPPA and OSPF at 80% and 100% connectivity; 200 requests/node]
Exp #2: Sensitivity to Number of Nodes – Link Utilization S.D.
Sensitivity to number of nodes on Link Utilization S.D.:
Link utilization S.D. decreases whenever the number of nodes increases.
[Charts: link utilization S.D. vs. number of nodes (10–30) for GPPA and OSPF at 40% and 60% connectivity; 200 requests/node]
Exp #2: Sensitivity to Number of Nodes – Link Utilization S.D. (cont'd)
Sensitivity to number of nodes on Link Utilization S.D.:
Link utilization S.D. decreases whenever the number of nodes increases.
[Charts: link utilization S.D. vs. number of nodes (10–30) for GPPA and OSPF at 80% and 100% connectivity; 200 requests/node]
Exp #2: Sensitivity to Number of Nodes – P.U. Ratio
Sensitivity to number of nodes on P.U. ratio:
The P.U. ratio increases whenever the number of nodes increases.
[Chart: profit density vs. number of nodes (10–50) for the heuristic and OSPF]
Result of Experiment #3:

Parameters Setup:
 200 requests per node
 Connectivity 20%–100%
 10–30 nodes
Objective:
 Observation of metrics:

Profit
Link Utilization S.D.
P.U. Ratio
Exp #3: Sensitivity to Connectivity – Profit
Sensitivity to connectivity on profit:
Profit increases whenever connectivity increases.
[Charts: profit vs. connectivity (20%–100%) for GPPA and OSPF at 15 and 20 nodes; 200 requests/node]
Exp #3 : Sensitivity to Connectivity – Profit (Cont'd)
Sensitivity to connectivity on profit:
Profit increases as connectivity increases.

[Figure: profit vs. connectivity (20%–100%) for GPPA and OSPF, with 25 and 30 nodes; 200 requests/node.]
Exp #3 : Sensitivity to Connectivity – Link Utilization S.D.
Sensitivity to connectivity on link utilization S.D.:
Link utilization S.D. decreases as connectivity increases.

[Figure: link utilization standard deviation vs. connectivity (20%–100%) for GPPA and OSPF, with 15 and 20 nodes; 200 requests/node.]
Exp #3 : Sensitivity to Connectivity – Link Utilization S.D. (Cont'd)
Sensitivity to connectivity on link utilization S.D.:
Link utilization S.D. decreases as connectivity increases.

[Figure: link utilization standard deviation vs. connectivity (20%–100%) for GPPA and OSPF, with 25 and 30 nodes; 200 requests/node.]
Exp #3 : Sensitivity to Connectivity – P.U. Ratio
Sensitivity to connectivity on P.U. ratio:
P.U. ratio increases as connectivity increases.

[Figure: profit density vs. connectivity (20%–80%) for Heuristic and OSPF.]
Result of Experiment #4 :

 Parameters Setup:
  4000 requests
  Connectivity 20%
  20 nodes
  Forecast error 10%, 20%, 30%, …, 100%
 Objective:
  Observation of metrics:
   Profit
   Link Utilization S.D.
   P.U. Ratio
Exp #4 : Sensitivity to Forecast Error – Profit
Sensitivity to forecast error on profit:
Profit becomes unstable as forecast error increases.

[Figure: profit vs. demand forecast error (1%–90%) for Heuristic and OSPF.]
Exp #4 : Sensitivity to Forecast Error – P.U. Ratio
Sensitivity to forecast error on P.U. ratio:
P.U. ratio becomes unstable as forecast error increases.

[Figure: profit density vs. demand forecast error (1%–90%) for Heuristic and OSPF.]
Summary

 Experiment #1
  Profit and P.U. ratio increase when the number of nodes or the connectivity increases.
 Experiment #2
  When the number of nodes increases, profit and P.U. ratio increase under both algorithms, but G.P.P.A. increases at a higher rate than OSPF.
Summary

 Experiment #3
  When connectivity increases, profit and P.U. ratio increase under both algorithms, but the heuristic increases at a higher rate than OSPF.
 Experiment #4
  When the forecast error increases, OSPF earns more profit than G.P.P.A., but G.P.P.A.'s P.U. ratio remains better than OSPF's.
Conclusion and Future Work

 Path planning can maximize profit under the same network resources.
 G.P.P.A. can decrease the probability of link resource exhaustion.
 Future work: implement our path planning algorithm in NCTUns or NS2.
 Future work: link characteristics setting.
Session 7
預算法全IP核心網路服務品質管理之分散式資源管理與允入控制
Distributed Resource Management and Admission Control in Budget-Based QoS Management for All-IP Core Networks
Advisor: 連耀南
Student: 陳明志
NCCU Mobile Communication Laboratory
Outline

 Resource Pre-Planning
 Demand Forecast
 Resource Pre-Planning Problem
 CETP Resource Management Problem
 Proposed Solution
 Performance Evaluation
 Conclusion and Future Work
Resource Pre-Planning

 The BOA orders resources from the BB in advance
 The PPA plans paths using the available link resources
 How much resource should the BOA order in advance? => forecast demand
Traffic Assumptions

 Periodic traffic pattern
 Traffic demand can be forecast from historical data
Demand Forecast

 Historical request data of the Ingress Router
 Path-based demands are transformed into link-based demands
 Terms:
  - Length of Time Period
  - Concerned Time Period (CTP)
  - Reference Time Period (RTP)
Length of Time Period

 The operator may change the length of the time period according to the variance in traffic demand
 Shorter time periods for higher-variance traffic demand
Concerned Time Period (CTP)

 The time period concerned by the current resource planning process
 Refers to all RTPs to make decisions
Reference Time Period (RTP)

 Historical data of traffic demand
 The same time slot on the same weekday across different weeks (e.g., every Monday 8–9 AM)
Historical Traffic Pattern
List of all RTP patterns of a certain link

[Figure: bandwidth (Mbps, 0–100) vs. time of day (hours 1–22) for day1–day4.]
Historical Data of the CTP

 Bandwidth requests of the RTPs for a certain CTP
Request Distribution

 Request distribution of the RTPs for a certain CTP
Problem of Resource Pre-Planning

 Forecast errors may occur at run time
 Inaccurate planning will waste bandwidth or force on-demand bandwidth orders
Optimal Request Bandwidth

 Given the traffic pattern and demand distribution of a certain link, find the best bandwidth request value for that link
Optimal Request Bandwidth
(cont’)
Optimization Model for Request Bandwidth

 The expected bandwidth cost of the CTP is the sum of the pre-order cost and the expected on-demand cost
 Minimize the expected cost of the CTP
Parameter List

 C1       Pre-order request unit cost
 C2       On-demand request unit cost
 i        Bandwidth demand at the CTP
 Pi       Probability of demanded bandwidth i
 E{C(θ)}  Expected bandwidth cost
 θ        Optimal pre-order bandwidth
Mathematical Model

 Continuous form:

   E{C(θ)} = C1·θ + C2·∫_θ^n (i − θ)·Pi di

 Discrete form:

   E{C(θ)} = C1·θ + C2·Σ_{i=θ}^{n} (i − θ)·Pi
Cost figure
Optimal Condition for Continuous Form

   E{C(θ)} = C1·θ + C2·[ ∫_θ^n i·Pi di − θ·∫_θ^n Pi di ]

   dE{C(θ)}/dθ = C1 + C2·[ −θ·Pθ − ∫_θ^n Pi di + θ·Pθ ]

   dE{C(θ)}/dθ = C1 − C2·∫_θ^n Pi di
Optimal Condition for Continuous Form (cont')

 The optimal request bandwidth θ can be found from the equation:

   C1 / C2 = ∫_θ^n Pi di
Optimal Condition for Discrete Form

 Set a bandwidth scale
 Find the optimal value with an existing search algorithm
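As a sketch of the discrete-form search (the function names and the uniform demand set are illustrative assumptions, not from the thesis), an exhaustive scan over the bandwidth scale minimizes E{C(θ)} = C1·θ + C2·Σ_{i>θ}(i − θ)·Pi:

```python
def expected_cost(theta, c1, c2, probs):
    """E{C(theta)} = C1*theta + C2 * sum_{i>theta} (i - theta) * P_i."""
    shortage = sum((i - theta) * p for i, p in probs.items() if i > theta)
    return c1 * theta + c2 * shortage

def optimal_preorder(c1, c2, probs):
    """Exhaustive search over the bandwidth scale (discrete form)."""
    return min(range(max(probs) + 1),
               key=lambda t: expected_cost(t, c1, c2, probs))

# Uniform demand over 0..10 units; on-demand costs twice the pre-order rate.
probs = {i: 1 / 11 for i in range(11)}
theta_star = optimal_preorder(1.0, 2.0, probs)   # -> 5
```

With C1/C2 = 0.5, the continuous condition predicts the point where the demand tail probability is about one half, which the discrete scan confirms (θ* = 5 here).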
Charge Rate and Pre-Order Bandwidth

 The operator decides the charge rate (C2/C1)
 C2 ≤ C1: no pre-order bandwidth (on-demand per-flow requests)
 C2 > C1: the higher the charge rate, the more pre-order bandwidth
Charge Rate and Pre-Order
Bandwidth (cont’)
Solutions for Resource Shortage

 Order bandwidth on demand at run time
 Order upgraded resources to fill the planned quantity
Pre-Order Examination Procedure
Admission Control Procedure

 FCFS
 On insufficient resource:
  Downgrade the request
  Reject
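The FCFS admission procedure above can be sketched as follows. This is a simplified illustration under assumptions of my own (a single bandwidth pool and a fixed downgrade factor); the thesis does not specify these details.

```python
def admit(requests, available, downgrade_factor=0.5):
    """FCFS admission: grant the full bandwidth if possible, otherwise
    try a downgraded amount, otherwise reject the request."""
    decisions = []
    for demand in requests:
        if demand <= available:
            decisions.append(("accept", demand))
            available -= demand
        elif demand * downgrade_factor <= available:
            granted = demand * downgrade_factor
            decisions.append(("downgrade", granted))
            available -= granted
        else:
            decisions.append(("reject", 0))
    return decisions

# 10 units available: first two fit, the third is downgraded, the fourth rejected.
print(admit([4, 4, 3, 5], 10))
# [('accept', 4), ('accept', 4), ('downgrade', 1.5), ('reject', 0)]
```

The three outcomes correspond directly to the fully-satisfied, partially-satisfied, and rejected ratios measured later in the experiments.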
Current Execution Time Period (CETP)

 The execution time of the planned CTP
 Resource consumption may be stable or unstable
 Resource management is needed
CETP Resource Management

 Set up resource levels according to the remaining time or the traffic distribution
 Examine the resource level at each check point (the interval between two check points ≥ the on-demand order latency)
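A minimal sketch of the check-point examination loop, under my own simplifying assumption that consumption between check points is ignored; the function and parameter names are illustrative, not from the thesis.

```python
def examine(in_hand, levels, order_batch):
    """At each check point t, compare the in-hand resource with the
    planned resource level; order an on-demand batch when short.
    (Consumption between check points is omitted in this sketch.)"""
    orders = []
    for t, level in enumerate(levels):
        if in_hand < level:
            amount = order_batch(level - in_hand)
            in_hand += amount
            orders.append((t, amount))
    return orders, in_hand

# Levels decline as the CETP progresses; the batch order fills the gap.
orders, final = examine(30, [50, 40, 30, 20], order_batch=lambda gap: gap)
# orders == [(0, 20)], final == 50
```

Spacing the check points at least one on-demand order latency apart, as the slide requires, guarantees each ordered batch arrives before the next examination.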
CETP Resource Management (cont')

 Examine the resource level at each check point (the interval between two check points ≥ the on-demand order latency)
Solutions for Resource Shortage at CETP

 Resource reallocation
 Downgrade traffic requests
 On-demand batch order
Resource Reallocation

 When the bandwidth of a path is insufficient, the PPA checks the links along the path to locate the links with insufficient bandwidth
 Reallocate link bandwidth from other paths that have surplus bandwidth
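The reallocation bookkeeping can be sketched as below. The data structures (per-link need and surplus maps) are hypothetical simplifications of the PPA's state, introduced here only for illustration.

```python
def reallocate(path_links, need, surplus):
    """For each link on the troubled path, check whether it is short
    (need > 0) and whether other paths hold enough surplus on that
    link; if so, move the bandwidth over."""
    moved = {}
    for link in path_links:
        gap = need.get(link, 0)
        if gap > 0 and surplus.get(link, 0) >= gap:
            surplus[link] -= gap
            moved[link] = gap
    return moved

# Path A-B-C is short 5 units on link B-C; another path can spare 8 there.
surplus = {"B-C": 8}
moved = reallocate(["A-B", "B-C"], {"B-C": 5}, surplus)
# moved == {"B-C": 5}; surplus["B-C"] is now 3
```

Only the bottleneck links need touching; links with no shortage pass through unchanged.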
Downgrade Traffic Request

 When the available resource of the demanded quality is not enough, downgrade the traffic request to a substitute quality
 A reasonable downgrade rule must be defined
  - e.g., in DiffServ, degrade the AF1 service class to the AF2 service class
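A downgrade rule like the AF1-to-AF2 example can be expressed as a simple class map. Only the AF1 → AF2 step comes from the slide; the remaining entries are assumptions added to make the table complete.

```python
# Hypothetical downgrade rule for DiffServ classes: each class maps to
# its substitute one step down; best effort (BE) has no substitute.
# Only AF1 -> AF2 is given in the slides; the other entries are assumed.
DOWNGRADE_RULE = {"EF": "AF1", "AF1": "AF2", "AF2": "AF3", "AF3": "AF4", "AF4": "BE"}

def downgrade(service_class):
    """Return the substitute service class, or None if none exists."""
    return DOWNGRADE_RULE.get(service_class)
```

Keeping the rule in one table makes the "reasonable downgrade rule" a policy knob the operator can change without touching the admission code.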
On-Demand Batch Request

 The BB reserves some bandwidth on the links (Central Pool, or the ACA send-back function)
 The ACA checks the in-hand resource level at each check point
On-Demand Batch Request
(cont’)
CETP Resource Management Optimization Model

 Given the request distribution, the unit rejection penalty, and the unit admission revenue, find the best resource level y*
 Maximize the expected profit
Parameter List

 Pt(D)    PDF of traffic demand D at time t
 x        Current in-hand resource
 y        Resource level
 y*t      Optimal resource level at time t
 p        Unit rejecting penalty
 r        Unit admitting revenue
 C2       On-demand unit cost
 E{B(x)}  Expected profit of the Current Execution Time Period
CETP Resource Management Mathematical Model

 Continuous form:

   E{B(x)} = −C2·(y − x) + ∫_0^y r·D·Pt(D) dD + ∫_y^∞ [r·y − p·(D − y)]·Pt(D) dD

 Discrete form:

   E{B(x)} = −C2·(y − x) + Σ_{D=0}^{y} r·D·Pt(D) + Σ_{D=y}^{∞} [r·y − p·(D − y)]·Pt(D)
Optimal Condition (cont')

 The optimal resource level y* at time t can be found from the equation:

   (p + r − C2) / (p + r) = ∫_0^{y*} Pt(D) dD
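In the discrete case, the condition above amounts to picking the smallest level whose cumulative demand probability reaches the critical ratio (p + r − C2)/(p + r). A sketch with illustrative parameter values (the specific numbers are not from the thesis):

```python
def optimal_level(p, r, c, probs):
    """Smallest resource level y whose cumulative demand probability
    reaches the critical ratio (p + r - c) / (p + r); a small epsilon
    guards against floating-point accumulation error."""
    ratio = (p + r - c) / (p + r)
    cum = 0.0
    for y in sorted(probs):
        cum += probs[y]
        if cum >= ratio - 1e-9:
            return y
    return max(probs)

# p = 3 (unit rejecting penalty), r = 2 (unit admitting revenue),
# c = 1 (on-demand unit cost) -> critical ratio 0.8.
probs = {i: 0.1 for i in range(10)}   # uniform demand over 0..9 units
y_star = optimal_level(3, 2, 1, probs)   # -> 7
```

A higher rejection penalty or admission revenue pushes the ratio, and therefore y*, upward; a cheaper on-demand rate pulls it down.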
Performance Evaluation

 Performance Evaluation Metrics
 Experiment Design
 Experiments and Results
 Summary
Performance Evaluation Metrics

 Resource pre-planning
  Management cost
   Pre-order cost
   On-demand cost
  Admission profit
Performance Evaluation Metrics (cont')

 CETP resource management
  Ratio of fully-satisfied traffic requests
  Ratio of partially-satisfied traffic requests
  Ratio of rejected traffic requests
Experiment Design

 Experiment Environment
  Simulation: Network Simulator ns-2 on FreeBSD
Experiment Design (cont')

 Test instance generation
  Traffic request generation
   Traffic type: CBR and exponential
   Varying traffic request generating distributions
  Network topology
   Single-path topology, to focus on the result of resource management
Real Time Simulation Process
Topology and Run-Time Snapshot
Experiments

 Exp 1: Pre-planning experiments
 Exp 2: CETP resource management experiments
Exp 1: Pre-Order Experiment Design

 Parameter setup
  Traffic load: 4 distribution sets
  Charging rate (C2/C1)
  Traffic type
 Objective
  Observation metrics: management cost and profit
 Experiment sets:
  Exp 1-1: Sensitivity to traffic load
  Exp 1-2: Sensitivity to charge rate
  Exp 1-3: Sensitivity to traffic type
Exp1 : Pre-Order Experiment
design (cont’)

4 distribution sets
Result of Optimal Request Value of CBR Exp (exp 1-1)

 Resource planning policies: mean, theta, on-demand
Profit of Optimal Request
Value of CBR Exp (cont’)

Profit analysis
Summary of Optimal Request Value of CBR Exp (cont')

 In the set 1 test, the profit of the "mean" policy is the highest
 As forecast error rises (sets 2–4), the profit of the "θ" policy is higher than that of the other two policies
Theta Value at Different Charge Rates (C2/C1) (exp 1-2)

 Request distribution: mean = 50, range = 100
 Charge rate: from 2 to 9
Result of Optimal Request
Value of Different Charge Rate

Charge rate(C2/C1) : 2, 3, 4
Profit of Optimal Request Value
with Different Charge(C2/C1) Rate
Summary of Different Charge Rates (C2/C1) (exp 1-2)

 Low fault tolerance at low charge rates
 At high charge rates, the BOA orders more bandwidth in advance => overestimation
Result of Optimal Request Value of Exponential Exp (exp 1-3)

 Resource planning policies: mean, theta, on-demand
Result of Optimal Request Value of Exponential Exp (cont')

 Profit analysis
Summary of Optimal Request Value of Exponential Exp (cont')

 In the set 1 test, the profit of the "mean" policy is still the highest
 As forecast error rises (sets 2–4), the profit of the "θ" policy is higher than that of the other two policies
Exp 2: CETP Resource Management Experiment Design

 Parameter setup
  Traffic load: 5 distribution sets
  Traffic type
 Objective
  Observation metrics:
   Ratio of fully-satisfied traffic requests
   Ratio of partially-satisfied traffic requests
   Ratio of rejected traffic requests
 Experiment sets:
  Exp 2-1: Sensitivity to traffic load
  Exp 2-2: Sensitivity to traffic type
Exp 2: CETP Resource Management Experiment Design (cont')

 5 distribution sets
Ratio of Fully-Satisfied Requests (CBR)
a) No management
b) CETP resource management
CETP resource management improves the fully-satisfied ratio under forecast error
Ratio of Partially-Satisfied Requests (CBR)
a) No management
b) CETP resource management
CETP resource management improves the partially-satisfied ratio under forecast error
Ratio of Rejected Requests (CBR)
a) No management
b) CETP resource management
CETP resource management improves the rejected ratio under forecast error
Exp 2-1: Summary

 CETP resource management reduces the rejected ratio and the partially-satisfied ratio
 The rejected ratio keeps rising under higher forecast error
Ratio of Fully-Satisfied Requests (exponential)
a) No management
b) CETP resource management
CETP resource management improves the fully-satisfied ratio under forecast error
Ratio of Partially-Satisfied Requests (exponential)
a) No management
b) CETP resource management
CETP resource management improves the partially-satisfied ratio under forecast error
Ratio of Rejected Requests (exponential)
a) No management
b) CETP resource management
The improvement between a) and b) is not obvious
Exp 2-2: Summary

 CETP resource management mostly reduces the partially-satisfied ratio
Exp 2: Summary

 CETP resource management reduces the rejected ratio and the partially-satisfied ratio
 Traffic type affects the management result
Conclusion and Future Work

 Resource Management:
  Resource management makes more profit under a certain forecasting error
  Traffic type affects resource management utilization
  The parameters affect the management policies
 Future Work:
  Multi-class resource planning
  Modify the resource planning model to fit different traffic types
Session 8
Forecasting Error Tolerable
Resource Allocation
Forecasting Error Tolerable
Resource Allocation
Allocation Approaches
Approaches Overview

 Resource Reallocation approach
 Central Pool approach
 Overbook approach
 Hybrid approaches
  Central Pool & Resource Reallocation
  Overbook & Resource Reallocation
Forecasting Error Tolerable
Resource Allocation
Central Pool Approach
Central Pool Approach

 A portion of the resource is reserved in the Central Pool.
 The reserved resource is allocated on demand in real time.
 Too much reservation may result in too much processing overhead.
 Too little reservation may result in poor performance.
Central Pool (1)
Central Pool (2)
Optimization Objective

 Find the committed bandwidth level (resource reservation amount, τ) that maximizes the expected system profit.
Symbol Table

 θ     Required total bandwidth
 τ     Committed total bandwidth
 C1    Per-unit committed bandwidth
 C2    Per-unit on-demand allocated bandwidth without pre-order
 C1'   Per-unit on-demand allocated bandwidth with pre-order
 P     Expected profit
 pi    Potential probability of incoming traffic i
 N     Maximum possible incoming traffic
Demanded Bandwidth Distribution, Booking Level & Pricing

 C1' < C1 < C2
Central Pool Optimization Model (1)

 Pre-allocated bandwidth profit:

   C1·θ

 Expected real-time on-demand allocated bandwidth profit:

   Σ_{i=θ}^{τ} pi·C1'·(i − θ) + Σ_{i=τ}^{N} pi·[C2·(i − τ) + C1'·(τ − θ)]
Central Pool Optimization Model (2)

 Total expected profit:

   P = C1·θ + Σ_{i=θ}^{τ} pi·C1'·(i − θ) + Σ_{i=τ}^{N} pi·[C2·(i − τ) + C1'·(τ − θ)]

 Evaluating the equation above takes O(τ²) time; it can be solved by a numeric approach.
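The expected-profit sum can be evaluated directly in code. A sketch with hypothetical parameter values (not from the thesis); the slide's sum limits overlap at i = τ, which this sketch resolves as i ≤ τ versus i > τ:

```python
def expected_profit(tau, theta, c1, c1p, c2, probs):
    """P = C1*theta + expected on-demand profit.  Demand i served from
    the central pool (i <= tau) earns C1' per unit above theta; demand
    beyond tau earns C2 per overflow unit plus C1' for the pool portion."""
    profit = c1 * theta
    for i, pi in probs.items():
        if i <= tau:
            profit += pi * c1p * max(i - theta, 0)
        else:
            profit += pi * (c2 * (i - tau) + c1p * (tau - theta))
    return profit

# Two-point demand: 0 or 2 units with equal probability; pool level tau = 1.
p_example = expected_profit(1, 0, 1.0, 2.0, 3.0, {0: 0.5, 2: 0.5})   # -> 2.5

# Per the slide's numeric-approach remark, the best tau is then found
# by scanning tau = theta..N and keeping the maximizer of expected_profit.
```

Scanning all τ with an O(τ)-per-evaluation sum gives the O(τ²) total the slide mentions.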
Forecasting Error Tolerable
Resource Allocation
Overbook Approach
Overbook Approach

 The Overbook approach commits more resource than is available.
 The Overbook approach may admit too much traffic.
 Too much admitted traffic results in performance degradation.
Overbook (1)
Overbook (2)
Overbook (3)
Traffic Distribution (1)

 Demanded bandwidth distribution of a link of an Ingress Router.

Traffic Distribution (2)

 Admitted bandwidth distribution of a link of an Ingress Router with the committed bandwidth given.

Traffic Distribution (3)

 Demanded bandwidth distribution of a link.
Traffic Distribution (4)

 Admitted bandwidth distribution of a link with the committed bandwidth given.
Optimization Objective

 Find the committed bandwidth level (overbook amount, τ) that maximizes the expected system profit.
 The demanded resource distribution is assumed continuous in the following optimization model derivation.
 The optimization model using a discrete distribution is also introduced.
Symbol Table

 θ     Required total bandwidth
 τ     Committed total bandwidth
 m1    Per-unit overbooked bandwidth
 m2    Per-unit packet loss penalty
 R     Net profit
 p(b)  Potential probability of incoming traffic b
 N     Maximum possible incoming traffic
 h(b)  Packet loss function of committed bandwidth b
 C     Link capacity
Total Probability of Admitted Bandwidth over Booking Level

 The whole area under the distribution curve between the booking level (τ) and N
Probability of Admitted Bandwidth at Booking Level

 The sum of the probability of incoming traffic b and the total probability of admitted bandwidth over the booking level
Profit of Overbooked Bandwidth

 Additional profit due to overbooking.

Possible Penalty of Overbooked Bandwidth
Optimization Model

   R = m1·(τ − C) − m2·[ ∫_C^τ p(b)·h(b) db + h(τ)·∫_τ^N p(b) db ]

 For L(b) = p(b)·h(b):

   R = m1·(τ − C) − m2·∫_C^τ L(b) db − m2·h(τ)·∫_τ^N p(b) db

 The optimal booking level can be located after the first derivative:

   R' = m1 − m2·L(τ) − m2·h'(τ)·∫_τ^N p(b) db + m2·h(τ)·p(τ)
      = m1 − m2·L(τ) − m2·h'(τ)·∫_τ^N p(b) db + m2·L(τ)
      = m1 − m2·h'(τ)·∫_τ^N p(b) db
Optimal Booking Level

   m1 / m2 = h'(τ)·∫_τ^N p(b) db
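The condition above can be solved numerically by scanning for the sign change of the first derivative. This is a sketch under assumptions of my own: a uniform demand distribution on [C, N] and a quadratic loss function h(b), neither of which is specified in the thesis.

```python
def optimal_booking(m1, m2, h_prime, tail_prob, taus):
    """Scan a grid of booking levels for the first point where
    g(tau) = m1 - m2*h'(tau)*P(demand > tau) flips sign from + to -,
    i.e. a local maximum of the net profit R."""
    prev = None
    for tau in taus:
        g = m1 - m2 * h_prime(tau) * tail_prob(tau)
        if prev is not None and prev > 0 >= g:
            return tau
        prev = g
    return None

# Hypothetical instance: demand uniform on [50, 100] (C = 50, N = 100),
# quadratic loss h(b) = 4e-4 * (b - 50)^2, so h'(tau) = 8e-4 * (tau - 50).
C, N = 50.0, 100.0
tail = lambda t: (N - t) / (N - C)   # P(demand > tau)
hp = lambda t: 8e-4 * (t - C)        # h'(tau)
tau_star = optimal_booking(0.008, 1.0, hp, tail, [C + 0.1 * k for k in range(501)])
# tau_star lands near 63.8, the smaller root of m1/m2 = h'(tau) * P(demand > tau)
```

Only the plus-to-minus crossing is kept, since the minus-to-plus crossing further right corresponds to a local minimum of R.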
Performance Evaluation
Performance Metrics

 Traffic accepted rate
 System profit
Experiments

 Exp 1: Sensitivity to distribution deviation
  Observing traffic accepted rate.
 Exp 2: Allocation approach test
  Observing system profit.
Parameters

 5 Ingress Routers
 Bandwidth per flow: 448 kbps
 Path capacity: 50 flows
 Distribution: 0 ~ 20 flows
  Mean: 10 flows
  Required bandwidth θ: 10 flows
Exp 1. Sensitivity to Distribution Deviation – Central Pool (1)

[Figure: traffic accepted rate (%) vs. resource reservation percentage (0%–50%) for Dev 1–Dev 5.]

 The improvement in traffic accepted rate is proportional to the resource reservation percentage.
Exp 1. Sensitivity to Distribution Deviation – Central Pool (2)

[Figure: traffic accepted rate difference (%) vs. resource reservation percentage (10%–50%) for Dev 1–Dev 5.]

 The improvement in traffic accepted rate difference is directly proportional to the resource reservation percentage.
Exp 1. Sensitivity to Distribution Deviation – Central Pool (3)

[Figure: traffic accepted rate difference (%) vs. resource reservation percentage (10%–50%) for Dev 1–Dev 4.]

 The improvement in traffic accepted rate difference is inversely proportional to the resource reservation percentage.
Exp 1. Sensitivity to Distribution Deviation – Overbook (1)

[Figure: traffic accepted rate (%) vs. overbook percentage (0%–50%) for Dev 1–Dev 5.]

 The improvement in traffic accepted rate is proportional to the overbook percentage.
Exp 1. Sensitivity to Distribution Deviation – Overbook (2)

[Figure: traffic accepted rate difference (%) vs. overbook percentage (10%–50%) for Dev 1–Dev 5.]

 The improvement in traffic accepted rate difference is directly proportional to the overbook percentage.
Exp 1. Sensitivity to Distribution Deviation – Overbook (3)

[Figure: traffic accepted rate difference (%) vs. overbook percentage (10%–50%) for Dev 1–Dev 4.]

 The improvement in traffic accepted rate difference is inversely proportional to the overbook percentage.
Allocation Approach Test (1)

[Figure: system profit vs. percentage (10%–50%) for Basic, Central Pool (CP), and Overbook (OB).]

 The improvement in system profit of the Central Pool approach is more stable than that of the Overbook approach.
Allocation Approach Test (2)

[Figure: additional system profit vs. percentage (10%–50%) for CP and OB.]

 The additional system profit of the Central Pool approach declines steadily, while that of the Overbook approach rises slowly and falls quickly.
Summary

 The Central Pool approach is recommended when the traffic deviation is small.
 The Overbook approach is recommended when the traffic deviation is large.
 The Central Pool approach works better at low Central Pool reservation.
 The Overbook approach works better at a high overbook percentage.
Conclusion & Future Work

 Resource allocation approach
  A forecast-error-tolerable resource allocation approach is needed in a large-scale end-to-end QoS architecture.
 Future work
  Hybrid approach: Central Pool & Overbook
Thanks!
NCCU Mobile Communication
Laboratory