Business Roles and Architecture to realize MUSE


QoS Framework for Access Networks
Govinda Rajan
Lucent Technologies
[email protected]
Muse confidential
Agenda
> QoS with resource based admission control
> Tight & loose QoS models
> QoS in carrier & application network models
> Distribution and adaptation of QoS policies
> Performance monitoring for QoS
> Comparison of MUSE QoS principles to other QoS proposals
> Conclusion
MUSE Summer School July 2006 — 2
QoS in Access Networks
> QoS is key in multi-service access
> End-to-End QoS solutions (e.g. IntServ)
  • commercially failed because of complexity
> Priority-based QoS (e.g. DiffServ)
  • works in over-provisioned networks
  • insufficient for Access & Aggregation
=> Simplified QoS control in Access & Aggregation is needed
Ethernet Network Model
[Diagram: a CPN with a CPG connects through an Access Node (AN) and an AEN into a bridged or routed Ethernet (MPLS) aggregation network of Ethernet switches (802.1ad, S-VLAN aware or 802.1Q), terminating at a BRAS or Edge Router; the NAP interconnects the NSP/ISPs and an ASP.]
Congestion in Ethernet Networks
[Diagram: access nodes A (ports A1–A3), B (B1–B3) and C (C1, C2) connect via aggregation nodes Ag1, Ag2 and Ag4 to edge node En1. All links: 1 Gb/s.]
> Assume all nodes have strict priority queuing & assume there are just 2 QoS priority classes.
> Say that there are 10 flows of high QoS priority of 0.1 Gb/s each, upstream from the residential gateways at A, and 2 flows of lower QoS priority.
  • Link A–Ag1 is 1 Gb/s, so A allows all 10 flows and the flows of lower priority will be dropped.
> Assume that, similarly, B also allows 10 flows of 0.1 Gb/s each.
> There is now congestion at Ag1, since the incoming traffic is 2 Gb/s & the upstream link capacity is only 1 Gb/s: eventually packets will have to be dropped and QoS is not guaranteed.
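The arithmetic above can be sketched in a few lines (an illustrative toy model, not part of the MUSE specification; all names are ours):

```python
# Toy model of the congestion example: per-link admission at each access
# node succeeds, yet the aggregation node Ag1 is still overloaded.
LINK_CAPACITY = 1.0  # Gb/s, all links

def admitted_at_access(flows, capacity=LINK_CAPACITY):
    """Greedily admit flows as long as the local uplink capacity allows."""
    admitted, used = [], 0.0
    for rate in flows:
        if used + rate <= capacity:
            admitted.append(rate)
            used += rate
    return admitted

# 10 high-priority flows of 0.1 Gb/s at A and at B: each local link fits them.
a_flows = admitted_at_access([0.1] * 10)
b_flows = admitted_at_access([0.1] * 10)

# But Ag1 aggregates both onto a single 1 Gb/s uplink:
offered = sum(a_flows) + sum(b_flows)
print(f"offered at Ag1: {offered:.1f} Gb/s, capacity: {LINK_CAPACITY} Gb/s")
print("congested" if offered > LINK_CAPACITY else "ok")
```

This is exactly why priority queuing alone is insufficient: each node's local decision is correct, but no node sees the aggregate load.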
Potential Bottlenecks in Access Networks
[Diagram: user gateways connect via an Access Node, an Aggregation Node and edge nodes EN1/EN2 to service providers SP-A, SP-B and SP-C; potential bottlenecks BN-AN, BN-AgN and BN-EN are marked at each stage. Statistical multiplexing: n_i flows of bandwidth b_i share an egress capacity B_o, with n_i x b_i < B_o.]
> A bottleneck arises when the ingress traffic for an egress port has a higher rate than the egress port: B_i > b_o.
> Because of the nature of sharing the connectivity resources, there can be potential bottlenecks for upstream & downstream traffic at Access Nodes, Aggregation Nodes & Edge Nodes, depending on the quantity & the required quality of service.
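The two conditions on the slide can be written out directly (function names and example values are illustrative):

```python
# Sketch of the two conditions above: statistical multiplexing is safe
# while n_i * b_i < B_o, and a port is a bottleneck when its ingress
# rate exceeds its egress rate (B_i > b_o).

def multiplexing_ok(n_flows: int, per_flow_rate_gbps: float,
                    egress_capacity_gbps: float) -> bool:
    """n_i * b_i < B_o: the multiplexed flows fit in the egress capacity."""
    return n_flows * per_flow_rate_gbps < egress_capacity_gbps

def is_bottleneck(ingress_rate_gbps: float, egress_capacity_gbps: float) -> bool:
    """B_i > b_o: more traffic arrives for the port than it can send."""
    return ingress_rate_gbps > egress_capacity_gbps

print(multiplexing_ok(8, 0.1, 1.0))  # 8 x 0.1 Gb/s into 1 Gb/s -> True
print(is_bottleneck(2.0, 1.0))       # 2 Gb/s into a 1 Gb/s port -> True
```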
Call Admission Control for QoS
[Diagram: access nodes A (A1–A3), B (B1–B3) and C (C1, C2) connect via Ag1, Ag2 and Ag4 to edge node En1; QoS signalling runs between the nodes and a Central Resource View. All links: 1 Gb/s.]
> Assume strict priority queuing & 2 QoS classes.
> The central resource view (CRV) is provisioned with 0.8 Gb/s for the higher QoS class and 0.2 Gb/s for the lower QoS class.
> For all high-QoS flows, an allow-or-block decision is needed from the central resource view; the First Mile is handled locally by the ANs.
> A gets a request for 4 high-QoS-class flows & 2 low-QoS flows.
> A signals the central resource & gets an allow signal for the 4 high-QoS-class flows. The CRV decreases its resources by 0.4.
> A allows the 2 low-QoS flows without signalling.
> B gets a request for 5 high-QoS flows and gets allow for 4 & block for 1 flow. The CRV resource count is now 0.
> B gets a request for 2 low-QoS flows & allows them.
> Low-QoS flows experience packet loss because of priority queuing at the nodes.
> Additional requests for high-QoS flows are blocked by the CRV until current flows end and this is signalled to the CRV.
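The CRV bookkeeping in this example can be sketched as follows (a minimal sketch; the class and method names are hypothetical, the slide only fixes the behaviour):

```python
# Minimal sketch of the central resource view (CRV): only high-QoS flows
# are signalled; low-QoS flows are admitted locally without signalling.

class CentralResourceView:
    def __init__(self, high_quota_gbps: float):
        self.remaining = high_quota_gbps

    def request(self, rate_gbps: float) -> bool:
        """Allow the flow and reserve its capacity, or block it."""
        if rate_gbps <= self.remaining + 1e-9:  # tolerance for float rounding
            self.remaining -= rate_gbps
            return True
        return False

    def release(self, rate_gbps: float) -> None:
        """Flows signal the CRV when they end, freeing capacity."""
        self.remaining += rate_gbps

crv = CentralResourceView(high_quota_gbps=0.8)

# A: 4 high-QoS flows of 0.1 Gb/s are signalled and allowed (CRV: 0.8 -> 0.4).
a_allowed = [crv.request(0.1) for _ in range(4)]

# B: 5 high-QoS flows requested; 4 allowed, the 5th blocked (CRV count: 0).
b_allowed = [crv.request(0.1) for _ in range(5)]

print(a_allowed)                # [True, True, True, True]
print(b_allowed)                # [True, True, True, True, False]
print(round(crv.remaining, 1))  # 0.0
```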
Virtual Pipes for Multi-service Provider QoS
[Diagram: the example topology (A, B, C via Ag1, Ag2, Ag4 to En1) with a Virtual Pipe from A to En1. All links: 1 Gb/s.]

Virtual Pipe    Capacity (Gb/s)
SP1: A – EN1    0.2
SP2: A – EN1    0.3
SP3: B – EN1    0.3

> At node A
  • For SP1, 0.2 Gb/s of flows are allowed & above that new flows are blocked
  • For SP2, 0.3 Gb/s of flows are allowed & above that new flows are blocked
  • All other QoS flows are blocked
  • Best Effort traffic is allowed
> At node B
  • For SP3, 0.3 Gb/s of flows are allowed & above that new flows are blocked
  • All other QoS flows are blocked
  • Best Effort traffic is allowed
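The per-node, per-SP gating can be sketched as a small table-driven check (an illustrative sketch reproducing node A's numbers; the names are ours):

```python
# Sketch of per-SP virtual-pipe admission at node A. Each node only
# admits QoS flows of SPs that have a virtual pipe through it.

pipes_at_A = {"SP1": 0.2, "SP2": 0.3}   # pipe capacities in Gb/s towards EN1
used_at_A = {"SP1": 0.0, "SP2": 0.0}

def admit(sp: str, rate_gbps: float) -> bool:
    if sp not in pipes_at_A:
        return False                     # all other QoS flows are blocked
    if used_at_A[sp] + rate_gbps > pipes_at_A[sp] + 1e-9:
        return False                     # pipe full: new flows are blocked
    used_at_A[sp] += rate_gbps
    return True

print(admit("SP1", 0.15))  # True: within SP1's 0.2 Gb/s pipe
print(admit("SP1", 0.10))  # False: would exceed SP1's 0.2 Gb/s
print(admit("SP3", 0.05))  # False: SP3 has no pipe at node A
# Best-effort traffic would bypass this check entirely.
```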
QoS Classes
elastic = data integrity dominates; inelastic = temporal integrity dominates

             non-interactive               interactive
elastic      Best effort class             Transactional class
             = Background class (3GPP)     = Interactive class (3GPP)
             = Non-critical class (ITU)    = Responsive class (ITU)
inelastic    Streaming class               Real-time class
             = Streaming class (3GPP)      = Conversational class (3GPP)
             = Timely class (ITU)          = Interactive class (ITU)
Tight QoS in ‘Application’ Model
Flow Based (FB) control: FB monitoring, FB gating & FB policing are applied per IP flow.
[Diagram: CPN A – Access Network – EDGE (Eth.) – IP Backbone – EDGE – Access Network – CPN B, with an ABG on each side; FB monitoring, gating & policing at both the ABGs and the Edge nodes.]
Loose QoS in ‘Application’ Model
Traffic Class “per Service Provider” Based control: all the traffic flows belonging to the same traffic class and served by the same SP are aggregated together and treated as a whole in the ENs.
[Diagram: as in the tight model (CPN A – Access Network – EDGE (Eth.) – IP Backbone – EDGE – Access Network – CPN B, with ABGs and a soft switch), but with Traffic Class Based (TCB) monitoring, gating & policing at the Edge nodes and FB monitoring, gating & policing at the ABGs.]
Models of Access Networks
> ‘Carrier’ model
  • The network just provides transport/connectivity services.
  • The network is not application/session aware.
  • This is the current best-effort Internet access model.
  • It can be implemented by enhancing the network with multiple QoS connectivity services, tailored and classified for the most prevalent group of services.
> ‘Application’ model (e.g. IMS)
  • The provisioning of an end service is controlled & guaranteed by the operator. The user requests (dynamically) a service, and the network sets up the most appropriate transport/connectivity service (service-based policy push).
  • The network is application/session aware.
Loose QoS in ‘Carrier’ Model
[Diagram: CPN A – Access Network – EDGE (Eth.) – IP Backbone – EDGE – Access Network – CPN B, with a soft switch and ABGs. Traffic Class “per User” & “per SP” Based control (TCB monitoring, gating & policing) on the access side, Traffic Class “per SP” Based control at the Edge, and FB monitoring, gating & policing at the ABG.]
Intelligent RGW: it must resolve the applications’ contention for bandwidth according to the end-user preferences and the available transport services.
Converged QoS Model
Network operator’s choice of policy:
• Appropriate application of the ‘tight’ or ‘loose’ QoS model based on the specific requirements of individual access networks
• Mix and match of the ‘carrier’ and ‘application’ models
Border nodes (AN & EN) must be able to perform these tasks both at IP flow and at traffic class level.
[Diagram: CPN A (with Intelligent RGW) – Access Network – EDGE (Eth.) – IP Backbone – EDGE – Access Network – CPN B, with a soft switch and ABGs.]
Need for Central View of Resource
[Diagram: access nodes A (A1–A3), B (B1–B3) and C (C1, C2) connect via Ag1, Ag2, Ag4 and Ag5 to edge nodes En1 and En2. All links: 1 Gb/s.]

Virtual Pipe      Capacity (Gb/s)
SP1: A,B – EN1    0.4
SP2: B – EN1      0.2
SP3: C – EN2      0.1

> The Virtual Pipe of SP1 is 0.4 Gb/s & the total subscriptions are for 1 Gb/s, at A & B.
> To ensure that only 0.4 Gb/s or less is allowed, for each call request it should be checked whether the existing calls from both A & B together are less than 0.4 Gb/s.
> A central view of the resource is needed to share the same resource between subscribers at A & B. By resource here is meant a fraction of the physical resource for exclusive use by SP1 subscribers at A & B.
Local Control of Exclusive Fraction of Resource
[Diagram: the same topology, with the Virtual Pipe SP1: A,B – EN1 (0.4 Gb/s).]

Node             Local Quota (Gb/s)
Node A           0.1
Node B           0.2
Central quota    0.1

Example for the SP1 Virtual Pipe of 0.4 Gb/s:
> Node A is provisioned with 0.1 Gb/s as its local threshold. New calls are allowed at A as long as the total of active calls at A is less than 0.1 Gb/s. If the total of active calls at A is equal to or above 0.1 Gb/s, new call requests are signalled to a central resource control for the admission decision.
> Node B is provisioned with 0.2 Gb/s as its local threshold. New calls are allowed at B as long as the total of active calls at B is less than 0.2 Gb/s. If the total of active calls at B is equal to or above 0.2 Gb/s, new call requests are signalled to a central resource control for the admission decision.
> In this example, Node B has local control of a larger fraction of the total resource, since it is assumed that there are more subscribers at Node B than at Node A. The central resource control is used to share a certain fraction (here 0.1 Gb/s) of the total resource between A & B.
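This split between local thresholds and a central remainder can be sketched as follows (a minimal sketch of the SP1 example; class and method names are illustrative, not MUSE-defined):

```python
# Sketch of local admission with a central fallback: 0.1 Gb/s local at A,
# 0.2 Gb/s local at B, and 0.1 Gb/s held by the central resource control.

class CentralQuota:
    def __init__(self, quota_gbps: float):
        self.remaining = quota_gbps

    def request(self, rate_gbps: float) -> bool:
        if rate_gbps <= self.remaining + 1e-9:
            self.remaining -= rate_gbps
            return True
        return False

class BorderNode:
    def __init__(self, name: str, local_quota_gbps: float, central: CentralQuota):
        self.name, self.local_quota, self.central = name, local_quota_gbps, central
        self.local_used = 0.0

    def admit_call(self, rate_gbps: float) -> bool:
        # Within the local threshold the node decides on its own;
        # above it, the request is signalled to the central control.
        if self.local_used + rate_gbps <= self.local_quota + 1e-9:
            self.local_used += rate_gbps
            return True
        return self.central.request(rate_gbps)

central = CentralQuota(0.1)
node_a = BorderNode("A", 0.1, central)
node_b = BorderNode("B", 0.2, central)

print(node_a.admit_call(0.1))  # True: fits in A's local quota
print(node_a.admit_call(0.1))  # True: granted from the central 0.1 Gb/s
print(node_b.admit_call(0.1))  # True: fits in B's local quota
print(node_a.admit_call(0.1))  # False: local and central quota exhausted
```

The design choice here is that most calls are admitted without any signalling; only the overflow beyond the local threshold touches the central control.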
Distributed Call Admission Control
[Diagram: CPE – AN – Super Node; ANs and ENs are connected to the Super Nodes through a virtual network (control plane), so no central resource database is needed. Each Super Node runs a Policy Agent holding: allotted quota, un-allotted quota, subscribed BN IP addresses, the subscriber’s policy and the IP addresses of other SNs; it interfaces to the Subscriber Management Server, Policy Server and Network Management System.]
Step 1: The user requests a service (e.g. TV).
Step 2:
1. If the subscriber policy is in agreement,
2. and the required QoS can be serviced within the allotted quota,
3. then the call is admitted.
Step 3:
1. If the required QoS cannot be serviced within the allotted quota,
2. then bandwidth is reserved in the un-allotted quota and confirmation is requested from the BNs subscribed to that un-allotted quota.
3. On confirmation from all subscribed BNs the call is admitted; otherwise the call is not admitted.
Distributed Call Admission Control
The centralized subscriber and QoS policy system is distributed over a few Super Nodes, which are used to configure and push the necessary policy data to the Border Nodes (ANs & ENs). A virtual control network is created between the Super Nodes & the Border Nodes.
Some aspects of the local resource quota for a distributed resource system are as follows:
• A portion of the quota is controlled exclusively by that Border Node.
• A pool of unallocated quota, with the IP addresses of the Border Nodes that are allowed to use the pool.
• IP addresses of Border Nodes (for requesting quota).
• Information on quota that is reserved for future use (at a certain time & for a certain time period), with the IP addresses of the reserving Border Nodes.
• Quotas have a validity period, in the sense that they have to be re-synchronized after that period.
Unallocated resource is used only after requesting all relevant Border Nodes and getting their confirmation, to prevent multiple usage of the same quota. If there is no confirmation from all relevant Border Nodes, there will be a time-out and the call request will be denied.
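The confirmation rule for the unallocated pool can be sketched as follows (an illustrative in-process model; in practice the confirmations travel over the virtual control network, and all names here are ours):

```python
# Sketch of using the un-allotted pool: a Border Node may take from the
# pool only after every other subscribed Border Node confirms, which
# prevents two nodes from spending the same quota.

class PoolMember:
    """A Border Node's view of the shared un-allotted quota."""
    def __init__(self, name: str):
        self.name = name
        self.peers = []               # other BNs subscribed to the pool
        self.pool_remaining = 0.2     # Gb/s, same initial view everywhere
        self.reachable = True

    def confirm(self, rate_gbps: float) -> bool:
        """Peer agrees to the reservation and updates its view of the pool."""
        if not self.reachable or rate_gbps > self.pool_remaining + 1e-9:
            return False
        self.pool_remaining -= rate_gbps
        return True

    def reserve(self, rate_gbps: float) -> bool:
        if rate_gbps > self.pool_remaining + 1e-9:
            return False
        # Admit only if *all* subscribed peers confirm; a missing
        # confirmation behaves like the time-out on the slide.
        if all(peer.confirm(rate_gbps) for peer in self.peers):
            self.pool_remaining -= rate_gbps
            return True
        return False

a, b = PoolMember("AN-a"), PoolMember("AN-b")
a.peers, b.peers = [b], [a]

print(a.reserve(0.15))   # True: b confirms, both views drop to 0.05 Gb/s
b.reachable = False      # b stops answering
print(a.reserve(0.05))   # False: no confirmation from b -> denied (time-out)
```

A real implementation would also have to roll back confirmations already granted when a later peer refuses, which this sketch omits.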
QoS Policy at Access Node
> The access node is the first point of contact for end-user premises equipment. It is also the first point of multiplexing in access networks, and so policy enforcement is best done at the access node.
> In a central CAC system, the policies are stored centrally; hence, for each connection, the central CAC system makes the admission decision and signals the access nodes to either allow or block the connection and to enforce the policy for the duration of the connection. QoS policy is thus pushed from the central system, and adaptation is also done at the central system.
> In a distributed CAC, QoS policies are available locally and adaptation is also done locally, using the thresholds and the congestion state of the network. The QoS policies are based on individual IP flows or aggregated flows.
> Irrespective of whether the CAC decision is made centrally or locally, the control or gating is done at the access node.
QoS Policy at Aggregation Nodes
> In access networks, the network elements in the aggregation network are usually Ethernet based.
> The QoS policy consists of setting the priority for the different queues defined by the QoS identifiers, e.g. the ‘p’ bits. This is usually defined once during initial network planning and reflects the QoS classes in the network.
> The QoS policy in the aggregation nodes is provisioned using the network or element management system.
QoS Policy at Edge Node
> The edge nodes in access networks usually connect to service nodes via transport connectivity through the regional network.
> The QoS policy is usually applicable to aggregated flows and is a part of the SLA between the access network operator and the service provider.
> The QoS policy is usually provisioned and is rather static, and can be updated with a new SLA.
Policy Adaptation by Network Overload
> The principle of adaptation by network overload is that the border nodes modify the QoS policy based on the overload conditions in the network.
> Overload conditions for each link are determined by performance monitoring, & the QoS policy system is triggered in the presence of overload conditions.
> In case of calamities, if extreme network overload conditions are present, the QoS policies could be adapted such that there is a strict reservation of resources for emergency services (fire brigade, police, alarm, etc.).
Policy Adaptation by Service Characteristics
> QoS policy can also be adapted to prevent QoS degradation because of the sudden uptake of popular new services.
> Assume that a new broadband “buzz” service is introduced which attracts a large part of the population. Each day, hundreds of new subscription requests are received by the service provider.
> In such cases, QoS policy adaptation could be implemented such that, based on the service uptake pattern, new service connections are rejected more often. This is more acceptable than a solution where existing users experience a degradation of the service.
> A typical implementation could be that service-based traffic overload counters are compared to the actual guaranteed load in the corresponding links. When a service exceeds the pre-defined maximum load for that service, the policy for that service is adapted such that new calls for that service are rejected or a lower QoS is assigned to the new calls.
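That per-service adaptation can be sketched in a few lines (service names and thresholds are purely illustrative):

```python
# Sketch of adaptation by service uptake: when a service's measured load
# crosses its pre-defined maximum, new calls for that service are rejected
# (existing calls are untouched; they could also be mapped to a lower QoS).

max_load_gbps = {"buzz-tv": 0.3, "voip": 0.1}     # per-service ceilings
current_load_gbps = {"buzz-tv": 0.0, "voip": 0.0}

def admit_service_call(service: str, rate_gbps: float) -> bool:
    if current_load_gbps[service] + rate_gbps > max_load_gbps[service] + 1e-9:
        return False   # adapted policy: reject new calls, keep existing ones
    current_load_gbps[service] += rate_gbps
    return True

# Sudden uptake of the new "buzz" service:
results = [admit_service_call("buzz-tv", 0.05) for _ in range(8)]
print(results)  # the first 6 calls are admitted, then new calls are rejected
print(admit_service_call("voip", 0.05))  # other services are unaffected
```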
Counters to Monitor Resources in Access Networks
Counters:
1) Queue lengths per QoS class
2) Dropped packets per QoS class
3) Upstream / downstream throughput for all ports
[Diagram: the example topology (A, B, C via Ag1, Ag2, Ag4 to En1) with counters at each node.]
> Counters can be used to monitor the node & link resources.
> The counters could count events (e.g. dropped packets) & also quantify the traffic throughput, total dropped packets, queue lengths, etc., and provide averages per time period.
Performance Monitoring for Synchronization of Resource Database
Performance counters:
1) Queue lengths per QoS class
2) Dropped packets per QoS class
3) Upstream / downstream throughput for all ports
Reported to the QoS resource database:
1) Threshold crossing events for counters
2) Available link capacities
3) Overload events
> The performance monitoring system periodically updates the QoS resource system with the status of the link capacities.
> The QoS resource system compares its view of the used capacities of links with the view from performance monitoring and synchronizes its database if needed.
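The synchronization step can be sketched as a periodic reconciliation (a minimal sketch; link names, values and the tolerance are illustrative):

```python
# Sketch of the periodic synchronization: the resource database corrects
# its bookkeeping of used link capacity from the monitoring measurements.

resource_db_view = {"A-Ag1": 0.4, "Ag1-En1": 0.8}  # used Gb/s per CAC bookkeeping
monitored_view = {"A-Ag1": 0.4, "Ag1-En1": 0.6}    # measured by monitoring

def synchronize(db: dict, measured: dict, tolerance_gbps: float = 0.05) -> list:
    """Overwrite DB entries that drift beyond the tolerance; return drifted links."""
    drifted = []
    for link, measured_load in measured.items():
        if abs(db.get(link, 0.0) - measured_load) > tolerance_gbps:
            db[link] = measured_load
            drifted.append(link)
    return drifted

print(synchronize(resource_db_view, monitored_view))  # ['Ag1-En1']
print(resource_db_view["Ag1-En1"])                    # 0.6
```

Such drift arises, for example, when flows end without signalling their release, so the CAC bookkeeping over-counts the used capacity.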
Overview of QoS Proposals
> QoS architectures have been proposed by international forums & standardization bodies such as ETSI-TISPAN, ITU, 3GPP, etc.
> The common element in these proposals is that node-based priority queuing, combined with resource-based admission control, provides QoS assurance to the flows as per QoS class.
> After detailed analysis, the MUSE proposal identifies many important differentiators from the other proposals, while the overall concept is common.
ETSI-TISPAN (R1) Resource Model for QoS
[Diagram: the Application Function (AF) connects over Gq’ to the SPDF in the RACS; the SPDF connects over Rq to the A-RACF, which connects over e4 to the NASS. In the transport layer, the CPE connects via the Access Node and the L2T Point to the IP Edge and on via the Core to the Border Node; Ia, Ra and Re are the reference points towards the transport processing functions (C-BGF and RCEF), with Di and Ds as further interfaces.]
MUSE QoS Differentiators
The main differentiators of the MUSE proposal are:
• IP awareness in Access Nodes.
• Concepts of loose & tight QoS.
• Local / distributed resource & admission control.
• Policy enforcement at edges.
• Flow or aggregate based enforcement based on location.
• Scalability.
Conclusion
 Good traffic engineering combined with CAC is the key to providing QoS in access networks.
 Judicious choice of ‘tight’ & ‘loose’ QoS.
 Centralized or distributed CAC as per the size of the network domain.
 Regular load monitoring of the network and corresponding QoS policy adaptation.
 Policy enforcement based on flows as close as possible to the origin of the flow, and based on aggregated flows further into the network.
 Open architecture for multi-service / multi-provider access.
Thank You!