GD-Aggregate: A WAN Virtual Topology
Building Tool for Hard Real-Time and
Embedded Applications
Qixin Wang*, Xue Liu**, Jennifer Hou*, and Lui Sha*
*Dept. of Computer Science, UIUC
**School of Computer Science, McGill Univ.
Demand
• Big Trend: converge computers with the physical world
– Cyber-Physical Systems
– Real-Time and Embedded (RTE) GENI
– Virtual Organization
• Calls for an RTE-WAN with the following features:
– Scalability:
• Similar traffic aggregation;
• Global/local traffic segregation;
• Network hierarchy and modularity, which also assist composability, dependability, debugging, etc.;
– Configurability:
• Runtime behavior regulation
– Flexibility:
• Ease of reconfiguration
– Hard Real-Time E2E Delay Guarantee
Solution? The Train/Railway Analogy
• Similar traffic aggregation: carriage → train
• Global/local traffic segregation: express vs. local train
• Hierarchical topology: express vs. local train
• Configuration: routing, capacity planning
• Flexibility: change the train planning, not the railway
The Equivalent of Train in Network?
• An aggregate (of flows) is like a train
– Sender End Node: merges member flows into the aggregate
– Receiver End Node: disintegrates the aggregate into the original flows
– Intermediate Nodes: only forward the aggregate's packets
[Figure: end nodes A, B, C; legend: member flow, end node, aggregate, intermediate node]
The Equivalent of Train in Network?
• An aggregate (of flows) is like a train (see the toy sketch below):
– Packets of member flows → carriages
– Sender End Node: assembles the carriages into a train
– Receiver End Node: disassembles the train into carriages
– Intermediate Nodes: only forward the train, but cannot add/remove carriages
– Forwarding (routing) is on a per-train basis, not a per-carriage basis
– Local Train: few hops (physical links)
– Express Train: many hops
[Figure: end nodes A, B, C; legend: member flow, end node, aggregate, intermediate node]
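To make the train analogy concrete, here is a minimal, hypothetical Python sketch of the three node roles; the class names and the dictionary-based packet format are this transcript's illustration, not the paper's design.

from collections import defaultdict

class SenderEndNode:
    """Merges packets of member flows into one aggregate (the train)."""
    def __init__(self, aggregate_id):
        self.aggregate_id = aggregate_id

    def aggregate(self, member_packets):
        # Tag every member-flow packet with the aggregate id; intermediate
        # nodes will route on this tag only (per train, not per carriage).
        return [{"agg": self.aggregate_id, "payload": p} for p in member_packets]

class IntermediateNode:
    """Forwards whole aggregates; never adds or removes carriages."""
    def __init__(self, next_hop_of):
        self.next_hop_of = next_hop_of  # aggregate id -> next hop name

    def forward(self, packet):
        return self.next_hop_of[packet["agg"]]

class ReceiverEndNode:
    """Disassembles the aggregate back into its original member flows."""
    def disaggregate(self, agg_packets):
        flows = defaultdict(list)
        for pkt in agg_packets:
            flows[pkt["payload"]["flow"]].append(pkt["payload"])
        return flows

# Tiny usage example: flows f1 and f2 share the aggregate from A to C via B.
sender = SenderEndNode("A->C")
inter = IntermediateNode({"A->C": "B"})
recv = ReceiverEndNode()
train = sender.aggregate([{"flow": "f1", "data": 1}, {"flow": "f2", "data": 2}])
assert all(inter.forward(pkt) == "B" for pkt in train)
print(recv.disaggregate(train))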
Virtual Link/Topology
• Aggregates with the same sender and receiver end nodes collectively embody a virtual link
• Many virtual links together build up a virtual topology
[Figure: aggregates F1, F2, F3 between end nodes A, B, C; legend: virtual link, end node, aggregate (thickness implies the aggregate's data throughput), intermediate node]
State-of-the-Art: GR-Aggregate
• How to build a virtual link with a hard real-time E2E delay guarantee?
• [SunShin05]: Guaranteed Rate Aggregate (GR-Aggregate)
State-of-the-Art: GR-Aggregate
Guaranteed Rate Server (GR-Server) [Goyal97a]:
A queueing server S is a GR-Server as long as there exists a constant r_f (called the guaranteed rate) for each of its flows f, such that
L(p_f^j) ≤ GRSFunc(f's past packet arrival history, r_f),
where p_f^j is the jth packet of flow f, L(p) is the time when packet p leaves S, GRSFunc is a specific function, and r_f is the guaranteed rate.
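For orientation, the usual concrete form of GRSFunc in the guaranteed-rate literature ([Goyal97a], [Goyal97b]) is a guaranteed-rate clock; the sketch below is that standard form, given as background rather than text taken from these slides.

% Standard guaranteed-rate-clock form of the GR-Server bound (after Goyal et al.),
% shown here as background; the talk itself only names the function "GRSFunc".
\begin{align*}
  GRC(p_f^0) &= 0, \\
  GRC(p_f^j) &= \max\bigl\{ A(p_f^j),\; GRC(p_f^{j-1}) \bigr\} + \frac{l_f^j}{r_f}, \qquad j \ge 1, \\
  L(p_f^j)   &\le GRC(p_f^j) + \beta_S ,
\end{align*}
% where A(p) is the arrival time of packet p, l_f^j is the length of the j-th
% packet of flow f, and \beta_S is a constant depending only on the server S.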
State-of-the-Art: GR-Aggregate
• [Goyal97a] proves WFQ and WF2Q are GR-Servers, with r_f = φ_f C, where φ_f is the weight of flow f (note φ_f ≤ 1) and C is the server output capacity.
• [SunShin05]: GR-Aggregate based Virtual Link:
– Sender End: aggregates the virtual link's member flows into an aggregate F using a GR-Server;
– Intermediate Nodes: each forwards F with a GR-Server at a guaranteed rate R_F, where R_F ≥ ρ_F and ρ_F is F's data throughput;
– Receiver End: disintegrates F into the original flows;
– E2E Delay d ≤ α / R_F + β, where α and β are certain constants.
New Problem
• GR-Aggregate fits Internet traffic well.
• When applied to Cyber-Physical Systems traffic:
– Real-time sensing/actuating aggregate: small data throughput, small E2E delay requirement
– Real-time video aggregate: large data throughput, small E2E delay requirement
– Non-real-time traffic
New Problem
• For the real-time sensing/actuating aggregate:
– Small data throughput, small E2E delay requirement
– GR-Aggregate E2E delay d ≤ α / R_F + β
– Assigning a small guaranteed rate R_F (i.e., R_F = ρ_F) → large E2E delay;
– Assigning a large guaranteed rate R_F (i.e., R_F > ρ_F) → other aggregates' guaranteed rates < their data throughputs (when link capacity is precious).
GR-Aggregate does not address this situation. What will happen? (A numeric illustration follows below.)
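To see the tension numerically, here is a small Python sketch; only the shape of the bound d ≤ α / R_F + β comes from the slides, while the α, β values and the throughput figures are hypothetical.

# All numbers are hypothetical; only the shape of the bound d <= alpha/R_F + beta
# comes from the slides.  It shows why setting R_F = rho_F hurts low-throughput
# aggregates when link capacity is precious.
alpha = 12_000.0    # assumed burst/backlog term, in bits
beta = 0.005        # assumed fixed per-path latency, in seconds

aggregates = {
    "sensing/actuating": 50e3,   # rho_F in bit/s: small throughput, tight deadline
    "video":             5e6,    # rho_F in bit/s: large throughput, tight deadline
}

for name, rho in aggregates.items():
    r_f = rho                    # capacity is precious, so R_F = rho_F
    bound = alpha / r_f + beta
    print(f"{name:18s} R_F = {r_f:>9.0f} bit/s  ->  d <= {bound * 1000:6.1f} ms")

# Output: the sensing/actuating aggregate gets a ~245 ms bound while the video
# aggregate gets ~7.4 ms, i.e. the bound tracks throughput, not urgency.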
Solution Heuristic
• Observation: the purpose of using a GR-Server to provide an E2E delay guarantee is to provide a constant per-hop transmission delay bound.
• As long as we can provide such a bound, we are fine.
Solution Heuristic
• So far we know WFQ and WF2Q are GR-Servers, and if flow f is assigned weight φ_f, it is guaranteed a rate of r_f = φ_f C.
Conventionally, we assign the weight φ_f proportionally to the data throughput, i.e., φ_f C ≥ ρ_f.
• What if we assign arbitrary weights?
Discovery: as long as every flow is token-bucket-constrained and Σ_f ρ_f ≤ C, every flow still has a bounded transmission delay, and there is an algorithm TDB({φ_i}, {l_i^max}, C) to calculate this transmission delay bound Δ_f(l). (A token-bucket conformance check is sketched below.)
To the extreme, we can mimic prioritized preemption by assigning proper weights.
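The "token-bucket-constrained" premise is the standard (σ, ρ) arrival envelope. A minimal Python conformance check is shown below; the function name and interface are this transcript's illustration, not part of the paper.

def token_bucket_conformant(arrivals, sigma, rho):
    """Check whether a packet arrival trace obeys a (sigma, rho) token bucket,
    i.e. over every interval the arriving bits are at most sigma + rho * (t - s).

    arrivals: list of (time_seconds, packet_length_bits), sorted by time.
    sigma:    burst allowance in bits.
    rho:      sustained rate in bits per second.
    """
    bucket = sigma          # start with a full bucket
    last_t = None
    for t, length in arrivals:
        if last_t is not None:
            bucket = min(sigma, bucket + rho * (t - last_t))  # refill, capped at sigma
        if length > bucket:  # not enough tokens: the trace violates the envelope
            return False
        bucket -= length
        last_t = t
    return True

# Example: two 8 kbit packets arriving 50 ms apart, with an 8 kbit burst allowance.
trace = [(0.0, 8_000), (0.05, 8_000)]
print(token_bucket_conformant(trace, sigma=8_000, rho=100_000))  # False: only 5,000 bits refilled
print(token_bucket_conformant(trace, sigma=8_000, rho=200_000))  # True: refill capped at 8,000 bits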
Solution Heuristic: What does arbitrary weight imply?
F1, data rate = 0.1; F2, data rate = 0.4; F3, data rate = 0.5.
Server capacity C = 1, all packet lengths l = 1.
• Data-rate-proportional weights: φ1 = 0.1, φ2 = 0.4, φ3 = 0.5.
[Figure: packet departure timeline over 0–10 s; delay bounds Δ3 = 2, Δ2 = 2.5, Δ1 = 10.]
The transmission delay bound is inverse-proportionally coupled with the data throughput.
• Prioritized weights: φ1 = 0.999, φ2 = 0.000999, φ3 = 0.000001.
[Figure: packet departure timeline over 0–10 s; delay bounds Δ1 = 1, Δ2 = 20/9 ≈ 2.22, Δ3 = 6.]
According to the TDB algorithm, the transmission delay bound is decoupled from data throughput and reflects priority: higher priority maps to a shorter Δ. (A quick arithmetic check follows below.)
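A back-of-the-envelope check of the two weight assignments, in Python. This is plain WFQ finish-tag arithmetic, not the paper's TDB algorithm, and it assumes the server stays busy so the virtual clock advances at roughly C / Σφ = 1.

# Plain WFQ finish-tag arithmetic, NOT the paper's TDB algorithm.  Assuming the
# virtual clock advances at about C / sum(weights) = 1 while the server is busy,
# the first packet of flow i receives a virtual finish tag of roughly l / phi_i.
C, l = 1.0, 1.0                      # capacity and packet length from the slide

weight_assignments = {
    "data-rate proportional": {"F1": 0.1,   "F2": 0.4,      "F3": 0.5},
    "prioritized":            {"F1": 0.999, "F2": 0.000999, "F3": 0.000001},
}

for name, weights in weight_assignments.items():
    print(name)
    for flow, phi in weights.items():
        print(f"  {flow}: naive first-packet finish tag l/phi = {l / phi:,.2f}")

# For the rate-proportional weights the tags (10, 2.5, 2) match the figure's
# delay bounds.  For the prioritized weights the naive tags grossly over-estimate
# the low-priority flows (1,000,000 for F3 vs. the figure's bound of 6), which is
# why the TDB algorithm exploits the token-bucket constraints to obtain the much
# tighter, throughput-decoupled bounds.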
Solution: GD-Aggregate
Proposal: Guaranteed Delay Server (GD-Server):
A queueing server S is a GD-Server as long as there exists a non-negative, monotonically non-decreasing function Δ_f(l) (called the guaranteed delay function) for each of its flows f, such that
L(p_f^j) ≤ GDSFunc(f's past packet arrival history, Δ_f),
where p_f^j is the jth packet of flow f, L(p) is the time when packet p leaves S, GDSFunc is a specific function, and Δ_f(l) is the guaranteed delay function.
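By analogy with the guaranteed-rate clock given earlier, one plausible shape for GDSFunc replaces the per-packet rate term l_f^j / r_f with the guaranteed delay function. This is an editorial guess offered for orientation only; the paper's exact definition may differ.

% A plausible guaranteed-delay-clock form, stated only by analogy with the GR
% clock above; the paper's actual GDSFunc may differ.
\begin{align*}
  GDC(p_f^0) &= 0, \\
  GDC(p_f^j) &= \max\bigl\{ A(p_f^j),\; GDC(p_f^{j-1}) \bigr\} + \Delta_f\!\bigl(l_f^j\bigr), \qquad j \ge 1, \\
  L(p_f^j)   &\le GDC(p_f^j) + \beta_S .
\end{align*}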
Solution: GD-Aggregate
• Discovery: if each ingress flow/aggregate is token-bucket-constrained, WFQ and WF2Q servers are GD-Servers.
• Design: we modified the design of Sun and Shin's GR-Aggregate into the GD-Aggregate, (mainly) by changing GR-Servers to GD-Servers.
Solution: GD-Aggregate
• GD-Aggregate Features:
– E2E Delay d ≤ Σ_k Δ_k(l^max) + β, where Δ_k(l) is the guaranteed delay function at the kth hop and l^max is the maximal packet length.
– We found a way to assign weights to mimic priority so that:
• An aggregate with small data throughput can have a small Δ_k(l),
• and hence a small E2E delay guarantee.
• No waste of link capacity.
• Δ_k(l) is a linear function of the packet length l.
• Each priority's capacity and delay guarantee can be planned with simple optimization tools (a toy evaluation of the bound follows below).
(8 Theorems, 4 Corollaries, 14 Lemmas)
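A toy evaluation of that bound in Python, using the slide's statement that each Δ_k(l) is linear in the packet length l; all numeric values are hypothetical.

# Toy evaluation of the GD-Aggregate E2E bound d <= sum_k Delta_k(l_max) + beta,
# assuming (as the slide states) each per-hop guaranteed delay function is linear
# in the packet length: Delta_k(l) = a_k + b_k * l.  All numbers are hypothetical.
l_max = 12_000          # maximal packet length in bits (assumed)
beta = 0.002            # constant term in seconds (assumed)

# (a_k in seconds, b_k in seconds per bit) for each hop on the virtual link.
hops = [
    (0.0005, 1.0e-7),
    (0.0010, 2.0e-7),
    (0.0005, 1.0e-7),
]

d_bound = sum(a + b * l_max for a, b in hops) + beta
print(f"E2E delay bound: {d_bound * 1000:.2f} ms")   # linear and closed-form, so it is
                                                     # easy to feed into admission control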
Evaluation: Application Background
• Underground Mining: A Typical Cyber-Physical Systems Application
[Figure: plan of an underground coal mine, roughly 6000 m × 3000 m, with 300 m wide panels (Panel 1, 2, 3), the active mining area (face), coal seams, and compass directions.]
Underground mines often cover huge areas, and are dangerous.
Need to replace all human workers with remotely controlled robots/vehicles.
Vision: humans remotely control the robots/vehicles from an above-ground control room, via a wired WAN backbone and wireless LANs.
[Figure annotations: wireless LAN base stations (a.k.a. access points, in the case of IEEE 802.11); wireline physical links, part of the underground mining RTE-WAN; the above-ground remote control room; a virtual link (which may consist of several GR/GD-aggregates) with its two end nodes A and B.]
Evaluation: Traffic Feature
• Remote underground mining creates all typical CPS traffic (aggregates)
• Virtual link AB may consist of the following aggregates (see the admission sketch below):
– F1: tele-robotic sensing/actuating aggregate — small data throughput, short hard real-time E2E delay requirement (≤ 50 ms)
– F2: tele-robotic video aggregate — large data throughput, short hard real-time E2E delay requirement (≤ 50 ms)
– F3: non-real-time traffic aggregate — tolerates seconds of E2E delay.
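A hypothetical admission/priority sketch for virtual link AB in Python: the 50 ms deadlines come from the slide, while the throughput figures and link capacity are invented for illustration.

# Hypothetical admission sketch for virtual link AB.  The 50 ms deadlines for F1
# and F2 come from the slide; the throughputs and capacity are made up, chosen so
# the link is "precious" (fully allocated).
link_capacity = 2_000_000          # bit/s

aggregates = [
    # (name, throughput bit/s, E2E deadline in seconds, GD priority: lower = more urgent)
    ("F1 sensing/actuating",    50_000, 0.050, 0),
    ("F2 video",             1_800_000, 0.050, 1),
    ("F3 non-real-time",       150_000, None,  2),
]

total = sum(rho for _, rho, _, _ in aggregates)
assert total <= link_capacity, "aggregate throughputs must fit the link"

# With GD-Aggregates, priority (and hence the delay guarantee) is assigned
# independently of throughput, so the tiny F1 can still be the most urgent.
for name, rho, deadline, prio in sorted(aggregates, key=lambda a: a[3]):
    d = "none" if deadline is None else f"{deadline * 1000:.0f} ms"
    print(f"priority {prio}: {name:24s} rho = {rho:>9,} bit/s, deadline = {d}")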
Evaluation: Result
[Figure: the aggregates' data throughput (kbps).]
When link capacity C is precious, i.e., the total data throughput of F1, F2, and F3 equals C:
• GR-Aggregate has to allocate guaranteed rates proportional to data throughput.
• GD-Aggregate can still give F1 the highest priority.
Related Work
• Overlay Network: soft real-time, statistical guarantees
• DiffServ: FIFO, poor performance for bursty traffic
• Real-Time Virtual Machine: still an open problem, especially regarding mutual exclusion and closed-form schedulability formulae
• [Georgiadis96] also found the decoupling technique, but in a fluid model and without aggregation
• [Goyal97b]: per-packet guaranteed rate, known a priori or referring to the minimum rate; does not address aggregation
Conclusion
GD-Aggregate:
• Supports flow aggregation and E2E delay guarantee
• A tool to build hard real-time virtual link/topology
• Decouples E2E delay guarantee from data throughput
• Supports priority
• Simple linear closed-form formulae for analysis and
admission control
• Can be planned with simple optimization tools
References
[Fisher04] B. Fisher et al., “Seeing, hearing, and touching: Putting it all
together,” SIGGRAPH'04 Course, 2004.
[Georgiadis96] L. Georgiadis et al., “Efficient network QoS provisioning
based on per node traffic shaping,” IEEE/ACM Trans. on Networking, vol.
4, no. 4, August 1996.
[Goyal97a] P. Goyal et al., “Determining end-to-end delay bounds in
heterogeneous networks,” Multimedia Systems, no. 5, pp. 157-163, 1997.
[Goyal97b] P. Goyal and H. M. Vin, “Generalized guaranteed rate scheduling
algorithms: A framework,” IEEE/ACM Trans. on Networking, vol. 5, no. 4,
pp. 561-571, August 1997.
[Hartman02] H. L. Hartman and J. M. Mutmansky, Introductory Mining
Engineering (2nd Ed.). Wiley, August 2002.
[SunShin05] W. Sun and K. G. Shin, “End-to-end delay bounds for traffic
aggregates under guaranteed-rate scheduling algorithms,” IEEE/ACM
Trans. on Networking, vol. 13, no. 5, pp. 1188-1201, October 2005.
Thank You!