Virtual Slice Management and Resource
Virtual Network Embedding Survey
To map virtual networks onto physical network
resources.
Lilin Zhang
A starter example
Substrate Network (SN) node capacity is limited (indicated as integers in square boxes);
SN link capacity is infinite;
VNRs (Virtual Network Requests) arrive over time, each with its VN node demands indicated;
After each VNR's arrival/departure, the remaining SN node capacity is updated (at t1, t2, t3, and t4);
After the departure of VNR1 and the arrival of VNR3, remapping is performed (at time t4) to make room for VNR3.
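The capacity bookkeeping in this starter example can be sketched as follows; the class and function names (Vnr, arrive, depart) and the capacity values are illustrative, not from the slides:

```python
# Sketch of the starter example's bookkeeping: SN node capacities shrink
# on VNR arrival and are restored on departure. All names/values are
# illustrative.

class Vnr:
    def __init__(self, name, demands):
        # demands: {substrate node: units of node capacity requested}
        self.name = name
        self.demands = demands

def arrive(residual, vnr):
    """Admit a VNR only if every mapped SN node has enough residual capacity."""
    if any(residual[n] < d for n, d in vnr.demands.items()):
        return False          # not enough room; a remapping could be tried
    for n, d in vnr.demands.items():
        residual[n] -= d
    return True

def depart(residual, vnr):
    """Release the capacity a departing VNR was holding."""
    for n, d in vnr.demands.items():
        residual[n] += d

residual = {"A": 10, "B": 5, "C": 8}           # SN node capacities at t0
vnr1 = Vnr("VNR1", {"A": 4, "B": 3})
vnr2 = Vnr("VNR2", {"A": 6, "C": 2})

assert arrive(residual, vnr1)                  # t1: A=6, B=2, C=8
assert arrive(residual, vnr2)                  # t2: A=0, B=2, C=6
depart(residual, vnr1)                         # t3: A=4, B=5, C=6
assert residual == {"A": 4, "B": 5, "C": 6}
```

A VNR whose demand exceeds some node's residual capacity is rejected, which is the point where the slides' remapping step would be tried instead.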
Complexity of VNE
VNE, with constraints on virtual nodes and
virtual links: NP-hard.
Reduced to the NP-hard multi-way separator problem.
D. G. Andersen, "Theoretical Approaches to Node Assignment", CMU Computer Science Department, Paper #86, 2002. http://repository.cmu.edu/compsci/86/
VLinkE, with all virtual nodes already mapped,
to optimally allocate a set of virtual links to
single substrate paths: NP-hard.
Becomes an offline load-balancing routing problem, where the source and destination are the ingress and egress nodes of each Vlink and each flow has unit demand.
Reduced to the NP-hard unsplittable flow problem (i.e., integer flow assignment in multi-source/sink flow
network).
J. Kleinberg, "Approximation algorithms for disjoint paths problems", Ph.D. dissertation, MIT, 1996.
S. Kolliopoulos and C. Stein, "Improved approximation algorithms for unsplittable flow problems", in Foundations of Computer Science, 1997.
Naive solution to VNE
A naive way to perform VN assignment is to treat Vnode mapping and Vlink
mapping as two independent sub-problems and solve them sequentially.
Cons: the optimality of the Vnode mapping and the optimality of the Vlink mapping cannot be achieved simultaneously via the naive solution.
e.g., two ways of embedding 6 pairs of connected Vnodes onto three SN
nodes:
Optimal link embedding - i.e., the SN links are least loaded, but the centre SN node is over-provisioned with 6 Vnodes.
Optimal node embedding - i.e., the SN nodes are least loaded, but both SN links carry 4 Vlinks, more than the 3-Vlink scenario on the left.
Y. Zhu, M. Ammar, "Algorithms for Assigning Substrate Network Resources to Virtual Network Components", INFOCOM 2006.
INFOCOM 06
To counter the cons of the naive solution, this study considers both SN node and SN link loads while embedding each new VNR.
It assumes infinite capacity of substrate nodes and links to avoid VNR admission control, i.e., the SN keeps accepting arriving VNRs, and the research focuses on how to embed them optimally.
Introduce a new metric to measure the performance of an embedding - the node/link stress:
Node stress (global graph metric): the substrate node stress at time t is the number of virtual nodes assigned to that substrate node.
Link stress (global graph metric): the substrate link stress at time t is the number of virtual links whose substrate path passes through that substrate link.
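Both stress metrics can be computed directly from a current mapping; the data structures below (a Vnode-to-SN-node dict and a Vlink-to-substrate-path dict) are an assumed representation, not from the paper:

```python
# Node and link stress (INFOCOM'06 metrics) computed from a mapping.
# node_map: Vnode -> SN node; path_map: Vlink -> list of SN links,
# each SN link written as a frozenset of its two endpoints.

from collections import Counter

def node_stress(node_map):
    """S_N(n): number of virtual nodes currently assigned to SN node n."""
    return Counter(node_map.values())

def link_stress(path_map):
    """S_L(l): number of virtual links whose substrate path crosses l."""
    stress = Counter()
    for sn_path in path_map.values():
        for sn_link in sn_path:
            stress[sn_link] += 1
    return stress

node_map = {"a": "X", "b": "Y", "c": "X", "d": "Z"}   # two Vnodes share X
path_map = {
    ("a", "b"): [frozenset({"X", "Y"})],
    ("c", "d"): [frozenset({"X", "Z"})],
    ("b", "d"): [frozenset({"X", "Y"}), frozenset({"X", "Z"})],
}
assert node_stress(node_map)["X"] == 2
assert link_stress(path_map)[frozenset({"X", "Y"})] == 2
```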
INFOCOM 06 Cont'1
The VN assignment for the i-th VN is formulated as minimizing a weighted combination of the node stress and the link stress, evaluated at the time instant immediately after the i-th VNR arrival. The weights trade off whether the optimization leans toward node-efficiency or link-efficiency, and are determined by the specific application scenario (the formula itself is elided in this transcript).
Two solutions:
VNA-I (V1 - basic VN assignment, taking the entire VN as one entity)
It follows a sequential process: (1) select a cluster of SN nodes as embedding candidates; (2) perform a one-to-one mapping for the Vnodes; (3) run a shortest-distance path algorithm to determine the Vlinks.
In Step (1), the first SN node is identified by a rank NR that multiplies the SN node stress with the stress of its direct links; all subsequent SN nodes are additionally ranked by their distance to all previously assigned Vnodes, i.e., assuming a "fully connected VN" regardless of the actual VN topology.
In Step (2), the one-to-one mapping is done such that Vnodes of higher degree are mapped to SN nodes of higher NR.
This solution aims at minimizing the maximum node stress (node-opt).
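Step (2)'s rank-based one-to-one mapping can be sketched as below; the NR scores here are hypothetical placeholders for the paper's ranking function:

```python
# Step (2) of VNA-I (V1): map Vnodes in decreasing degree order onto the
# candidate SN nodes in decreasing NR order. The NR values are made up;
# in the paper NR combines node stress with adjacent link stress.

def map_by_rank(vnode_degree, candidate_nr):
    """One-to-one map: i-th highest-degree Vnode -> i-th highest-NR SN node."""
    vnodes = sorted(vnode_degree, key=vnode_degree.get, reverse=True)
    sn = sorted(candidate_nr, key=candidate_nr.get, reverse=True)
    return dict(zip(vnodes, sn))

vnode_degree = {"a": 3, "b": 1, "c": 2}          # degrees in the VN
candidate_nr = {"X": 0.9, "Y": 0.2, "Z": 0.5}    # hypothetical NR scores
assert map_by_rank(vnode_degree, candidate_nr) == {"a": "X", "c": "Z", "b": "Y"}
```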
INFOCOM 06 Cont'2
VNA-I (V2 - subVN: subdivide the entire VN into a set of star sub-VNs)
(1) Divide and conquer.
(2) Does not assume a "full graph", but uses the actual VN topology in the computation.
For an unconstrained subVN, perform VNA-I (V1).
For a centre-constrained subVN, perform VNA-I (V1) and select the subsequent SN nodes using a modified rank (elided in this transcript).
For a centre-unconstrained subVN, perform VNA-I (V1).
VNA-I (V3 - link-opt, to optimize link stress)
Ranks candidates by a summation of the SN node's direct link stress, and by a ratio of the sum of the SN node's distances to all previously assigned Vnodes (formulas elided in this transcript).
VNA-I (V4 - adaptive)
Iterate between the node-opt solution and the link-opt solution based on the network condition: upon the arrival of the i-th VNR, if a threshold condition (elided in this transcript) holds, do link-opt; otherwise, do node-opt.
INFOCOM 06 Cont'3
VNA-II (with reconfiguration)
Proposes a selective reconfiguration scheme: reconfigure critical VNs in order to reduce the maximum link/node stress.
Example with four VNs: the 2-node diamond VN is non-critical; all other VNs are critical (the maximum link stress is 3).
(a) the embedding before reconfiguration; (b) the embedding if one critical VN is reconfigured; (c) the embedding if the non-critical VN is reconfigured (the max link stress remains the same).
INFOCOM 06 Cont'4
VNA-II (with reconfiguration)
Global marking: identify critical SN nodes and links; all VNs that use these nodes or links are marked for reconfiguration.
Per-VN reconfiguration: use VNA-I (V1, V2, or V3).
The paper does not prove whether the "per VN" algorithm converges.
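The global marking step can be sketched as follows, taking "critical" to mean carrying the maximum stress (which corresponds to a reconfig threshold of 0); the usage data is illustrative:

```python
# Global marking sketch for VNA-II: find the most-stressed SN nodes/links
# and mark every VN that touches them for reconfiguration. Data is made up.

def mark_critical_vns(vn_usage, node_stress, link_stress):
    """vn_usage: VN -> (set of SN nodes used, set of SN links used)."""
    max_ns = max(node_stress.values())
    max_ls = max(link_stress.values())
    hot_nodes = {n for n, s in node_stress.items() if s == max_ns}
    hot_links = {l for l, s in link_stress.items() if s == max_ls}
    return {vn for vn, (nodes, links) in vn_usage.items()
            if nodes & hot_nodes or links & hot_links}

node_stress = {"X": 3, "Y": 1}
link_stress = {("X", "Y"): 2, ("Y", "Z"): 1}
vn_usage = {
    "VN1": ({"X"}, set()),            # uses the hottest node -> marked
    "VN2": ({"Y"}, {("Y", "Z")}),     # avoids the hot spots -> not marked
}
assert mark_critical_vns(vn_usage, node_stress, link_stress) == {"VN1"}
```

Each marked VN would then be re-embedded with one of the VNA-I variants, per the slide above.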
INFOCOM 06 Cont'5
Simulation setting (non-blocking VNE)
SN network: 100 nodes, 316 links, random topology generated by GT-ITM [1996].
VN networks (offline setting, i.e., the VNR arrivals are known beforehand):
arrive in a Poisson process (the arrival rate is elided in this transcript);
lifetime exponentially distributed (the average, in time units, is elided);
size uniformly distributed between 2 and 10;
random topology with link probability 0.5.
Experiment #1: effects of VNR load
Changing the VN arrival rate, compare the VN assignments of VNA-I (V1), VNA-I (subVN), VNA-I (adaptive), and least-load (always select the least stressed SN node for each Vnode, and connect the Vnodes using a shortest-distance path algorithm).
INFOCOM 06 Cont'6
Experiment #2: VN assignment without reconfiguration - benefits of the subVN scheme
Changing the link probability in the VN topology, compare the max link stress of VNA-I (V1 - basic) vs. VNA-I (V2 - subVN).
The subVN algorithm outperforms VNA-I (V1 - basic).
The difference decreases as the VN topology becomes denser.
The reason is that the VNA-I (V1 - basic) scheme assumes the VN is a fully connected graph, whereas VNA-I (V2 - subVN) makes no such assumption: it uses the actual VN topology and splits it into a set of star subgraphs.
INFOCOM 06 Cont'7
Experiment #3: VN assignment without reconfiguration - the effectiveness of the adaptive optimization strategy
Change the proportion of link-opt versus node-opt used in VNA-I (V4 - adaptive).
(Plot axes: ratio of maximum node stress vs. ratio of maximum link stress.)
The VN assignment algorithms with different weight values form distinctive clusters.
INFOCOM 06 Cont'8
Experiment #4: VN assignment with reconfiguration - varying the reconfig threshold
The reconfig threshold controls the fraction of VNs that are allowed to reconfigure: a threshold of 0 means only the VNs using the highest-stressed substrate nodes/links may reconfigure; a threshold of 1 means all VNs may reconfigure.
In (c), the jump in the VN reconfig ratio between thresholds [0.4, 0.5] indicates that if the threshold allows 50% of the VNs to reconfigure, the actual reconfig ratio jumps dramatically from 40% to 90%. The phenomenon shows that 50% of the VNs sit on the most lightly loaded SN links (though not necessarily on lightly loaded SN nodes).
This explains the strange bump in (b) between thresholds [0.4, 0.5]: since the actual reconfig ratio is 90% at threshold 0.5, a majority of the VNs are reconfigured, and hence the max link stress drops over the segment where the threshold is in [0.4, 0.5].
INFOCOM 06 Cont'9
Experiment #5: VN assignment with reconfiguration - the effects of the reconfiguration period
From (a) and (b), the performance of the VNA-II adaptive scheme is in the middle for maximum node/link stress.
From (c), the less frequent the reconfiguration events, the lower the cost (decreasing nearly exponentially with the reconfiguration period).
INFOCOM 06 Cont'10
Experiment #6: VN assignment on different network settings - effect of different SN networks
The denser the SN network, the better the performance all the VNA schemes achieve.
The absolute difference between the VNA schemes diminishes as the SN network becomes denser.
INFOCOM 06 Cont'11
Experiment #7: VN assignment on different network settings – effects of the
size of VN
INFOCOM 06 Cont'12
Experiment #8: VN assignment on different network settings - multidomain SN
topology
Setting: 124 nodes, 510 links, a 4-node transit domain, each transit node attached to three 10-node stub domains.
The VNA schemes perform better than the least-load scheme: the lowest-stress nodes may be scattered across different domains, and the least-load algorithm will select them for a single VNR, leading to highly loaded transit-stub links.
In all previous papers before 2013, there is a common assumption that the SN network is fault-free, i.e., the
underlying InP network remains operational at all times.
The fault-free assumption is removed in the following paper, which considers a variant of VNE in the scenario where the SN network is error-prone (more specifically, the assumption is that the underlying SN network experiences single link failures following a certain distribution).
M. R. Rahman, R. Boutaba, "SVNE: Survivable Virtual Network Embedding algorithms for network virtualization", IEEE Transactions on Network and Service Management, 2013.
The protection against single SN link failure is to dedicate a certain percentage of bandwidth on each SN link for
backup.
IEEE Transactions 2013
VNE in the presence of arbitrary substrate node and link failures.
M. R. Rahman, R. Boutaba, "SVNE: Survivable Virtual Network Embedding algorithms for network virtualization", IEEE Transactions on Network and Service Management, 2013.
The paper deals with single substrate link failures, not multiple simultaneous link failures, due to their low probability of occurrence.
It does not deal with node failures: since any node failure manifests as failures of the adjacent links, addressing link failures is the priority.
Related work on VNE
Only considers the offline version of VNE,
or assumes infinite capacity of SN nodes and links to avoid admission control;
all commonly assume the SN network to be operational at all times.
Related work on survivability in IP-over-WDM
The design of IP-over-WDM is an offline problem.
The objective is to ensure that all nodes remain connected in the presence of failures.
Unique in SVNE
The objective is to ensure that all Vlinks remain intact.
Maximize the long-term revenue of the InP, and minimize the long-term penalty of the InP due to single SN link failures.
IEEE Transactions 2013 Cont'1
Solution to online SVNE (two decoupled phases: node embedding and link embedding)
For each incoming VNR, the node embedding heuristic is greedy.
After the Vnodes are embedded, the Vlink embedding is done via path selection.
SVNE model and formulation
SN network: a weighted graph; each SN node has cpu and location attributes, and each SN link has two endpoints and a bandwidth attribute.
VN: a weighted graph; each Vnode has cpu and location requirements, and each Vlink has bandwidth and delay requirements.
Resource consumption: finite SN node and link resources. A VNR is not accepted unless there are adequate resources to serve it (i.e., a naive admission control scheme that progressively consumes the whole SN network). The residual capacity of an SN node is its cpu capacity minus the cpu allocated to the Vnodes mapped onto it; the residual capacity of an SN link is its bandwidth minus the bandwidth allocated to the Vlink paths crossing it.
Both capacity values are updated (1) after successfully embedding a VNR; (2) at each departure of a VNR; (3) at each SN link failure occurrence; and (4) after its repair.
For each SN link, a percentage of the bandwidth is dedicated to primary VNE and the remaining percentage is reserved for backup use.
IEEE Transactions 2013 Cont'2
SVNE model and formulation
Revenue function: proportional to the VNR's bandwidth and cpu usage over its lifetime; C1 and C2 are weight factors balancing the revenue from bandwidth versus cpu.
Penalty function: defined over the set of Vlinks affected by the failure of an SN link s. MTTR(s) is the mean time to repair the failed SN link s, and db(v) is the difference between Vlink v's requested bandwidth and the actual bandwidth supplied by the InP.
Objective function: maximize the long-term revenue over the VNR sequence, and minimize the long-term penalty over the sequence of failed SN links (the formulas are elided in this transcript).
Restoration model
Local link restoration: for each SN path, each SN link in the path has a backup detour.
End-to-end path restoration: a survivable flow from SN node u to v consists of a primary flow of value f and a link-disjoint secondary flow of the same value f.
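A backup detour for local link restoration can be computed with a plain BFS that avoids the protected link itself; the adjacency data is illustrative:

```python
# Backup detour sketch for local link restoration: for SN link (u, v),
# the detour is a path from u to v that avoids the link (u, v) itself.
# Plain BFS on an adjacency-set graph; topology is made up.

from collections import deque

def detour(adj, u, v):
    """Shortest u->v path avoiding the direct link (u, v), or None."""
    parent = {u: None}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if (x, y) in ((u, v), (v, u)) or y in parent:
                continue      # skip the protected link and visited nodes
            parent[y] = x
            if y == v:        # reconstruct the detour from the parents
                path = [v]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
            q.append(y)
    return None               # no detour exists for this link

adj = {"u": {"v", "a"}, "v": {"u", "b"}, "a": {"u", "b"}, "b": {"a", "v"}}
assert detour(adj, "u", "v") == ["u", "a", "b", "v"]
```

If `detour` returns None, the link has no protection, which is exactly the case the reserved backup bandwidth cannot cover.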
IEEE Transactions 2013 Cont'3
Heuristics for SVNE
Proactive policy (provision redundant resources, i.e., for each Vlink provision a primary flow and a secondary flow).
Objective: minimize the total primary and secondary bandwidth allocation, subject to:
a primary capacity constraint;
a secondary capacity constraint;
a primary bandwidth constraint;
a secondary bandwidth constraint;
a link-disjointness constraint between each Vlink's primary and secondary paths.
Variables: a link-path indicator (1 if SN link s lies on simple path p, 0 otherwise) and the primary flow value on each simple path p for each virtual link v (the LP formulas are elided in this transcript).
IEEE Transactions 2013 Cont'4
Heuristics for SVNE
Proactive policy, realized as two sequential linear programs:
Objective #1: minimize the SN usage for primary flows, subject to the primary capacity constraint and the primary bandwidth constraint.
Objective #2: minimize the SN usage for secondary flows plus the cost of violating Vlink bandwidth requirements, subject to the secondary capacity and bandwidth constraints.
The algorithm first assigns bandwidth for the primary flows and marks the SN links used; a boolean variable per SN link (1 if the link has been used for a primary flow) keeps track of them and enforces link-disjointness between the primary and secondary flows. Marking an SN link is equivalent to deleting it while assigning bandwidth for the backup flows.
The algorithm is online, since it finds an embedding for each new VNR as it arrives; it is therefore expected to perform worse than an optimal offline algorithm.
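The mark-then-delete idea can be sketched with BFS shortest paths standing in for the two LPs; the topology and names are illustrative:

```python
# Two-phase sketch of the proactive policy: route the primary flow, mark
# the SN links it uses (the per-link boolean), then route the backup on
# the graph with the marked links deleted, which enforces
# link-disjointness. BFS stands in for the LPs; data is made up.

from collections import deque

def bfs_path(adj, src, dst, banned):
    parent = {src: None}
    q = deque([src])
    while q:
        x = q.popleft()
        for y in sorted(adj[x]):
            edge = frozenset({x, y})
            if edge in banned or y in parent:
                continue
            parent[y] = x
            q.append(y)
    if dst not in parent:
        return None
    path, n = [], dst
    while n is not None:
        path.append(n)
        n = parent[n]
    return path[::-1]

def survivable_route(adj, src, dst):
    primary = bfs_path(adj, src, dst, banned=set())
    marked = {frozenset(e) for e in zip(primary, primary[1:])}  # used links
    backup = bfs_path(adj, src, dst, banned=marked)             # deleted here
    return primary, backup

adj = {"s": {"a", "b"}, "a": {"s", "t"}, "b": {"s", "t"}, "t": {"a", "b"}}
primary, backup = survivable_route(adj, "s", "t")
assert primary == ["s", "a", "t"] and backup == ["s", "b", "t"]
```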
IEEE Transactions 2013 Cont'5
Heuristics for SVNE
Hybrid policy:
Before any VNR arrives, the InP computes a backup detour for each SN link.
Upon VNR arrival, the InP first embeds the Vnodes using existing heuristics, then embeds the Vlinks using a multi-commodity flow algorithm. Objective: minimize the amount of bandwidth allocated on the paths chosen for each Vlink, subject to the primary capacity and primary bandwidth constraints.
When an SN link fails, the InP reroutes all affected flows along their backup detours. Objective: minimize the amount of rerouted bandwidth on the detours, subject to the secondary capacity constraint and a recovery constraint (the LP formulas are elided in this transcript).
IEEE Transactions 2013 Cont'6
Heuristics for node embedding
Greedy node embedding: solve for the SN node x maximizing H(x), the product of the residual cpu capacity of x and the sum of the residual bandwidth on its adjacent links; map the Vnode to x and iterate.
Heuristics for path selection
D-ViNE algorithm (a node embedding algorithm based on mixed integer programming, not elaborated in this paper; it refers to the authors' 2009 INFOCOM paper).
k-shortest-path algorithm (an extension of Dijkstra's): static path selection - the path for each Vlink is set once and remains constant during the lifetime of the VNR.
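The greedy ranking can be sketched as below; H(x) is taken, per the slide, as the product of x's residual cpu and the residual bandwidth summed over x's adjacent links, and all data is illustrative:

```python
# Greedy node embedding sketch: rank each feasible SN node x by
# H(x) = residual_cpu(x) * sum of residual bandwidth on x's adjacent
# links, map the next Vnode to the argmax, deduct its cpu demand, iterate.

def h_value(x, cpu, bw, adj):
    return cpu[x] * sum(bw[frozenset({x, y})] for y in adj[x])

def greedy_embed(vnode_cpu, cpu, bw, adj):
    mapping = {}
    for v, demand in sorted(vnode_cpu.items(), key=lambda kv: -kv[1]):
        feasible = [x for x in cpu
                    if cpu[x] >= demand and x not in mapping.values()]
        if not feasible:
            return None                    # reject the VNR (admission control)
        best = max(feasible, key=lambda x: h_value(x, cpu, bw, adj))
        mapping[v] = best
        cpu[best] -= demand                # consume residual cpu
    return mapping

adj = {"X": {"Y"}, "Y": {"X", "Z"}, "Z": {"Y"}}
cpu = {"X": 10, "Y": 8, "Z": 4}
bw = {frozenset({"X", "Y"}): 5, frozenset({"Y", "Z"}): 1}
mapping = greedy_embed({"a": 6, "b": 3}, cpu, bw, adj)
assert mapping == {"a": "X", "b": "Y"}     # H(X)=50 beats H(Y)=48 for "a"
```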
Performance Evaluation
Simulation environment
Random-graph network topology; Poisson process for VNR arrivals; Poisson process for single SN link failures.
GNU Linear Programming Kit (glpk) to solve all LPs, on Ubuntu 12.04 (hosted on Windows 7); Intel i5, 8 GB RAM.
SN network: 50 nodes (p = 0.5); cpu and bandwidth are limited (uniformly distributed between 50 and 100).
VNR network size is a uniform r.v. between 2 and 20 (p = 0.5 fixed); the bandwidth requirement is a uniform r.v. between 0 and 50; the penalty for a Vlink v is a uniform r.v. between 2 and 15. One model parameter (its symbol is elided in this transcript) is varied in the simulation.
IEEE Transactions 2013 Cont'7
Performance Evaluation
Business profit.
BLIND algorithm: recomputes a new link embedding for each VN affected by the SN link failure.
Proactive algorithm: provisions redundant resources; for each Vlink it provisions a primary flow and a secondary flow.
Hybrid algorithm: first computes a backup detour for each SN link, embeds the VNR via the decoupled node/link mapping, and reroutes onto the backup detour upon a single SN link failure.
The plots vary the percentage of each SN link reserved for primary flow allocation, and the ratio of the SN link failure arrival rate to the VNR arrival rate.
IEEE Transactions 2013 Cont'8
Performance Evaluation
Acceptance ratio.
The acceptance ratio is calculated from the number of VNRs admitted and the number of failed VNs; e.g., if an accepted Vlink is embedded on an SN link that later fails, the VN is counted as a failed request.
IEEE Transactions 2013 Cont'9
Performance Evaluation
Responsiveness to failures
Performance Evaluation
Bandwidth efficiency
Proactive algorithm: pre-reserves additional BW for each Vlink during the embedding phase.
Hybrid algorithm: does not pre-reserve BW during embedding, but pre-selects a detour for each SN link and only allocates the BW when an SN link failure occurs.
IEEE Transactions 2013 Cont'10
Performance Evaluation
Specific VN topology
The relative performance is unchanged for two specific VN topologies – Hub-and-spoke
and Mesh network: Hybrid algorithm > Proactive algorithm > Blind algorithm.
IEEE Transactions 2013 Cont'11
Performance Evaluation
Effect of k (the number of shortest paths to be computed)
The highest profit is achieved by the hybrid policy with the D-ViNE node embedding algorithm.
The insight is that the performance metrics as functions of k are affected by the selected node embedding scheme.
The last paper (INFOCOM 2006) addresses an online VNE problem, i.e., the VNRs arrive over time.
It proposed a new metric - the SN link/node stress - and optimizes a linear combination of link stress and node stress in its objective. Two kinds of algorithms are proposed: VNE without reconfiguration and VNE with reconfiguration. The latter needs global knowledge of the SN network topology and the embedding loads.
I. Houidi, W. Louati, and D. Zeghlache, "A distributed virtual network mapping algorithm", IEEE ICC, 2008.
The contribution of the next paper is to decentralize the VN embedding among the SN nodes. In order to exchange local information, a VN mapping protocol is proposed for efficient message passing between the agent-based SN nodes.
ICC08
One objective: to efficiently assign VNs to specific SN nodes and links in a distributed, heuristic manner, while minimizing the network cost.
Two assumptions:
a. the SN network resources (e.g., CPU, bandwidth) are unlimited, so all VN requests can be accepted and handled;
b. all VN requests are defined and known in advance (i.e., an offline problem).
ICC08 Cont' 1
"Preprocessing" the VN topology
Decompose the VN into star-based subgraphs (the star decomposition is the same as the one proposed in INFOCOM 2006):
Find the highest-capacity Vnode as the center of the star.
All of its direct neighbours in the VN are the leaves.
Remove this star (center and leaves) from the VN, and repeat to find the next star.
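The decomposition steps above can be sketched as follows; the capacities and topology are illustrative:

```python
# Star decomposition (used by both the INFOCOM'06 subVN variant and
# ICC'08): repeatedly pick the highest-capacity remaining Vnode as a star
# center, take its remaining neighbours as leaves, remove them, repeat.

def star_decompose(adj, capacity):
    remaining = set(adj)
    stars = []
    while remaining:
        center = max(remaining, key=capacity.get)
        leaves = (adj[center] & remaining) - {center}
        stars.append((center, leaves))
        remaining -= {center} | leaves      # remove the whole star
    return stars

adj = {
    "a": {"b", "c"}, "b": {"a"}, "c": {"a", "d"},
    "d": {"c", "e"}, "e": {"d"},
}
capacity = {"a": 9, "b": 2, "c": 5, "d": 7, "e": 1}
assert star_decompose(adj, capacity) == [("a", {"b", "c"}), ("d", {"e"})]
```

Note that every Vnode ends up in exactly one star, which is what lets the stars be mapped one at a time.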
VN mapping algorithm
For each star of the VN decomposition, find the SN node with the maximum available resources and map the star center to it.
Identify the set of SN nodes to act as the leaf nodes of the star, based on a shortest-path algorithm and the capacities of the SN nodes.
Remove the SN nodes and paths assigned to this star.
Move on to the next star of the VN decomposition, and find the next SN node for its center.
VN mapping protocol
The VN mapping is distributed: each SN node designated as a star center is responsible for selecting and mapping its own star of the decomposition. All the star-center SN nodes communicate and interact with each other to complete the VN mapping.
Protocol messages: MSG (exchange node capacities among SN nodes), START, NOTIFY, NEXT, STOP.
ICC08 Cont' 2
Distributed VN mapping algorithm
1) Decompose the VN topology into star-like clusters.
2) Process one star cluster at a time.
3) Each SN node keeps an up-to-date sorted list of all available SN nodes and their current capacities.
4) Find the SN node of maximum capacity (the root) to be mapped to the center of the star cluster.
5) Use a shortest-path algorithm to identify the maximum-capacity SN nodes connected to the root.
6) The Vnode of higher capacity demand is mapped to the SN node of higher available capacity.
7) Upon finishing the mapping of the current star cluster, the root SN node sends a NEXT msg to all SN nodes.
8) The next root node is then found, i.e., repeat from (3).
ICC08 Cont' 3
Performance evaluation platform
GRID5000: a real SN network emulator (different topologies available).
Java Agent DEvelopment framework (implements the distributed algorithm).
Agents are deployed on GRID5000 to emulate the SN nodes and to run the VN mapping algorithm.
Performance evaluation
The overhead incurred by the node-capacity sorting operation:
The time overhead caused by sorting the node capacities increases exponentially once the SN topology exceeds 80 nodes.
The number of messages exchanged for sorting the node capacities increases exponentially with the size of the SN network.
ICC08 Cont' 4
The overhead incurred by the use of shortest-path algorithm
Blue – Full mesh SN topology; Red – partial mesh SN topology
The absolute difference in time
overhead between full mesh and partial
mesh increases as SN size grows
The absolute difference in # of msgs
exchanged between full mesh and
partial mesh increases as SN size
grows
ICC08 Cont' 5
The overhead incurred by the use of distributed VN mapping algorithm
A single VN request consisting of 25 Vnodes, in two topologies: full mesh and partial mesh.
Compared against a centralized VN mapping algorithm:
In centralized VN mapping, the central coordinator needs to gather the parameters (of SN nodes and links) globally.
The distributed VN mapping has lower overhead because each SN node is already aware of the parameters of its directly neighbouring SN nodes and links.
ICC08 Cont' 6
The time overhead incurred by the use of distributed VN mapping algorithm
On the left: 10 VN requests
each consists of 25 Vnodes
On the right: re-assign a VN in face of a SN node/link failure
Summary so far: [Infocom06] proposes centralized VN mapping schemes, with and without VN reconfiguration.
[ICC08] decentralizes the VN mapping: each SN node, once chosen to be mapped to the center of a star of the VN decomposition, is responsible for choosing a set of connected SN nodes to be mapped as the leaves of that star.
M. Chowdhury, F. Samuel, R. Boutaba, "PolyViNE: policy-based virtual network embedding across multiple domains", ACM SIGCOMM 2010.
Both previous papers concern VN mapping within a single domain.
The next two papers extend it to VN mapping across multiple domains. The common approach of the two is to decouple the inter-domain mapping from the intra-domain mapping; both use the idea of partitioning the end-to-end VNR into several subgraphs.
The difference is the approach to graph partitioning. [SIGCOMM 2010] implements a simple "start with seed InPs and pass on to next-hop InPs" scheme, in which each InP decides on its own to embed the largest connected component of the VN topology it receives. [CFI ACM 2011] makes use of a min k-cut algorithm, via Gomory-Hu trees, to partition the end-to-end VNR into multiple subgraphs.
SIGCOMM10
The intra-domain VNE algorithm is extended to multiple administrative domains.
M. Chowdhury, F. Samuel, R. Boutaba, "PolyViNE: policy-based virtual network embedding across multiple domains", ACM SIGCOMM 2010.
Differences between intra-domain VNE and inter-domain VNE:
For intra-domain VNE, the typical assumption is full knowledge of the SN topology, and the typical objective is defined over SN node/link attributes.
For inter-domain VNE, there is usually no global topological knowledge of all participating SNs; the two typical questions are:
1) how to partition the end-to-end VN request into K subgraphs to be embedded onto K SN networks;
2) how to embed the interconnections between the K subgraphs onto inter-domain SN paths.
SN network model
Control network
SIGCOMM10 Cont'1
VN model and embedding assignment
The model defines the meta-VN request topology, the VN network, the k-th subgraph, and the set of VN links crossing domain boundaries (the symbols are elided in this transcript).
One possible InP-level embedding: InP#2 has no embedded Vnode, but is still part of the embedding for inter-domain connection purposes.
SIGCOMM10 Cont'2
Decentralized embedding
The end-to-end VNR is sent to multiple InPs to begin with.
The embedding process starts from the initial InP receivers and propagates to the other InPs.
Embedding protocol
Six messages: EMBED(Req_id, InPSet), SUCCESS(Req_id, M, InPSet), FAILURE(Req_id, errorDesc), CONNECT(Req_id, [parameter elided], InPSet), RELAY(Req_id, G, InPSet, InP#), ACK(Req_id).
Sample embedding process (right plot).
How an InP works:
Local embedding: use an intra-domain VNE algorithm to decide which part of the end-to-end VNR to embed locally, and embed it (i.e., this is the authors' answer to question #1 of inter-domain VNE - how to partition the end-to-end VN request into K subgraphs).
Forwarding: forward the rest of the VNR to other InPs.
Back-propagation: upon completing the embedding, or when there are no available InPs left to forward to, return a SUCCESS or FAILURE message.
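The embed-forward-back-propagate loop can be sketched as below; the local embedding is reduced to "take as many Vnodes as capacity allows", and the InP names, capacities, and hop limit are illustrative (the paper's experiments cap propagation at 12 hops):

```python
# Sketch of PolyViNE's decentralized loop: each InP embeds what it can,
# forwards the remaining VNR, and back-propagates SUCCESS (a mapping) or
# FAILURE (None). All data is illustrative.

MAX_HOPS = 3      # stand-in for the paper's 12-hop cap

def embed(vnodes, inps, order, hop=0):
    """Return {Vnode: InP} on SUCCESS, or None (FAILURE back-propagation)."""
    if not vnodes:
        return {}                             # everything embedded
    if hop >= MAX_HOPS or hop >= len(order):
        return None                           # nobody left to forward to
    inp = order[hop]
    taken = vnodes[: inps[inp]]               # local embedding stand-in
    rest = vnodes[len(taken):]
    sub = embed(rest, inps, order, hop + 1)   # forward the remaining VNR
    if sub is None:
        return None                           # FAILURE propagates back
    sub.update({v: inp for v in taken})
    return sub                                # SUCCESS carries the mapping

inps = {"InP1": 2, "InP2": 3}                 # Vnode capacity per InP
mapping = embed(["a", "b", "c", "d"], inps, order=["InP1", "InP2"])
assert mapping == {"a": "InP1", "b": "InP1", "c": "InP2", "d": "InP2"}
```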
How to choose the InPs to forward the remaining VNR?
SIGCOMM10 Cont'3
Location-aware forwarding
Informed forwarding based on (a) the hierarchical addresses of the SN nodes, (b) the prices announced by the other InPs, and (c) the reputation scores of the InPs.
Hierarchical addressing:
Vnode - NA.CA.*
SN node - NA.CA.ON.Toronto
Location-awareness protocol
Every InP informs its neighbor InPs of the addresses of its SN nodes along with their prices.
Each InP maintains a location DB.
In steady state, each InP knows about all the InPs.
SIGCOMM10 Cont'4
Numerical evaluation setting
100 InPs, each consisting of 80~100 SN nodes and 540~600 SN links on average (random-graph topology).
Each SN node has CPU capacity uniformly distributed between 1 and 100 CPU units.
Each SN link has 100 bandwidth units.
Each InP performs a naive greedy node and link mapping to embed the largest possible connected component of the VNR it receives.
Afterwards, the InP randomly chooses a Vnode from the remaining VNR to forward.
The InP chooses the top 3 InPs from its location DB to forward to.
A VNR is dropped once it reaches the 12-hop maximum of inter-InP propagation.
In the evaluation setting, only the embedding process is implemented and evaluated:
the price propagation and the lowest-price-based selection are not implemented;
the performance of mapping the interconnections between the K VNR subgraphs onto inter-domain SN paths is not considered (i.e., no answer is given to question #2 of inter-domain VNE - how to embed the interconnections between the K subgraphs onto inter-domain SN paths).
SP: service provider, the one who generates the VN requests.
Experiment #1 - node mapping and hop count
The VNR has 50 Vnodes and 200 Vlinks (random-graph topology).
The first InP that receives the complete VNR embeds the largest number of Vnodes.
The number of embedded Vnodes decreases exponentially as the remaining VNR is forwarded along.
The reason is uncertain: (a) the random-graph topology, or (b) the simple greedy algorithm.
SIGCOMM10 Cont'5
Experiment #2 - node mapping and VNR size
Observe the number of Vnodes embedded by the first InP contacted by the SP.
Varying the size of the VNR from 10 to 70 Vnodes; each VNR is a generated sparse random graph with n Vnodes and 4n Vlinks.
Observation: the number of Vnodes mapped by the first-hop InP grows linearly with the VNR size.
Experiment #3 - InPs involved in a successful mapping
Same setting as Experiment #2.
Observation: the number of InPs that have Vnodes embedded onto their SN nodes grows linearly with the size of the VNR.
The last paper, SIGCOMM 2010, introduces the two challenges of inter-domain VNE: (1) how to partition the end-to-end VNR into K subgraphs for K InPs to embed; (2) how to efficiently use the inter-domain SN links to embed the Vlinks interconnecting the K subgraphs.
Yet in its experiments, the authors only evaluated numerical relations, using random-graph topologies and a simple greedy embedding algorithm within each InP.
Further, their answer to challenge #1 is delegated to intra-domain VNE, i.e., the intra-domain VNE decides which part of the end-to-end VNR to embed locally; and there is no explicit answer to challenge #2 in the paper.
Y. Xin, I. Baldine, A. Mandal, C. Heermann, J. Chase, and A. Yumerefendi, "Embedding virtual topologies in networked clouds", International Conference on Future Internet Technologies (CFI), ACM, 2011.
The next paper also considers VNE across distributed sites - embedding over multiple cloud sites. Unlike SIGCOMM 2010, it explicitly answers the question of how to partition the end-to-end VNR, by using a minimum k-cut algorithm. However, this is a 4-page short paper, absent of formulation details.
CFI ACM 11
The intra-domain VNE algorithm is extended to geo-distributed cloud computing environments.
Y. Xin, I. Baldine, A. Mandal, C. Heermann, J. Chase, and A. Yumerefendi, "Embedding virtual topologies in networked clouds", International Conference on Future Internet Technologies, 2011.
The paper distinguishes between cloud providers and transit network providers (the latter provide inter-cloud virtual network services, such as on-demand QoS-guaranteed paths).
Contributions:
1) ORCA, a broker-based architecture to federate multiple resource providers (including cloud sites).
2) NEuca, an extension to the Eucalyptus cloud middleware; it embeds the VN within a cloud site by using VLAN technology.
3) The software and its deployment testbed provide a platform to experiment with different approaches to the VNE problem in a multi-domain environment with brokers.
[NEuca] "GENI-ORCA Control Framework," http://geni-orca.renci.org
CFI ACM 11 Cont'1
Inter-cloud VNE model
Physical infrastructure: clouds interconnected by transit providers; each cloud site provisions a number of VMs and the Vlinks within its cloud domain (the symbols are elided in this transcript).
VN request: a topology with a BW requirement on each link; each Vnode is provisioned as a VM.
Embedding solution: includes the Vlinks between any two VN partitions mapped in two different clouds.
Cost function: a linear weighted sum of (1) the cost of embedding the Vnodes, (2) the cost of embedding Vlinks within a cloud site, and (3) the cost of embedding Vlinks over transit networks across cloud sites. The individual costs depend on the available VM resources and the available network resources in each cloud site.
Upon VNR arrival, the first question is whether to assign the VN to a single cloud site or to partition it across multiple clouds. (The proof given was based on the cost of embedding the Vnodes alone.)
CFI ACM 11 Cont'2
VNR partition:
To partition a virtual topology, for a fixed k, the minimum k-cut problem is polynomial-time solvable, but it is NP-complete when k is part of the input. A popular approach is the Gomory-Hu tree [GHTree], which yields an approximation ratio of 2 - 2/k.
Given M cloud providers, for each k in [1, M], construct the partition graph from the VNR: each partition graph consists of k nodes (the k partitions) and the edges among the partitions.
Run a subgraph isomorphism detection algorithm to find a mapping from the partition graph to the cloud infrastructure graph.
Normally multiple subgraph solutions exist (their assumption!). Choose the minimum-cost solution as evaluated by the cost function, and record the cost value as C(k) for this fixed k.
Continue to iterate while C(k) < C(k-1); exit the iteration once C(k) >= C(k-1), and keep the (k-1)-way partition as the optimal partition (another assumption: that the value of the cost function decreases monotonically with the number of partitions of the VNR!).
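The k-selection loop can be sketched as follows; `partition_cost` is a hypothetical stand-in for the full pipeline (min k-cut partitioning, subgraph isomorphism, cost evaluation) that the paper runs for each k:

```python
# Sketch of the CFI'11 k-selection loop: evaluate C(k) for increasing k
# and stop at the first k where C(k) >= C(k-1), keeping the (k-1)-way
# partition. partition_cost is a hypothetical stand-in; costs are made up.

def best_partition(m_clouds, partition_cost):
    best_k, best_cost = 1, partition_cost(1)
    for k in range(2, m_clouds + 1):
        c = partition_cost(k)
        if c >= best_cost:       # monotonicity assumption: stop right here
            break
        best_k, best_cost = k, c
    return best_k, best_cost

# Hypothetical costs: splitting helps until inter-cloud links dominate.
costs = {1: 100, 2: 70, 3: 60, 4: 65, 5: 40}
assert best_partition(5, costs.get) == (3, 60)
```

Note that the loop never sees C(5) = 40 here, which illustrates the slide's criticism: the early exit is only optimal if the cost truly decreases monotonically with the number of partitions.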
Once the optimal partition is found, expand the partition graph by retrieving all the Vlinks in the VNR and combining them with the inter-provider connections in the partition graph.
Embed the expanded partition graph onto the infrastructure.
After embedding a VNR, update the available resources at each cloud provider and transit provider.
[GHTree] R. E. Gomory, T. C. Hu, "Multi-terminal network flows", Journal of the Society for Industrial and Applied Mathematics, vol. 9, 1961.