
Scalable Mobile Backhauling
with Information-Centric Networking
Luca Muscariello
Orange Labs Networks
Network Modeling and Planning
and IRT SystemX.
Joint work with
G. Carofiglio, M. Gallo, D. Perino, Bell Labs, Alcatel-Lucent
motivation
 trends
– The content-centric nature of Internet usage highlights the inefficiencies of the host-centric transport model
– Higher costs in the mobile infrastructure to sustain traffic growth, with no innovation at the network layer
– Reduced margins for MNOs (…ok in Europe!)
 ISP countermeasures
– Quest for novel business opportunities in the service-delivery value chain
– Increased network control to lower costs: network cost optimization is constrained by the ‘Traffic Engineering Triangle’
outline
mobile backhaul opportunities
evaluation scenario and results
introducing ICN in today’s mobile backhaul
scalable mobile backhaul with ICN
WHERE
 objective: innovative network solutions are needed to cope with huge mobile traffic growth without significant capacity upgrades
 tool: real traffic observations from our network, and a joint BL/OL experimental campaign over ~100 nodes with a real workload/topology
 achievements: our ICN design provides a content-aware network substrate in the mobile backhaul, compatible with the 3GPP standards
traffic observations in the backhaul
We focus on the HTTP transactions of the following predominant applications, in one peak hour, for a set of macro cells covering a metro area:
 web browsing
 audio/video
 YouTube
‒ cacheability: % of requests for objects requested at least twice in a given time period
‒ On average, 52% of total requests are cacheable
‒ Audio/video applications, and YouTube in particular, can attain values up to 86%
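The cacheability metric above (share of requests targeting an object requested at least twice in the window) is easy to state as code. A minimal Python sketch of that count, run on a toy request log rather than the Orange traces:

```python
from collections import Counter

def cacheability(requests):
    # Fraction of requests that target an object requested at least twice
    # in the observation window (the definition used on this slide).
    counts = Counter(requests)
    cacheable = sum(c for c in counts.values() if c >= 2)
    return cacheable / len(requests)

# Toy request log: 'a' is asked for 3 times, 'b' twice, 'c' once.
log = ["a", "b", "a", "c", "b", "a"]
print(cacheability(log))  # 5 of the 6 requests are for repeated objects
```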
outline
mobile backhaul opportunities
evaluation scenario and results
introducing ICN in today’s mobile backhaul
Methodology
 We need to experiment with the full protocol stack
– CS/PIT/FIB
– caching, queuing
– flow control, congestion control
 Realistic experiments
– realistic workload
 Repeatable experiments
– control 100% of your experiment
– run and monitor it continuously
Lurch
From protocol design to large-scale experimentation
 A newly designed protocol needs to be tested
 Event-driven simulation:
– limited in the number of events (hence in topology size)
– computation is hard to parallelize
 Large-scale experiments:
– complex to manage
 We needed a test orchestrator
Lurch
 Lurch is a test orchestrator for CCNx (soon also CCN-lite and NFD)
 It simplifies and automates ICN protocol testing over a list of interconnected servers (e.g. Grid’5000)
 Lurch runs on a separate machine, the controller, and controls the test
Architecture

Lurch controller:



Virtualized Data plane
Control Plane
Application layer
Control Plane
Virtualized
Data Plane
Management
Application
Data Plane
Protocol stack
CCNx
TCP/UDP
Virtualized IP
IP layer
PHY layer
Lurch
Topology management
 Create virtual interfaces between nodes (e.g. on Grid’5000): one physical interface (eth0) carries multiple virtual interfaces (tap0, tap1, …) built with IP tunnels
 A bash configuration file is computed remotely by the orchestrator and transferred to the experiment nodes:

#!/bin/bash
sysctl -w net.ipv4.ip_forward=1
modprobe ipip
iptunnel add tap0 mode ipip local 172.16.49.50 remote 172.16.49.5
ifconfig tap0 10.0.0.2 netmask 255.255.255.255 up
route add 10.0.0.1 tap0
iptunnel add tap1 mode ipip local 172.16.49.50 remote 172.16.49.51
ifconfig tap1 10.0.0.3 netmask 255.255.255.255 up
route add 10.0.0.4 tap1

[diagram: controller plus physical nodes 172.16.49.50, 172.16.49.5, 172.16.49.51 linked via eth0, overlaid by virtual interfaces tap0/tap1 carrying 10.0.0.1–10.0.0.4]
Lurch
Resource management
 Remotely assign network resources to nodes while preserving the physical bandwidth constraints
 tc, the Linux Traffic Control tool, is used to limit bandwidth, add delay, packet loss, etc.
 A bash configuration file is computed remotely by the orchestrator and transferred to the experiment nodes:

#!/bin/bash
tc qdisc del dev eth0 root
tc qdisc add dev eth0 root handle 1: htb default 1
tc class add dev eth0 parent 1: classid 1:1 htb rate 100.0mbit ceil 10.0mbit
tc filter add dev eth0 parent 1: prio 1 protocol ip u32 match ip dst 172.16.49.5 flowid 1:1
tc class add dev eth0 parent 1: classid 1:2 htb rate 100.0mbit ceil 50.0mbit
tc filter add dev eth0 parent 1: prio 1 protocol ip u32 match ip dst 172.16.49.51 flowid 1:2

[diagram: controller and physical nodes interconnected by 1 Gbps links]
Lurch
Name-based control plane
 Remotely control the name-based forwarding tables, through CCNx’s FIB control command ccndc
 A bash configuration file is computed remotely by the orchestrator and transferred to the experiment nodes:

#!/bin/bash
ccndc add ccnx:/music UDP 10.0.0.1
ccndc add ccnx:/video UDP 10.0.0.4

FIB:
 Name prefix    face
 ccnx:/music    0
 ccnx:/video    1
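The FIB shown here maps name prefixes to output faces; forwarding selects the longest prefix that matches the requested name. A minimal Python sketch of that lookup, purely illustrative and not the CCNx implementation:

```python
def fib_lookup(fib, name):
    # Longest-prefix match of a hierarchical, '/'-separated content name
    # against a FIB mapping name prefixes to output faces.
    components = name.strip("/").split("/")
    for cut in range(len(components), 0, -1):
        prefix = "/" + "/".join(components[:cut])
        if prefix in fib:
            return fib[prefix]
    return None  # no matching prefix: no route for this name

# The FIB from the slide: two name prefixes mapped to faces 0 and 1.
fib = {"/music": 0, "/video": 1}
print(fib_lookup(fib, "/video/trailer/seg1"))  # 1 (matches /video)
print(fib_lookup(fib, "/music/song42"))        # 0 (matches /music)
```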
Lurch
Application Workload
 Remotely control the experiment workload: a file-download application is started according to the experiment’s needs
– arrival process: Poisson, CBR, …
– file popularity: Zipf, Weibull, etc.
– trace-driven
 Two ways:
– centralized workload generation at the controller
– workload generation delegated to the clients, for better performance
[diagram: controller and physical nodes with virtual interfaces tap0/tap1, as in the topology slide]
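The workload knobs listed above can be sketched as a small synthetic generator. This is illustrative only, not Lurch's actual generator: it draws Poisson arrivals via exponential inter-arrival times and object ranks from a truncated Zipf law by inverse-CDF sampling; all names and parameters are made up for the example.

```python
import random

def zipf_sample(n_objects, alpha, rng):
    # Inverse-CDF sampling from a truncated Zipf(alpha) law over n_objects ranks.
    weights = [1.0 / (k ** alpha) for k in range(1, n_objects + 1)]
    u = rng.random() * sum(weights)
    acc = 0.0
    for rank, w in enumerate(weights, start=1):
        acc += w
        if u <= acc:
            return rank
    return n_objects

def generate_workload(rate, duration, n_objects, alpha, seed=1):
    # (time, object_rank) request trace: Poisson arrivals of intensity `rate`
    # (exponential inter-arrivals), popularity following Zipf(alpha).
    rng = random.Random(seed)
    t, trace = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > duration:
            break
        trace.append((t, zipf_sample(n_objects, alpha, rng)))
    return trace

# ~600 requests over a 60 s window, 1000 objects, Zipf exponent 1.0.
trace = generate_workload(rate=10.0, duration=60.0, n_objects=1000, alpha=1.0)
```

A trace-driven run would instead replay (time, object) pairs read from the down-scaled Orange traces.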

Lurch
Measurements
 Remotely control the experiment statistics: bash start/stop commands are sent remotely
– CCNx statistics (e.g. caching, forwarding) through logs
– top/vmstat to monitor active processes’ CPU usage (e.g. ccnd)
– ifstat to monitor link rates
 At the end of the experiment, statistics are collected and transferred to the user
[diagram: controller and physical nodes with virtual interfaces tap0/tap1, as in the topology slide]
Running large-scale experimentation on Content-Centric Networking via the Grid’5000 platform
Experiments
 Large topologies
– up to 100 physical nodes
– more than 200 links
 Realistic scenarios
– mobile backhaul
network topology
A down-scaled model of a backhaul network:
 4 “regional” PDN-GWs connected by a full mesh
 SGWs are assumed to be co-located with the PDN-GWs
 2 CDN servers external to the backhaul, reached via two PDN-GWs
 each PDN-GW is the root of a fat-tree topology composed of 20 nodes
 eNodeBs aggregate the traffic generated by three adjacent cells
 every eNodeB serves the same average traffic demand
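The down-scaled topology above can be sketched programmatically. The wiring inside each regional tree is not fully specified on the slide, so the binary aggregation tree below is an assumption, used only to show the scale: 4 mesh-connected PDN-GW roots with 20 nodes each.

```python
from itertools import combinations

def build_backhaul(n_gw=4, tree_size=20):
    # Down-scaled backhaul sketch: n_gw PDN-GWs in a full mesh, each the
    # root of an aggregation tree of tree_size nodes (binary shape is an
    # assumption); leaves stand in for eNodeBs.
    links = set()
    gws = [f"gw{i}" for i in range(n_gw)]
    for a, b in combinations(gws, 2):          # full mesh between PDN-GWs
        links.add((a, b))
    for g in gws:
        nodes = [g] + [f"{g}-n{j}" for j in range(1, tree_size)]
        for j in range(1, tree_size):          # node j hangs off node (j-1)//2
            links.add((nodes[(j - 1) // 2], nodes[j]))
    return links

links = build_backhaul()
print(len(links))  # 82 links: 6 mesh + 4 x 19 tree
```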
methodology
 Software:
– We used an ICN prototype (http://www.ccnx.org)
– with optimized distributed congestion control and multipath forwarding mechanisms (Carofiglio et al., IEEE ICNP 2013), based on a decomposition with Lagrangian multipliers that have a physical meaning:
  - network latency (measured in CCN/NDN on the request/reply exchange)
  - per-node flow-rate unbalance (registered in the pending request table)
– LRU data replacement, caching along the path (dumb caching)
 Experimental testbed:
– on Grid’5000
– bootable customized kernels with our network prototype
– Lurch, our network-experiment orchestrator (statistics collection, etc.)
 Workload:
– down-scaling of the traffic characterization obtained from Orange traces
– requests are aggregated at the macro-cell level
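The LRU replacement used along the path can be stated in a few lines. A minimal content-store sketch (not the prototype's implementation): a hit refreshes the object's recency, and an insertion evicts the least recently used entry once the store is full.

```python
from collections import OrderedDict

class LRUCache:
    # Minimal LRU content store: on a hit the object moves to the
    # most-recently-used position; when full, insertion evicts the
    # least-recently-used object.
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, name):
        if name not in self.store:
            return False             # miss: fetch upstream, then put()
        self.store.move_to_end(name)
        return True                  # hit: served from the cache

    def put(self, name, data=None):
        self.store[name] = data
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the LRU entry

cache = LRUCache(2)
cache.put("/video/a"); cache.put("/video/b")
cache.get("/video/a")          # 'a' becomes most recently used
cache.put("/video/c")          # evicts 'b', the LRU entry
print(cache.get("/video/b"))   # False
print(cache.get("/video/a"))   # True
```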
the platform
evaluated solutions
 we compare, at equal cache budget:
– Baseline: traffic is routed through a single shortest path
– ICN: ICN transport, multipath forwarding and LRU caching
– PDNCache: caches are deployed at the PDN-GWs only; traffic is routed through a single shortest path
– eNodeBCache: caches are deployed at the eNodeBs only; traffic is routed through a single shortest path
– ICN + PDNCache
results – latency reduction
 ICN shows the best QoE in terms of delivery time: a factor-3 reduction in average delivery time
 The improved user QoE is due to:
– in-network caching
– dynamic multipath transfer
results – bandwidth savings
 ICN substantially decreases bandwidth utilisation w.r.t. the alternative solutions, both inside the backhaul and from outside the backhaul, allowing a potential cost reduction
– up to 40% bandwidth savings in the backhaul
results – enhancing network flexibility
 We emulate a flash-crowd phenomenon on a link and compare the link load over time for ICN and for the baseline scenario without caching: ICN link load and average delivery time are almost unaffected by the flash crowd, by virtue of the transport/caching interplay and multipath.
outline
mobile backhaul opportunities
evaluation scenario and results
introducing ICN in today’s mobile backhaul
integrating ICN in today’s backhaul
ICN HEADER INTRODUCTION
Two alternatives:
1. in the GTP-U encapsulation
2. after the IP (IPsec) header, with a specific protocol value
ICN DATA DELIVERY PROCESS
Two alternatives:
a) an ICN proxy co-located with the eNodeB (with DPI)
b) an HTTP plugin at the end-user
POLICY AND CHARGING
Every node sends periodic reports about traffic statistics to control-plane elements via ad-hoc GTP-C functions
conclusion and current work
 ICN removes the anchoring otherwise needed to manage mobility
– mobility is not a technical problem
– communication is connection-less
– multipath, multi-homing and multicast are native
– in-network caching is native and outperforms PoP caching
 Currently: a high-speed prototype at Alcatel-Lucent (40 Gbps)
 Ongoing discussion on the ALU 7750 edge router…
 Demonstrations:
– common demonstration at Bell Labs Future X Days in September 2014
– demonstration at ACM SIGCOMM ICN 2014, to be held in Paris, September 24–26
Questions
publications
1. G. Carofiglio, M. Gallo, L. Muscariello, Bandwidth and storage sharing performance in information-centric networking, in Proc. of the ACM SIGCOMM ICN workshop, Toronto, Canada, 2011.
2. G. Carofiglio, M. Gallo, L. Muscariello, D. Perino, Modeling data transfer in content-centric networking, in Proc. of the 23rd International Teletraffic Congress (ITC 23), San Francisco, CA, USA, 2011.
3. G. Carofiglio, M. Gallo, L. Muscariello, ICP: design and evaluation of an Interest Control Protocol for Content-Centric Networks, in Proc. of the IEEE INFOCOM NOMEN workshop, Orlando, FL, USA, March 2012.
4. G. Carofiglio, M. Gallo, L. Muscariello, Joint Hop-by-Hop and Receiver-Driven Interest Control Protocol for Content-Centric Networks, in Proc. of the ACM SIGCOMM workshop on information-centric networking, Helsinki, Finland, 2012 (best paper award).
5. G. Carofiglio, M. Gallo, L. Muscariello, On the Performance of Bandwidth and Storage Sharing in Information-Centric Networks, Elsevier Computer Networks, 2013.
6. G. Carofiglio, M. Gallo, L. Muscariello, D. Perino, Evaluating per-application storage management in content-centric networks, Elsevier Computer Communications, Special Issue on Information-Centric Networking, 2013.
7. M. Gallo, B. Kauffmann, L. Muscariello, A. Simonian, C. Tanguy, Performance Evaluation of the Random Replacement Policy for Networks of Caches, Elsevier Performance Evaluation, 2013.
8. G. Carofiglio, M. Gallo, L. Muscariello, M. Papalini, Multipath Congestion Control in Content-Centric Networks, in Proc. of the IEEE INFOCOM NOMEN workshop, Turin, Italy, April 2013.
9. G. Carofiglio, M. Gallo, L. Muscariello, M. Papalini, S. Wang, Optimal Multipath Congestion Control and Request Forwarding in Information-Centric Networks, in Proc. of IEEE ICNP, Goettingen, Germany, October 2013.
10. White paper in collaboration with Bell Labs, Scalable Mobile Backhauling via Information-Centric Networking: a glimpse into the benefits of an Information-Centric Networking approach to data delivery, 2013.