ScaleNet: A Platform for Scalable
Network Emulation
by
Sridhar Kumar Kotturu
Under the Guidance of
Dr. Bhaskaran Raman
Outline
Introduction
Related Work
Design and Implementation of ScaleNet
Experimental Results
Conclusions
Future Work
Introduction
Why protocol development environments
Rapid growth of the Internet and evolution of network
protocols
Types of environments
Simulation, Real deployed networks and Emulation
Why emulation
In simulation, we may not be able to model the desired setting exactly.
Real deployed networks are difficult to reconfigure, and their behaviour is not easily reproducible.
Existing emulation platforms
Dummynet, NIST Net, Netbed, etc.
ScaleNet
An emulation platform for creating large-scale networks using limited resources
Creates several virtual hosts on a physical machine
Challenges in building ScaleNet
Creating multiple virtual hosts, assigning a routing table to each virtual host, and associating applications with virtual hosts
Routing between different IP aliases
Outline
Introduction
Related Work
Design and Implementation of ScaleNet
Experimental Results
Conclusions
Future Work
Related Work
Dummynet
Built with modifications to the FreeBSD network stack
Emulates the effects of finite queues, bandwidth limitations and delays
Cannot emulate complex topologies
Implementation exists only between TCP and IP
Cannot apply the effects to selected data flows
Cannot apply effects such as packet duplication and delay variation
Exists only for FreeBSD
Related Work (Cont..)
NIST Net
Emulates the behavior of any network at a particular router, applying that network behavior to the packets passing through it
Does not create any virtual hosts and does not do any routing
Designed to be scalable in terms of the number of emulation entries and the amount of bandwidth it can support
Related Work (Cont..)
FreeBSD Jail Hosts
Creates several virtual hosts on a PM
Routing is not possible
Theoretical upper limit of 24 jail hosts per PM
Netbed
Extension of Emulab
Automatically maps virtual resources onto
available physical resources
Uses FreeBSD jail functionality to create several virtual hosts
Scales up to 20 virtual hosts per physical machine
Related Work (Cont..)
User Mode Linux
It is a Linux kernel that can be run as a normal
user process
Useful for kernel development and debugging
Can create arbitrary network topologies
Runs applications inside itself at about a 20% slowdown compared to the host system
Considerable extra overhead per virtual host, since an entire kernel image is used to create each virtual host
Related Work (Cont..)
Alpine
Moves the unmodified FreeBSD network stack into a user-level library
Uses libpcap for receiving packets and a raw socket for sending outgoing packets, and uses a firewall rule to prevent the kernel from processing packets destined for Alpine
Does not work if the network is too busy or the machine is slow: the kernel allocates a fixed buffer for queueing received packets, and if the application is not fast enough in processing them, the queue overflows and subsequent packets are dropped
Comparison of Emulation Platforms

Platform           | Performance | Many VMs per PM | Hardware Resources | Scalable | OS
Dummynet           | High        | No              | Low                | -        | FreeBSD
NIST Net           | High        | No              | Low                | -        | Linux
FreeBSD jail hosts | Low         | Yes             | High               | No       | FreeBSD
Netbed             | High        | Yes             | High               | Partly   | FreeBSD
User Mode Linux    | Low         | Yes             | High               | No       | Linux
Alpine             | Low         | No              | Low                | No       | FreeBSD
ScaleNet           | High        | Yes             | Low                | Yes      | Linux
Outline
Introduction
Related Work
Design and Implementation of ScaleNet
Experimental Results
Conclusions
Future Work
Netfilter Hooks
Available hooks and where each is called:

Hook               | Called...
NF_IP_PRE_ROUTING  | After sanity checks, before routing decisions
NF_IP_LOCAL_IN     | After routing decisions, if the packet is for this host
NF_IP_FORWARD      | If the packet is destined for another interface
NF_IP_LOCAL_OUT    | For packets coming from local processes on their way out
NF_IP_POST_ROUTING | Just before outbound packets "hit the wire"
Netfilter Hooks (Cont..)
[Diagram of the packet path through the five netfilter hooks]
Design and Implementation of
ScaleNet
NIST Net
Applies bandwidth limitation, delay etc.
It is designed to be scalable in terms of the number of
emulation entries and the amount of bandwidth it can
support.
Linux
NIST Net exists only for Linux
Linux is widely used and good documentation is available
Kernel Modules
Modules can be loaded and unloaded dynamically
No need to rebuild and reboot the kernel
ScaleNet Architecture
[Architecture diagram showing the user-level/kernel-level split: the route command and bind call are redirected via a syscall hack; ioctl.c, rt_init.c and pidip_ioctl.c communicate through a character device (chardev); kernel modules (pid_ip, IP-IP-in, IP-IP-out, dst_entry_export, NIST Net) operate on kernel data (PID-IP values, per-virtual-host routing tables, dst_entry objects).]
Illustration of packet passing
between virtual hosts
[Diagram: a packet sent from virtual host 1 to virtual host 3 via virtual host 2. The original packet (Source IP 1, Dest IP 3, Data) is encapsulated by IP-IP-out at host 1 with an extra outer header (Source IP 1, Dest IP 2) and passes through NIST Net. At virtual host 2, IP-IP-in rewrites the outer header to (Source IP 2, Dest IP 3) and the packet again passes through NIST Net. At virtual host 3, IP-IP-in removes the outer header, leaving the original packet (Source IP 1, Dest IP 3, Data).]
Processing of Outgoing Packets
Capture the packet at netfilter hook NF_IP_LOCAL_OUT
If the packet's source IP does not belong to a local virtual host: return NF_ACCEPT
If no next hop is available: return NF_DROP
Otherwise, create an extra IP header with Dst IP <- next hop and Src IP <- current virtual host
If the next hop is not on the same machine: Dst MAC <- next hop's MAC
Processing of Outgoing Packets (Cont..)
If space is available at the beginning of the sk_buff: add the extra IP header at the beginning of the sk_buff
Else, if space is available at the end of the sk_buff: copy the original IP header to the end of the sk_buff and place the extra IP header at the beginning of the sk_buff
Else: create a new sk_buff with extra space and add the extra IP header followed by the rest of the packet
Return NF_ACCEPT
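A simplified sketch of the "extra IP header" step, assuming a 2.4-era sk_buff layout (skb->nh.iph) and enough headroom in the buffer; the real module also handles the two no-headroom cases above, and the address arguments stand in for ScaleNet's virtual-host routing-table lookups:

#include <linux/ip.h>
#include <linux/in.h>
#include <linux/skbuff.h>
#include <linux/string.h>
#include <linux/types.h>
#include <net/ip.h>

/* Prepend an outer IP header carrying (current virtual host -> next hop)
 * in front of the original packet, as in the flowchart's headroom branch. */
static int scalenet_encapsulate(struct sk_buff *skb,
                                __u32 vh_saddr, __u32 nexthop_daddr)
{
        struct iphdr *inner = skb->nh.iph;
        struct iphdr *outer;

        if (skb_headroom(skb) < sizeof(struct iphdr))
                return -1;      /* caller falls back to the other branches */

        outer = (struct iphdr *)skb_push(skb, sizeof(struct iphdr));
        skb->nh.iph = outer;    /* network header now starts at the outer header */

        memset(outer, 0, sizeof(*outer));
        outer->version  = 4;
        outer->ihl      = 5;
        outer->ttl      = 64;
        outer->protocol = IPPROTO_IPIP;   /* payload is the original IP packet */
        outer->saddr    = vh_saddr;       /* current virtual host */
        outer->daddr    = nexthop_daddr;  /* next hop from the virtual host's routing table */
        outer->tot_len  = htons(ntohs(inner->tot_len) + sizeof(struct iphdr));
        ip_send_check(outer);             /* recompute the IP header checksum */
        return 0;
}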
Processing of Incoming Packets
Capture the packet at netfilter hook NF_IP_PRE_ROUTING
If the packet's destination does not belong to a local virtual host: return NF_ACCEPT
Remove the NIST Net marking
If the packet has reached its final destination: remove the outer IP header
Else, if no next hop is available: return NF_DROP
Processing of Incoming Packets (Cont..)
Otherwise, change the fields of the extra IP header: Dst IP <- next hop, Src IP <- current virtual host
If the next hop is not on the same machine: Dst MAC <- next hop's MAC
Call dev_queue_xmit() and return NF_STOLEN
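A correspondingly simplified sketch of the incoming-packet decision at NF_IP_PRE_ROUTING, again assuming a 2.4-era sk_buff layout; the final-destination test and the next-hop rewrite are placeholders for ScaleNet's per-virtual-host routing lookups:

#include <linux/ip.h>
#include <linux/skbuff.h>
#include <linux/netfilter.h>
#include <linux/netdevice.h>
#include <linux/types.h>

static unsigned int scalenet_handle_incoming(struct sk_buff *skb)
{
        struct iphdr *outer = skb->nh.iph;
        struct iphdr *inner = (struct iphdr *)((__u8 *)outer + outer->ihl * 4);

        if (outer->daddr == inner->daddr) {
                /* Final destination: strip the outer header so the stack sees
                 * the original packet and delivers it to the application. */
                skb_pull(skb, outer->ihl * 4);
                skb->nh.raw = skb->data;
                return NF_ACCEPT;
        }

        /* Intermediate virtual host: rewrite the outer header for the next
         * hop (addresses, checksum, and the MAC header if the next hop is
         * on another machine) and transmit the packet ourselves. */
        /* ... outer->saddr = current virtual host; outer->daddr = next hop;
         *     ip_send_check(outer); ... */
        dev_queue_xmit(skb);
        return NF_STOLEN;       /* netfilter must not process the packet further */
}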
Virtual Hosts
Creating multiple virtual hosts:
Assign different IP aliases to the Ethernet card and treat each IP alias as a virtual host, e.g.
#ifconfig eth0:1 10.0.0.1
Assign a routing table to each virtual host (according to the topology of the network)
Association between Applications
and Virtual Hosts
A wrapper program is associated with each virtual host; it acts just like a shell.
All application programs belonging to a virtual host are executed in the corresponding wrapper shell.
To find the virtual host a process belongs to, we traverse its parent, grandparent, etc. until we reach a wrapper program, which corresponds to a virtual host.
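A sketch of that ancestor walk, assuming a 2.6-style task_struct; wrapper_ip_for_pid() is a hypothetical stand-in for ScaleNet's PID-IP table lookup, not a real kernel function:

#include <linux/sched.h>
#include <linux/types.h>

/* Hypothetical lookup: returns the IP alias of the wrapper shell registered
 * for this PID, or 0 if the PID is not a wrapper. */
extern __u32 wrapper_ip_for_pid(pid_t pid);

/* Walk up the process tree until some ancestor is a registered wrapper
 * shell; its IP alias identifies the virtual host of the process. */
static __u32 virtual_host_of(struct task_struct *task)
{
        while (task && task->pid > 1) {
                __u32 ip = wrapper_ip_for_pid(task->pid);
                if (ip)
                        return ip;
                task = task->parent;
        }
        return 0;   /* the process does not belong to any virtual host */
}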
System Call Redirection
The bind and routing-related system calls are intercepted.
Whenever a process tries to access or modify a routing table, we first find the virtual host of the process as explained in the previous section, and the system call is redirected to act on that virtual host's routing table instead of the system's routing table.
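On the old kernels ScaleNet targets, such redirection is commonly done by patching sys_call_table; the sketch below shows the general pattern only (the syscall chosen and the virtual-host handling are illustrative, not the actual ScaleNet code):

#include <linux/module.h>
#include <linux/unistd.h>

extern void *sys_call_table[];   /* exported on old kernels; write-protected on modern ones */

/* bind() on i386 is multiplexed through sys_socketcall(call, args). */
static long (*orig_socketcall)(int call, unsigned long *args);

static long scalenet_socketcall(int call, unsigned long *args)
{
        /* Before delegating, look up the calling process's virtual host (by
         * walking its ancestors) and, for bind/route-related operations,
         * act on that virtual host's routing table instead of the system's. */
        return orig_socketcall(call, args);
}

static int __init redirect_init(void)
{
        orig_socketcall = (long (*)(int, unsigned long *))sys_call_table[__NR_socketcall];
        sys_call_table[__NR_socketcall] = (void *)scalenet_socketcall;
        return 0;
}

static void __exit redirect_exit(void)
{
        sys_call_table[__NR_socketcall] = (void *)orig_socketcall;
}

module_init(redirect_init);
module_exit(redirect_exit);
MODULE_LICENSE("GPL");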
Outline
Introduction
Related Work
Design and Implementation of ScaleNet
Experimental Results
Conclusions
Future Work
Experimental Results
NIST Net bandwidth limitation tests
Tests on the emulation platform
consisting of 20 virtual hosts per
physical machine
Tests on the emulation platform
consisting of 50 virtual hosts per
physical machine.
NIST Net bandwidth limitation tests
Tests are performed using both TCP and
UDP packets
Tests are performed in two configurations:
Client and server running on the same machine
Client and server running on different machines
TCP packets: client and server on the same machine

Bandwidth applied using NIST Net (Bytes/Sec) | Throughput, 50 packets (Bytes/Sec) | Throughput, 100 packets (Bytes/Sec)
1000    | 1468               | 1001
2000    | 2935               | 2001
4000    | 5868               | 4000
5000    | 7335               | 5001
6000    | 8802               | 6001
8000    | 11903              | 8000
10000   | 14931              | 10002
20000   | 29000              | 20136
40000   | (50000, 60000)     | 40536
100000  | (139000, 174000)   | 105312
131072  | 249036             | 146800
262144  | (655360, 724828)   | (314572, 340787)
524288  | (5767168, 6946816) | (655360, 1966080)
1048576 | (5636096, 7864320) | (5242880, 6553600)

Throughput when sending packets continuously (Bytes/Sec): 3276, (6553, 9830), 9830, 19661, 49152, 133693, 271319, 737935, 4949278

The obtained throughput exceeds the applied bandwidth limit.
TCP packets: client and server on the same machine; MTU of loopback packets changed to 1480 bytes

Bandwidth applied using NIST Net (Bytes/Sec) | Throughput, 100 packets (Bytes/Sec) | Throughput, sending continuously (Bytes/Sec)
1000    | 866     | 925
2000    | 1915    | 1856
4000    | 3818    | 3855
8000    | 7637    | 7568
16000   | 15275   | 15200
32000   | 30551   | 30416
64000   | 60784   | 60900
131072  | 131334  | 130809
262144  | 261488  | 261619
524288  | 507248  | 522977
1048576 | 983040  | 1045954
2097152 | 1867776 | 2093219
4194304 | 3377725 | 4182507

The measured throughput now matches the applied bandwidth limit.
UDP Packets

#Packets | Bandwidth applied using NIST Net (Bytes/Sec) | Throughput (Bytes/Sec)
50    | 1000  | 972
50    | 2000  | 1945
50    | 4000  | 3831
50    | 8000  | 7786
50    | 16000 | 15562
50    | 20000 | 19461
17300 | 20000 | 19650

The measured throughput matches the applied bandwidth limit.
UDP Packets (Contd..)
Sending 17400 packets, each packet of size 1000 bytes.

Bandwidth applied using NIST Net (Bytes/Sec) | Throughput (Bytes/Sec), for every 100 packets received
10000 | 22109
10000 | 9824
10000 | 9825
10000 | 9825

The obtained throughput initially exceeds the applied bandwidth limit.
Creating 20 Virtual Hosts per System
A network topology consisting of 20 nodes per system
Creating 20 Virtual Hosts per System (Cont..)
Sending 40000 TCP packets from 10.0.1.1 to 10.0.4.10. Each link has 10ms delay.

Bandwidth applied using NIST Net (Bytes/Sec) | Throughput (Bytes/Sec)
4096   | 3892
8192   | 7784
16384  | 15572
32768  | 31136
65536  | 62222
131072 | 123933
262144 | 154125
393216 | 154361
Creating 20 Virtual Hosts per System (Cont..)
The TCP window size is 65535 bytes.
There are 39 links between 10.0.1.1 and 10.0.4.10. Each link has 10ms delay in the forward direction and no delay in the backward direction, and for a 100 Mbps link the transmit time for 65535 bytes is around 5ms, so the RTT is about 395ms.
The maximum possible data transfer rate is therefore 65535 bytes / 395ms, i.e. about 165911 bytes/sec.
We measure a throughput of around 154000 bytes/sec; adding the header overhead (9240 bytes) gives about 163240 bytes/sec, close to the theoretical maximum.
So we are getting the expected results, and the emulation platform scales well for 20 virtual hosts per physical machine.
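Stated as a formula, with W the TCP window size assumed to cap the data in flight per round trip (a restatement of the slide's own numbers, not an additional measurement):

$$\mathrm{RTT} \approx 39 \times 10\,\mathrm{ms} + 5\,\mathrm{ms} = 395\,\mathrm{ms}, \qquad T_{\max} = \frac{W}{\mathrm{RTT}} = \frac{65535\ \text{bytes}}{0.395\ \text{s}} \approx 165911\ \text{bytes/sec}$$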
Creating 20 Virtual Hosts per System (Cont..)
Sending 40000 UDP packets from 10.0.1.1 to 10.0.4.10. Each link has 10ms delay.

Bandwidth applied using NIST Net (Bytes/Sec) | Client send rate (Bytes/Sec) | Throughput (Bytes/Sec)
2048    | 2500    | 1954
4096    | 4500    | 3908
8192    | 8500    | 7816
16384   | 16500   | 15634
32768   | 33000   | 31268
65536   | 66000   | 62537
131072  | 132000  | 125069
262144  | 262500  | 250146
393216  | 393500  | 375278
524288  | 524500  | 500321
655360  | 655500  | 625504
786432  | 786500  | 750347
1048576 | 1048600 | 1001276
1179648 | 1180000 | 11264645

Expected results are obtained.
Creating 20 Virtual Hosts per System (Cont..)
Sending 40000 TCP packets from 10.0.1.1 to 10.0.4.10. Each link has 5ms delay.

Bandwidth applied using NIST Net (Bytes/Sec) | Throughput (Bytes/Sec)
4096   | 3892
8192   | 7784
16384  | 15572
32768  | 31141
65536  | 62262
131072 | 124395
262144 | 248082
393216 | 305150
524288 | 305366

Expected results are obtained: as in the 10ms case, the throughput tracks the applied limit until it saturates at the TCP window/RTT bound for this path.
Creating 20 Virtual Hosts per System (Cont..)
Sending 40000 UDP packets from 10.0.1.1 to 10.0.4.10. Each link has 5ms delay.

Bandwidth applied using NIST Net (Bytes/Sec) | Client send rate (Bytes/Sec) | Throughput (Bytes/Sec)
2048    | 2500    | 1954
4096    | 4500    | 3908
8192    | 8500    | 7816
16384   | 16500   | 15633
32768   | 33000   | 31268
65536   | 66000   | 62534
131072  | 132000  | 125064
262144  | 262500  | 250140
393216  | 393500  | 375260
524288  | 524500  | 500293
655360  | 655500  | 625454
786432  | 786500  | 750276
1048576 | 1048600 | 1001152
1179648 | 1180000 | Not all packets reached the destination

The receive buffer at the destination drops some packets in the 1179648 Bytes/Sec case.
Creating 50 Virtual Hosts per System
A network topology consisting of 50 nodes per system
Creating 50 Virtual Hosts per System (Cont..)
Sending 40000 TCP packets from 10.0.1.1 to 10.0.4.25. Each link has 10ms delay.

Bandwidth applied using NIST Net (Bytes/Sec) | Throughput (Bytes/Sec)
4096   | 3892
8192   | 7782
16384  | 15561
32768  | 31079
65536  | 61024
131072 | 61921
262144 | 61995
393216 | 62011
524288 | 62013

Expected results are obtained.
Creating 50 Virtual Hosts per System (Cont..)
Sending 40000 UDP packets from 10.0.1.1 to 10.0.4.25. Each link has 10ms delay.

Bandwidth applied using NIST Net (Bytes/Sec) | Client send rate (Bytes/Sec) | Throughput (Bytes/Sec)
4096    | 4500    | 3908
8192    | 8500    | 7816
16384   | 16500   | 15634
32768   | 33000   | 31268
65536   | 66000   | 62537
131072  | 131500  | 125069
262144  | 262500  | 250146
393216  | 393500  | 375279
524288  | 524500  | 500325
786432  | 786500  | 750346
917504  | 918000  | 875913
1048576 | 1049000 | Not all packets reached the destination

The receive buffer at the destination drops some packets in the 1048576 Bytes/Sec case.
Creating 50 Virtual Hosts per System (Cont..)
Sending 40000 TCP packets from 10.0.1.1 to 10.0.4.25. Each link has 5ms delay.

Bandwidth applied using NIST Net (Bytes/Sec) | Throughput (Bytes/Sec)
4096   | 3892
8192   | 7784
16384  | 15569
32768  | 31122
65536  | 62158
131072 | 120066
262144 | 122024
393216 | 122155
524288 | 122270

Expected results are obtained.
Creating 50 Virtual Hosts per System (Cont..)
Sending 40000 UDP packets from 10.0.1.1 to 10.0.4.25. Each link has 5ms delay.

Bandwidth applied using NIST Net (Bytes/Sec) | Client send rate (Bytes/Sec) | Throughput (Bytes/Sec)
4096    | 4500    | 3908
8192    | 8500    | 7816
16384   | 16500   | 15633
32768   | 33000   | 31267
65536   | 66000   | 62537
131072  | 131500  | 125066
262144  | 262500  | 250138
393216  | 393500  | 375259
524288  | 524500  | 500293
786432  | 786500  | 750277
917504  | 918000  | 875775
1048576 | 1049000 | Not all packets reached the destination

The receive buffer at the destination drops some packets in the 1048576 Bytes/Sec case.
Outline
Introduction
Related Work
Design and Implementation of ScaleNet
Experimental Results
Conclusions
Future Work
Conclusions
Created an emulation platform which emulates large-scale networks using limited physical resources
Several virtual hosts are created on each physical machine and applications are associated with virtual hosts
Routing tables are set up for each virtual host
With this emulation platform any kind of network protocol may be tested
Performance analysis and debugging can be done
In F. Hao et al. (2003), BGP simulation is done using 11806 AS nodes; with ScaleNet, this could be done using about 240 systems. Similarly, the OSPF protocol and peer-to-peer networks can be studied.
Outline
Introduction
Related Work
Design and Implementation of ScaleNet
Experimental Results
Conclusions
Future Work
Future Work
Automatic mapping of user specified topology to the
physical resources
Identifying and redirecting other system calls
Locking of shared data structures on SMP machines
Avoiding the need to change the MAC header
Analyzing the memory and processing requirements of running a networking protocol
Fixing occasional crashes during initialization of the emulation platform
Graphical user interface
Thank You