The Georgia Tech Network Simulator (GTNetS)


The Georgia Tech Network Simulator (GTNetS)
ECE6110
August 25, 2008
George F. Riley
Overview

• Network Simulation Basics
• GTNetS Design Philosophy
• GTNetS Details
• BGP++
• Scalability Results
• FAQ
• Future Plans
• Demos
Network Simulation Basics - 1

• Discrete Event Simulation
  ◦ Events model packet transmission, receipt, timers, etc.
  ◦ Future events are maintained in a sorted event list
  ◦ Processing an event results in zero or more new events
    • A packet transmit event generates a future packet receipt event at the next hop
Network Simulation Basics - 2

• Create Topology
  ◦ Nodes, Links, Queues, Routing, etc.
• Create Data Demand on the Network
  ◦ Web browsers, FTP transfers, peer-to-peer searching and downloads, On-Off data sources, etc.
• Run the Simulation
• Analyze Results
Network Simulation Basics - 3

[Figure: example dumbbell topology. TCP Client 1 and TCP Client 2 attach to one router, TCP Server 1 and TCP Server 2 to another, each over a 100 Mbps, 5 ms link; the two routers are joined by a 10 Mbps, 20 ms bottleneck link.]
GTNetS Designed Like Real Networks

• Nodes have one or more Interfaces
  ◦ Interfaces have an IP address and mask
  ◦ Interfaces have an associated Link object
• Packets append and remove PDUs
• Clear distinction between protocol stack layers
• Packet received at an Interface:
  ◦ Forwards to the Layer 2 protocol object for processing
  ◦ Forwards to Layer 3 based on protocol number (EtherType 0x0800 is IPv4)
  ◦ Forwards to Layer 4 based on protocol number (6 is TCP)
  ◦ Forwards to the application based on port number
GTNetS Design Philosophy

• Written completely in C++
• Released as open source
• All network modeling via C++ objects
• A user simulation is a C++ main program
  ◦ Include our supplied "#include" files
  ◦ Link with our supplied libraries
  ◦ Run the resulting executable
GTNetS Details - Node

[Figure: a Node object holds routing information, a port map, and a location, plus one or more Interfaces; each Interface has a Queue, an L2 protocol object, and an attached Link.]
GTNetS Details - Packet

[Figure: a Packet object carries a unique ID, a size, and a timestamp, plus a stack of protocol headers that are appended and removed as the packet moves through the layers.]
GTNetS Applications

• Web Browser (based on Mah 1997)
• Web Server, including Gnutella GCache
• On-Off Data Source
• FTP File Transfer
• Bulk Data Sending/Receiving
• Gnutella Peer-to-Peer
• SYN Flood
• UDP Storm
• Internet Worms
• VoIP
GTNetS Protocols

• TCP, complete client/server
  ◦ Tahoe, Reno, New-Reno
  ◦ SACK (in progress)
  ◦ Congestion Window, Slow Start, Receiver Window
• UDP
• IPv4 (IPv6 planned)
• IEEE 802.3 (Ethernet and point-to-point)
• IEEE 802.11 (Wireless)
• Address Resolution Protocol (ARP)
• ICMP (partial)
GTNetS Routing

• Static (pre-computed routes)
• Nix-Vector (on-demand)
• Manual (specified by the simulation application)
• EIGRP
• BGP
• OSPF
• DSR
• AODV
GTNetS Support Objects

• Random Number Generation
  ◦ Uniform, Exponential, Pareto, Sequential, Empirical, Constant
• Statistics Collection
  ◦ Histogram, Average/Min/Max
• Command Line Argument Processing
• Rate, Time, and IP Address Parsing
  ◦ Rate("10Mb"), Time("10ms")
  ◦ IPAddr("192.168.0.1")
GTNetS Distributed Simulation

• Split the topology model into several parts
• Each part runs on a separate workstation, or on a separate CPU in an SMP
• Each simulator has a complete topology picture
  ◦ "Real" nodes and "Ghost" nodes
• Time management and message exchange via the Georgia Tech "Federated Developers Kit"
• Allows larger topologies than a single simulation
• May run faster
Example
// Simple GTNetS example
// George F. Riley, Georgia Tech, Winter 2002

#include "simulator.h"              // Definitions for the Simulator object
#include "node.h"                   // Definitions for the Node object
#include "linkp2p.h"                // Definitions for point-to-point link objects
#include "ratetimeparse.h"          // Definitions for Rate and Time objects
#include "application-tcpserver.h"  // Definitions for TCPServer application
#include "application-tcpsend.h"    // Definitions for TCP Sending app
#include "tcp-tahoe.h"              // Definitions for TCP Tahoe

int main()
{
  // Create the simulator object
  Simulator s;

  // Create and enable IP packet tracing
  Trace* tr = Trace::Instance();    // Get a pointer to global trace object
  tr->IPDotted(true);               // Trace IP addresses in dotted notation
  tr->Open("intro1.txt");           // Create the trace file
  TCP::LogFlagsText(true);          // Log TCP flags in text mode
  IPV4::Instance()->SetTrace(Trace::ENABLED); // Enable IP tracing, all nodes

  // Create the nodes
  Node* c1 = new Node();            // Client node 1
  Node* c2 = new Node();            // Client node 2
  Node* r1 = new Node();            // Router node 1
  Node* r2 = new Node();            // Router node 2
  Node* s1 = new Node();            // Server node 1
  Node* s2 = new Node();            // Server node 2

  // Create a link object template, 100Mb bandwidth, 5ms delay
  Linkp2p l(Rate("100Mb"), Time("5ms"));

  // Add the links to client and server leaf nodes
  c1->AddDuplexLink(r1, l, IPAddr("192.168.0.1")); // c1 to r1
  c2->AddDuplexLink(r1, l, IPAddr("192.168.0.2")); // c2 to r1
  s1->AddDuplexLink(r2, l, IPAddr("192.168.1.1")); // s1 to r2
  s2->AddDuplexLink(r2, l, IPAddr("192.168.1.2")); // s2 to r2

  // Create a link object template, 10Mb bandwidth, 100ms delay
  Linkp2p r(Rate("10Mb"), Time("100ms"));

  // Add the router to router link
  r1->AddDuplexLink(r2, r);

  // Create the TCP Servers
  TCPServer* server1 = new TCPServer(TCPTahoe());
  TCPServer* server2 = new TCPServer(TCPTahoe());
  server1->BindAndListen(s1, 80);    // Application on s1, port 80
  server2->BindAndListen(s2, 80);    // Application on s2, port 80
  server1->SetTrace(Trace::ENABLED); // Trace TCP actions at server1
  server2->SetTrace(Trace::ENABLED); // Trace TCP actions at server2

  // Create the TCP Sending Applications
  TCPSend* client1 = new TCPSend(TCPTahoe(c1), s1->GetIPAddr(), 80,
                                 Uniform(1000,10000));
  TCPSend* client2 = new TCPSend(TCPTahoe(c2), s2->GetIPAddr(), 80,
                                 Constant(100000));

  // Enable TCP trace for all clients
  client1->SetTrace(Trace::ENABLED);
  client2->SetTrace(Trace::ENABLED);

  // Set random starting times for the applications
  Uniform startRv(0.0, 2.0);
  client1->Start(startRv.Value());
  client2->Start(startRv.Value());

  s.Progress(1.0);  // Request progress messages
  s.StopAt(10.0);   // Stop the simulation at time 10.0
  s.Run();          // Run the simulation
  std::cout << "Simulation Complete" << std::endl;
}
UNC Chapel Hill, Feb 3, 2006
Integration of Zebra bgpd into ns-2/GTNetS

• Zebra bgpd:
  ◦ One BGP router per process (C)
  ◦ Operates in real time
  ◦ Blocking routines
  ◦ BSD sockets
• ns-2/GTNetS:
  ◦ Multiple BGP routers per process (C++)
  ◦ Operates in simulation time
  ◦ Non-blocking routines
  ◦ Simulator's TCP implementation

Porting steps:
• Convert C code to C++
• Convert real-time functions to simulation-time functions
• Remove blocking routines and interleave schedulers
• Replace sockets with the simulator's TCP implementation
BGP++ scalability

• Compact routing table structure
  ◦ Observations:
    • Memory demand, O(n³), is driven by the memory required to represent routing tables
    • BGP attributes account for most of the memory required for a single routing-table entry
    • Different entries often have common BGP attributes
  ◦ Solution: use a global data structure to store and share BGP attributes, avoiding replication
  ◦ Up to 62% memory savings, 47% on average
  ◦ Proof-of-concept simulations of up to 4,000 ASes on a single workstation with 2 GB RAM
• Extend BGP++ to support parallel/distributed BGP simulations, solving the memory bottleneck problem
Other BGP++ features

• BGP++ inherits Zebra's Cisco-like configuration language
• A tool automatically generates ns-2/GTNetS configuration from simple user input
• A tool automatically partitions the topology and generates pdns configuration from an ns-2 configuration, or a distributed GTNetS topology
  ◦ Model the simulation topology as a weighted graph: node weights reflect expected workload, link weights reflect expected traffic
  ◦ Graph partitioning problem: find a partition into k parts that minimizes the edge-cut, under the constraint that the sum of the node weights in each part is balanced
Scalability Results - PSC

• Pittsburgh Supercomputing Center
• 128 systems, 512 CPUs, 64-bit HP systems
• Topology size:
  ◦ 15,064 nodes per system
  ◦ 1,928,192 nodes total topology
  ◦ 1,820,672 total flows
  ◦ 18,650,757,866 simulation events
  ◦ 1,289 seconds execution time
Questions?