Design Presentation - ECpE Senior Design


Transcript

1
Client: Lockheed Martin

Team Members
Team Leader: Adam Jackson
Communication Coordinator: Nick Ryan
Bader Al-Sabah
David Feely
Richard Jones

Faculty Advisor
Dr. Ahmed Kamal


Client Contacts
Aaron Cordes
Rick Stevens
2

At this time, the maximum real-world
throughput of 10 Gbps network
configurations is unknown.
3

Lockheed Martin (LM) needs a test plan
designed and executed to measure the
maximum real-world throughput of a 10
Gbps network composed of Commercial Off
the Shelf (COTS) components.
4

Create and test a network capable of reaching
10 Gbps with COTS components

The topology must use fiber optics

Remain within approx. $3500 budget
5

PCI Express (PCI-E) Network Cards with an XFP Switch
PCI Extended (PCI-X) Network Cards with an XFP Switch
Advanced TCA or MicroTCA (µTCA) Architectures
6
Testing will be completed with two systems directly connected
Used for testing bandwidth and bandwidth efficiency
Graphic inspired by previous HSOI team
7
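On the real test bench the measurements come from the tools named later in this presentation (Qcheck, Ethereal, IP Traffic Test & Measure), but the direct two-system bandwidth test above can be sketched in a few lines. This is an illustrative sketch only: the loopback address, port, and transfer size are placeholders, and a real run would put the sender and receiver on the two directly connected systems.

```python
import socket
import threading
import time

# Placeholder values for a loopback demo; on the real test bench the sender
# and receiver would run on the two directly connected systems.
HOST = "127.0.0.1"
PORT = 50007
BLOCK = 64 * 1024            # send in 64 KiB blocks
TOTAL = 16 * 1024 * 1024     # total bytes to transfer

def receiver(result):
    """Accept one connection, drain it, and record the measured rate."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    received = 0
    start = time.perf_counter()
    while True:
        chunk = conn.recv(BLOCK)
        if not chunk:
            break
        received += len(chunk)
    elapsed = time.perf_counter() - start
    conn.close()
    srv.close()
    result["bytes"] = received
    result["gbps"] = received * 8 / elapsed / 1e9

def sender():
    """Push TOTAL bytes of dummy payload as fast as the socket allows."""
    sock = socket.create_connection((HOST, PORT))
    payload = b"\x00" * BLOCK
    sent = 0
    while sent < TOTAL:
        sock.sendall(payload)
        sent += len(payload)
    sock.close()

result = {}
rx = threading.Thread(target=receiver, args=(result,))
rx.start()
time.sleep(0.2)   # give the receiver time to start listening
sender()
rx.join()
print(f"transferred {result['bytes']} bytes at {result['gbps']:.2f} Gbps")
```

The same pattern (timed byte count at the receiver) underlies most software bandwidth testers.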

Composed of three nodes and an Ethernet switch
Used for testing switching time, latency, and quality of service
Graphic inspired by previous HSOI team
8

Same node strategy as PCI-E

Bus speed max of approximately 8 Gbps

Client requirement of 10 Gbps makes this an infeasible solution
9

Testing should be done with a single node due to the high cost of components
A single node is composed of the following:
- Three 10 Gbps Network Interface Cards
- µTCA Carrier Hub
- Power module
- Control Processor
- Switching Fabric

Nodes can be connected in various ways
10
[Diagram: candidate node interconnection topologies; each node contains 10 GbE ports, a Switching Fabric, and a Control Processor, connected in various combinations]
Diagram courtesy of LMCO
11
µTCA
Advantages:
- Modular design allows for expansion
- 262.5 Gbps maximum throughput for Advanced Mezzanine Cards (AMC)
Disadvantages:
- AMC Network Interface Cards at 10 Gbps are not readily available
- Costly components

PCI-E
Advantages:
- Readily available optical 10 Gbps NICs
- Variety of 10 Gbps XFP switches
- Relatively low cost components
Disadvantages:
- Lack of PCI-E systems at ISU

Source: http://www.compactpci-systems.com/columns/Tutorial/pdfs/4.2005.pdf
12
Interface     | Maximum Transfer Rate
PCI-X 100-MHz | 6.4 Gbps (800 MB/sec)
PCI-X 133-MHz | 8 Gbps (1 GB/sec)
PCI-E x1      | 4 Gbps (500 MB/sec)
PCI-E x4      | 16 Gbps (2 GB/sec)
PCI-E x8      | 32 Gbps (4 GB/sec)
PCI-E x16     | 64 Gbps (8 GB/sec)
µTCA (AMC)    | 262.5 Gbps (32.8 GB/sec)
Source: http://www.dell.com/content/topics/global.aspx/vectors/en/2004_pciexpress?c=us&l=en&s=corp
13
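The PCI-E rows in the table follow directly from the 250 MB/sec of usable data per lane, per direction, that PCI-E 1.x provides after 8b/10b encoding, with both directions counted (as the Dell source does). A quick check, illustrative only:

```python
# PCI-E 1.x signals 2.5 GT/s per lane; 8b/10b encoding leaves 250 MB/s of
# data per lane, per direction. The table's figures count both directions.
MB_PER_LANE_PER_DIR = 250

def pcie_rate(lanes):
    """Return (MB/sec, Gbps) for a PCI-E 1.x link, both directions combined."""
    mb_s = lanes * MB_PER_LANE_PER_DIR * 2
    gbps = mb_s * 8 / 1000
    return mb_s, gbps

for lanes in (1, 4, 8, 16):
    mb_s, gbps = pcie_rate(lanes)
    print(f"PCI-E x{lanes}: {gbps:g} Gbps ({mb_s} MB/sec)")
```

This reproduces the 4/16/32/64 Gbps figures for x1/x4/x8/x16, which is why even a single x4 slot comfortably exceeds the 10 Gbps requirement while PCI-X tops out at 8 Gbps.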
µTCA will not fit into the budget

µTCA components may not be available in time

PCI-X is not fast enough
14
Capable of PCI-E x1/x4/x8

Operating System Dual-Boot (Windows, Linux)

Approximate Cost: $500/system

Separate system needed for each node
15
Source: NetXen website
http://www.netxen.com/products/boardsolutions/NXB10GXxR.html

Pluggable XFP optical interface

10GBASE-SR and -LR support

PCI-E Ver. 1.1 Interface

x1/x4/x8 compatible

32 Gbps throughput

Linux and Windows OS supported
16
Model Number: SMC8708L2

Supports up to 8 XFP ports

Delivers 10-Gigabit Ethernet

Switching fabric: 160 Gbps

AC Input: 100 to 240 V, 50-60 Hz, 2 A
http://www.pcworld.com/product/pricing/prtprdid,9311286-sortby,retailer/pricing.html
17
SMC10GXFP-SR

TigerAccess™ XFP 10G Transceiver
1-Port 10GBASE-SR (LC) XFP Transceiver
Used for 10 Gbps connections
18
(As recommended by Lockheed Martin.)
NXB-10GXxR Intelligent NIC®
10 Gigabit Ethernet PCIe Adapter with pluggable XFP optical interface
(http://www.netxen.com/products/boardsolutions/NXB-10GXxR.html)

[Diagram: Nodes 1, 2, and 3 connected through the switch]

TigerSwitch 10G 8-Port Standalone XFP 10-Gigabit Ethernet Managed Layer 2 Switch
SMC Networks, Inc.
19
Resource                            | Purchased by Team                    | Purchased by Client | Provided by Department
NICs                                | 2-3                                  | 0-1                 | -
Switch                              | -                                    | 1                   | -
XFP Transceiver (for use on switch) | -                                    | 2-3                 | -
Optical Cabling                     | As needed                            | -                   | Available, details unknown
Computer Systems                    | If necessary and available in budget | -                   | Supplied to senior design lab
20
Resource           | Quantity | Unit Cost | Total Cost
Optical NICs       | 2        | $1000     | $2000
Optical NICs       | 1        | $1000     | On loan from client¹
XFP Switch         | 1        | $6500     | On loan from client
XFP Transceiver    | 3        | $1880     | On loan from client
Fiber optic cables | 3        | $80       | $240
Host System        | 3        | $500      | $1500²
Total              |          |           | $3740

¹ One optical NIC will need to be borrowed if the team must purchase the host systems
² ISU ECpE Department's update of the Senior Design lab may cover this cost
21
Qcheck
- Packet generation program
- Can be used to test bandwidth, bandwidth efficiency, and latency

Ethereal
- Packet capture program
- Can be used for bandwidth efficiency testing

IP Traffic Test & Measure
- Network testing suite
- Can be used for quality of service and latency testing
22
Testing will be predominantly software based; the test bench will be executed on the computer system described previously.

If issues arise and the signal needs to be observed, an Agilent 86100A oscilloscope is available from the department.
- Availability of splitters is unknown.
23
Bandwidth Measurement

Channel Capacity

Bandwidth Efficiency (Throughput)

Switching Time Measurement

Latency Measurement

Quality of Service Measurement
24
Execute each test multiple times to ensure precise results

Provide appropriate statistics from results

Use both UDP and TCP protocols when possible

Vary data size to avoid skewed results due to packet header overhead
25
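The "multiple runs, appropriate statistics" strategy above can be sketched as a small summary helper. The sample values below are hypothetical, not measured results:

```python
import statistics

def summarize(trials):
    """Statistics to report alongside each repeated measurement."""
    return {
        "mean": statistics.mean(trials),
        "stdev": statistics.stdev(trials) if len(trials) > 1 else 0.0,
        "min": min(trials),
        "max": max(trials),
    }

# Hypothetical throughput samples (Gbps) from repeated runs of one test case.
samples = [9.41, 9.38, 9.44, 9.40, 9.39]
stats = summarize(samples)
print(f"mean {stats['mean']:.3f} Gbps, stdev {stats['stdev']:.3f}, "
      f"range {stats['min']:.2f}-{stats['max']:.2f}")
```

Reporting a spread alongside the mean makes it clear whether a single odd run is skewing a result, which is the point of repeating each test.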
Bandwidth
- Compare link usage for each node under varying workload types

Bandwidth Efficiency
- Show a comparison of the amount of OSI Layer 1 data sent for different OSI Layer 7 data block sizes

Switching Time
- Compare switching time and link load for cases when 2 and 3 nodes are connected to the network
26
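The Layer 1 vs. Layer 7 comparison can be estimated analytically before any measurement, which gives a baseline for the measured efficiency numbers. This sketch assumes TCP/IPv4 over standard 1500-byte Ethernet frames with no header options; the chosen block sizes are arbitrary examples:

```python
import math

# Assumed per-frame overhead for TCP/IPv4 over Ethernet, no options:
# preamble+SFD 8 B, Ethernet header 14 B, FCS 4 B, inter-frame gap 12 B,
# plus 20 B IP and 20 B TCP headers per frame.
WIRE_OVERHEAD = 8 + 14 + 4 + 12
L3L4_HEADERS = 20 + 20
MSS = 1500 - L3L4_HEADERS     # application bytes per standard 1500 B frame

def layer1_bytes(app_bytes):
    """Total Layer 1 bytes on the wire to carry app_bytes of Layer 7 data."""
    frames = math.ceil(app_bytes / MSS)
    return app_bytes + frames * (WIRE_OVERHEAD + L3L4_HEADERS)

for block in (64, 512, 1460, 64 * 1024):
    total = layer1_bytes(block)
    print(f"{block:6d} B payload -> {total:6d} B on wire "
          f"({block / total:.1%} efficient)")
```

Small blocks pay the fixed per-frame overhead on little data, which is exactly why the test plan varies data size: measured throughput on a 10 Gbps link should approach the ~95% analytical ceiling only for large blocks.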
Latency
- Compare the latency between nodes under different network loads

Quality of Service
- Show the amount of data received from each sending node for each endpoint node over time
27
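A node-to-node latency comparison of the kind described above reduces to timing a round trip of a small probe packet. The sketch below uses a UDP echo over loopback purely for illustration; the address and port are placeholders, and real probes would cross the switch between two nodes:

```python
import socket
import threading
import time

HOST = "127.0.0.1"   # loopback placeholders; real probes would cross the switch
PORT = 50008
PROBES = 20

def echo_server(stop):
    """Reflect every UDP datagram back to its sender until told to stop."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind((HOST, PORT))
    srv.settimeout(0.5)
    while not stop.is_set():
        try:
            data, addr = srv.recvfrom(2048)
        except socket.timeout:
            continue
        srv.sendto(data, addr)
    srv.close()

stop = threading.Event()
server = threading.Thread(target=echo_server, args=(stop,))
server.start()
time.sleep(0.1)   # let the server bind before probing

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(1.0)
rtts = []
for _ in range(PROBES):
    start = time.perf_counter()
    cli.sendto(b"ping", (HOST, PORT))
    cli.recvfrom(2048)
    rtts.append((time.perf_counter() - start) * 1e6)   # microseconds

stop.set()
server.join()
cli.close()
print(f"min RTT {min(rtts):.0f} us over {len(rtts)} probes")
```

Repeating the probe under different background loads, as the test plan calls for, turns the list of round-trip times into the load-versus-latency comparison.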
28