PowerPoint - OptIPuter
Southern California Infrastructure
Philip Papadopoulos
Greg Hidley
UCSD Packet Test Bed
OptIPuter Year 2
OptIPuter UCSD / San Diego Network and Nodes -- 2004
[Network diagram: a Chiaro Enstara core router and campus switch fabric (Extreme 400, Dell 5224, and Cisco 6509 switches; bonded GigE and 1-10 GigE links) interconnect OptIPuter nodes at SDSC, CSE, JSOE, Preuss, CRCA, SIO, SOM, and Sixth College, with connectivity to UCI and ISI via CalREN-HPR through a Juniper T320. Endpoints include HP 28-node and 8-node shared clusters, Sun 128-node and 24-32-node compute clusters, an IBM 48-node storage cluster, an IBM 128-node CPU cluster, Infiniband nodes (64 at SDSC, 4 at CSE), an HP 4-node control cluster, 9-node and 7-node shared clusters, Sun 22-node, IBM 9-node, and 3-node Viz clusters, IBM 9-megapixel display pairs, a Geowall 2 tiled display, and a Dell Geowall.]
Goals
• Expand our Campus-wide Research Instrument
– Support of Researcher Needs – Focus on Application Needs
– Deployment of Scalable Endpoints
– Continued Evolution of the Packet-Based OptIPuter
– Begin Deployment of 10 GigE Technologies
– Drive Towards Goal of 1:1 Bisection Bandwidth – 5:1 For This Year
• Evaluation of Parallel Storage Systems with Remote Access
• Expand the UCSD-based OptIPuter from Campus-only to Southern California and Eventually to Chicago
Year 2 Accomplishments
• Staffing
– Hired UCSD OptIPuter Project Manager Aaron Chin
– Additional 1.5 FTE Effort Towards Infrastructure Deployment:
Max Okumoto, Sean O’Connell, David Hutches, Mason Katz
• Node Buildout
– 48 Node (IA-32), 21 TB (300 Spindles) IBM Storage Cluster (JSOE)
– 128 Node (IA-32) Sun Compute Cluster (SDSC)
– 22 Node (Opteron) Sun Viz Cluster (SOM) Geowall2
– 3 Node (IA-32) Shuttle Viz Cluster (CRCA) Display Wall
– Expected By 10/1/04: 3 Sun (Opteron) Clusters, Size to be Determined
• Network Buildout
– 1 Gigabit Transport: Connectivity To Campus VLAN Infrastructure And Border Router (T320)
– 10 Gigabit Transport: Single Interface for Chiaro
– Extreme 400 (48-Port GigE with Two 10GigE Uplinks) Layer 2 Fabric, 1 Each at JSOE, CSE and SDSC
– 5:1 Bisection for the Storage Cluster
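The 5:1 figure above is an oversubscription ratio: worst-case traffic the nodes can offer divided by uplink capacity out of the edge switch. A minimal sketch of the arithmetic, assuming (the slide does not say exactly) that the 48 GigE node ports share a single 10GigE uplink toward the core:

```python
# Sketch: oversubscription (bisection) ratio for a cluster behind an edge
# switch. Port counts below are from the slides; the single-uplink
# assumption is ours.

def oversubscription(node_ports: int, port_gbps: float, uplink_gbps: float) -> float:
    """Ratio of worst-case offered load to uplink capacity."""
    return (node_ports * port_gbps) / uplink_gbps

# 48-node IBM storage cluster on an Extreme 400: 48 x 1 GigE down, 10 GigE up.
ratio = oversubscription(node_ports=48, port_gbps=1.0, uplink_gbps=10.0)
print(f"{ratio:.1f}:1")  # 4.8:1, i.e. roughly the slide's 5:1 figure
```

Driving toward 1:1 bisection, as the Goals slide states, means growing uplink capacity until this ratio reaches one.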
Making The Campus-Wide OptIPuter Usable
• Solving Network Issues
– Converted to Public IP Address Space to Facilitate Off-campus Connectivity
– 4 GigE Channel Bonding – Worked With Chiaro to Improve Performance
– Load-balancing Algorithm Among Links is Handled Differently at Layer 2 (Dell) and Layer 3 (Chiaro)
– 35% of Bonded Link Capacity Before “Fix”, 60% Afterwards; L2-to-L2 (Dell-to-Dell) is 95% For Comparison
– PVFS
– 8 IBM Storage Nodes Running as PVFS Servers
– Clients Running at SIO, JSOE and SOM
– Performance Reasonable but not Significantly Larger than 1 Gigabit
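The Layer 2 vs. Layer 3 load-balancing mismatch above comes down to which header fields each device hashes when picking a member link of the bonded trunk. A toy sketch (not the actual Dell or Chiaro hash functions) of why a MAC-pair hash can collapse a single host pair onto one link while an IP-pair hash spreads flows:

```python
# Toy model of hash-based link selection on a 4-way bonded GigE trunk.
# Real switches use vendor-specific hashes; we mimic the policies with
# Python's built-in tuple hashing.

from collections import Counter

NUM_LINKS = 4  # 4 GigE members, as in the slide

def pick_link(key: tuple) -> int:
    """Deterministically map a flow key to one member link."""
    return hash(key) % NUM_LINKS

# Layer-2 style: hash on the (src MAC, dst MAC) pair. With one host pair,
# every flow has the same key, so all traffic lands on one link.
macs = ("00:0d:56:aa:01", "00:0d:56:bb:02")
l2 = Counter(pick_link(macs) for _ in range(1000))

# Layer-3 style: hash on (src IP, dst IP). Distinct IP pairs spread out.
ips = [("10.0.0.1", f"10.0.1.{i}") for i in range(16)]
l3 = Counter(pick_link(pair) for pair in ips)

print("L2 links used:", len(l2))  # 1 -- all traffic on one member link
print("L3 links used:", len(l3))
```

This is one plausible reading of why tuning the algorithm with Chiaro lifted bonded throughput from 35% to 60% of capacity.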
• Developing Support for Users [ HTTP://web.optiputer.net ]
– Account Request Tools
– Network Monitoring Facilities
– Technical Network Configuration Information
– Node Configuration Information
– IP Allocation Map
SoCal CalREN-XD OptIPuter Build-Out
Expected Completion July 2004
[Network diagram: CENIC Cisco ONS 15530/15540/15808 DWDM nodes at LA, USC, Tustin, UCI, and UCSD carry 2x1GE and 10GE waves linking the UCSD Chiaro router to Foundry switches at USC and ISI (the latter with 1GE onward links to StarLight and NASA Goddard) and a Cisco switch at UCI. Link legend: WDM, 1GigE, 10GigE, 1GigE proposed; OptIPuter LambdaGrid.]
CENIC proposed solution for UCSD to UCI, UCSD to ISI, UCSD to StarLight and UCSD to NASA Goddard:
– 2 1GE transponders for 15540s
– 10GE transponders for 15808s
– ONS15530s at UCSD and LA
Year 3 Plans: Enhance Campus OptIPuter
• A Substantial Portion of the Physical Build Completes in Year 2
– Endpoints, Cross-campus Fiber, Commodity Endpoints
• Increase Campus Bandwidth
– Work Towards More Extensive 10GigE Integration
– OptIPuter HW Budget Limited In Year 3, Focus is on Network Extension
– Connect Two Campus Sites with 32-node Clusters At 10GigE
– 3:1 Campus Bisection Ratio
• Add/Expand a Moderate Number of New Campus Endpoints
– Add New Endpoints Into The Chiaro Network
– UCSD Sixth College
– JSOE (Engineering) Collaborative Visualization Center
– New Calit2 Research Facility
– Add 3 General-purpose Sun Opteron Clusters at Key Campus Sites (Compute and Storage); Clusters Will All Have PCI-X (100 MHz, 1 Gbps)
– Deploy Infiniband on Our IBM Storage Cluster and on a Previously-Donated Sun 128-node Compute Cluster
• Complete Financial Acquisition of the Chiaro Router
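As a sanity check on the PCI-X note above: a 100 MHz PCI-X slot (PCI-X is a 64-bit bus) offers raw bus bandwidth well beyond a single GigE NIC's line rate, so the bus is not the bottleneck. A back-of-envelope sketch; real throughput is lower due to bus protocol overhead:

```python
# Back-of-envelope: raw PCI-X bus bandwidth vs GigE line rate.
bus_mhz = 100   # PCI-X clock from the slide
bus_bits = 64   # PCI-X bus width (standard, not stated on the slide)

raw_gbps = bus_mhz * 1e6 * bus_bits / 1e9
print(f"PCI-X raw: {raw_gbps:.1f} Gbps")        # 6.4 Gbps
print(f"Headroom over 1 GigE: {raw_gbps:.1f}x") # 6.4x
```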
Year Three Goals: Integrate New NSF Quartzite MRI
• Goal -- Integration of Packet-based (SoCal) and Circuit-based (Illinois) Approaches into a Hybrid System
– Add Additional O-O-O Switching Capabilities Through a Commercial (Calient or Glimmerglass) All-optical Switch and the Lucent (Pre-commercial) Wavelength Selective Switch
– Begin CWDM Deployment to Extend Optical Paths Around UCSD and Provide Additional Bandwidth
– Add Additional 10GigE in Switches and Cluster Node NICs
• MRI Proposal (Quartzite, Recommended for Funding) Allows Us to Match the Network to the Number of Existing Endpoints
• This is a New Kind of Distributed Instrument
– 300+ Components Distributed Over the Campus
– Simple and Centralized Control for Other OptIPuter Users
UCSD Quartzite Core at Completion (Year 5 of OptIPuter)
Quartzite Communications Core, Year 3
• Recommended for Funding
• Physical HW to Enable OptIPuter and Other Campus Networking Research
• Hybrid Network Instrument
[Network diagram: the Quartzite core pairs a Wavelength Selective Switch with a 32-port 10GigE production OOO switch and the Chiaro Enstara. GigE switches with dual 10GigE uplinks fan out to cluster nodes, additional 10GigE cluster node interfaces and other switches attach directly, and a Juniper T320 connects the core to the CalREN-HPR research cloud and the campus research cloud. Link legend: GigE, 10GigE, 4 GigE, 4-pair fiber.]