The California Institute for Telecommunications and Information Technology (Calit2)


OptIPuter Goal:
Removing Bandwidth Barriers to e-Science
[Images: ALMA, LHC, Sloan Digital Sky Survey, ATLAS]
Why Optical Networks Will Become the 21st Century Driver
[Chart: Performance per Dollar Spent vs. Number of Years (0-5).
Optical Fiber (bits per second): doubling time 9 months.
Data Storage (bits per square inch): doubling time 12 months.
Silicon Computer Chips (number of transistors): doubling time 18 months.
Source: Scientific American, January 2001]
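The chart's comparison reduces to one formula: a technology whose price-performance doubles every T months improves by a factor of 2^(12N/T) after N years. A minimal sketch of that arithmetic over the chart's five-year axis:

```python
# Growth factor after N years for a technology whose
# price-performance doubles every T months: 2 ** (12 * N / T).

DOUBLING_MONTHS = {
    "optical fiber (bits/s)":       9,
    "data storage (bits/sq inch)": 12,
    "silicon chips (transistors)": 18,
}

def growth_factor(years: float, doubling_months: float) -> float:
    """Improvement factor after `years` at the given doubling time."""
    return 2 ** (12 * years / doubling_months)

for name, t in DOUBLING_MONTHS.items():
    print(f"{name:<30} {growth_factor(5, t):6.0f}x after 5 years")
# fiber ~102x, storage 32x, chips ~10x: fiber's price-performance
# pulls an order of magnitude ahead of silicon in five years.
```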
The OptIPuter Project –
Removing Bandwidth as an Obstacle In Data Intensive Sciences
• NSF Large Information Technology Research Proposal
– UCSD and UIC Lead Campuses (Larry Smarr, PI)
– USC, UCI, SDSU, Northwestern, Texas A&M Partnering Campuses
• Industrial Partners: IBM, Sun, Telcordia/SAIC, Chiaro, Calient
• $13.5 Million Over Five Years
• Optical IP Streams From Lab Clusters to Large Data Objects
NIH Biomedical Informatics Research Network
NSF EarthScope
[Image galleries: http://ncmir.ucsd.edu/gallery.html and siovizcenter.ucsd.edu/library/gallery/shoot1/index.shtml]
Application Barrier One:
Gigabyte Data Objects Need Interactive Visualization
• Montages: Hundred-Million-Pixel 2-D Images
– Microscopy or Telescopes
– Remote Sensing
• GigaZone 3-D Objects
– Seismic or Medical Imaging
– Supercomputer Simulations
• Interactive Analysis and Visualization of Such High-Resolution Data Objects Requires:
– Scalable Visualization Displays
– Montage and Volumetric Visualization Software
– UIC EVL’s JuxtaView and Vol-a-Tile
OptIPuter Project Goal:
Scaling to 100 Million Pixels
• JuxtaView (UIC EVL) on PerspecTile LCD Wall
– Digital Montage Viewer
– 8000x3600 Pixel Resolution (~30 Million Pixels)
• Display Is Powered By:
– 16 PCs with Graphics Cards
– 2 Gigabit Networking per PC (see the sketch below)
[Image: NCMIR Brain Microscopy (2800x4000, 24 layers)]
Source: Jason Leigh, EVL, UIC; USGS EROS
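A back-of-envelope check on the display numbers above; the 24-bit color depth and the 10 frames/s interactive update rate are illustrative assumptions, not figures from the slide:

```python
# Tiled-wall arithmetic: total pixels, per-PC tile, and the
# uncompressed stream each PC's network link must carry.
# ASSUMPTIONS: 24 bits/pixel and 10 frames/s are illustrative.

wall_px = 8000 * 3600          # ~28.8 Mpixel, the "~30 Million" above
tile_px = wall_px // 16        # each of the 16 PCs drives one tile
bits_per_px, fps = 24, 10

tile_gbps = tile_px * bits_per_px * fps / 1e9
print(f"wall: {wall_px / 1e6:.1f} Mpixel; tile per PC: {tile_px / 1e6:.2f} Mpixel")
print(f"uncompressed stream per PC: {tile_gbps:.2f} Gb/s (link: 2 Gb/s)")
# ~0.43 Gb/s per PC: interactive updates fit within the
# 2 Gigabit networking provisioned per node.
```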
Application Barrier Two:
Campus Grid Infrastructure is Inadequate
• Campus Infrastructure is Designed for Web Objects
– Being Swamped by Sharing of Digital Multimedia Objects
– Little Strategic Thinking About Needs of Data Researchers
• Challenge of Matching Storage to Bandwidth
– Need To Ingest And Feed Data At Multi-Gbps (see the sketch below)
– Scaling to Enormous Capacity
– Use Standards-Based Commodity Clusters (Rocks)
• OptIPuter Aims at Prototyping a National Architecture:
– Federated National and Global Data Repositories
– Lambdas on Demand
– Campus Laboratories Using Clusters with TeraBuckets
– Campus Eventually with a Shared PetaCache
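The storage-to-bandwidth matching problem is easy to quantify. A minimal sketch, taking a 1 TB "TeraBucket", a 1 PB "PetaCache", and a 10 Gb/s line rate as illustrative figures:

```python
# Time to fill (or drain) a data store of a given size at a given
# line rate. Capacities and the 10 Gb/s rate are illustrative.

def fill_time_s(capacity_bytes: float, rate_gbps: float) -> float:
    """Seconds to move `capacity_bytes` at `rate_gbps` gigabits/s."""
    return capacity_bytes * 8 / (rate_gbps * 1e9)

TB, PB = 1e12, 1e15
print(f"1 TB TeraBucket at 10 Gb/s: {fill_time_s(TB, 10) / 60:5.1f} min")
print(f"1 PB PetaCache  at 10 Gb/s: {fill_time_s(PB, 10) / 86400:5.1f} days")
# ~13 min and ~9 days: capacity at this scale is only usable when
# the campus network can ingest and feed data at multiple Gb/s.
```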
OptIPuter 2004 @ UCSD
Coupling Linux Clusters with High Resolution Visualization
[Network diagram: a Chiaro Enstara router at the core, with 10GigE uplinks and Dell 5224 GigE switches fanning out over campus fiber to shared Linux clusters at JSOE, SDSC, the SDSC Annex, CSE, SIO, SOM, CRCA, 6th College, and the Preuss School: IBM and Sun compute, storage (21TB), control, and visualization clusters, plus a Dell GeoWall.]
OptIPuter is Studying the Best Application Usage
for Routed vs. Switched Lambdas
• OptIPuter is Evaluating Both:
– Routers: Chiaro, Juniper, Cisco, Force10
– Optical Switches: Calient, Glimmerglass
– Lightpath Accelerators: BigBandWidth
[Images: Chiaro Enstara router, Glimmerglass switch]
• UCSD Focusing on Routing Initially
• UIC Focusing on Switching Initially
• Next Year: Merge into a Mixed Optical Fabric
Application Barrier Three:
Shared Internet Makes Interactive Gigabyte Impossible
• NASA Earth Observation System
– Over 100,000 Users Pull Data from Federated Repositories
– Two Million Data Products Delivered per Year
– 10-50 Mbps (May 2003) Throughput to Campuses
– Typically Over Abilene From Goddard, Langley, or EROS
• Biomedical Informatics Research Network (BIRN)
– Between UCSD and Boston
– Similar Story
– Lots of Specialized Network Tuning Used
– 50-80 Mbps
• Remote Interactive Megabyte is Possible
• But Interactive Gigabyte is Impossible
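The closing claim is simple arithmetic: at the shared-Internet rates measured above, a megabyte moves in under a second but a gigabyte takes minutes, far outside interactive response. A sketch (the 10 Gb/s lambda rate is an assumed comparison point, not a slide figure):

```python
# Transfer time for a remote data object at a given rate.
# 10 and 50 Mb/s are the measured shared-Internet rates above;
# the 10,000 Mb/s (10 GigE lambda) row is an assumed comparison.

def transfer_s(size_bytes: float, rate_mbps: float) -> float:
    """Seconds to move `size_bytes` at `rate_mbps` megabits/s."""
    return size_bytes * 8 / (rate_mbps * 1e6)

MB, GB = 1e6, 1e9
for rate in (10, 50, 10_000):
    print(f"{rate:>6} Mb/s: 1 MB in {transfer_s(MB, rate):7.4f} s, "
          f"1 GB in {transfer_s(GB, rate):7.1f} s")
# at 50 Mb/s a gigabyte takes ~160 s; at 10 Gb/s it takes ~0.8 s,
# which is why a dedicated lambda makes the gigabyte interactive.
```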
IP over Lambdas with Alternate Protocols:
Multi-Latency OptIPuter Laboratory
National-Scale Experimental Network
"National Lambda Rail" Partnership
• Serves Very High-End Experimental and Research Applications
• 4 x 10Gb Wavelengths Initially
• Capable of 40 x 10Gb Wavelengths at Buildout
[Map: Chicago OptIPuter (StarLight; NU, UIC) to SoCal OptIPuter (USC, UCI, UCSD, SDSU): 2000 miles, 10 ms, about 1000x campus latency]
Source: Tom West, CEO NLR (Booth 3409)
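The latency annotation is just propagation delay, which scales linearly with fiber length. A minimal sketch, assuming light travels at roughly two-thirds of c in fiber and taking a ~2-mile campus run as the comparison point:

```python
# One-way propagation delay over fiber. The ~2/3 c fiber speed and
# the 2-mile campus path are assumptions for illustration.

C_FIBER_KM_PER_S = 2.0e5           # ~2/3 the vacuum speed of light
MILES_TO_KM = 1.609

def one_way_delay_ms(miles: float) -> float:
    """One-way propagation delay in milliseconds."""
    return miles * MILES_TO_KM / C_FIBER_KM_PER_S * 1e3

print(f"Chicago-SoCal, 2000 mi: {one_way_delay_ms(2000):6.1f} ms")
print(f"campus run,       2 mi: {one_way_delay_ms(2):6.3f} ms")
# ~16 ms vs ~0.016 ms: the three-orders-of-magnitude gap the
# slide rounds to "10 ms = 1000x campus latency".
```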
An International-Scale OptIPuter is Operational over the
First Set of 76 International GE TransLight Lambdas
[Map: NorthernLight, UKLight, CERN]
Source: Tom DeFanti, EVL, UIC
• European lambdas to US
– 8 GEs Amsterdam–Chicago
– 8 GEs London–Chicago
• Canadian lambdas to US
– 8 GEs Chicago–Canada–NYC
– 8 GEs Chicago–Canada–Seattle
• US lambdas to Europe
– 4 GEs Chicago–Amsterdam
– 3 GEs Chicago–CERN
• European lambdas
– 8 GEs Amsterdam–CERN
– 2 GEs Prague–Amsterdam
– 2 GEs Stockholm–Amsterdam
– 8 GEs London–Amsterdam
• TransPAC lambda
– 1 GE Chicago–Tokyo
• IEEAF lambdas (blue)
– 8 GEs NYC–Amsterdam
– 8 GEs Seattle–Tokyo
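As a consistency check, the per-route GE counts listed above do sum to the 76 lambdas in the slide title:

```python
# Sum the per-route GE counts from the list above and compare
# against the "76 International GE TransLight Lambdas" in the title.

routes = {
    "Amsterdam-Chicago": 8,        "London-Chicago": 8,
    "Chicago-Canada-NYC": 8,       "Chicago-Canada-Seattle": 8,
    "Chicago-Amsterdam": 4,        "Chicago-CERN": 3,
    "Amsterdam-CERN": 8,           "Prague-Amsterdam": 2,
    "Stockholm-Amsterdam": 2,      "London-Amsterdam": 8,
    "Chicago-Tokyo (TransPAC)": 1,
    "NYC-Amsterdam (IEEAF)": 8,    "Seattle-Tokyo (IEEAF)": 8,
}
assert sum(routes.values()) == 76  # matches the slide title
print(f"total GE lambdas: {sum(routes.values())}")
```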