The California Institute for Telecommunications and Information Technology

Why Optical Networks Will Become the 21st Century Driver

[Chart: Performance per Dollar Spent vs. Number of Years (0–5), Scientific American, January 2001]
– Optical Fiber (bits per second): doubling time 9 months
– Silicon Computer Chips (number of transistors): doubling time 18 months
– Data Storage (bits per square inch): doubling time 12 months
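A quick sketch of what these doubling times imply over the chart's five-year span (illustrative arithmetic from the slide's numbers, not measured data):

```python
# Relative performance-per-dollar growth implied by the doubling times
# on this slide (illustrative arithmetic, not measured data).
def growth_factor(months_elapsed, doubling_time_months):
    """Improvement factor after a given number of months."""
    return 2 ** (months_elapsed / doubling_time_months)

years = 5
months = years * 12
fiber = growth_factor(months, 9)      # optical fiber doubles every 9 months
storage = growth_factor(months, 12)   # data storage doubles every 12 months
silicon = growth_factor(months, 18)   # silicon chips double every 18 months

print(f"After {years} years: fiber {fiber:.0f}x, "
      f"storage {storage:.0f}x, silicon {silicon:.0f}x")
```

The gap compounds: after five years fiber capacity per dollar has grown roughly an order of magnitude more than silicon, which is the slide's argument for optics as the driver.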
Imagining a Fiber Optic Infrastructure Supporting
Interactive Visualization--SIGGRAPH 1989
“What we really have to do is eliminate distance between individuals who
want to interact with other people and with other computers.”
― Larry Smarr, Director
National Center for Supercomputing Applications, UIUC
AT&T & Sun
“Using satellite technology…demo of what it might be like to have high-speed fiber-optic links between advanced computers in two different geographic locations.”
― Al Gore, Senator
Chair, US Senate Subcommittee on Science, Technology and Space
http://sunsite.lanet.lv/ftp/sun-info/sunflash/1989/Aug/08.21.89.tele.video
The OptIPuter Project –
Removing Bandwidth as an Obstacle In Data Intensive Sciences
• NSF Large Information Technology Research Proposal
– UCSD and UIC Lead Campuses—Larry Smarr PI
– USC, UCI, SDSU, NW Partnering Campuses
• Industrial Partners: IBM, Sun, Telcordia/SAIC, Chiaro, Calient
• $13.5 Million Over Five Years
• Optical IP Streams From Lab Clusters to Large Data Objects
NIH Biomedical Informatics Research Network
NSF EarthScope
http://ncmir.ucsd.edu/gallery.html
siovizcenter.ucsd.edu/library/gallery/shoot1/index.shtml
Interactive Visual Analysis of Large Datasets:
Lake Tahoe Tidal Wave Threat Analysis
Graham Kent, Scripps Institution of Oceanography
3 Megapixel Panoram Display
Cal-(IT)2 Visualization Center at Scripps Institution of Oceanography
http://siovizcenter.ucsd.edu/library/gallery/shoot1/index.shtml
OptIPuter End User Building Blocks:
Scalable Compute + Storage + Viz Linux Clusters
• Cluster: 16–128 Nodes (Typically Two Intel Processors)
• Storage: 0.1–1 TB per Node
• Graphics: Nvidia Card per Node
• Visualization Displays: Desktop, Wall, Theatre, Tiled, VR
• Specialized Data Source/Sink Instruments
• All Nodes Have 1 or 10 GigE I/O
• Clusters Connected by Lambdas or Fibers

[Diagram: cluster nodes on a commodity GigE switch, uplinked over fibers or lambdas]
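At the high end of the ranges above, a single cluster is already a substantial resource; a back-of-the-envelope tally (assumed high-end values, for illustration):

```python
# Aggregates for one OptIPuter cluster at the high end of the
# slide's ranges (assumed values, for illustration only).
nodes = 128                 # slide: 16-128 nodes per cluster
storage_per_node_tb = 1.0   # slide: 0.1-1 TB per node
nic_gbps = 10               # slide: 1 or 10 GigE I/O per node

total_storage_tb = nodes * storage_per_node_tb
aggregate_io_gbps = nodes * nic_gbps

print(f"{total_storage_tb:.0f} TB storage, "
      f"{aggregate_io_gbps / 1000:.2f} Tb/s aggregate cluster I/O")
```

An aggregate I/O rate above 1 Tb/s is why the uplink must be lambdas or dedicated fiber rather than a shared campus network.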
OptIPuter is Studying the Best Application Usage
for Both Routed vs. Switched Lambdas
• OptIPuter Evaluating Both:
– Routers: Chiaro, Juniper, Cisco, Force10
– Optical Switches: Calient, Glimmerglass, BigBangWidth
• UCSD Focusing on Routing Initially
• UIC Focusing on Switching Initially
• Next Year Merge into Mixed Optical Fabric
[Diagram: spectrum of optical switching vs. routing approaches]
– 1x: OOO (MEMS) — Calient, Glimmerglass — Data Plane Switching (Layer 2)
– 10x: OEO — Cisco — Control Plane Routing (Layer 3)
– 100x: OEOEO — Chiaro — Routing
Source: Optical Switch Workshop, October 2002
OptIPuter Software Architecture
for Distributed Virtual Computers v1.1
[Layer diagram, top to bottom:]
OptIPuter Applications
DVC/Middleware: DVC #1, DVC #2, DVC #3; Visualization; Higher Level Grid Services; Security Models; Data Services (DWTP); Real-Time Objects; High-Speed Transport
Grid and Web Middleware (Globus/OGSA/WebServices/J2EE)
Layer 5: SABUL, RBUDP, Fast, GTP
Layer 4: XCP
Optical Signaling/Mgmt: λ-configuration, Net Management
Node Operating Systems
Physical Resources
Andrew Chien
OptIPuter Systems Software Architect
The Dedicated Optical Grid:
The UCSD OptIPuter Deployment
OptIPuter Campus-Scale Experimental Network
[Network diagram:]
• Chiaro Estara router: 6.4 Tbps backplane bandwidth, 20X the Juniper T320 (0.32 Tbps backplane bandwidth, uplink to CENIC)
• Sites on the dedicated fiber: SDSC, SDSC Annex, JSOE (Engineering), SOM (Medicine), Phys. Sci–Keck, Preuss High School, CRCA, 6th College, Collocation Node M, Earth Sciences (SIO)
• Distances: ½ mile to 2 miles; latency 0.01 ms
Forged a New Level of Campus Collaboration in Networking Infrastructure
Source: Phil Papadopoulos, SDSC; Greg Hidley, Cal-(IT)2
OptIPuter 2004 @ UCSD
Coupling Linux Clusters with High Resolution Visualization
OptIPuter Project Goal:
Scaling to 100 Million Pixels
• JuxtaView (UIC EVL) on PerspecTile LCD Wall
– Digital Montage Viewer
– 8000x3600 Pixel Resolution, ~30M Pixels
• Display Is Powered By:
– 16 PCs with Graphics Cards
– 2 Gigabit Networking per PC
NCMIR Brain Microscopy (2800x4000, 24 layers)
Source: Jason Leigh, EVL, UIC; USGS EROS
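The pixel counts quoted above check out with simple arithmetic; the 3 bytes/pixel frame-size figure below is an assumption (uncompressed 24-bit color), added for scale:

```python
# Checking the pixel counts quoted on this slide.
wall_w, wall_h = 8000, 3600       # PerspecTile LCD wall resolution
wall_pixels = wall_w * wall_h     # the slide's "~30M pixels"

# NCMIR brain microscopy: 2800x4000 per layer, 24 layers
microscopy_pixels = 2800 * 4000 * 24

# Assumed: uncompressed 24-bit color, 3 bytes per pixel
frame_mb = wall_pixels * 3 / 1e6

print(f"wall: {wall_pixels / 1e6:.1f} Mpixel, "
      f"one uncompressed frame ~{frame_mb:.0f} MB")
print(f"microscopy stack: {microscopy_pixels / 1e6:.1f} Mpixel")
```

A single uncompressed frame of the wall is tens of megabytes, which is why each of the 16 PCs gets its own multi-gigabit network feed.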
Multi-Latency OptIPuter Laboratory
National-Scale Experimental Network
“National Lambda Rail” Partnership Serves Very High-End Experimental and Research Applications
• 4 x 10Gb Wavelengths Initially
• Capable of 40 x 10Gb Wavelengths at Buildout
[Map: Chicago OptIPuter (StarLight: NU, UIC) to SoCal OptIPuter (USC, UCI, UCSD, SDSU); 2000 Miles, 10 ms = 1000x Campus Latency]
Source: John Silvester, Dave Reese, Tom West, CENIC
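The "1000x campus latency" figure, combined with a 10 Gb/s lambda, implies a large amount of data in flight at any instant, which is the motivation for the Layer 5 transports (SABUL, RBUDP, GTP) mentioned earlier. A sketch using the slide's numbers:

```python
# Why latency matters at national scale: bandwidth-delay product
# (numbers taken from the slide; illustrative only).
campus_latency_ms = 0.01     # ~2 miles of campus fiber
national_latency_ms = 10.0   # ~2000 miles, Chicago to SoCal

ratio = national_latency_ms / campus_latency_ms  # the slide's "1000x"

# Bytes in flight on one 10 Gb/s lambda at national latency
link_gbps = 10
bdp_bytes = link_gbps * 1e9 / 8 * (national_latency_ms / 1e3)

print(f"latency ratio: {ratio:.0f}x, "
      f"in-flight data: {bdp_bytes / 1e6:.1f} MB")
```

With 12.5 MB in flight per lambda, stock TCP's default windows stall long before filling the pipe, hence the experimental high-speed transports.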
Chicago Metro-Scale OptIPuter
Uses I-WIRE and OMNInet Fiber
• Optically linking EVL and NU using I-WIRE and OMNInet Fiber
– OMNInet is a 10GigE Metro-Scale Testbed
– I-WIRE is a $7,500,000 State of Illinois Initiative
[Diagram: 16 x 1 GigE links between Chicago and UIC]
Source: Tom DeFanti, EVL, UIC
An International-Scale OptIPuter is Operational over the
First Set of 76 International GE TransLight Lambdas
[Map labels: Northern Light, UKLight, CERN]
Source: Tom DeFanti, EVL, UIC
European lambdas to US
–8 GEs Amsterdam—Chicago
–8 GEs London—Chicago
Canadian lambdas to US
–8 GEs Chicago—Canada—NYC
–8 GEs Chicago—Canada—Seattle
US lambdas to Europe
–4 GEs Chicago—Amsterdam
–3 GEs Chicago—CERN
European lambdas
–8 GEs Amsterdam—CERN
–2 GEs Prague—Amsterdam
–2 GEs Stockholm—Amsterdam
–8 GEs London—Amsterdam
TransPAC lambda
–1 GE Chicago—Tokyo
IEEAF lambdas (blue)
–8 GEs NYC—Amsterdam
–8 GEs Seattle—Tokyo
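The link counts listed above can be tallied to confirm the 76 lambdas in the slide title:

```python
# Tallying the GE lambda counts listed on this slide: they sum to
# the 76 international TransLight lambdas named in the title.
lambda_counts = {
    "Amsterdam-Chicago": 8, "London-Chicago": 8,
    "Chicago-Canada-NYC": 8, "Chicago-Canada-Seattle": 8,
    "Chicago-Amsterdam": 4, "Chicago-CERN": 3,
    "Amsterdam-CERN": 8, "Prague-Amsterdam": 2,
    "Stockholm-Amsterdam": 2, "London-Amsterdam": 8,
    "Chicago-Tokyo": 1,
    "NYC-Amsterdam": 8, "Seattle-Tokyo": 8,
}
total = sum(lambda_counts.values())
print(f"total GE lambdas: {total}")  # -> 76
```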
The OptIPuter GeoWall2 at Supercomputing ‘03
See the National Center for Data Mining, Booth 2935, SC ‘03