What is the OptIPuter?


The OptIPuter and NLR
Tom DeFanti, Maxine Brown, Jason Leigh, Alan Verlo,
Linda Winkler, Joe Mambretti
Chicago
Larry Smarr, Mark Ellisman, Phil Papadopoulos, Greg Hidley
San Diego
Ron Johnson, Dave Richardson
Seattle
What is the OptIPuter?
• Optical networking, Internet Protocol,
computer storage, processing and
visualization technologies
• Tightly couples computational resources
over parallel optical networks using the IP
communication mechanism
• The OptIPuter exploits a new world in which
the central architectural element is optical
networking, not computers, creating
"supernetworks"
What is the OptIPuter?
• The goal of this new architecture is
to enable scientists who are
generating terabytes and petabytes
of data to interactively visualize,
analyze, and correlate their data from
multiple storage sites connected to
optical networks.
Why Now?
• e-Science requires OptIPuter capabilities
– Many 2D GigaPixel and 3D GigaZone data objects
– Cyberinfrastructure plans are being made; influence designs NOW!
• We can build on Global Grid Research
– Grid Middleware has much of what we need
– But Grid is built on a stochastic foundation
• “Best Effort” Internet means unpredictable latency
– Need determinism: predictable reservable lambdas
• Availability of Technology for Dedicated Light Pipes
– State, National, and International Dark Fiber Nets Being Turned-On
– Lots of Parallel High Bandwidth Clusters for Endpoints of Users
– Shakeout of Optical Switch Market
• 302 Companies, Both OptIPuter Anchors, Inexpensive (~$1000/port)
– Starlight/PacificWave Have Driven Global Lambda Connectivity
– Cost of Bandwidth not the bottleneck anymore
George Seweryniak
U.S. Department of Energy
• “Optical Networks are critical to the
Federal Agencies
– Need to closely work together with private
industry on development to deployment
– Federal agencies have stepped up to the
plate
• Current Science trends need them now
• No sign of abating requirements for big
pipes and dynamic provisioning”
10GE CAVEwave
on the National LambdaRail
EVL
Map Source: John Silvester, Dave Reese
Tom West, Ron Johnson
I-WIRE in Illinois connects EVL and NCSA to StarLight; CAVEwave connects StarLight
in Chicago to Seattle and then joins Pacific Wave to Cal-(IT)2 in San Diego and Irvine
(later the Bay Area and Los Angeles) for OptIPuter/GLIF experiments.
What’s a 10GE CAVEwave Cost?
• Less than a 1GE Internet connection
• About what a network engineer or full
professor costs, fully loaded
• Much less than the computers at each
end to keep it busy
• Much less than the router capacity
needed at each end to accept the 10GE
• Your institution needs to be a NLR
member (priceless)
Actual TransLight Experimental
Lit-Up Lambdas Today
[Map; labeled exchange points include NorthernLight, UKLight, Japan, CERN, PNWGP,
and the Manhattan Landing]
European lambdas to US (red)
–10Gb Amsterdam—Chicago
–10Gb London—Chicago
–10Gb Amsterdam—NYC
Canadian lambdas to US (white)
–30Gb Chicago—Canada—NYC
–30Gb Chicago—Canada—Seattle
US sublambdas to Europe (grey)
–6Gb Chicago—Amsterdam
Japan JGN II lambda to US (cyan)
–10Gb Chicago—Tokyo
European lambdas (yellow)
–10Gb Amsterdam—CERN
–2.5Gb Prague—Amsterdam
–2.5Gb Stockholm—Amsterdam
–10Gb London—Amsterdam
IEEAF lambdas (blue)
–10Gb NYC—Amsterdam
–10Gb Seattle—Tokyo
CAVEwave/PacificWave (purple)
–10Gb Chicago—Seattle
–10Gb Seattle—LA—San Diego
–10Gb Seattle—LA
OptIPuter Experiment #1
Wide-Area Vol-a-Tile and JuxtaView Applications
– Electronic Visualization Laboratory (EVL), University of Illinois at Chicago
– University of Amsterdam
JuxtaView displays ultra-high resolution 2D images,
such as USGS maps. It invokes LambdaRAM, which
pre-fetches portions of large datasets before an
application needs them and stores the data in the local
cluster's memory for display.
Vol-a-Tile dynamically retrieves, renders and displays
large volumetric datasets from remote storage. It
invokes OptiStore, which extracts relevant information
from raw volumetric datasets and produces visual
objects for display.
Aggressive pre-fetching and large bandwidth utilization can overcome network latency
that hinders interactive applications.
Datasets are stored on a cluster at University of Amsterdam, streamed on demand over
a 4Gbps transatlantic link, and displayed on EVL’s GeoWall2 visualization cluster.
www.evl.uic.edu/cavern/optiputer
30 Megapixel Viewport into a
10 GigaPixel Dataset
OptIPuter Experiment #2
Terabyte Data Juggling and DVC Framework over the OptIPuter Network
– Concurrent Systems Architecture Group (CSAG), UCSD
– Cal-(IT)2/JSOE, UCSD
Group Transport Protocol (GTP) is a transport protocol
that efficiently manages receiver contention likely to
arise in high-bandwidth networks. GTP achieves both
high transfer bandwidth for single flows and efficient
sharing and coordination among multiple convergent
flows, enabling efficient data fetching from multiple
distributed data sources. A multi-gigabyte SIO dataset
is moved across four UCSD OptIPuter node sites and
between UIC and Amsterdam.
Dynamic Virtual Computer (DVC) middleware ties together novel configurable optical
network capabilities with traditional Grid resources. Using an interactive GUI, a
prototype implementation of DVC shows dynamic resource aggregation – resource
discovery, selection and binding – as well as the running of a message-passing
application across these resources. Resources are assembled from several UCSD sites
and from University of Amsterdam.
www-csag.ucsd.edu/projects/Optiputer.html
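The receiver-side coordination GTP provides can be illustrated with a max-min fair division of a receiver's capacity among convergent flows. This is a simplified Python sketch of the idea, not the actual protocol logic: flows demanding less than their fair share are satisfied first, and leftover capacity is redistributed among the rest.

```python
def allocate_rates(receiver_capacity_gbps, flow_demands_gbps):
    """Max-min fair split of a receiver's capacity among convergent
    flows (a sketch of the coordination idea behind GTP only)."""
    alloc = {f: 0.0 for f in flow_demands_gbps}
    remaining = receiver_capacity_gbps
    unsatisfied = dict(flow_demands_gbps)
    while unsatisfied and remaining > 1e-9:
        share = remaining / len(unsatisfied)   # equal share this round
        done = []
        for flow, demand in unsatisfied.items():
            give = min(share, demand - alloc[flow])
            alloc[flow] += give
            remaining -= give
            if alloc[flow] >= demand - 1e-9:
                done.append(flow)              # demand fully met
        for flow in done:
            del unsatisfied[flow]
        if not done:                           # all capped at fair share
            break
    return alloc
```

For example, a receiver with 10 Gb of capacity facing demands of 2, 8 and 8 Gb grants 2, 4 and 4 Gb, so no sender overruns the shared endpoint.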
OptIPuter Experiment #3
Grid-Based Visualization Pipeline for OptIPuter Clusters
– Information Sciences Institute (ISI), University of Southern California
– Electronic Visualization Laboratory (EVL), UIC
The Grid Visualization Utility (GVU) facilitates the
construction of scalable visualization pipelines, based
on the Globus Toolkit. GVU is used to enable real-time
interactive viewing of high-resolution time-series
volume datasets.
Current efforts examine how load-balancing issues can
be alleviated by fine-grained decomposition and
distribution of datasets across clusters and other
distributed compute resources.
GVU currently supports interactive viewing of 3D
structures culled from large datasets on a single
display; Bolivia Earthquake data is used. Future work
will focus on tiled displays, such as GeoWall2, for
distributing the rendering load to handle the interactive
viewing of multiple, complex structures.
www.isi.edu/~thiebaux/gvu
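The fine-grained decomposition approach mentioned above can be illustrated by slicing a volume into bricks and assigning the bricks round-robin to rendering nodes. This is a Python sketch of the general idea under assumed brick and node counts; GVU's actual distribution scheme is not described in the source.

```python
import math

def decompose_volume(dims, brick, nodes):
    """Split a volume of size dims = (nx, ny, nz) into bricks of size
    `brick` and assign brick origins round-robin to `nodes` rendering
    nodes, so the per-node load stays balanced (sketch only)."""
    counts = [math.ceil(d / b) for d, b in zip(dims, brick)]
    assignment = {n: [] for n in range(nodes)}
    idx = 0
    for i in range(counts[0]):
        for j in range(counts[1]):
            for k in range(counts[2]):
                origin = (i * brick[0], j * brick[1], k * brick[2])
                assignment[idx % nodes].append(origin)
                idx += 1
    return assignment
```

A 256×256×128 volume bricked at 64³ yields 32 bricks, so four nodes each render eight, regardless of where in the volume the complex structures fall.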
OptIPuter Experiment #4
Trans-Pacific HDTV Feedback & Remote-Control Scenarios of
Remote Instrumentation
– National Center for Microscopy and Imaging Research (NCMIR) and Biomedical
Informatics Research Network (BIRN), UCSD
– Osaka University
– KDDI R&D Labs
NCMIR researchers demonstrate live streaming HDTV
from the world’s largest microscope in Osaka, Japan.
The video data is streamed to UIC/EVL and UCSD
while being controlled by project scientists in San
Diego.
High-quality video is essential for resolving useful
information, such as changes in gradients in a
high-noise, low-contrast environment. HDTV combined
with dedicated lambdas provides lower latency and
control of network jitter, especially important for these
large video streams. This step is the first in a
data-acquisition feedback loop for instrumentation
steering, control, computation and visualization.
http://ncmir.ucsd.edu
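The jitter-control requirement can be illustrated with a fixed-delay playout buffer, the standard technique for smoothing variable network delay in a video stream: frames are released in timestamp order a constant interval after capture. This is a generic Python sketch, not the implementation used in the demonstration.

```python
import heapq

class JitterBuffer:
    """Fixed-delay playout buffer: frames arriving with variable
    network delay are released in timestamp order `delay` seconds
    after capture, trading a small constant latency for smooth
    playback (generic sketch)."""

    def __init__(self, delay):
        self.delay = delay
        self.heap = []              # min-heap ordered by timestamp

    def push(self, timestamp, frame):
        heapq.heappush(self.heap, (timestamp, frame))

    def pop_ready(self, now):
        """Return, in order, every frame whose playout time has come."""
        out = []
        while self.heap and self.heap[0][0] + self.delay <= now:
            out.append(heapq.heappop(self.heap)[1])
        return out
```

On a dedicated lambda the delay variance is small, so the buffer (and hence end-to-end latency) can be kept short, which is what makes interactive instrument steering feasible.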
OptIPuter Experiment #5
Application-Controlled Light-Path Provisioning over Multi-Domain OptIPuter
Environments
– Electronic Visualization Laboratory (EVL), University of Illinois at Chicago
– Int’l. Center for Advanced Internet Research (iCAIR), Northwestern Univ.
– University of Amsterdam (UvA)
In this demonstration, end-to-end light paths are
provisioned across multiple domains, and then
local domain controllers are invoked to access and
set up optical switches, using the following tools:
• EVL’s Photonic Inter-domain Negotiator (PIN)
• EVL’s Photonic Data Controller (PDC)
• iCAIR’s Optical Dynamic Intelligent Network (ODIN) over Chicago optical metro OMNInet
• UvA’s Inter-domain Generic Authorization, Authentication, and Accounting (AAA) procedures
TeraVision streamed fractal animation
by Dan Sandin, EVL
Using EVL’s TeraVision application for capturing and
streaming high-resolution computer graphics over gigabit networks, an animation is
streamed, first between two domains (UvA to EVL) over multi-gigabit transoceanic
links, and then among three domains (UvA, NU via OMNInet, and EVL).
www.evl.uic.edu/research/res_project.php3?indi=217
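The multi-domain provisioning sequence above can be sketched as a chain of per-domain setup calls with rollback on failure. The controller interface below is hypothetical; PIN, ODIN and the AAA procedures are only named after, not modeled on, the real tools.

```python
class DomainController:
    """Stand-in for one domain's controller (e.g. PIN or ODIN);
    the real tools' interfaces are not shown in the source, so this
    API is hypothetical."""

    def __init__(self, name):
        self.name = name
        self.active = False

    def setup_segment(self, src_port, dst_port):
        # A real controller would program an optical switch here.
        self.active = True
        return True

    def teardown_segment(self):
        self.active = False

def provision_path(controllers, segments):
    """Ask each domain in turn to set up its segment of the end-to-end
    light path; if any domain refuses, tear down the segments already
    provisioned so no switch ports are left dangling."""
    done = []
    for ctrl, (src, dst) in zip(controllers, segments):
        if ctrl.setup_segment(src, dst):
            done.append(ctrl)
        else:
            for c in reversed(done):   # roll back in reverse order
                c.teardown_segment()
            return False
    return True
```

The end-to-end path only carries traffic once every domain in the chain has committed its segment, mirroring the demonstration's two- and three-domain cases.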
Multi-Domain Lightpaths
[Diagram: Photonic Interdomain Negotiator (PIN) instances at each domain coordinate
PDC, BOD/AAA, and ODIN/GMPLS controllers to link clusters on all-optical LANs at the
University of Illinois at Chicago, StarLight (Chicago), and NetherLight/University of
Amsterdam over an OC-192 link, plus the OMNInet all-optical MAN spanning Chicago and
Northwestern at Evanston.]
OptIPuter Experiment #6
JuxtaView, Vol-a-Tile and GVU at SIO and SIO Visual Objects Distribution
– Scripps Institution of Oceanography (SIO), UCSD
– SDSC, UCSD
– Cal-(IT)2/JSOE, UCSD
– Electronic Visualization Laboratory (EVL), University of Illinois at Chicago
– Information Sciences Institute (ISI), University of Southern California
This demonstration showcases
OptIPuter applications that display
high-resolution imagery (JuxtaView),
volume rendering (Vol-a-Tile and GVU)
and scene files composed of
heterogeneous geologic datasets (SIO
Visual Objects).
Here, SIO scientist Debi Kilb interacts
with IKONOS satellite imagery using
JuxtaView (on left panel) and a scene
file that combines the same imagery
with topography, bathymetry and seismic images (displayed on right panel). Both
datasets are fetched over the campus OptIPuter fiber from the UCSD storage cluster.
http://siovizcenter.ucsd.edu
OptIPuter Experiment #7
Online Brain Maps: Deposition, Distribution and Visualization of Large-Scale Brain
Maps in a Near-Real-Time Environment
– National Center for Microscopy and Imaging Research (NCMIR) and Biomedical
Informatics Research Network (BIRN), UCSD
– SDSC, UCSD
– Cal-(IT)2/JSOE, UCSD
NCMIR researchers demonstrate a system
within the Telescience Portal that allows
users to do biological studies involving
electron microscopic tomography by guiding
them through the process, from acquisition
to analysis.
GridFTP is used to distribute very large (>1Gb)
brain maps from the UCSD multi-photon microscope to other OptIPuter on-line sites in
Southern California (UCSD, UCI, SDSU, and USC). Data is sent from the OptIPuter’s
IBM storage cluster to local resources, such as the GeoWall2 or computational resources.
The goal is to be able to transfer data from an instrument to OptIPuter-connected
resources in real time, enabling researchers to steer the data acquisition process and
monitor progress, which can currently take as long as 22 days.
http://ncmir.ucsd.edu, https://telescience.ucsd.edu
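The striping idea behind GridFTP's parallel transfers can be sketched as fetching a large object in fixed-size chunks over several concurrent streams and reassembling them in order. The `read_chunk(offset, length)` callback below is a hypothetical stand-in for one stream's remote read, not the real protocol.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_fetch(read_chunk, total_size, chunk_size=1 << 20, streams=4):
    """Fetch a large object as fixed-size chunks over several
    concurrent streams and reassemble it in order (sketch of the
    GridFTP striping idea; `read_chunk` is hypothetical)."""
    offsets = range(0, total_size, chunk_size)
    with ThreadPoolExecutor(max_workers=streams) as pool:
        chunks = pool.map(
            lambda off: read_chunk(off, min(chunk_size, total_size - off)),
            offsets)
        return b"".join(chunks)   # map() preserves chunk order
```

Running multiple streams in parallel keeps a high-bandwidth link full even when any single stream is throttled, which is why striped transfers suit multi-gigabyte brain maps.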
The Crisis Response Room
of the Future
SHD Streaming TV – Immersive Virtual Reality – 100 Megapixel Displays
The Living Room of 2010?
The CineGrid Home Office and Game Room
Global Lambda Integrated Facility
GLIF World Map – December 2004
Predicted international Research & Education Network bandwidth, to be made available
for scheduled application and middleware research experiments by December 2004.
www.glif.is
Visualization by
Bob Patterson, NCSA.
Announcing…
iGrid
2oo5
THE GLOBAL LAMBDA INTEGRATED FACILITY
September 26-30, 2005
University of California, San Diego
California Institute for Telecommunications and Information Technology [Cal-(IT)2]
United States
Thank You!
• TransLight planning, research, collaborations, and outreach efforts
are made possible, in major part, by funding from:
– National Science Foundation (NSF) awards SCI-9980480, SCI-9730202, CNS-9802090, CNS-9871058, SCI-0225642, and CNS-0115809
– State of Illinois I-WIRE Program, and major UIC cost sharing
– Northwestern University for providing space, power, fiber,
engineering and management
– Pacific Wave, StarLight, National LambdaRail, CENIC, PNWGP,
CANARIE, SURFnet, UKERNA, and IEEAF for Lightpaths
– DoE/Argonne National Laboratory for StarLight and I-WIRE network
engineering and design