2003 KDD talk - Microsoft Research


Online Science
The World-Wide Telescope
as a Prototype For
the New Computational Science
Jim Gray
Microsoft Research
http://research.microsoft.com/~gray
Alex Szalay
Johns Hopkins University
http://tarkus.pha.jhu.edu/~szalay/
Note: most of the slides are “hidden”;
to view the entire presentation, look at it in PowerPoint.
Slides at http://research.microsoft.com/~gray/talks
1
Outline
1. Digression: Infinite storage means full employment for you and me
2. Computational-X evolves from simulation to include X-info: data analysis and visualization
3. The World Wide Telescope, an archetype for this trend, and what I have been doing
3
Infinite Storage Means
Full Employment for you and me
• The Terror Bytes are Here
– 1 TB costs 1k$ to buy
– 1 TB costs 300k$/y to own
• Management & curation are expensive
– Searching 1TB takes minutes or hours
• I am Petrified by Peta Bytes
• But… people can “afford” them, so we plumbers and you data miners have lots to do – Automate!
[Figure: byte-prefix scale from Kilo to Yotta; “We are here” marks today’s position]
4
How much information is there?
• Soon everything can be recorded and indexed
• Most bytes will never be seen by humans.
• Data summarization, trend detection, and anomaly detection are key technologies
[Figure: byte-prefix scale (Kilo … Yotta) with examples: A Book, A Photo, A Movie, All books (words), All Books MultiMedia, Everything Recorded!]
See Mike Lesk: How much information is there?
http://www.lesk.com/mlesk/ksg97/ksg.html
See Lyman & Varian: How much information?
http://www.sims.berkeley.edu/research/projects/how-much-info/
(Small prefixes: 10^-24 yocto, 10^-21 zepto, 10^-18 atto, 10^-15 femto, 10^-12 pico, 10^-9 nano, 10^-6 micro, 10^-3 milli)
5
First Disk 1956
• IBM 305 RAMAC
• 4 MB
• 50x24” disks
• 1200 rpm
• 100 ms access
• 35k$/y rent
• Included computer &
accounting software
(tubes not transistors)
7
Storage capacity beating Moore’s law
• Improvements:
– Capacity: 60%/y
– Bandwidth: 40%/y
– Access time: 16%/y
• 1,000 $/TB today
• 100 $/TB in 2007
• Moore’s law: 58.7%/year
• Disk TB growth: 112.3%/year since 1993
• Price decline: 50.7%/year since 1993
• Most (80%) data is personal (not enterprise); this will likely remain true.
[Chart: Disk TB Shipped per Year, 1988–2000, log scale (1E+3 to 1E+7 TB; “ExaByte” marked at 1E+6 TB); disk TB growth 112%/y vs. Moore’s Law 58.7%/y. Source: 1998 Disk Trend (Jim Porter), http://www.disktrend.com/pdf/portrpkg.pdf]
10
Disk Storage Cheaper Than Paper
• File Cabinet:
  Cabinet (4 drawer)      250$
  Paper (24,000 sheets)   250$
  Space (2x3 @ 10$/ft2)   180$
  Total                   700$
  0.03 $/sheet – 3 pennies per page
• Disk:
  disk (250 GB)           250$
  ASCII: 100 M pages – 2e-6 $/sheet (10,000x cheaper) – a micro-dollar per page
  Image: 1 M photos – 3e-4 $/photo (100x cheaper) – a milli-dollar per photo
• Store everything on disk
Note: Disk is 100x to 1000x cheaper than RAM
12
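The per-page arithmetic above is easy to reproduce; a quick sketch of it (the slide rounds the results slightly):

```python
# Cost per page: a 700$ file cabinet holding 24,000 sheets versus a
# 250$ 250 GB disk holding ~100 million ASCII pages or ~1 million photos.
cabinet_total, sheets = 700, 24_000
disk_price, ascii_pages, photos = 250, 100e6, 1e6

print(f"paper:  {cabinet_total / sheets:.2f} $/sheet")    # ~0.03 $/sheet
print(f"ASCII:  {disk_price / ascii_pages:.1e} $/page")   # ~2.5e-06 $/page
print(f"photo:  {disk_price / photos:.1e} $/photo")       # ~2.5e-04 $/photo
```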
Trying to fill a terabyte in a year
Item                           Items/TB   Items/day
300 KB JPEG                    3 M        9,800
1 MB Doc                       1 M        2,900
1 hour 256 kb/s MP3 audio      9 K        26
1 hour 1.5 Mbp/s MPEG video    290        0.8
15
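The table is just bytes-per-terabyte divided by bytes-per-item, and by 365 for the per-day column. A quick sketch reproducing the first three rows, assuming 1 TB = 2^40 bytes; the video row works the same way from its hourly size:

```python
# Items needed to fill a terabyte, and how many per day over a year.
TB, KB, MB = 2**40, 2**10, 2**20

items = {
    "300 KB JPEG": 300 * KB,
    "1 MB Doc": 1 * MB,
    "1 hour 256 kb/s MP3 audio": 256_000 / 8 * 3600,   # ~115 MB per hour
}

for name, size_bytes in items.items():
    per_tb = TB / size_bytes
    print(f"{name:28s} {per_tb:12,.0f}/TB {per_tb / 365:8,.0f}/day")
```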
Portable Computer: 2010?
• 100 Gips processor
• 1 GB RAM
• 1 TB disk
• 1 Gbps network
• “Some” of your software
Finding things is a data mining challenge
16
80% of data is personal / individual.
But what about the other 20%?
• Business
– Wal-Mart online: 1 PB and growing…
– Paradox: most “transaction” systems < 1 PB.
– Have to go to image/data monitoring for big data
• Government
– Government is the biggest business.
• Science
– LOTS of data.
22
Q: Where will the Data Come From?
A: Sensor Applications
• Earth Observation
– 15 PB by 2007
• Medical Images & Information + Health Monitoring
– Potential 1 GB/patient/y → 1 EB/y
• Video Monitoring
– ~1E8 video cameras @ 1E5 MBps
→ 10 TB/s → 100 EB/y
→ filtered???
• Airplane Engines
– 1 GB sensor data/flight,
– 100,000 engine hours/day
– 30PB/y
• Smart Dust: ?? EB/y
http://robotics.eecs.berkeley.edu/~pister/SmartDust/
http://www-bsac.eecs.berkeley.edu/~shollar/macro_motes/macromotes.html
23
Premise:
DataGrid Computing
• Store exabytes twice (for redundancy)
• Access them from anywhere
• Implies huge archive/data centers
• Supercomputer centers become super data centers
• Examples: Google, Yahoo!, Hotmail, BaBar, CERN, Fermilab, SDSC, …
28
Thesis
• Most new information is digital
(and old information is being digitized)
• A Computer Science Grand Challenge:
– Capture
– Organize
– Summarize
– Visualize
this information
• Optimize Human Attention as a resource
• Improve information quality
29
Outline
1. Digression: Infinite storage
means full employment for you and me
2. Computational-X (  X ) evolves
from simulation
to include X-info (  X ):
data analysis and visualization
3. The World Wide Telescope,
an archetype for this trend and
what I have been doing
31
The Evolution of Science
• Observational Science
– Scientist gathers data by direct observation
– Scientist analyzes data
• Analytical Science
– Scientist builds analytical model
– Makes predictions.
• Computational Science
– Simulate analytical model
– Validate model and make predictions
• Data Exploration Science
– Data captured by instruments or generated by simulator
– Processed by software
– Placed in a database / files
– Scientist analyzes database / files
32
Information Avalanche
• Both
– better observational instruments and
– better simulations
are producing a data avalanche
• Examples
– Turbulence: 100 TB simulation, then mine the information
– BaBar: grows 1 TB/day
(2/3 simulation information, 1/3 observational information)
– CERN: LHC will generate 1 GB/s, 10 PB/y
– VLBA (NRAO) generates 1 GB/s today
– NCBI: “only ½ TB” but doubling each year; very rich dataset.
– Pixar: 100 TB/movie
(Image courtesy of C. Meneveau & A. Szalay @ JHU)
33
Computational Science Evolves
• Historically, Computational Science = simulation.
• New emphasis on informatics:
– Capturing,
– Organizing,
– Summarizing,
– Analyzing,
– Visualizing
• Largely driven by observational science, but also needed by simulations.
• Too soon to say if comp-X and X-info will unify or compete.
[Images: BaBar, Stanford; P&E Gene Sequencer (from http://www.genome.uci.edu/); Space Telescope]
34
What X-info Needs from Us (CS)
(not drawn to scale)
[Diagram: Scientists (science data & questions) – Tools for question & answer and visualization – Plumbers (database to store data and execute queries) – Miners (data mining algorithms)]
36
Next-Generation Data Analysis
• Looking for
– Needles in haystacks – the Higgs particle
– Haystacks: dark matter, dark energy
• Needles are easier than haystacks
• Global statistics have poor scaling
– Correlation functions are N^2, likelihood techniques N^3
• As data and computers grow at the same rate, we can only keep up with N log N
• A way out?
– Discard notion of optimal (data is fuzzy, answers are approximate)
– Don’t assume infinite computational resources or memory
• Requires combination of statistics & computer science
37
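A small sketch of why only roughly N log N algorithms keep pace when data volume and computing power double together (illustrative numbers, not from the slide):

```python
# Relative running time when the dataset doubles each generation and the
# available computing also doubles: N^2 and N^3 methods fall behind,
# N log N roughly keeps up.
import math

N0 = 1e6
for gen in range(4):
    N, compute = N0 * 2**gen, 2**gen
    n2   = (N / N0) ** 2 / compute
    n3   = (N / N0) ** 3 / compute
    nlog = (N * math.log(N)) / (N0 * math.log(N0)) / compute
    print(f"gen {gen}: N^2 x{n2:5.1f}   N^3 x{n3:6.1f}   N logN x{nlog:4.2f}")
```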
Organization & Algorithms
• Use of clever data structures (trees, cubes):
– Up-front creation cost, but only N log N access cost
– Large speedup during the analysis
– Tree-codes for correlations (A. Moore et al 2001)
– Data Cubes for OLAP (all vendors)
• Fast, approximate heuristic algorithms
– No need to be more accurate than data variance
– Fast CMB analysis by Szapudi et al (2001)
• N log N instead of N^3 => 1 day instead of 10 million years
• Take cost of computation into account
– Controlled level of accuracy
– Best result in a given time, given our computing resources
• Use parallelism
– Many disks
– Many CPUs
38
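To make the tree idea concrete, here is a minimal sketch of neighbor counting with a k-d tree, with SciPy's cKDTree standing in for the tree-codes and spatial indices mentioned above and random points standing in for survey objects:

```python
# Count close pairs among N points: a k-d tree does this in roughly
# O(N log N) build + query time, versus O(N^2) for the naive double loop.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(100_000, 2))   # mock 2-D positions
r = 0.001                                           # separation of interest

tree = cKDTree(points)
# count_neighbors includes self-pairs and counts each pair twice
pair_count = (tree.count_neighbors(tree, r) - len(points)) // 2
print("pairs closer than r:", pair_count)
```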
Analysis and Databases
• Much statistical analysis deals with
– Creating uniform samples
– Data filtering
– Assembling relevant subsets
– Estimating completeness
– Censoring bad data
– Counting and building histograms
– Generating Monte-Carlo subsets
– Likelihood calculations
– Hypothesis testing
• Traditionally these are performed on files
• Most of these tasks are much better done inside a DB
• Bring Mohamed to the mountain, not the mountain to him
39
Data Access is hitting a wall
FTP and GREP are not adequate
• You can GREP 1 MB in a second
• You can GREP 1 GB in a minute
• You can GREP 1 TB in 2 days
• You can GREP 1 PB in 3 years.
• You can FTP 1 MB in 1 sec
• You can FTP 1 GB / min (= 1 $/GB)
• … 2 days and 1K$
• … 3 years and 1M$
• Oh!, and 1 PB is ~5,000 disks
• At some point you need
– indices to limit search
– parallel data search and analysis
• This is where databases can help
40
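The GREP figures above correspond to an effective sequential-scan rate on the order of 10 MB/s (roughly one disk of that era, an assumed number); the arithmetic is straightforward:

```python
# Sequential-scan ("GREP") time at an assumed effective rate of ~10 MB/s.
# This reproduces the slide's orders of magnitude: minutes for a GB,
# days for a TB, years for a PB.
RATE = 10e6                      # bytes/second (assumption)
DAY, YEAR = 86_400, 365 * 86_400

for label, size in [("1 GB", 1e9), ("1 TB", 1e12), ("1 PB", 1e15)]:
    t = size / RATE
    print(f"GREP {label}: {t:12,.0f} s = {t/DAY:8,.1f} days = {t/YEAR:6,.2f} years")
```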
Smart Data (active databases)
• If there is too much data to move around,
take the analysis to the data!
• Do all data manipulations at database
– Build custom procedures and functions in the database
• Automatic parallelism guaranteed
• Easy to build in custom functionality
– Databases & procedures being unified
– Examples: temporal and spatial indexing, pixel processing
• Easy to reorganize the data
– Multiple views, each optimal for certain types of analyses
– Building hierarchical summaries is trivial
• Scalable to Petabyte datasets
41
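A toy sketch of the idea, with SQLite standing in for a real archive database and made-up table and column names: the custom function runs inside the database engine, so only the qualifying rows come back to the client.

```python
# "Take the analysis to the data": register a custom function in the
# database and use it in the WHERE clause, instead of shipping all rows
# to the client and filtering there.
import math
import sqlite3

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Angular separation of two sky positions, in degrees."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    c = (math.sin(dec1) * math.sin(dec2)
         + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE objects (objId INTEGER, ra REAL, dec REAL)")
db.executemany("INSERT INTO objects VALUES (?, ?, ?)",
               [(1, 181.30, -0.76), (2, 181.31, -0.75), (3, 185.00, 2.00)])
db.create_function("ang_sep_deg", 4, ang_sep_deg)

rows = db.execute("""
    SELECT objId FROM objects
    WHERE ang_sep_deg(ra, dec, 181.30, -0.76) < 0.1
""").fetchall()
print(rows)   # only the objects within 0.1 degrees of the search position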
Challenge:
Make Data Publication & Access Easy
• Augment FTP with data query:
Return intelligent data subsets
• Make it easy to
– Publish: Record structured data
– Find:
• Find data anywhere in the network
• Get the subset you need
– Explore datasets interactively
• Realistic goal:
– Make it as easy as
publishing/reading web sites today.
52
Data Federations of Web Services
• Massive datasets live near their owners:
– Near the instrument’s software pipeline
– Near the applications
– Near data knowledge and curation
– Super Computer centers become Super Data Centers
• Each archive publishes a web service
– Schema: documents the data
– Methods on objects (queries)
• Scientists get “personalized” extracts
• Uniform access to multiple archives
– A common global schema → Federation
• Challenge:
– What is the object model for your science?
54
Web Services: The Key?
• Web SERVER:
– Given a URL + parameters
– Returns a web page (often dynamic)
• Web SERVICE:
– Given an XML document (SOAP msg)
– Returns an XML document
– Tools make this look like an RPC.
• F(x,y,z) returns (u, v, w)
– Distributed objects for the web.
– + naming, discovery, security, …
• Internet-scale distributed computing
[Diagram: your program → Web Server (returns a web page); your program → Web Service (returns XML data into your address space)]
55
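A minimal sketch of the pattern: POST an XML document, get an XML document back, and hide the plumbing behind an ordinary function so it feels like F(x,y,z) returning (u,v,w). The endpoint URL and message format here are made up for illustration, not a real VO interface.

```python
# Web-service call wrapped to look like a local function: XML in, XML out.
import urllib.request
import xml.etree.ElementTree as ET

def call_service(url: str, request_xml: str) -> ET.Element:
    req = urllib.request.Request(
        url,
        data=request_xml.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as resp:
        return ET.fromstring(resp.read())          # the reply is XML too

def cone_search(ra: float, dec: float, radius: float) -> ET.Element:
    """Hypothetical F(ra, dec, radius) -> XML result set."""
    body = (f"<coneSearch><ra>{ra}</ra><dec>{dec}</dec>"
            f"<radius>{radius}</radius></coneSearch>")
    return call_service("http://example.org/skynode", body)
```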
Grid and Web Services Synergy
• I believe the Grid will be many web services
• IETF standards provide
– Naming
– Authorization / Security / Privacy
– Distributed Objects: discovery, definition, invocation, object model
– Higher-level services: workflow, transactions, DB, …
• Synergy: commercial Internet & Grid tools
• Each science can now define its object models
56
Outline
1. Digression: Infinite storage means full employment for you and me
2. Computational-X evolves from simulation to include X-info: data analysis and visualization
3. The World Wide Telescope, an archetype for this trend, and what I have been doing
57
World Wide Telescope
Virtual Observatory
http://www.astro.caltech.edu/nvoconf/
http://www.voforum.org/
• Premise: Most data is (or could be) online
• So, the Internet is the world’s best telescope:
– It has data on every part of the sky
– In every measured spectral band: optical, x-ray, radio, …
– As deep as the best instruments (of 2 years ago).
– It is up when you are up, and the “seeing” is always great
(no working at night, no clouds, no moons, no …).
– It’s a smart telescope: it links objects and data to the literature on them.
58
Why Astronomy Data?
• It has no commercial value
– No privacy concerns
– Can freely share results with others
– Great for experimenting with algorithms
• It is real and well documented
– High-dimensional data (with confidence intervals)
– Spatial data
– Temporal data
• Many different instruments from many different places and many different times
• Federation is a goal
• There is a lot of it (petabytes)
• Great sandbox for data mining algorithms
– Can share cross-company
– University researchers
• Great way to teach both Astronomy and Computational Science
[Images of one sky region across wavelengths: ROSAT ~keV, DSS optical, 2MASS 2μm, IRAS 25μm and 100μm, GB 6cm, NVSS 20cm, WENSS 92cm]
59
Making Discoveries
• Where are discoveries made?
– At the edges and boundaries
– Theory interacts with observation, or
– New instrument, or
– Going deeper, collecting more data, using more colors …
• Metcalfe’s law
– Utility of computer networks grows as the number of possible connections: O(N^2)
• Szalay’s data law
– Federation of N archives has utility O(N^2)
– Possibilities for new discoveries grow as O(N^2)
• Current sky surveys have proven this
– Very early discoveries from SDSS, 2MASS, DPOSS
• Hence the desire to federate science archives
– Allow easy cross-comparison.
60
SkyServer
SkyServer.SDSS.org
or Skyserver.Pha.Jhu.edu/DR1/
• Sloan Digital Sky Survey data: pixels + data mining
• About 400 attributes per “object”
• Spectrograms for 1% of objects
• Demo: pixel space, record space, set space, teaching
61
What You Just Saw
• Showed Desktop SkyServer
– 1 GB data, web server
– Code & data is public; download from my homepage
• Did not show Query log (it’s public)
• Did not show Weblog (it’s public)
• We have 1 GB, 30 GB, 1 TB versions
• 10 TB is coming (2007).
62
Image Web Service
Images & annotation from DB
63
SkyQuery (http://skyquery.net/)
• Distributed Query tool using a set of web services
• Four astronomy archives from
Pasadena, Chicago, Baltimore, Cambridge (England).
• Feasibility study, built in 6 weeks
– Tanu Malik (JHU CS grad student)
– Tamas Budavari (JHU astro postdoc)
– With help from Szalay, Thakar, Gray
• Implemented in C# and .NET
• Allows queries like:
SELECT o.objId, o.r, o.type, t.objId
FROM SDSS:PhotoPrimary o,
TWOMASS:PhotoPrimary t
WHERE XMATCH(o,t)<3.5
AND AREA(181.3,-0.76,6.5)
AND o.type=3 and (o.I - t.m_j)>2
64
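XMATCH(o,t) < 3.5 asks for objects whose SDSS and 2MASS positions agree to within 3.5 arcseconds. A local sketch of that test on two tiny made-up catalogs (the portal actually plans and runs the match across the distributed SkyNodes):

```python
# Positional cross-match within 3.5 arcseconds, done with a k-d tree on
# unit vectors (angular tolerance converted to a chord length).
import numpy as np
from scipy.spatial import cKDTree

def unit_vectors(ra_deg, dec_deg):
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.column_stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)])

sdss    = np.array([[181.30, -0.76], [181.40, -0.70]])      # made-up (ra, dec)
twomass = np.array([[181.3001, -0.7600], [182.00, 0.00]])

tol_rad = np.radians(3.5 / 3600)          # 3.5 arcsec
chord   = 2 * np.sin(tol_rad / 2)         # same tolerance as a 3-D chord

tree = cKDTree(unit_vectors(twomass[:, 0], twomass[:, 1]))
dist, idx = tree.query(unit_vectors(sdss[:, 0], sdss[:, 1]),
                       distance_upper_bound=chord)
matches = [(i, int(j)) for i, (d, j) in enumerate(zip(dist, idx)) if np.isfinite(d)]
print(matches)                            # [(0, 0)] – SDSS row 0 matches 2MASS row 0
```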
SkyQuery Structure
• Each SkyNode publishes
– Schema Web Service
– Database Web Service
• Portal
– Plans query (2-phase)
– Integrates answers
– Is itself a web service
[Diagram: SkyQuery Portal federating SkyNodes – SDSS, INT, FIRST, 2MASS – plus an Image Cutout service]
65
Recent Events With SkyQuery
• Many others plan to join federation
• Adding a MyDB feature to the portal
– You can create a small (few GB) DB at portal
– You can do your analysis there
(moving Mohamed to the mountain).
• Writing more detailed OpenSkyQuery Spec
http://skyservice.pha.jhu.edu/develop/vo/adql/
• Using it as a test vehicle of OGSA.
66
Outline
1. Digression: Infinite storage means full employment for you and me
2. Computational-X evolves from simulation to include X-info: data analysis and visualization
3. The World Wide Telescope, an archetype for this trend, and what I have been doing
67
Call to Action
• If you do data visualization: we need you (and we know it).
• If you do databases: here is some data you can practice on.
• If you do distributed systems: here is a federation you can practice on.
• If you do data mining: here is a dataset to test your algorithms.
• If you do astronomy educational outreach: here is a tool for you.
78
SkyServer references
http://SkyServer.SDSS.org/
http://SkyServer.Pha.Jhu.edu/DR1/
http://research.microsoft.com/pubs/
http://research.microsoft.com/Gray/SDSS/ (download personal SkyServer)
• Data Mining the SDSS SkyServer Database
Gray; Kunszt; Slutz; Szalay; Thakar; Vandenberg; Stoughton. Jan. 2002. http://arxiv.org/abs/cs.DB/0202014
• SkyServer – Public Access to Sloan Digital Sky Server Data
Gray; Szalay; Thakar; Kunszt; Malik; Raddick; Stoughton; Vandenberg. November 2001. 11 p.: Word 1.46 Mbytes, PDF 456 Kbytes
• The World-Wide Telescope
Gray; Szalay. Science, August 2001. 6 p.: Word 684 Kbytes, PDF 84 Kbytes
• Designing and Mining Multi-Terabyte Astronomy Archives
Brunner; Gray; Kunszt; Slutz; Szalay; Thakar. June 1999. 8 p.: Word 448 Kbytes, PDF 391 Kbytes
• SkyQuery: http://SkyQuery.net/
79