Query Processing, Resource Management and Approximation in a Data Stream Management System
CS 765
1. The Age of Infinite Storage has begun
Many of us have enough money in our pockets right now to buy all
the storage we will be able to fill for the next 5 years.
So having the storage capacity is no longer a problem.
Managing it is a problem (especially when the volume gets large).
How much data is there?
Terabytes (TBs) are here
1 TB costs < $1k to buy
1 TB may cost ~ $300k/year to own
Management and curation are the expensive part
Searching 1 TB takes hours
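A rough back-of-the-envelope check of that last bullet (the ~100 MB/s sequential disk-scan rate is my assumption, not a figure from the slides):

```python
# Back-of-the-envelope: how long does a brute-force scan of 1 TB take?
TB = 10**12          # bytes
scan_rate = 100e6    # assumed sequential read rate: ~100 MB/s for one disk

seconds = TB / scan_rate
print(f"{seconds:,.0f} s  ~=  {seconds / 3600:.1f} hours")  # 10,000 s ~= 2.8 hours
```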
I’m Terrified by TeraBytes (we are here)
I’m Petrified by PetaBytes
I’m completely Exafied by ExaBytes
I’m too old to ever be Zettafied by ZettaBytes, but you may be in your lifetime.
You may be Yottafied by YottaBytes.
You may not be Googified by GoogiBytes, but the next generation may be?

Kilo   10^3
Mega   10^6
Giga   10^9
Tera   10^12
Peta   10^15
Exa    10^18
Zetta  10^21
Yotta  10^24
...
Googi  10^100
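As a small illustration of the scale table, a sketch that formats raw byte counts using these decimal prefixes (the function name human_bytes is mine; "googi" is the slide's joke name, not a standard prefix, so it is left out):

```python
# Format a byte count using the decimal prefixes from the table above.
PREFIXES = [("yotta", 10**24), ("zetta", 10**21), ("exa", 10**18),
            ("peta", 10**15), ("tera", 10**12), ("giga", 10**9),
            ("mega", 10**6), ("kilo", 10**3)]

def human_bytes(n: int) -> str:
    for name, size in PREFIXES:          # try the largest prefix first
        if n >= size:
            return f"{n / size:.1f} {name}bytes"
    return f"{n} bytes"

print(human_bytes(3 * 10**13))   # 30.0 terabytes
print(human_bytes(5 * 10**20))   # 500.0 exabytes
```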
How much information is there?
Soon everything can be recorded and indexed. Most of it will never be seen by humans. Data summarization, trend detection, anomaly detection, and data mining are key technologies.
[Scale figure, Kilo through Yotta: a book ≈ Kilo–Mega; a photo ≈ Mega; a movie ≈ Giga; all books (words) ≈ Tera; all books (multimedia) ≈ Peta; everything recorded ≈ Exa]
(Going the other way: 10^-3 milli, 10^-6 micro, 10^-9 nano, 10^-12 pico, 10^-15 femto, 10^-18 atto, 10^-21 zepto, 10^-24 yocto.)
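The slide names anomaly detection as one key technology for data nobody will ever look at. Purely as an illustration, here is a minimal sliding-window z-score detector; the function name, window size, and threshold are all my own choices, not from the slides:

```python
import statistics

def zscore_anomalies(stream, window=20, threshold=3.0):
    """Flag values more than `threshold` std-devs from a sliding-window mean."""
    history = []
    for i, x in enumerate(stream):
        if len(history) >= window:
            mu = statistics.mean(history)
            sigma = statistics.stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                yield i, x                   # (position, anomalous value)
        history.append(x)
        history = history[-window:]          # keep only the last `window` values

readings = [10, 11, 9, 10, 12, 10, 11, 9, 10, 11] * 3 + [95]  # one obvious outlier
print(list(zscore_anomalies(readings)))      # [(30, 95)]
```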
First Disk, in 1956
IBM 305 RAMAC
4 MB
50 disks, 24 inches in diameter
1,200 RPM (revolutions per minute)
100 millisecond (ms) access time
$35k/year to rent
Included computer & accounting software (tubes, not transistors)
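A rough cost-per-megabyte comparison between the RAMAC rental price above and the ~$1k/TB purchase price from the earlier terabyte slide (rent vs. one-time purchase, so treat it only as an order-of-magnitude contrast):

```python
# RAMAC (1956): 4 MB for $35k/year of rent  vs.  today: 1 TB for < $1k to buy.
ramac_per_mb = 35_000 / 4            # ~ $8,750 per MB per year
modern_per_mb = 1_000 / 1_000_000    # 1 TB ~ 10^6 MB, so ~ $0.001 per MB

print(f"RAMAC : ${ramac_per_mb:,.0f} per MB/year")
print(f"Today : ${modern_per_mb:.4f} per MB (one-time)")
print(f"Ratio : ~{ramac_per_mb / modern_per_mb:,.0f}x")   # ~8,750,000x
```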
Ten years later: 30 MB
[Photo: the drive unit, about 1.6 meters tall]
Disk Evolution
[Chart: disk capacity growing over time on a log scale, with gridlines at Kilo, Mega, Giga, Tera, Peta, Exa, Zetta, Yotta]
Memex
As We May Think, Vannevar Bush, 1945
“A memex is a device in which an
individual stores all his books, records,
and communications, and which is
mechanized so that it may be consulted
with exceeding speed and flexibility”
“yet if the user inserted 5000 pages of
material a day it would take him
hundreds of years to fill the repository,
so that he can enter material freely”
Can you fill a terabyte in a year?

Item                                 Items/TB   Items/day
a 300 KB JPEG image                  3M         9,800
a 1 MB document                      1M         2,900
a 1 hour, 256 kb/s MP3 audio file    9K         26
a 1 hour MPEG video                  290        0.8
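The two right-hand columns follow from simple division: items/TB = 10^12 bytes divided by the item size, and items/day = items/TB spread over a 365-day year. A quick sketch reproducing them (the MPEG video size, ~3.4 GB, is inferred from the table itself; the results match the slide's figures to within rounding):

```python
# Reproduce the table: how many items fit in 1 TB, and how many per day fill it in a year.
TB = 10**12
items = {                                       # item name -> size in bytes
    "300 KB JPEG image":   300 * 10**3,
    "1 MB document":       1 * 10**6,
    "1 hr 256 kb/s MP3":   256_000 // 8 * 3600, # bits/s -> bytes over 1 hour, ~115 MB
    "1 hr MPEG video":     TB // 290,           # inferred from the table: ~3.4 GB
}
for name, size in items.items():
    per_tb = TB / size
    print(f"{name:20s} {per_tb:>12,.0f}/TB {per_tb / 365:>10,.1f}/day")
```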
On a Personal Terabyte, How Will We Find Anything?
Need queries, indexing, data mining, scalability, replication…
If you don’t use a DBMS, you will end up implementing one of your own!
The need for data mining and machine learning is more important than ever.
Of the digital data in existence today, 80% is personal/individual and 20% is corporate/governmental.
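To make the "use a DBMS" point concrete, a minimal sketch of a personal-file catalog in SQLite; the schema, file names, and index are invented for illustration:

```python
import sqlite3

# A tiny personal-archive catalog: without the index, every lookup is a full
# scan (the "searching 1 TB takes hours" problem in miniature).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE files (
                path  TEXT PRIMARY KEY,
                kind  TEXT,            -- 'photo', 'doc', 'audio', ...
                bytes INTEGER,
                taken TEXT)""")
db.execute("CREATE INDEX idx_kind_taken ON files (kind, taken)")

db.executemany("INSERT INTO files VALUES (?, ?, ?, ?)", [
    ("/photos/2004/beach.jpg", "photo",     300_000, "2004-07-12"),
    ("/docs/thesis.pdf",       "doc",     1_000_000, "2005-03-01"),
    ("/music/track01.mp3",     "audio", 110_000_000, "2004-11-20"),
])

# Declarative query; the (kind, taken) index avoids scanning every row.
for row in db.execute("""SELECT path, bytes FROM files
                         WHERE kind = 'photo' AND taken >= '2004-01-01'"""):
    print(row)
```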
We’re awash with data!
Network data: 10 exabytes (~10^19 bytes), growing toward 10 zettabytes (~10^22 bytes)
WWW (and other text collections): ~10^16 bytes
Sensor data (including micro- and nano-sensor networks): 15 petabytes
National Virtual Observatory (aggregated astronomical data): ~10^14 bytes
US EROS Data Center (near Sioux Falls, SD), Earth Observing System archives of remotely sensed satellite and aerial imagery: 100 terabytes, with 10 yottabytes (~10^25 bytes) projected by 2020
Genomic/Proteomic/Metabolomic data (microarrays, genechips, genome sequences): 10 gazillabytes (~10^28 bytes?) by 2030
Stock market prediction data (prices + all the above?): 10 supragazillabytes (~10^31 bytes?) by 2040
(I made up these names! Projected data sizes are overrunning our ability to name their orders of magnitude.)
Useful information must be teased out of these large volumes of raw data.
AND these are just some of the corporate/governmental fifth of the data; the other four fifths are personal!
Parkinson’s Law (for data)
Data expands to fill the available storage.
Disk-storage version of Moore’s Law: available storage doubles every 9 months!
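A quick check of what that rate compounds to (a small sketch, assuming exponential growth at exactly one doubling per 9 months):

```python
# Capacity growth if storage doubles every 9 months.
for m in [12, 24, 60]:
    factor = 2 ** (m / 9)
    print(f"after {m:>2} months: x{factor:,.0f}")
# after 12 months: x3    (2^(12/9) ~ 2.52)
# after 24 months: x6    (2^(24/9) ~ 6.35)
# after 60 months: x102  (2^(60/9) ~ 101.6)
```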
How do we get the information we need from the massive volumes of data we will have?
Querying (for the information we know is there)
Data mining (for answers to questions we don’t know how to ask precisely)
Moore’s Law with respect to processor performance (doubling every x months) seems to be over. Note that the processors we find in our computers today are the same as (or less powerful than) the ones we found a few years ago, because that technology seems to have reached a limit (miniaturization). The direction now is to put multiple processors on the same chip or die (e.g., an Intel Nehalem chip provides 16 or more hardware threads) and to use other types of processors, such as general-purpose graphics processing units (GPGPUs), to increase performance. Main memory sizes are shooting up. What does that mean for database systems?
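One consequence, sketched below: query work must be spread across cores rather than waiting for faster ones. This is a generic parallel scan-and-aggregate (a selection plus a sum, split across worker processes), a minimal illustration and not any particular DBMS's implementation:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Scan one partition: apply a filter (predicate) and aggregate locally."""
    return sum(x for x in chunk if x % 2 == 0)   # e.g., WHERE x is even

if __name__ == "__main__":
    data = list(range(1_000_000))                # the "table"
    n = 4                                        # pretend we have 4 cores
    chunks = [data[i::n] for i in range(n)]      # partition the rows
    with Pool(n) as pool:
        total = sum(pool.map(partial_sum, chunks))  # combine partial aggregates
    print(total)                                 # 249,999,500,000: same as a serial scan
```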
Thank you.