Big Data and Clouds
May 7 2013
Geoffrey Fox
[email protected]
http://www.infomall.org http://www.futuregrid.org
School of Informatics and Computing
Digital Science Center
Indiana University Bloomington
Abstract
• We explore the principle that much of “the future” will be characterized by “Using Clouds running Data Analytics processing Big Data to solve problems in X-Informatics”. Applications (values of X) already explicitly include Astronomy, Biology, Biomedicine, Business, Chemistry, Crisis, Energy, Environment, Finance, Health, Intelligence, Lifestyle, Marketing, Medicine, Pathology, Policy, Radar, Security, Sensor, Social, Sustainability, Wealth and Wellness, with more fields defined implicitly. We discuss the implications of this concept for education and research. Education requires new curricula – generically called data science – which will be hugely popular due to the many millions of jobs opening up both in “core technology” and within applications, where of course most of the opportunities lie. We discuss the possibility of using MOOCs to jumpstart the field. On the research side, big data (i.e. large applications) requires big (i.e. scalable) algorithms on big infrastructure running robust, convenient programming environments. We discuss clustering and information visualization using dimension reduction as examples of scalable algorithms. We compare the Message Passing Interface (MPI) and extensions of MapReduce as the core technology to execute data analytics.
• We mention FutureGrid and a software defined Computing Testbed as a Service
Big Data Ecosystem in One Sentence
Use Clouds running Data Analytics Collaboratively processing
Big Data to solve problems in X-Informatics (or e-X)
X = Astronomy, Biology, Biomedicine, Business, Chemistry, Climate, Crisis, Earth
Science, Energy, Environment, Finance, Health, Intelligence, Lifestyle, Marketing,
Medicine, Pathology, Policy, Radar, Security, Sensor, Social, Sustainability, Wealth and
Wellness with more fields (physics) defined implicitly
Spans Industry and Science (research)
Education: Data Science see recent New York Times articles
http://datascience101.wordpress.com/2013/04/13/new-york-times-data-science-articles/
X-Informatics Class http://www.infomall.org/X-InformaticsSpring2013/
Big data MOOC http://x-informatics.appspot.com/preview
Social Informatics
Issues of Importance
• Economic Imperative: There are a lot of data and a lot of jobs
• Computing Model: Industry adopted clouds which are attractive for
data analytics
• Research Model: 4th Paradigm; from Theory to Data-driven science?
• Confusion in a new-old field: lack of academic consensus on several aspects of data intensive computing, from storage to algorithms to processing and education
• Progress in Data Intensive Programming Models
• Progress in Academic (open source) clouds
• Progress in scalable robust Algorithms: new data need better
algorithms?
• Progress in Data Science Education: opportunities at universities
Economic Imperative
There are a lot of data and a lot of jobs
Data Deluge
Some Trends
The Data Deluge is a clear trend from Commercial (Amazon, e-commerce), Community (Facebook, Search) and Scientific applications
Lightweight clients from smartphones and tablets to sensors
Multicore is reawakening parallel computing
Exascale initiatives will continue the drive to the high end, with a simulation orientation
Clouds offer cheaper, greener, easier-to-use IT for (some) applications
New jobs associated with new curricula
Clouds as a distributed system (classic CS courses)
Data Analytics (important theme in academia and industry)
Network/Web Science
Some Data sizes
~40 × 10⁹ Web pages at ~300 kilobytes each = 10 Petabytes
YouTube: 48 hours of video uploaded per minute; in 2 months in 2010 it uploaded more than the total of NBC, ABC and CBS; ~2.5 petabytes per year uploaded?
LHC 15 petabytes per year
Radiology 69 petabytes per year
Square Kilometer Array Telescope will be 100 terabits/second
Earth Observation becoming ~4 petabytes per year
Earthquake Science – a few terabytes total today
PolarGrid – 100s of terabytes/year
Exascale simulation data dumps – terabytes/second
[Chart slides; source: http://cs.metrostate.edu/~sbd/ (Oracle). MM = Million]
Ruh, VP Software, GE: http://fisheritcenter.haas.berkeley.edu/Big_Data/index.html
“Taming the Big Data Tidal Wave” 2012 (Bill Franks, Chief Analytics Officer, Teradata)
• Web Data (“the original big data”)
– Analyze customer web browsing of an e-commerce site to see topics looked at etc.
• Auto Insurance (telematics monitoring driving)
– Equip cars with sensors
• Text data in multiple industries
– Sentiment analysis, identifying common issues (as in the eBay lamp example), Natural Language Processing
• Time and location (GPS) data
– Track trucks (delivery), vehicles (tracking), people (tell them about nearby goodies)
• Retail and manufacturing: RFID
– Asset and inventory management
• Utility industry: Smart Grid
– Sensors allow dynamic optimization of power
• Gaming industry: Casino chip tracking (RFID)
– Track individual players, detect fraud, identify patterns
• Industrial engines and equipment: sensor data
– See GE engine
• Video games: telemetry
– Like monitoring web browsing, but monitoring actions in a game instead
• Telecommunication and other industries: Social Network data
– Connections make this big data
– Use connections to find new customers with similar interests
Why we need cost-effective computing: full personal genomics would generate ~3 petabytes per day
http://www.genome.gov/sequencingcosts/
The Long Tail of Science
Collectively, “long tail” science is generating a lot of data: estimated at over 1 PB per year, and it is growing fast.
80-20 rule: 20% of users generate 80% of the data, but not necessarily 80% of the knowledge
From a Dennis Gannon talk
Jobs
Jobs v. Countries
http://www.microsoft.com/en-us/news/features/2012/mar12/03-05CloudComputingJobs.aspx
McKinsey Institute on Big Data Jobs
• There will be a shortage of the talent necessary for organizations to take advantage of big data. By 2018, the United States alone could face a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.
• Informatics is aimed at the 1.5 million jobs; Computer Science covers the 140,000 to 190,000.
http://www.mckinsey.com/mgi/publications/big_data/index.asp
Tom Davenport, Harvard Business School, Nov 2012: http://fisheritcenter.haas.berkeley.edu/Big_Data/index.html
Computing Model
Industry adopted clouds which are
attractive for data analytics
[Chart: Cloud Computing – 5 years; Big Data – 2 years; rated Transformational]
Amazon making money
• It took Amazon Web Services (AWS) eight
years to hit $650 million in revenue, according
to Citigroup in 2010.
• Just three years later, Macquarie Capital
analyst Ben Schachter estimates that AWS will
top $3.8 billion in 2013 revenue, up from $2.1
billion in 2012 (estimated), valuing the AWS
business at $19 billion.
Physically Clouds are Clear
• A bunch of computers in an efficient data center with an excellent Internet connection
• They were produced to meet the needs of public-facing Web 2.0 e-Commerce/Social Networking sites
• They can be considered as an “optimal giant data center” plus Internet connection
• Note that enterprises use private clouds that are giant data centers but not optimized for Internet access
Virtualization made several things more convenient
• Virtualization = abstraction; run a job – you know not where
• Virtualization = use a hypervisor to support “images”
– Allows you to define a complete job as an “image” – OS + application
• Efficient packing of multiple applications into one server, as they don’t interfere (much) with each other if in different virtual machines
• They do interfere if run as two jobs on the same machine since, for example, they must share the same OS and OS services
• Also, the security model between VMs is more robust than that between processes
Clouds Offer, From Different Points of View
• Features from NIST:
– On-demand service (elastic);
– Broad network access;
– Resource pooling;
– Flexible resource allocation;
– Measured service
• Economies of scale in performance and electrical power (Green IT)
• Powerful new software models
– Platform as a Service is not an alternative to Infrastructure as a Service – it is instead incredible value added
– Amazon is as much PaaS as Azure
• They are cheaper than classic clusters unless the latter are 100% utilized
Research Model
4th Paradigm; from Theory to Data-driven science?
http://www.wired.com/wired/issue/16-07 (September 2008)
The 4 paradigms of Scientific Research
1. Theory
2. Experiment or Observation
• E.g. Newton observed apples falling to design his theory of mechanics
3. Simulation of theory or model
4. Data-driven (Big Data) or The Fourth Paradigm: Data-Intensive Scientific Discovery (aka Data Science)
• http://research.microsoft.com/en-us/collaboration/fourthparadigm/ (a free book)
• More data; less models
More data usually beats better algorithms
Here's how the competition works. Netflix has provided a large
data set that tells you how nearly half a million people have rated
about 18,000 movies. Based on these ratings, you are asked to
predict the ratings of these users for movies in the set that they
have not rated. The first team to beat the accuracy of Netflix's
proprietary algorithm by a certain margin wins a prize of $1
million!
Different student teams in my class adopted different approaches
to the problem, using both published algorithms and novel ideas.
Of these, the results from two of the teams illustrate a broader
point. Team A came up with a very sophisticated algorithm using
the Netflix data. Team B used a very simple algorithm, but they
added in additional data beyond the Netflix set: information
about movie genres from the Internet Movie Database (IMDb).
Guess which team did better?
Anand Rajaraman is Senior Vice President at Walmart Global eCommerce, where he heads up the newly created @WalmartLabs.
http://anand.typepad.com/datawocky/2008/03/more-datausual.html
[Slide: Jeff Hammerbacher, 20120117berkeley1.pdf]
Confusion in the new-old data field
lack of consensus academically in several aspects
from storage to algorithms, to processing and
education
Data Communities Confused I?
• Industry seems to know what it is doing although it’s secretive –
Amazon’s last paper on their recommender system was 2003
– Industry runs the largest data analytics on clouds
– But industry algorithms are rather different from science
• Academia confused on repository model: traditionally one stores
data but one needs to support “running Data Analytics” and one is
taught to bring computing to data as in Google/Hadoop file system
– Either store data in compute cloud OR enable high performance networking
between distributed data repositories and “analytics engines”
• Academia confused on data storage model: Files (traditional) v. Database (old industry) v. NoSQL (new cloud industry)
– HBase, MongoDB, Riak and Cassandra are typical NoSQL systems
• Academia confused on curation of data: University Libraries,
Projects, National repositories, Amazon/Google?
Data Communities Confused II?
• Academia agrees on principles of Simulation Exascale Architecture:
HPC Cluster with accelerator plus parallel wide area file system
– Industry doesn’t make extensive use of high end simulation
• Academia confused on architecture for data analysis: Grid (as in
LHC), Public Cloud, Private Cloud, re-use simulation architecture with
database, object store, parallel file system, HDFS style data
• Academia has not agreed on Programming/Execution model: “Data
Grid Software”, MPI, MapReduce ..
• Academia has not agreed on the need for new algorithms: use natural extensions of old algorithms, R or Matlab? Simulation successes were built on great algorithm libraries
• Academia has not agreed on which algorithms are important
• Academia could attract more students: with data-oriented curricula
that prepare for industry or research careers
Clouds in Research
2 Aspects of Cloud Computing:
Infrastructure and Runtimes
• Cloud infrastructure: outsourcing of servers, computing, data, file
space, utility computing, etc.
• Cloud runtimes or Platform: tools to do data-parallel (and other)
computations. Valid on Clouds and traditional clusters
– Apache Hadoop, Google MapReduce, Microsoft Dryad, Bigtable,
Chubby and others
– MapReduce was designed for information retrieval but is excellent for a wide range of science data analysis applications (see the word-count sketch below)
– Can also do much traditional parallel computing for data-mining if extended to support iterative operations
– Data Parallel File system as in HDFS and Bigtable
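As a concrete illustration of the model (not tied to any particular runtime above), here is a minimal single-process MapReduce sketch in Python: word count, the canonical information-retrieval example. Real runtimes distribute the two phases over many machines and add fault tolerance.

```python
from collections import defaultdict

def map_phase(record):
    # Emit (key, value) pairs; here, count words in one document
    for word in record.split():
        yield (word, 1)

def reduce_phase(key, values):
    # Combine all values seen for one key
    return (key, sum(values))

def mapreduce(records):
    groups = defaultdict(list)
    for record in records:                        # Map: independent per record
        for key, value in map_phase(record):
            groups[key].append(value)
    return [reduce_phase(k, vs) for k, vs in groups.items()]  # Reduce per key

print(mapreduce(["big data on clouds", "big clouds"]))
```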
Clouds have highlighted SaaS, PaaS, IaaS and NaaS (but the layers are equally valid for classic clusters)
• Software (Application or Usage) – SaaS: Education; Applications; CS research use, e.g. test a new compiler or storage model. Software services are the building blocks of applications.
• Platform – PaaS: Cloud, e.g. MapReduce; HPC, e.g. PETSc, SAGA; Computer Science, e.g. compiler tools, sensor nets, monitors. The middleware or computing environment, including HPC, Grids …
• Infrastructure – IaaS: Software Defined Computing (virtual clusters); hypervisor, bare metal; operating system. Nimbus, Eucalyptus, OpenStack, OpenNebula, CloudStack plus bare metal.
• Network – NaaS: Software Defined Networks; OpenFlow, GENI. OpenFlow is likely to grow in importance.
Science Computing Environments
• Large Scale Supercomputers – Multicore nodes linked by high
performance low latency network
– Increasingly with GPU enhancement
– Suitable for highly parallel simulations
• High Throughput Systems such as European Grid Initiative EGI or
Open Science Grid OSG typically aimed at pleasingly parallel jobs
– Can use “cycle stealing”
– Classic example is LHC data analysis
• Grids federate resources as in EGI/OSG or enable convenient access
to multiple backend systems including supercomputers
• Use Services (SaaS)
– Portals make access convenient
– Workflow integrates multiple processes into a single job
Clouds HPC and Grids
• Synchronization/communication overhead: Grids > Clouds > Classic HPC Systems
• Clouds naturally execute effectively Grid workloads but are less
clear for closely coupled HPC applications
• Classic HPC machines as MPI engines offer highest possible
performance on closely coupled problems
• The 4 forms of MapReduce/MPI (sketched in code after this list):
1) Map Only – pleasingly parallel
2) Classic MapReduce as in Hadoop; single Map followed by reduction with
fault tolerant use of disk
3) Iterative MapReduce use for data mining such as Expectation Maximization
in clustering etc.; Cache data in memory between iterations and support the
large collective communication (Reduce, Scatter, Gather, Multicast) use in
data mining
4) Classic MPI! Supports small point-to-point messaging efficiently, as used in partial differential equation solvers
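A hedged sketch of how the four forms differ as control flow. The Python below is illustrative only, not any runtime's API; form 3 is shown in full since it is the one the MapReduce extensions add.

```python
# Illustrative control flow (no runtime API):
#   1) Map Only:          results = [f(x) for x in data]   # independent tasks
#   2) Classic MapReduce: one map phase, then one reduce phase over grouped results
#   4) Classic MPI:       many small point-to-point messages per time step
# Form 3, Iterative MapReduce, in full:

def iterative_mapreduce(chunks, model, map_fn, reduce_fn, converged, max_iter=100):
    """Chunks stay cached in memory across iterations; only the model is re-sent."""
    for _ in range(max_iter):
        partials = [map_fn(chunk, model) for chunk in chunks]  # map over cached data
        model = reduce_fn(partials)                            # collective (Reduce/Gather)
        if converged(model):
            break
    return model
```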
Cloud Applications
What Applications work in Clouds
• Pleasingly (moving to modestly) parallel applications of all sorts
with roughly independent data or spawning independent
simulations
– Long tail of science and integration of distributed sensors
• Commercial and Science Data analytics that can use MapReduce (some such apps) or its iterative variants (most other data analytics apps)
• Which science applications are using clouds?
– Venus-C (Azure in Europe): 27 applications, not using a scheduler, workflow or MapReduce (except roll your own)
– 50% of applications on FutureGrid are from Life Science
– Locally, the Lilly corporation is a commercial cloud user (for drug discovery) but IU Biology is not
• But overall very little science use of clouds
Parallelism over Users and Usages
• “Long tail of science” can be an important usage mode of clouds.
• In some areas like particle physics and astronomy, i.e. “big science”, there are just a few major instruments, now generating petascale data, driving discovery in a coordinated fashion.
• In other areas such as genomics and environmental science, there
are many “individual” researchers with distributed collection and
analysis of data whose total data and processing needs can match
the size of big science.
• Clouds can provide scaling convenient resources for this important
aspect of science.
• Can be a map-only use of MapReduce if the different usages are naturally linked, e.g. exploring docking of multiple chemicals or alignment of multiple DNA sequences (see the sketch below)
– Collecting together or summarizing the multiple “maps” is a simple Reduction
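A minimal sketch of this map-only pattern using Python's standard multiprocessing module; `score_candidate` is a hypothetical stand-in for one docking run or one alignment.

```python
from multiprocessing import Pool

def score_candidate(candidate):
    # Hypothetical stand-in for one independent task (one docking run or
    # one sequence alignment); no task communicates with another
    return len(candidate)  # placeholder for a real score

if __name__ == "__main__":
    candidates = ["ACGT", "ACGGTT", "TTGCA"]
    with Pool() as pool:
        scores = pool.map(score_candidate, candidates)  # the "map only" phase
    print(max(scores))  # summarizing the maps is a simple reduction
```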
Internet of Things and the Cloud
• It is projected that there will be 24 billion devices on the Internet by
2020. Most will be small sensors that send streams of information
into the cloud where it will be processed and integrated with other
streams and turned into knowledge that will help our lives in a
multitude of small and big ways.
• The cloud will become increasingly important as a controller of, and resource provider for, the Internet of Things.
• As well as today’s use for smart phone and gaming console support,
“Intelligent River” “smart homes and grid” and “ubiquitous cities”
build on this vision and we could expect a growth in cloud
supported/controlled robotics.
• Some of these “things” will be supporting science
• Natural parallelism over “things”
• “Things” are distributed and so form a Grid
Sensors (Things) as a Service
[Diagram: sensors, including “a larger sensor”, deliver output to Sensors as a Service, which feeds Sensor Processing as a Service (could use MapReduce)]
Open Source Sensor (IoT) Cloud: https://sites.google.com/site/opensourceiotcloud/
4 Forms of MapReduce
(a) Map Only: input → map → output. Pleasingly parallel: BLAST analysis, parametric sweeps
(b) Classic MapReduce: input → map → reduce → output, as in Hadoop. High Energy Physics (HEP) histograms, distributed search
(c) Iterative MapReduce: input → map → reduce, iterated. Expectation maximization, clustering e.g. K-means, linear algebra, PageRank
(d) Loosely Synchronous iterations with point-to-point communication Pij: classic MPI, PDE solvers and particle dynamics
The domain of MapReduce and its iterative extensions spans (a)–(c); MPI spans (c)–(d). Science Clouds target (a)–(c); Exascale targets (d).
MPI is Map followed by Point-to-Point communication – as in style (d)
• Classic Parallel Computing HPC: typically SPMD (Single Program Multiple Data) “maps”, usually processing particles or mesh points, interspersed with a multitude of low latency messages supported by specialized networks such as Infiniband and technologies like MPI
– Often run large capability jobs with 100K (going to 1.5M) cores on the same job
– National DoE/NSF/NASA facilities run at 100% utilization
– Fault fragile and cannot tolerate “outlier maps” taking longer than others
• Clouds: MapReduce has asynchronous maps typically processing data points with results saved to disk. A final reduce phase integrates results from different maps
– Fault tolerant and does not require map synchronization
– Map only is a useful special case
• HPC + Clouds: Iterative MapReduce caches results between “MapReduce” steps and supports SPMD parallel computing with large messages, as seen in the parallel kernels (linear algebra) in clustering and other data mining
Data Intensive Applications
• Applications tend to be new and so can consider emerging technologies such as clouds
• They do not have lots of small messages but rather large reduction (aka collective) operations
– New optimizations, e.g. for huge messages
• EM (expectation maximization) tends to be good for clouds and Iterative MapReduce
– Quite complicated computations (so compute is largish compared to communication)
– Communication is reduction operations (global sums or linear algebra in our case); see the sketch below
• We looked at Clustering and Multidimensional Scaling using deterministic annealing, which are both EM
– See also Latent Dirichlet Allocation and related Information Retrieval algorithms with similar EM structure
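A minimal sketch of this compute-then-reduce structure using mpi4py (assuming mpi4py and NumPy are installed): each rank does largish local work, then a single large Allreduce forms the global sum.

```python
# Run with e.g.: mpiexec -n 4 python global_sum.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
local_points = np.random.rand(100000, 20)   # this rank's share of the data points

# One EM-style step: largish local compute, then one large reduction
local_sum = local_points.sum(axis=0)        # local contribution to a global sum
global_sum = np.empty_like(local_sum)
comm.Allreduce(local_sum, global_sum, op=MPI.SUM)   # the collective communication

if comm.Get_rank() == 0:
    print(global_sum[:3])
```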
Data Intensive Programming Models
Map Collective Model (Judy Qiu)
• Combine MPI and MapReduce ideas
• Implement collectives optimally on Infiniband, Azure, Amazon …
[Diagram: Input → map → Generalized Reduce, with an Initial Collective Step and a Final Collective Step; the map-reduce core is iterated]
Twister for Data Intensive Iterative Applications (Qiu, Gunarathne)
[Diagram: each iteration broadcasts the smaller loop-variant data, computes over the larger loop-invariant data, then communicates through a Reduce/barrier, generalized to an arbitrary collective, before starting a new iteration]
• (Iterative) MapReduce structure with Map-Collective is the framework (see the K-means sketch below)
• Twister runs on Linux or Azure
• Twister4Azure is built on top of Azure tables, queues and storage
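As an illustration of this Map-Collective pattern, here is a hedged single-process K-means sketch in Python/NumPy: the data chunks are loop-invariant and stay cached, while the centroids (loop-variant) are "rebroadcast" each iteration and combined by a collective-style reduction. This mimics the structure only; Twister and Twister4Azure distribute it across machines.

```python
import numpy as np

def kmeans_map(chunk, centroids):
    """Map: assign cached points to the nearest centroid; emit partial sums/counts."""
    dist = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    assign = dist.argmin(axis=1)
    k = len(centroids)
    sums = np.array([chunk[assign == j].sum(axis=0) for j in range(k)])
    counts = np.array([(assign == j).sum() for j in range(k)], dtype=float)
    return sums, counts

def kmeans(points, k, iters=10):
    chunks = np.array_split(points, 4)   # loop-invariant data, cached between iterations
    centroids = points[:k].copy()        # loop-variant model, re-sent each iteration
    for _ in range(iters):
        partials = [kmeans_map(c, centroids) for c in chunks]               # Map
        sums = sum(p[0] for p in partials)                                  # Collective
        counts = sum(p[1] for p in partials)                                # (Reduce)
        centroids = sums / np.maximum(counts, 1)[:, None]                   # new model
    return centroids

print(kmeans(np.random.rand(1000, 20), k=5))
```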
Pleasingly Parallel Performance Comparisons
[Charts: BLAST sequence search – parallel efficiency vs number of query files (128–728) for Twister4Azure, Hadoop-Blast and DryadLINQ-Blast; Cap3 sequence assembly – parallel efficiency vs num. of cores × num. of files for Twister4Azure, Amazon EMR and Apache Hadoop; Smith-Waterman sequence alignment]
Multi Dimensional Scaling
[Diagram: each MDS iteration runs three Map → Reduce → Merge stages – BC: Calculate BX; X: Calculate invV(BX); Calculate Stress – before a new iteration. Charts: data size scaling and weak scaling, with performance adjusted for the sequential performance difference]
Scalable Parallel Scientific Computing Using Twister4Azure. Thilina Gunarathne, BingJing Zang, Tak-Lon Wu and Judy Qiu. Submitted to Journal of Future Generation Computer Systems. (Invited as one of the best 6 papers of UCC 2011)
Kmeans
[Chart: run time (ms) vs num cores × num data points (32 × 32M up to 256 × 256M) for Twister4Azure, T4A + tree broadcast, T4A + AllReduce, and Hadoop adjusted for Azure]
Hadoop adjusted for Azure: Hadoop KMeans run time adjusted for the performance difference of iDataplex vs Azure
Kmeans Strong Scaling (with Hadoop Adjusted)
[Chart: relative parallel efficiency vs number of cores (32–256) for T4A + AllReduce, T4A + tree broadcast, Twister4Azure-legacy, Hadoop, and Hadoop adjusted for Azure]
128 million data points, 500 centroids (clusters), 20 dimensions, 10 iterations
Parallel efficiency relative to the 32 core run time. Note Hadoop is slower by a factor of 2
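For reference, relative parallel efficiency as used here can be computed as follows; the numbers in the example are illustrative, not the measured values.

```python
def relative_parallel_efficiency(t_base, n_base, t_n, n):
    # Perfect strong scaling cuts run time in proportion to added cores,
    # so efficiency = (t_base * n_base) / (t_n * n); 1.0 means ideal speedup
    return (t_base * n_base) / (t_n * n)

# Illustrative numbers only (not from the measured runs):
print(relative_parallel_efficiency(t_base=100.0, n_base=32, t_n=14.0, n=256))  # ~0.89
```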
Kmeans Clustering
[Chart: number of executing map tasks (0–300) vs elapsed time (0–250 s) across 10 iterations]
This shows that the communication and synchronization overheads between iterations are very small (less than one second, the smallest unit measured in this graph).
128 million data points (19 GB), 500 centroids (78 KB), 20 dimensions; 10 iterations, 256 cores, 256 map tasks per iteration
Kmeans Clustering
[Chart: task execution time (s) vs map task ID (0–2304)]
128 million data points (19 GB), 500 centroids (78 KB), 20 dimensions; 10 iterations, 256 cores, 256 map tasks per iteration
FutureGrid Technology
FutureGrid Testbed as a Service
• FutureGrid is part of XSEDE, set up as a testbed with a cloud focus
• Operational since Summer 2010 (i.e. now in its third year of use)
• The FutureGrid testbed provides to its users:
– Support of Computer Science and Computational Science research
– A flexible development and testing platform for middleware and application users looking at interoperability, functionality, performance or evaluation
– A user-customizable environment, accessed interactively, supporting Grid, Cloud and HPC software with and without VMs
– A rich education and teaching platform for classes
• Offers OpenStack, Eucalyptus, Nimbus, OpenNebula and HPC (MPI) on the same hardware, moving to software defined systems; supports both classic HPC and Cloud storage
4 Use Types for FutureGrid TestbedaaS
• 292 approved projects (1734 users) as of April 6 2013
– USA (79%), Puerto Rico (3%, students in a class), India, China, lots of European countries (Italy at 2%, as a class)
– Industry, Government, Academia
• Computer science and Middleware (55.6%)
– Core CS and Cyberinfrastructure; Interoperability (3.6%) for Grids and Clouds, such as Open Grid Forum (OGF) standards
• New Domain Science applications (20.4%)
– Life science highlighted (10.5%), non Life Science (9.9%)
• Training, Education and Outreach (14.9%)
– Semester and short events; focus on outreach to HBCUs
• Computer Systems Evaluation (9.1%)
– XSEDE (TIS, TAS), OSG, EGI; campuses
Sample FutureGrid Projects I
• FG18 Privacy preserving gene read mapping developed hybrid MapReduce: small private secure + large public with safe data. Won the 2011 PET Award for Outstanding Research in Privacy Enhancing Technologies
• FG132 Power Grid Sensor analytics on the cloud with distributed Hadoop. Won the IEEE Scaling challenge at CCGrid2012
• FG156 Integrated System for End-to-end High Performance Networking showed that the RDMA over Converged Ethernet (InfiniBand made to work over Ethernet network frames) protocol could be used over wide-area networks, making it viable in cloud computing environments
• FG172 Cloud-TM on distributed concurrency control (software transactional memory): “When Scalability Meets Consistency: Genuine Multiversion Update Serializable Partial Data Replication,” 32nd International Conference on Distributed Computing Systems (ICDCS'12) (a good conference), used 40 nodes of FutureGrid
Sample FutureGrid Projects II
• FG42,45 SAGA Pilot Job P* abstraction and applications. XSEDE
Cyberinfrastructure used on clouds
• FG130 Optimizing Scientific Workflows on Clouds. Scheduling Pegasus
on distributed systems with overhead measured and reduced. Used
Eucalyptus on FutureGrid
• FG133 Supply Chain Network Simulator Using Cloud Computing with
dynamic virtual machines supporting Monte Carlo simulation with
Grid Appliance and Nimbus
• FG257 Particle Physics Data analysis for ATLAS LHC experiment used
FutureGrid + Canadian Cloud resources to study data analysis on
Nimbus + OpenStack with up to 600 simultaneous jobs
• FG254 Information Diffusion in Online Social Networks is evaluating
NoSQL databases (HBase, MongoDB, Riak) to support analysis of
Twitter feeds
• FG323 SSD performance benchmarking for HDFS on Lima
Education and Training Use of FutureGrid
• 27 semester-long classes: 563+ students
– Cloud Computing, Distributed Systems, Scientific Computing and Data Analytics
• 3 one-week summer schools: 390+ students
– Big Data, Cloudy View of Computing (for HBCUs), Science Clouds
• 1 two-day workshop: 28 students
• 5 one-day tutorials: 173 students
• From 19 institutions
• Developing 2 MOOCs (Google Course Builder) on Cloud Computing and the use of FutureGrid, supported by either FutureGrid or downloadable appliances (custom images)
– See http://cgltestcloud1.appspot.com/preview
• FutureGrid appliances support Condor/MPI/Hadoop/Iterative MapReduce virtual clusters
Performance of Dynamic Provisioning
• 4 phases: a) design and create image (security vet); b) store in repository as a template with components; c) register image to VM manager (cached ahead of time); d) instantiate (provision) image
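A hedged sketch of the four phases as a pipeline; every class and method name below is a hypothetical stand-in, not FutureGrid's actual RAIN image-management API.

```python
# Hypothetical stand-ins illustrating the four provisioning phases.

class ImageRepository:
    def __init__(self): self.templates = {}
    def store(self, image):                 # b) store as template with components
        self.templates[image["name"]] = image
        return image["name"]

class VMManager:
    def __init__(self, repo): self.repo, self.cache = repo, {}
    def register(self, name):               # c) register image (cached ahead of time)
        self.cache[name] = self.repo.templates[name]
        return name
    def instantiate(self, name):            # d) provision: boot a VM from the image
        return f"vm-running-{name}"

def generate_image(base_os, packages):      # a) design/create image (security vet)
    return {"name": f"{base_os}-custom", "os": base_os, "packages": list(packages)}

repo = ImageRepository()
manager = VMManager(repo)
name = manager.register(repo.store(generate_image("ubuntu-10.10", ["hadoop"])))
print(manager.instantiate(name))
```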
[Charts: Generate an Image – time (s) split into Create Base OS, Install util packages, Install user packages, Compress image, Upload image to the repo, for CentOS 5 and Ubuntu 10.10; Generate Images – time vs number of images generated at the same time (1, 2, 4); Provisioning from Registered Images – boot VM time vs number of machines (1–37) for OpenStack and xCAT/Moab]
FutureGrid is an onramp to other systems
• FG supports Education & Training for all systems
• Users can do all work on FutureGrid OR
• Users can download Appliances to local machines (Virtual Box) OR
• Users soon can use CloudMesh to jump to a chosen production system
• CloudMesh is similar to OpenStack Horizon, but aimed at multiple federated systems.
– Built on RAIN and tools like libcloud, boto with protocol (EC2) or programmatic
API (python)
– Uses general templated image that can be retargeted
– One-click template & image install on various IaaS & bare metal including
Amazon, Azure, Eucalyptus, Openstack, OpenNebula, Nimbus, HPC
– Provisions the complete system needed by user and not just a single image;
copes with resource limitations and deploys full range of software
– Integrates our VM metrics package (TAS collaboration) that links to XSEDE
(VM's are different from traditional Linux in metrics supported and needed)
Lessons learnt from FutureGrid
• Unexpected major use from Computer Science and Middleware
• Rapid evolution of technology: Eucalyptus → Nimbus → OpenStack
• Open source IaaS maturing, as in “Paypal To Drop VMware From 80,000 Servers and Replace It With OpenStack” (Forbes)
– “VMWare loses $2B in market cap”; eBay expects to switch broadly?
• Need interactive, not batch use; nearly all jobs are short
• Substantial TestbedaaS technology needed, and FutureGrid developed some (RAIN, CloudMesh, operational model)
• Lessons more positive than the DoE Magellan report (aimed as an early science cloud), but the goals were different
• Still serious performance problems in clouds for networking and device (GPU) linkage; many activities outside FG are addressing this
– One can get good Infiniband performance on a peculiar OS + Mellanox drivers, but this is not general yet
• We identified characteristics of “optimal hardware”
• Run the system with an integrated software (computer science) and systems administration team
• Build a Computer Testbed as a Service community
Future Directions for FutureGrid
• Poised to support more users as technology like OpenStack matures
– Please encourage new users and new challenges
• More focus on academic Platform as a Service (PaaS) – high-level middleware (e.g. Hadoop, HBase, MongoDB) – as IaaS gets easier to deploy
• Expect increased Big Data challenges
• Improve Education and Training with a model for MOOC laboratories
• Finish CloudMesh (and integrate with Nimbus Phantom) to make FutureGrid a hub from which to jump to multiple different “production” clouds commercially, nationally and on campuses; allow cloud bursting
– Several collaborations developing
• Build the underlying software defined system model with integration with GENI and high performance virtualized devices (MIC, GPU)
• Improved ubiquitous monitoring at PaaS, IaaS and NaaS levels
• Improve the “Reproducible Experiment Management” environment
• Expand and renew hardware via federation
Direct GPU Virtualization
• Allow VMs to directly access GPU hardware
• Enables CUDA and OpenCL code – no need for custom APIs
• Utilizes PCI passthrough of the device to the guest VM
– Hardware-directed I/O virtualization (VT-d or IOMMU)
– Provides direct isolation and security of the device from the host or other VMs
– Removes much of the Host <-> VM overhead
• Similar to what Amazon EC2 uses (proprietary)
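A minimal sketch of attaching a GPU to a guest by PCI passthrough using the libvirt Python bindings, assuming a KVM/libvirt host with VT-d/IOMMU enabled; the guest name and PCI address below are placeholders.

```python
# pip install libvirt-python; requires a KVM/libvirt host with IOMMU enabled
import libvirt

# PCI address of the GPU on the host (placeholder values)
hostdev_xml = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("guest-vm")   # hypothetical guest name
dom.attachDevice(hostdev_xml)         # guest now sees the GPU; CUDA/OpenCL run unmodified
```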
Performance 1
[Charts: Max FLOPS (autotuned) – GFLOPS for the maxspflops and maxdpflops benchmarks, Native vs VM; Bus Speed – GB/s for the bspeed_download and bspeed_readback benchmarks, Native vs VM]
Performance 2
[Chart: Fast Fourier Transform and Matrix-Matrix Multiplication – GFLOPS per benchmark, Native vs VM]
Algorithms
Scalable Robust Algorithms: new
data need better algorithms?
Algorithms for Data Analytics
• In simulation area, it is observed that equal contributions
to improved performance come from increased computer
power and better algorithms
http://cra.org/ccc/docs/nitrdsymposium/pdfs/keyes.pdf
• In the data intensive area, we haven’t seen this effect so clearly
– Information retrieval has been revolutionized, but
– We are still using BLAST in Bioinformatics (although Smith-Waterman etc. are better)
– Still using the R library, which has many non-optimal algorithms
– Parallelism and use of GPUs often ignored
Data Analytics Futures?
• PETSc and ScaLAPACK and similar libraries very important in
supporting parallel simulations
• Need equivalent Data Analytics libraries
• Include data mining (Clustering, SVM, HMM, Bayesian Nets …), image processing, information retrieval including hidden factor analysis (LDA), global inference, and dimension reduction
– Many libraries/toolkits (R, Matlab) and web sites (BLAST) but typically not
aimed at scalable high performance algorithms
• Should support clouds and HPC; MPI and MapReduce
– Iterative MapReduce an interesting runtime; Hadoop has many limitations
• Need a coordinated Academic Business Government Collaboration
to build robust algorithms that scale well
– Crosses Science, Business Network Science, Social Science
• Propose to build community to define & implement
SPIDAL or Scalable Parallel Interoperable Data Analytics Library
Deterministic Annealing
• Deterministic Annealing works in many areas including clustering, latent factor analysis, and dimension reduction for both metric and non-metric spaces
– ~Always gets better answers than K-means and R?
– But it can be parallelized and put on GPUs
DA is Multiscale and Parallel
[Images: 200K points in 74D giving 138 clusters; 241K 2D LC-MS points giving 25000 clusters]
• Start at high temperature with one cluster and keep splitting
• Parallelism over points (easy) and centers
• Improve using triangle inequality test in high dimensions
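A hedged single-process sketch of the deterministic-annealing idea in Python/NumPy: soft EM-style assignments whose sharpness grows as the temperature is lowered. For brevity it fixes the number of centers, rather than starting with one cluster and splitting as the temperature drops, as the full algorithm does.

```python
import numpy as np

def da_clustering(points, temps=(10.0, 1.0, 0.1), k=5, iters=20):
    """Soft assignments p(k|x) ~ exp(-d^2/T); lowering T sharpens them toward K-means."""
    rng = np.random.default_rng(0)
    centers = points[rng.choice(len(points), k, replace=False)]
    for T in temps:                                    # annealing schedule: high -> low T
        for _ in range(iters):
            d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            p = np.exp(-d2 / T)
            p /= p.sum(axis=1, keepdims=True)          # soft (EM) assignment p(k|x)
            centers = (p.T @ points) / (p.sum(axis=0)[:, None] + 1e-12)  # weighted M-step
    return centers

print(da_clustering(np.random.rand(500, 2)))
```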
Dimension Reduction/MDS
• You can get answers, but do you believe them!
• Need to visualize
• $H_{MDS} = \sum_{x<y=1}^{N} \mathrm{weight}(x,y)\,\big(\delta(x,y) - d_{3D}(x,y)\big)^2$
• Here x and y separately run over all points in the system, $\delta(x,y)$ is the distance between x and y in the original space, while $d_{3D}(x,y)$ is the distance between them after mapping to 3 dimensions. One needs to minimize $H_{MDS}$ over the choices of mapped positions $X_{3D}(x)$.
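A direct transcription of this stress function in Python/NumPy, as a sketch for evaluating a given mapping (not the deterministic-annealing minimizer itself).

```python
import numpy as np

def mds_stress(delta, x3d, weight=None):
    """H_MDS = sum over pairs x<y of weight(x,y) * (delta(x,y) - d3D(x,y))**2;
    delta holds original-space distances, x3d the mapped 3D positions."""
    n = len(x3d)
    d3d = np.sqrt(((x3d[:, None, :] - x3d[None, :, :]) ** 2).sum(axis=2))
    w = np.ones((n, n)) if weight is None else weight
    iu = np.triu_indices(n, k=1)          # each pair x < y counted once
    return float((w[iu] * (delta[iu] - d3d[iu]) ** 2).sum())

# Tiny usage example with random data:
pts = np.random.rand(10, 5)
delta = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2))
print(mds_stress(delta, np.random.rand(10, 3)))
```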
[Images: LC-MS 2D; Lymphocytes 4D; Pathology 54D]
MDS runs as well in Metric and non-Metric Cases
• DA Clustering also runs in the non-metric case with a rather different formalism
[Images: COG database with a few biology clusters; Metagenomics with DA clusters]
[Image: ~125 clusters from a Fungi sequence set]
Phylogenetic tree using MDS
[Images: trees found by Neighbor Joining from 200 sequences and from 2133 sequences (126 cluster centers, extended from the set of 200, found from 446K); a tree found by mapping sequences to 10D, shown in the 3D map of the whole collection; silver spheres mark internal nodes]
Data Science Education
Opportunities at universities
see recent New York Times articles
http://datascience101.wordpress.com/2013/04/13/new-york-times-data-science-articles/
Data Science Education
• Broad range of topics from policy to curation to applications and algorithms, programming models, data systems, statistics, and a broad range of CS subjects such as Clouds, Programming, and HCI
• Plenty of jobs and a broader range of possibilities than computational science, but similar cosmic issues
– What type of degree (certificate, minor, track, “real” degree)?
– What implementation (department, or an interdisciplinary group supporting an education and research program)?
Computational Science
• An interdisciplinary field between computer science and applications, with a primary focus on simulation areas
• Very successful as a research area
– XSEDE and Exascale systems enable it
• Several academic programs, but these have been less successful than computational science research because:
– No consensus as to curricula and jobs (universities don’t appoint faculty in computational science; DoE labs do hire in it)
– The field is relatively small
• Started around 1990
MOOC’s
Massive Open Online Courses (MOOCs)
• MOOCs are very “hot” these days, with Udacity and Coursera as start-ups
• Over 100,000 participants, but the concept is valid at smaller sizes
• Relevant to Data Science, as this is a new field with few courses at most universities
• Technology to make MOOCs: Google's open source Course Builder is a lightweight LMS (learning management system)
• Supports the MOOC model as a collection of short prerecorded segments (talking head over PowerPoint) termed lessons – typically 15 minutes long
• Compose playlists of lessons into sessions, modules, courses
– A session is an “album” and lessons are “songs” in an iTunes analogy
MOOC’s for Traditional Lectures
• We can take MOOC lessons and view
them as a “learning object” that we can
share between different teachers
https://portal.futuregrid.org
• i.e. as a way of teaching
typical sized classes but
with less effort as shared
material
• Start with what’s in
repository;
• pick and choose;
• Add custom material of
individual teachers
• The ~15 minute Video over
PowerPoint of MOOC’s
much easier to re-use than
PowerPoint
• Do not need special
mentoring support
• Defining how to support
computing labs with
FutureGrid or appliances +
82
Virtual Box
Conclusions
Conclusions
• Clouds and HPC are here to stay, and one should plan on using both
• Data intensive programs are not like simulations, as they have large “reductions” (“collectives”) and do not have many small messages
– Clouds are suitable
• Iterative MapReduce is an interesting approach; need to optimize collectives for new applications (data analytics) and resources (clouds, GPUs …)
• Need an initiative to build a scalable high performance data analytics library on top of an interoperable cloud-HPC platform
• Many promising data analytics algorithms, such as deterministic annealing, are not used because implementations are not available in R/Matlab etc.
– The software is more sophisticated and runs longer, but it can be efficiently parallelized, so runtime is not a big issue
Conclusions II
• Software defined computing systems linking NaaS, IaaS, PaaS, SaaS (Network, Infrastructure, Platform, Software) are likely to be important
• There are more employment opportunities in clouds than HPC and Grids, and in data than simulation; so cloud and data related activities are popular with students
• Community activity to discuss data science education
– Agree on curricula; is such a degree attractive?
• Role of MOOCs as either
– Disseminating new curricula
– Managing course fragments that can be assembled into custom courses for particular interdisciplinary students