Transcript Slide 1
CS246: Mining Massive Datasets
Jure Leskovec, Stanford University
http://cs246.stanford.edu
TAs:
Bahman Bahmani
Juthika Dabholkar
Pierre Kreitmann
Lu Li
Aditya Ramesh
Office hours:
Jure: Tuesdays 9-10am, Gates 418
See course website for TA office hours
Course website:
http://cs246.stanford.edu
Lecture slides (posted at least 6 hours before the lecture)
Announcements, homeworks, solutions
Readings!
Readings: Book Mining of Massive Datasets
by Anand Rajaraman and Jeffrey D. Ullman
Free online:
http://i.stanford.edu/~ullman/mmds.html
4 longer homeworks: 40%
Theoretical and programming questions
All homeworks (even if empty) must be handed in
Assignments take time. Start early!
How to submit?
Paper: Box outside the class and in the Gates east wing
We will grade on paper!
You should also submit electronic copy:
1 PDF/ZIP file (writeups, experimental results, code)
Submission website: http://cs246.stanford.edu/submit/
SCPD: Only submit electronic copy & send us email
7 late days for the quarter:
Max 5 late days per assignment
Short weekly quizzes: 20%
Short e-quizzes on Gradiance (see course website!)
First quiz is already online
You have 7 days to complete it. No late days!
Final exam: 40%
March 19 at 8:30am
It’s going to be fun and hard work
Homework schedule:
  Date    Out    In
  1/11    HW1
  1/25    HW2    HW1
  2/8     HW3    HW2
  2/22    HW4    HW3
  3/7            HW4
No class: 1/16 (Martin Luther King Jr. Day), 2/20 (Presidents' Day)
Algorithms (CS161)
Dynamic programming, basic data structures
Basic probability (CS109 or Stat116)
Moments, typical distributions, MLE, …
Programming (CS107 or CS145)
Your choice, but C++/Java will be very useful
We provide some background, but
the class will be fast-paced
Recitation sessions:
Review of probability and statistics
Installing and working with Hadoop
We prepared a virtual machine with Hadoop preinstalled
HW0 helps you write your first Hadoop program
See course website!
We will announce the dates later
Sessions will be recorded
CS345a (Data Mining) was split into 2 courses:
CS246: Mining massive datasets:
Methods/algorithms oriented course
Homeworks (theory & programming)
No class project
CS341: Project in mining massive datasets:
Project oriented class
Lectures/readings related to the project
Unlimited access to Amazon EC2 cluster
We intend to keep the class small
Taking CS246 is essentially a prerequisite
For questions/clarifications use Piazza!
If you don’t have an @stanford.edu email address,
email us and we will register you
To communicate with the course staff use
[email protected]
We will post announcements to
[email protected]
If you are not registered or are auditing, send us an email
and we will subscribe you!
You are welcome to sit in & audit the class
Send us an email saying that you will be auditing
Much of the course will be devoted to
ways to do data mining on the Web:
Mining to discover things about the Web
E.g., PageRank, finding spam sites
Mining data from the Web itself
E.g., analysis of click streams, similar products at
Amazon, making recommendations
Much of the course will be devoted to
large scale computing for data mining
Challenges:
How to distribute computation?
Distributed/parallel programming is hard
Map-reduce addresses these challenges
Google’s computational/data manipulation model
Elegant way to work with big data
High-dimensional data:
Locality Sensitive Hashing
Dimensionality reduction
Clustering
The data is a graph:
Link Analysis: PageRank, Hubs & Authorities
Machine Learning:
k-NN, Perceptron, SVM, Decision Trees
Data is infinite:
Mining data streams
Applications:
Association Rules
Recommender systems
Advertising on the Web
Web spam detection
Discovery of patterns and models that are:
Valid: hold on new data with some certainty
Useful: should be possible to act on the item
Unexpected: non-obvious to the system
Understandable: humans should be able to
interpret the pattern
Subsidiary issues:
Data cleansing: detection of bogus data
Visualization: something better than MBs of output
Warehousing of data (for retrieval)
Predictive Methods
Use some variables to predict unknown
or future values of other variables
Descriptive Methods
Find human-interpretable patterns that
describe the data
Scalability
Dimensionality
Complex and Heterogeneous Data
Data Quality
Data Ownership and Distribution
Privacy Preservation
Streaming Data
Overlaps with:
Databases: Large-scale (non-main-memory) data
Machine learning: Complex methods, small data
Statistics: Models
Different cultures:
To a DB person, data mining is an extreme form of analytic processing –
queries that examine large amounts of data
Result is the query answer
To a statistician, data mining is the inference of models
Result is the parameters of the model
[Venn diagram: Data Mining at the intersection of Statistics/AI, Machine Learning/Pattern Recognition, and Database systems]
A big data-mining risk is that you will
“discover” patterns that are meaningless.
Bonferroni’s principle: (roughly) if you look in
more places for interesting patterns than your
amount of data will support, you are bound to
find crap
Joseph Rhine was a parapsychologist in the
1950s who hypothesized that some people
had Extra-Sensory Perception (ESP)
He devised an experiment where subjects
were asked to guess 10 hidden cards – red or
blue
He discovered that almost 1 in 1000 had ESP –
they were able to get all 10 right!
He told these people they had ESP and called
them in for another test of the same type
Alas, he discovered that almost all of them
had lost their ESP
What did he conclude?
He concluded that you shouldn’t tell people
they have ESP; it causes them to lose it
The point is that the patterns should be real and significant
[Diagram of the memory hierarchy: Machine Learning and Statistics traditionally operate on data in CPU and memory; “classical” data mining reaches down to data on disk]
20+ billion web pages x 20KB = 400+ TB
1 computer reads 30-35 MB/sec from disk
~4 months to read the web
~1,000 hard drives to store the web
Takes even more to do something useful
with the data!
Standard architecture is emerging:
Cluster of commodity Linux nodes
Gigabit ethernet interconnect
Each rack contains 16-64 nodes, with 1 Gbps bandwidth between any pair of nodes in a rack and a 2-10 Gbps backbone between racks
[Diagram: racks of commodity nodes (CPU, memory, disk) connected by in-rack switches, joined by a backbone switch]
In Aug 2006 Google had ~450,000 machines
Large-scale computing for data mining
problems on commodity hardware
Challenges:
How do you distribute computation?
How can we make it easy to write distributed
programs?
Machines fail:
One server may stay up 3 years (1,000 days)
If you have 1,000 servers, expect to lose 1/day
In Aug 2006 Google had ~450,000 machines
Idea:
Bring computation close to the data
Store files multiple times for reliability
Map-reduce addresses these problems
Google’s computational/data manipulation model
Elegant way to work with big data
Storage Infrastructure – File system
Google: GFS
Hadoop: HDFS
Programming model
Map-Reduce
Problem
If nodes fail, how to store data persistently?
Answer
Distributed File System:
Provides global file namespace
Google GFS; Hadoop HDFS;
Typical usage pattern
Huge files (100s of GB to TB)
Data is rarely updated in place
Reads and appends are common
Chunk Servers
File is split into contiguous chunks
Typically each chunk is 16-64MB
Each chunk replicated (usually 2x or 3x)
Try to keep replicas in different racks
Master node
a.k.a. Name Node in Hadoop’s HDFS
Stores metadata
Might be replicated
Client library for file access
Talks to master to find chunk servers
Connects directly to chunk servers to access data
Reliable distributed file system
Data kept in “chunks” spread across machines
Each chunk replicated on different machines
Seamless recovery from disk or machine failure
[Diagram: chunk servers 1..N, each holding replicated chunks, e.g. chunk C5 stored on chunk servers 1, 2, and N]
Bring computation directly to the data!
Warm-up task:
We have a huge text document
Count the number of times each
distinct word appears in the file
Sample application:
Analyze web server logs to find popular URLs
Case 1: File too large for memory, but all <word, count>
pairs fit in memory
Case 2: Count occurrences of words:
words(doc.txt) | sort | uniq -c
where words takes a file and outputs the words in it,
one per line
Case 2 captures the essence of MapReduce
Great thing is that it is naturally parallelizable
Sequentially read a lot of data
Map:
Extract something you care about
Group by key: Sort and Shuffle
Reduce:
Aggregate, summarize, filter or transform
Write the result
Outline stays the same, map and reduce
change to fit the problem
[Diagram: map functions transform input key-value pairs into intermediate key-value pairs]
[Diagram: intermediate key-value pairs are grouped into key–value-list groups, which reduce functions turn into output key-value pairs]
Input: a set of key-value pairs
Programmer specifies two methods:
Map(k, v) → <k’, v’>*
Takes a key-value pair and outputs a set of key-value pairs
E.g., key is the filename, value is a single line in the file
There is one Map call for every (k, v) pair
Reduce(k’, <v’>*) → <k’, v’’>*
All values v’ with same key k’ are reduced
together and processed in v’ order
There is one Reduce function call per unique key k’
MAP (provided by the programmer): reads input and produces a set of key-value pairs
Group by key: collect all pairs with the same key
Reduce (provided by the programmer): collect all values belonging to the key and output
Only sequential reads of the data

Input (big document):
“The crew of the space shuttle Endeavor recently returned to Earth as ambassadors, harbingers of a new era of space exploration. Scientists at NASA are saying that the recent assembly of the Dextre bot is the first step in a long-term space-based man/machine partnership. ‘The work we're doing now – the robotics we're doing – is what we're going to need to do to build any work station or habitat structure on the moon or Mars,’ said Allard Beutel.”

MAP output (key, value):
(the, 1) (crew, 1) (of, 1) (the, 1) (space, 1) (shuttle, 1) (Endeavor, 1) (recently, 1) …

Grouped by key:
(crew, 1) (crew, 1) (space, 1) (the, 1) (the, 1) (the, 1) (shuttle, 1) (recently, 1) …

Reduce output:
(crew, 2) (space, 1) (the, 3) (shuttle, 1) (recently, 1) …
map(key, value):
// key: document name; value: text of the document
for each word w in value:
emit(w, 1)
reduce(key, values):
// key: a word; values: an iterator over counts
result = 0
for each count v in values:
result += v
emit(key, result)
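
As a concrete check, here is a minimal single-machine sketch of the same program in Python – an illustration of the map → group-by-key → reduce pipeline, not Hadoop itself:

    from itertools import groupby
    from operator import itemgetter

    def map_fn(key, value):
        # key: document name; value: text of the document
        for word in value.split():
            yield (word, 1)

    def reduce_fn(key, values):
        # key: a word; values: an iterable of counts
        yield (key, sum(values))

    def mapreduce(inputs, map_fn, reduce_fn):
        # Map phase: one map_fn call per input (key, value) pair
        pairs = [kv for k, v in inputs for kv in map_fn(k, v)]
        # Group by key: sort, then group values sharing a key
        pairs.sort(key=itemgetter(0))
        groups = groupby(pairs, key=itemgetter(0))
        # Reduce phase: one reduce_fn call per unique key
        return [out for key, grp in groups
                    for out in reduce_fn(key, (v for _, v in grp))]

    print(mapreduce([("doc.txt", "the crew of the space shuttle")],
                    map_fn, reduce_fn))
    # [('crew', 1), ('of', 1), ('shuttle', 1), ('space', 1), ('the', 2)]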
Map-Reduce environment takes care of:
Partitioning the input data
Scheduling the program’s execution across a
set of machines
Performing the group by key step
Handling machine failures
Managing required inter-machine
communication
The group-by-key step (sort and shuffle; also called partition or hash-merge) is the magical part: MapReduce performs it by itself, between the two programmer-provided steps
[Diagram repeated: Big document → MAP (reads input, produces key-value pairs) → Group by key (collect all pairs with the same key) → Reduce (collect all values belonging to the key and output)]
Input and final output are stored on a
distributed file system:
Scheduler tries to schedule map tasks “close” to
physical storage location of input data
Intermediate results are stored on local FS
of map and reduce workers
Output is often input to another
MapReduce task
Master data structures:
Task status: (idle, in-progress, completed)
Idle tasks get scheduled as workers become
available
When a map task completes, it sends the master
the location and sizes of its R intermediate files,
one for each reducer
Master pushes this info to reducers
Master pings workers periodically
to detect failures
Map worker failure
Map tasks completed or in-progress at worker are
reset to idle
Reduce workers are notified when task is
rescheduled on another worker
Reduce worker failure
Only in-progress tasks are reset to idle
Master failure
MapReduce task is aborted and client is notified
M map tasks, R reduce tasks
Rule of thumb:
Make M and R much larger than the number of
nodes in the cluster
One DFS chunk per map is common
Improves dynamic load balancing and speeds
recovery from worker failure
Usually R is smaller than M,
because output is spread across R files
M and R are otherwise independent of the chunks: the system
decides which mapper gets which part of the input file,
and similarly for reducers
(For debugging, use 1 mapper and 1 reducer)
Fine granularity tasks: map tasks >> machines
Minimizes time for fault recovery
Can pipeline shuffling with map execution
Better dynamic load balancing
Problem
Slow workers significantly lengthen the job
completion time:
Other jobs on the machine
Bad disks
Weird things
Solution
Near end of phase, spawn backup copies of tasks
Whichever one finishes first “wins”
Effect
Dramatically shortens job completion time
Often a map task will produce many pairs of
the form (k,v1), (k,v2), … for the same key k
E.g., popular words in the Word Count example
Can save network time by
pre-aggregating values at
the mapper:
combine(k, list(v1)) → v2
Combiner is usually same
as the reduce function
Works only if reduce
function is commutative and associative
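
As a sketch (Python, in the style of the hypothetical word-count simulation above; the combine hook name is illustrative, not Hadoop's API):

    def combine(key, values):
        # Identical to the reducer: addition is commutative and
        # associative, so partial sums can safely be merged later
        yield (key, sum(values))

A mapper that saw “the” 100 times then ships a single pair (the, 100) across the network instead of 100 copies of (the, 1).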
Inputs to map tasks are created by contiguous
splits of input file
Reduce needs to ensure that records with the
same intermediate key end up at the same
worker
System uses a default partition function:
hash(key) mod R
Sometimes useful to override:
E.g., hash(hostname(URL)) mod R ensures URLs
from a host end up in the same output file
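
A Python sketch of both partition functions (the function names are illustrative; urlparse comes from the standard library):

    from urllib.parse import urlparse

    R = 10  # number of reduce tasks

    def default_partition(key):
        # Default: spread keys roughly uniformly over the R reducers
        return hash(key) % R

    def host_partition(url):
        # Override: route by hostname so all URLs from one host
        # land at the same reducer, hence in the same output file
        return hash(urlparse(url).hostname) % R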
Suppose we have a large web corpus
Look at the metadata file
Lines of the form (URL, size, date, …)
For each host, find the total number of bytes
i.e., the sum of the page sizes for all URLs from
that host
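
A possible map/reduce pair for this task (Python sketch; assumes comma-separated metadata lines as above, run through the same mapreduce driver sketched earlier):

    def map_fn(_, line):
        # line: "URL,size,date,..." from the metadata file
        url, size = line.split(",")[:2]
        host = url.split("/")[2]   # http://host/path -> host
        yield (host, int(size))

    def reduce_fn(host, sizes):
        # One call per host: sum the page sizes for all its URLs
        yield (host, sum(sizes))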
Other examples:
Link analysis and graph processing
Machine Learning algorithms
Statistical machine translation:
Need to count number of times every 5-word
sequence occurs in a large corpus of documents
Very easy with MapReduce:
Map:
Extract (5-word sequence, count) from document
Reduce:
Combine counts
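
For instance, the map step might look like this (Python sketch; the reducer is the same summing reducer as in word count):

    def map_fn(_, text):
        # Emit every 5-word sequence in the document with count 1
        words = text.split()
        for i in range(len(words) - 4):
            yield (tuple(words[i:i + 5]), 1)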
Compute the natural join R(A,B) ⋈ S(B,C)
R and S each are stored in files
Tuples are pairs (a,b) or (b,c)
R(A, B):        S(B, C):        R ⋈ S:
A   B           B   C           A   B   C
a1  b1          b2  c1          a3  b2  c1
a2  b1          b2  c2          a3  b2  c2
a3  b2          b3  c3          a4  b3  c3
a4  b3
Use a hash function h from B-values to 1...k
A Map process turns:
Each input tuple R(a,b) into key-value pair (b,(a,R))
Each input tuple S(b,c) into (b,(c,S))
Map processes send each key-value pair with
key b to Reduce process h(b).
Hadoop does this automatically; just tell it what k is.
Each Reduce process matches all the pairs
(b,(a,R)) with all (b,(c,S)) and outputs (a,b,c).
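
A sketch of the two map functions and the reducer in Python (illustrative names; in Hadoop the routing of key b to reducer h(b) happens automatically):

    def map_R(a, b):
        yield (b, ("R", a))      # tag each tuple with its relation

    def map_S(b, c):
        yield (b, ("S", c))

    def reduce_join(b, tagged):
        # All tuples sharing this B-value meet at one reducer:
        # pair every R-tuple with every S-tuple
        tagged = list(tagged)
        r_side = [a for tag, a in tagged if tag == "R"]
        s_side = [c for tag, c in tagged if tag == "S"]
        for a in r_side:
            for c in s_side:
                yield (a, b, c)

On the example tables, the reducer for key b2 receives (b2, (a3, R)), (b2, (c1, S)), (b2, (c2, S)) and outputs (a3, b2, c1) and (a3, b2, c2).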
1. Communication cost = total I/O of all processes
2. Elapsed communication cost = max of I/O along any path
3. (Elapsed) computation costs analogous, but count only running time of processes
For a map-reduce algorithm:
Communication cost = input file size + 2 × (sum of
the sizes of all files passed from Map processes to
Reduce processes) + the sum of the output sizes of
the Reduce processes
Elapsed communication cost is the sum of the
largest input + output for any map process, plus
the same for any reduce process
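
A worked example with illustrative numbers: if a join’s input files total 1 TB, the Map processes pass roughly 1 TB of tagged tuples to the Reduce processes (counted twice: once written, once read), and the join result is 0.5 TB, then the total communication cost is about 1 + 2 × 1 + 0.5 = 3.5 TB.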
Either the I/O (communication) or processing
(computation) cost dominates
Ignore one or the other
Total costs tell what you pay in rent from your
friendly neighborhood cloud
Elapsed costs are wall-clock time using
parallelism
We put a limit s on the amount of input or output that
any one process can have. s could be:
What fits in main memory
What fits on local disk
Total communication cost = O(|R| + |S| + |R ⋈ S|)
Elapsed communication cost = O(s)
We’re going to pick k and the number of Map
processes so the I/O limit s is respected
With proper indexes, computation cost is linear
in the input + output size
So computation costs are like communication costs
Google
Not available outside Google
Hadoop
An open-source implementation in Java
Uses HDFS for stable storage
Download: http://lucene.apache.org/hadoop/
Aster Data
Cluster-optimized SQL Database that also
implements MapReduce
Ability to rent computing by the hour
Additional services, e.g., persistent storage
Amazon’s “Elastic Compute Cloud” (EC2)
Aster Data and Hadoop can both be run on
EC2
For CS341 (offered next quarter) Amazon will
provide free access for the class
Jeffrey Dean and Sanjay Ghemawat:
MapReduce: Simplified Data Processing on
Large Clusters
http://labs.google.com/papers/mapreduce.html
Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung: The Google File System
http://labs.google.com/papers/gfs.html
Hadoop Wiki
Introduction
http://wiki.apache.org/lucene-hadoop/
Getting Started
http://wiki.apache.org/lucene-hadoop/GettingStartedWithHadoop
Map/Reduce Overview
http://wiki.apache.org/lucene-hadoop/HadoopMapReduce
http://wiki.apache.org/lucene-hadoop/HadoopMapRedClasses
Eclipse Environment
http://wiki.apache.org/lucene-hadoop/EclipseEnvironment
Javadoc
http://lucene.apache.org/hadoop/docs/api/
Releases from Apache download mirrors
http://www.apache.org/dyn/closer.cgi/lucene/hadoop/
Nightly builds of source
http://people.apache.org/dist/lucene/hadoop/nightly/
Source code from subversion
http://lucene.apache.org/hadoop/version_control.html
Programming model inspired by functional language
primitives
Partitioning/shuffling similar to many large-scale sorting
systems
NOW-Sort ['97]
Re-execution for fault tolerance
BAD-FS ['04] and TACC ['97]
Locality optimization has parallels with Active
Disks/Diamond work
Active Disks ['01], Diamond ['04]
Backup tasks similar to Eager Scheduling in Charlotte
system
Charlotte ['96]
Dynamic load balancing solves similar problem as River's
distributed queues
River ['99]