Amazon Mechanical Turk Requester Round Table (Panos Ipeirotis)
Crowdsourcing using Mechanical Turk
Quality Management and Scalability
Panos Ipeirotis – New York University
Panos Ipeirotis - Introduction
New York University, Stern School of Business
“A Computer Scientist in a Business School”
http://behind-the-enemy-lines.blogspot.com/
Email: [email protected]
Amazon Mechanical Turk: Paid Crowdsourcing
Example: Build an “Adult Web Site” Classifier
Need a large number of hand-labeled sites
Get people to look at sites and classify them as:
G (general audience) PG (parental guidance) R (restricted) X (porn)
Cost/Speed Statistics
Undergrad intern: 200 websites/hr, cost: $15/hr
MTurk: 2500 websites/hr, cost: $12/hr
Bad news: Spammers!
Worker ATAMRO447HWJQ labeled X (porn) sites as G (general audience)
Improve Data Quality through Repeated Labeling
Get multiple, redundant labels using multiple workers
Pick the correct label based on majority vote
1 worker: 70% correct
11 workers: 93% correct
Probability of correctness increases with number of workers
Probability of correctness increases with quality of workers
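A minimal sketch of the majority-vote step (not from the talk; function names are illustrative), together with the binomial calculation behind the accuracy figures above, assuming independent workers of equal accuracy:

    from collections import Counter
    from math import comb

    def majority_vote(labels):
        # Most common label among redundant worker labels for one item
        return Counter(labels).most_common(1)[0][0]

    def p_majority_correct(p, n):
        # P(majority of n independent workers is right), each right w.p. p
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    print(majority_vote(["G", "G", "X", "G", "G"]))   # -> "G"
    print(p_majority_correct(0.7, 11))                # ~0.92, in line with the 93% above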
But Majority Voting is Expensive
Single Vote Statistics
MTurk: 2500 websites/hr, cost: $12/hr
Undergrad: 200 websites/hr, cost: $15/hr
11-vote Statistics
MTurk: 227 websites/hr, cost: $12/hr
Undergrad: 200 websites/hr, cost: $15/hr
Using redundant votes, we can infer worker quality
Look at our spammer friend ATAMRO447HWJQ together with 9 other workers
We can compute error rates for each worker
Our “friend” ATAMRO447HWJQ mainly marked sites as G. Obviously a spammer…
Error rates for ATAMRO447HWJQ:
P[G → G] = 99.947%   P[G → X] = 0.053%
P[X → G] = 90.153%   P[X → X] = 9.847%
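Error rates like these can be estimated by comparing each worker against the consensus. A one-pass sketch of the idea (the real approach iterates, in the style of Dawid & Skene's EM; all names here are my own):

    from collections import Counter, defaultdict

    def estimate_error_rates(item_votes, classes=("G", "X")):
        # item_votes: {item: {worker: label}} from redundant labeling.
        # Step 1: approximate the true label of each item by majority vote.
        truth = {item: Counter(votes.values()).most_common(1)[0][0]
                 for item, votes in item_votes.items()}
        # Step 2: count how each worker labels items of each (approximate) class.
        counts = defaultdict(lambda: defaultdict(Counter))
        for item, votes in item_votes.items():
            for worker, label in votes.items():
                counts[worker][truth[item]][label] += 1
        # Step 3: normalize into P[true -> assigned] per worker.
        return {worker: {(t, a): by_cls[t][a] / sum(by_cls[t].values())
                         for t in by_cls for a in classes}
                for worker, by_cls in counts.items()}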
Rejecting Spammers and Benefits
Random answers error rate = 50%
Average error rate for ATAMRO447HWJQ: 45.2%
P[G → G] = 99.947%   P[G → X] = 0.053%
P[X → G] = 90.153%   P[X → X] = 9.847%
Action: REJECT and BLOCK
Results:
Over time you block all spammers
Spammers learn to avoid your HITs
You can decrease redundancy, as quality of workers is higher
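The reject/block rule can be as simple as a threshold on the average per-class error rate, flagging workers who sit close to the 50% of random guessing. One simple way to compute it, using the rates above:

    def spam_score(error_rates, classes=("G", "X")):
        # Mean per-class error rate; ~0.5 on a binary task means the
        # worker's labels carry no information about the true class.
        per_class = [sum(p for (t, a), p in error_rates.items()
                         if t == c and a != c)
                     for c in classes]
        return sum(per_class) / len(per_class)

    rates = {("G", "G"): 0.99947, ("G", "X"): 0.00053,
             ("X", "G"): 0.90153, ("X", "X"): 0.09847}
    print(spam_score(rates))   # ~0.45 -> REJECT and BLOCK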
After rejecting spammers, quality goes up
Spam keeps quality down
Without spam, workers are of higher quality
Need less redundancy for same quality
Same quality of results for lower cost
With spam: 1 worker: 70% correct; 11 workers: 93% correct
Without spam: 1 worker: 80% correct; 5 workers: 94% correct
Correcting biases
Classifying sites as G, PG, R, X
Sometimes workers are careful but biased
Error rates for worker ATLJIK76YH1TF (the CEO of AdSafe):
P[G → G] = 20.0%    P[G → P] = 80.0%    P[G → R] = 0.0%      P[G → X] = 0.0%
P[P → G] = 0.0%     P[P → P] = 0.0%     P[P → R] = 100.0%    P[P → X] = 0.0%
P[R → G] = 0.0%     P[R → P] = 0.0%     P[R → R] = 100.0%    P[R → X] = 0.0%
P[X → G] = 0.0%     P[X → P] = 0.0%     P[X → R] = 0.0%      P[X → X] = 100.0%
Classifies G → P and P → R
Average error rate for ATLJIK76YH1TF: too high
Is she a spammer?
Correcting biases
For ATLJIK76YH1TF, we simply need to “reverse the errors” (technical details omitted) and separate error from bias
True error rate: ~9%
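“Reversing the errors” is, at heart, Bayes' rule applied with the worker's confusion matrix. A sketch of the single-worker case, with uniform priors assumed (the full method combines all workers jointly):

    def reverse_errors(assigned, error_rates, priors):
        # P(true | assigned) is proportional to prior(true) * P[true -> assigned]
        scores = {t: priors[t] * error_rates.get((t, assigned), 0.0)
                  for t in priors}
        z = sum(scores.values())
        return {t: s / z for t, s in scores.items()}

    # ATLJIK76YH1TF's matrix above: an "R" from her was really a P or an R
    rates = {("G", "G"): 0.2, ("G", "P"): 0.8,
             ("P", "R"): 1.0, ("R", "R"): 1.0, ("X", "X"): 1.0}
    priors = dict.fromkeys("GPRX", 0.25)
    print(reverse_errors("R", rates, priors))  # P: 0.5, R: 0.5, G and X: 0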
Too much theory?
Demo and Open source implementation available at:
http://qmturk.appspot.com
Input:
– Labels from Mechanical Turk
– Cost of incorrect labelings (e.g., X→G costlier than G→X)
Output:
– Corrected labels
– Worker error rates
– Ranking of workers according to their quality
Beta version, more improvements to come!
Suggestions and collaborations welcomed!
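The cost input listed above corresponds to picking, for each item, the label that minimizes expected misclassification cost under the corrected label distribution. A sketch with made-up costs:

    def min_cost_label(posterior, cost):
        # cost[(true, assigned)]: price of outputting 'assigned' when truth is 'true'
        def expected_cost(assigned):
            return sum(p * cost.get((t, assigned), 0.0)
                       for t, p in posterior.items())
        return min(posterior, key=expected_cost)

    posterior = {"G": 0.7, "X": 0.3}                # corrected label distribution
    cost = {("X", "G"): 10.0, ("G", "X"): 1.0}      # illustrative: X->G is 10x worse
    print(min_cost_label(posterior, cost))          # -> "X": the porn risk dominates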
Scaling Crowdsourcing: Use Machine Learning
Human labor is expensive, even when paying cents
Need to scale crowdsourcing
Basic idea: Build a machine learning model and use it instead of humans
[Diagram: new case → automatic model (built through machine learning from existing crowdsourced answers) → automatic answer]
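A toy sketch of this pipeline using scikit-learn; the data, features, and choice of learner are stand-ins, since the talk does not prescribe any of them:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Stand-in training data: site text plus its corrected crowdsourced label
    pages  = ["family recipes and crafts", "movie reviews with some violence",
              "explicit adult photo gallery", "adult video chat rooms"]
    labels = ["G", "PG", "X", "X"]

    vec = TfidfVectorizer()
    model = LogisticRegression(max_iter=1000).fit(vec.fit_transform(pages), labels)

    # New cases are now answered by the model, not by (paid) humans
    print(model.predict(vec.transform(["adult photo chat"])))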
Tradeoffs for Automatic Models: Effect of Noise
Get more data → improve model accuracy
Improve data quality → improve classification
Example Case: Porn or not?
[Plot: classification accuracy (50–100) vs. number of training examples (Mushroom dataset), one curve per data quality level: 100%, 80%, 60%, 50%]
Scaling Crowdsourcing: Iterative training
Use machine when confident, humans otherwise
Retrain with new human input → improve model → reduce need for humans
[Diagram: new case → automatic model (built from existing crowdsourced answers) → automatic answer; when the model is not confident, get human(s) to answer instead, and their answers flow back into the training data]
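A sketch of the confidence gate, reusing the model and vec from the previous sketch; ask_human and training_pool are hypothetical stand-ins for posting a HIT and for the retraining queue:

    training_pool = []   # human answers accumulate here; retrain periodically

    def ask_human(text):
        # Hypothetical stand-in for a HIT; here we just prompt locally
        return input(f"Label for {text!r} (G/PG/R/X): ")

    def classify_or_ask(model, vec, text, threshold=0.9):
        probs = model.predict_proba(vec.transform([text]))[0]
        if probs.max() >= threshold:
            return model.classes_[probs.argmax()]      # machine is confident
        label = ask_human(text)                        # fall back to humans
        training_pool.append((text, label))            # new data -> better model
        return label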
Scaling Crowdsourcing: Iterative training, with noise
Use machine when confident, humans otherwise
Ask as many humans as necessary to ensure quality
[Diagram: new case → automatic model (built from existing crowdsourced answers); confident of the quality? → automatic answer; not confident? → get human(s) to answer, and their answers flow back into the training data]
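The “as many humans as necessary” step can be a sequential stopping rule: keep buying labels until the posterior for the leading answer clears a target. A simplified binary sketch assuming independent workers of known accuracy p; get_label is a hypothetical stand-in for collecting one more worker's answer:

    from collections import Counter

    def label_until_confident(get_label, p=0.8, target=0.99, max_workers=11):
        votes = Counter()
        for _ in range(max_workers):
            votes[get_label()] += 1
            ranked = votes.most_common()
            margin = ranked[0][1] - (ranked[1][1] if len(ranked) > 1 else 0)
            odds = (p / (1 - p)) ** margin        # even prior odds assumed
            if odds / (1 + odds) >= target:
                break                             # confident: stop paying for labels
        return ranked[0][0]

    # Example: simulated workers who answer "X" correctly 80% of the time
    import random
    print(label_until_confident(lambda: "X" if random.random() < 0.8 else "G"))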
Thank you!
Questions?
“A Computer Scientist in a Business School”
http://behind-the-enemy-lines.blogspot.com/
Email: [email protected]