
Semistructured bargaining
with private information and
deadlines
Gideon Nave
Alec Smith
Colin Camerer
1
Bargaining and information
Informational asymmetries → inefficiencies
Willingness to endure a strike is the only credible
evidence that a firm is unable to pay a high wage
Forsythe+ AER 93 experiments: strikes might be
efficient under informational asymmetry
4
Motivations
• Experiments on semistructured bargaining
largely abandoned since the 1990s
Revive interest in this paradigm
Can still make predictions
• Dynamic, continuous interaction
• Informational asymmetries
• Descriptive & prescriptive
Descriptive: What happens?
Prescriptive: Data-mine predictors of strikes
5
The game
Integer random pie size: $1-6
Two player types
Uninformed
Informed
Bargain over the uninformed player’s payoff
6
The game
First 2 sec: initial bargaining positions
7
The game
10 sec: dynamic bargaining
8
The game
Cursors match: visual feedback
After 1.5 sec (without changes) – deal is made
9
The game
After 10 sec without agreement - strike
Both players get feedback following the game
10
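A minimal sketch (ours, not the authors' software) of the deal/strike clock described on slides 7–10, assuming a 0.1 s polling tick; all names are placeholders:

```python
# Sketch of the bargaining clock (assumptions: 0.1 s tick, names are ours).
MATCH_HOLD = 1.5   # seconds cursors must stay matched and unchanged for a deal
DEADLINE = 10.0    # seconds of bargaining before a strike
TICK = 0.1         # assumed polling interval

def outcome(offers, demands):
    """offers/demands: per-tick cursor positions (dollars) for the two players.
    Returns ('deal', amount) or ('strike', None)."""
    matched_for = 0.0
    for t in range(len(offers)):
        unchanged = t > 0 and offers[t] == offers[t - 1] and demands[t] == demands[t - 1]
        if offers[t] == demands[t] and (t == 0 or unchanged):
            matched_for += TICK
            if matched_for >= MATCH_HOLD:   # matched, unchanged for 1.5 s -> deal
                return ("deal", offers[t])
        else:
            matched_for = 0.0               # any movement or mismatch resets the clock
        if (t + 1) * TICK >= DEADLINE:      # 10 s without agreement -> strike
            break
    return ("strike", None)

# Example: cursors meet at $2 and hold
print(outcome([3, 2] + [2] * 100, [1, 2] + [2] * 100))  # ('deal', 2)
```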
[Figure: example trial — first 2 secs (initial positions), next 8 secs (dynamic bargaining), ending in a deal]
11
Methods
Fixed roles: informed / uninformed
Random pair matching
120 periods (+15 practice rounds)
N=110
> 6,000 trials
Players are paid for 15% of the periods
12
Revelation principle
• Every equilibrium has payoffs equivalent to
truthfully revealing hidden information to a
mediator, who shrinks pie k by (1 − αk), where αk is
the probability the deal is implemented (Myerson, 1978)
• ⇒ payoffs must satisfy the "incentive
compatibility" (IC) constraints
– e.g., if the pie is $6, the informed player must prefer
reporting $6 to reporting $1–$5
13
Some algebra showing IC
• deal probability for reported pie k is αk; shrinkage rate is 1 − αk
• Suppose the pie is $6
• IC: must prefer truth to misreporting k < 6
•   α6(6 − x6) ≥ αk(6 − xk)   for k = 1, 2, …, 5
14
Implies: strikes for pies $1–$3, no strikes for pies $4–$6
16
Candidate “focal” equilibrium
• Divide pies evenly subject to IR, IC
• Equilibrium:
  xk = k/2 for k = 1, 2, 3, 4
  xk = $2 for k = 4, 5, 6
• IC: αj ≤ αk(.5k) / (k − .5j) for j < k
• ⇒ deal rates are .4, .6, .8, 1, 1, 1
• uninformed gets .5, 1, 1.5, 2, 2, 2
17
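One step the slides leave implicit: with the even-split offers, the adjacent IC constraints bind, and that pins down the deal rates. A short derivation in the slide's notation (our algebra):

```latex
% Adjacent IC binds: pie k is indifferent between truth and reporting k-1.
% With x_k = k/2 for k \le 4:
\[
  \alpha_k\,(k - x_k) = \alpha_{k-1}\,(k - x_{k-1})
  \;\Longrightarrow\;
  \alpha_{k-1} = \alpha_k\,\frac{k/2}{\,k-(k-1)/2\,} = \alpha_k\,\frac{k}{k+1}
  \qquad (k \le 4).
\]
% Starting from \alpha_4 = \alpha_5 = \alpha_6 = 1:
\[
  \alpha_3 = \tfrac{4}{5} = 0.8, \qquad
  \alpha_2 = \tfrac{3}{4}\cdot 0.8 = 0.6, \qquad
  \alpha_1 = \tfrac{2}{3}\cdot 0.6 = 0.4 .
\]
```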
Key point
• Can get precise nonobvious predictions
even with semistructured bargaining*
*”evidence”: Shin Shimojo reaction
18
DATA
20
Payoffs division (for deals)
[Figure: payoff (USD) vs. pie size (USD) — informed payoff and uninformed payoff]
22
Uninformed player payoffs (deals only)
[Figure]
27
Using process data to predict strikes
• Process has many “features”
– E.g.: time since last demand/offer change
– gap between current offer and demand
• Use machine learning/data mining to select
from many features
– LASSO regression with penalty for large β
– Crucial: “train” on 90% of the data, validate on the
10% holdout; done for all 10 holdouts (a sketch follows this slide)
32
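A minimal sketch of this train/holdout procedure, assuming scikit-learn; the data files and the linear-probability (LASSO) specification for the binary strike outcome are our placeholders, not the authors' code:

```python
# Sketch (ours): 10-fold outer holdout, penalty chosen by inner CV on training data.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

X = np.load("process_features.npy")   # placeholder: one row of features per trial
y = np.load("strike_indicator.npy")   # placeholder: 1 if the trial ended in a strike

outer = KFold(n_splits=10, shuffle=True, random_state=0)
holdout_scores = []
for train_idx, test_idx in outer.split(X):
    # "Train" on 90%: LassoCV picks the penalty weight by CV within the training split
    model = LassoCV(cv=10).fit(X[train_idx], y[train_idx])
    # Validate on the 10% holdout
    holdout_scores.append(model.score(X[test_idx], y[test_idx]))
print("mean holdout R^2:", np.mean(holdout_scores))
```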
quick advert
• Machine learning in economics, high-dimensional data, sparsity
– Krajbich+ Sci 09 neural BOLD and public good
value
– Smith+ AEJ: Micro 14 neural BOLD during passive
viewing and consumer purchase
– Methods: Belloni+ J Ec Pers, 14 review
33
LASSO shrinkage method
http://statweb.stanford.edu/~tibs/ElemStatLearn/
35
The LASSO as a constrained optimization problem
[Figure: SSR contours meeting the L1 constraint region; from Hastie et al. (2009)]
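For reference, the two equivalent formulations the figure depicts (the standard LASSO; notation ours):

```latex
% Penalized form:
\[
  \hat\beta^{\mathrm{lasso}}
  = \arg\min_{\beta}\;\sum_{i=1}^{n}\Bigl(y_i - \beta_0 - \textstyle\sum_{j=1}^{p} x_{ij}\beta_j\Bigr)^{2}
    + \lambda \sum_{j=1}^{p}\lvert\beta_j\rvert
\]
% Equivalent constrained form (the diamond-shaped region in the figure):
\[
  \min_{\beta}\;\sum_{i=1}^{n}\Bigl(y_i - \beta_0 - \textstyle\sum_{j=1}^{p} x_{ij}\beta_j\Bigr)^{2}
  \quad\text{s.t.}\quad \sum_{j=1}^{p}\lvert\beta_j\rvert \le t
\]
```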
Process features that have weight in LASSO
[Figure]
37
ROC curves
process data predicts about as well as pie size alone;
adding it yields a small gain in predictive power (t = 5)
39
First-time player strike rates
[Figure: strike rates of .45 and .27]
40
Conclusions
Revive interest in semi-structured bargaining
• “Semi” is enough to get prediction
• Further hypotheses (focality) give precise predictions
(exact offers, strike rates)
Results:
• Too many strikes
• Otherwise, offers and strike-rate trends match closely
• Process data can improve prediction
Future:
• Richer process (SCR, eyetracking, facial image)
• Positive analysis of bargaining institutions (face-to-face, use of agents, …)
42
The secret of life is honesty and fair dealing.
If you can fake that, you’ve got it made.
Groucho Marx
43
Data Analysis: Predicting Choices
(Smith, Bernheim, Camerer, Rangel AEJ Micro 2014)
1. Separate (y,D) into test and training data
2. Model Selection: Using only the training data
a. Identify best predicting model via k-fold cross-validation over penalty weights λ1,…,λ100
b. Save regression coefficients
c. We also use an initial voxel screening step (cf
Ryali et al. 2010), but this turns out not to
matter
3. Model Assessment: On test data
– Predict ytest using Dtest
4. Repeat, cycling through all n observations
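A compact sketch of steps 1–4, assuming scikit-learn; the voxel-screening step (2c) is omitted since the slide says it doesn't matter, and the data variables are placeholders. Note that scikit-learn parameterizes the penalty as C = 1/λ:

```python
# Sketch (ours): leave-one-out outer loop, inner k-fold CV over 100 penalty weights.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import LeaveOneOut

D = np.load("neural_differences.npy")  # placeholder: trials x voxels
y = np.load("choices.npy")             # placeholder: 1 if the target food was chosen

correct = []
for train_idx, test_idx in LeaveOneOut().split(D):
    # Step 2a: choose the penalty by k-fold CV on the training data only
    # (Cs=100 ~ a grid of 100 penalty weights, with C = 1/lambda)
    model = LogisticRegressionCV(Cs=100, cv=10, penalty="l1", solver="liblinear")
    model.fit(D[train_idx], y[train_idx])
    # Step 3: predict the single held-out observation
    correct.append(model.predict(D[test_idx])[0] == y[test_idx][0])
print("leave-one-out accuracy:", np.mean(correct))
```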
Lasso-penalized Logistic Regression
for Model Selection
• For each subject (i) and choice pair (t), we let yit = 1 if the
target food is chosen, and 0 otherwise.
• We assume
    Pr(yit = 1 | Dit) = exp(γ0 + γ·Dit) / (1 + exp(γ0 + γ·Dit))
where Dit is the difference in non-choice neural responses
between the two foods.
• We solve
    maxγ  LL(γ) − λ Σj=1..p |γj|
and use cross-validation to determine the optimal penalty
weight
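Transcribed directly into NumPy (a sketch, ours; the function and variable names are assumptions):

```python
# The objective on this slide: maximize LL(gamma) - lambda * sum_j |gamma_j|.
import numpy as np

def penalized_ll(g0, g, D, y, lam):
    """L1-penalized Bernoulli log-likelihood with Pr(y=1|D) = logistic(g0 + D @ g)."""
    z = g0 + D @ g
    # log-likelihood: sum_t [ y_t * z_t - log(1 + exp(z_t)) ]
    ll = np.sum(y * z - np.log1p(np.exp(z)))
    return ll - lam * np.sum(np.abs(g))
```

A solver would maximize this over (g0, g); cross-validation then picks lam, as described on the slide.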
Within subject predictive accuracy:
by voxel selection threshold
[Figure: mean success rate (n = 17) vs. percent of voxels selected (0.01%–100%); success rates range roughly 0.48–0.66]
Within subject predictive accuracy:
By subject, 1% of voxels