Foundations of Organizational Behavior
Combining Test Data
MANA 4328
Dr. George Benson
[email protected]
Selection Decisions
First, how to deal with multiple predictors?
Second, how to make a final decision?
Interpreting Test Scores
Norm-referenced scores
Test scores are compared to other applicants or a comparison group.
Raw scores should be converted to Z scores or percentiles.
Use “rank ordering”
Criterion-referenced scores
Test scores indicate a degree of competency
NOT compared to other applicants
Typically scored as “qualified” vs. “not qualified”
Use “cut-off scores”
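A minimal sketch of the two scoring approaches above; the applicant pool and the cut-off value are hypothetical, and only the Python standard library is used:

```python
from statistics import mean, stdev

# Hypothetical raw test scores for an applicant pool (the norm group).
raw = [62, 71, 55, 80, 68, 74, 59, 90, 77, 66]

m, s = mean(raw), stdev(raw)

# Norm-referenced: each score is expressed relative to the comparison group.
z_scores = [(x - m) / s for x in raw]

# Percentile rank: share of the pool scoring at or below each applicant.
percentiles = [100 * sum(y <= x for y in raw) / len(raw) for x in raw]

# Rank ordering for top-down selection (highest score first).
ranked = sorted(raw, reverse=True)

# Criterion-referenced: compare to a fixed cut-off, not to other applicants.
CUTOFF = 70  # hypothetical passing score
qualified = [x for x in raw if x >= CUTOFF]
```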
Setting Cutoff Scores
Based on the percentage of applicants you need to
hire (yield ratio). “Thorndike’s predicted yield”
You need 5 warehouse clerks and expect 50 to apply.
5 / 50 = .10 (10%) means 90% of applicants rejected
Cutoff Score set at 90th percentile
A Z score of 1 corresponds to the 84th percentile; a 90th-percentile cutoff is about Z = 1.28
Based on a minimum proficiency score
Based on validation study linked to job analysis
Incorporates SEM (validity and reliability)
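A short sketch of the predicted-yield arithmetic above, using the slide's numbers (5 openings, 50 expected applicants). The assumption that test scores are roughly normal, and the use of Python's NormalDist, are mine, not from the slide:

```python
from statistics import NormalDist

openings, expected_applicants = 5, 50

selection_ratio = openings / expected_applicants   # 5 / 50 = 0.10
percentile_cutoff = 1 - selection_ratio            # reject the bottom 90%

# If scores are roughly normal, the cut score in z-score units is the
# 90th-percentile point of the standard normal distribution (~1.28).
z_cutoff = NormalDist().inv_cdf(percentile_cutoff)

print(f"Selection ratio: {selection_ratio:.0%}")
print(f"Cut score at the {percentile_cutoff:.0%} percentile, z = {z_cutoff:.2f}")

# For reference, z = 1 sits at about the 84th percentile, as on the slide.
print(f"z = 1 -> {NormalDist().cdf(1):.0%} percentile")
```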
Selection Outcomes
[Figure: scatterplot of PREDICTION (x-axis) against PERFORMANCE (y-axis), with a regression line and a cut score at the 90th percentile separating "No Pass" from "Pass".]
Selection Outcomes
Crossing PREDICTION (No Hire vs. Hire) with PERFORMANCE (Low vs. High Performer):
High Performer, No Hire: False Negative (Type 1 Error)
High Performer, Hire: True Positive
Low Performer, No Hire: True Negative
Low Performer, Hire: False Positive (Type 2 Error)
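A small sketch that tallies these four outcomes for a set of hypothetical applicants, keeping the slide's Type 1 / Type 2 labels:

```python
from collections import Counter

# (hired, high_performer) pairs for hypothetical applicants.
outcomes = [(True, True), (True, False), (False, True),
            (False, False), (True, True), (False, False)]

def classify(hired: bool, high_performer: bool) -> str:
    """Map a hire decision and later performance to a selection outcome."""
    if hired and high_performer:
        return "true positive"
    if hired and not high_performer:
        return "false positive (Type 2 error)"
    if not hired and high_performer:
        return "false negative (Type 1 error)"
    return "true negative"

print(Counter(classify(h, p) for h, p in outcomes))
```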
Selection Outcomes
[Figure: the same scatterplot with a prediction line and cut score, the prediction axis split into Unqualified vs. Qualified and the performance axis into Low vs. High Performer.]
Banding
Grouping like test scores together
Standard Error of Measure (SEM), a function of test reliability
Band of + or – 2 SEM ≈ a 95% confidence interval
If the top score on a test is 95 and the SEM is 2, then scores between 95 and 91 should be banded together.
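A sketch of banding with the slide's numbers. The SEM formula, SD × √(1 − reliability), is the standard one; the score list and the SD/reliability values (chosen so SEM = 2) are hypothetical:

```python
import math

scores = [95, 94, 93, 91, 90, 88, 85]   # hypothetical applicant scores

# SEM is a function of test reliability: SEM = SD * sqrt(1 - reliability).
sd, reliability = 10, 0.96
sem = sd * math.sqrt(1 - reliability)   # = 2, as in the slide's example

top = max(scores)
band_floor = top - 2 * sem              # +/- 2 SEM ~ 95% confidence interval

# Scores within 2 SEM of the top score are treated as equivalent.
band = [s for s in scores if s >= band_floor]
print(f"SEM = {sem:.1f}; band = {band}")  # scores from 95 down to 91
```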
Selection Outcomes
[Figure: scatterplot with prediction line and cut score, the performance axis split into Acceptable vs. Unacceptable and the prediction axis into Unqualified vs. Qualified.]
Dealing With Multiple Predictors
“Mechanical” techniques are superior to judgment.
1. Compensatory or “test assessment approach” – combine predictors
2. Multiple Hurdles / Multiple Cutoff – judge each predictor independently
3. Hybrid selection systems
Compensatory Methods
Unit weighting
P1 + P2 + P3 + P4 = Score
Rational weighting
(.10) P1 + (.30) P2 + (.40) P3 + (.20) P4 = Score
Ranking
RankP1 + RankP2 + RankP3 + RankP4 = Score
Profile Matching
D² = Σ (P_ideal – P_applicant)²
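A sketch of three of these compensatory methods. The rational weights are the ones on the slide; the applicant scores and the ideal profile are hypothetical, and the predictors are assumed to be already standardized so they can be combined:

```python
# Each applicant has scores on four predictors P1..P4 (hypothetical data).
applicants = {
    "A": [0.5, 1.2, -0.3, 0.8],
    "B": [1.1, 0.4, 0.9, -0.2],
}
ideal = [1.0, 1.0, 1.0, 1.0]  # hypothetical ideal profile

def unit_weighting(p):
    """P1 + P2 + P3 + P4 = Score (every predictor counts equally)."""
    return sum(p)

def rational_weighting(p, w=(0.10, 0.30, 0.40, 0.20)):
    """(.10)P1 + (.30)P2 + (.40)P3 + (.20)P4 = Score (weights from the slide)."""
    return sum(wi * pi for wi, pi in zip(w, p))

def profile_match(p):
    """D² = Σ (P_ideal – P_applicant)²; a LOWER D² means a closer match."""
    return sum((i - a) ** 2 for i, a in zip(ideal, p))

for name, p in applicants.items():
    print(name, unit_weighting(p), rational_weighting(p), profile_match(p))
```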
Combined Selection Model
Selection Stage           Selection Test                    Decision Model
Applicants → Candidates   Application Blank                 Minimum Qualification Hurdle
Candidates → Finalists    Four Ability Tests, Work Sample   Rational Weighting Hurdle
Finalists → Offers        Structured Interview              Unit Weighting, Rank Order
Offers → Hires            Drug Screen, Final Interview      Hurdle (Final Selection)
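A compact sketch of a hybrid system like the one above: hurdles screen at each stage, with a compensatory score in the middle. All applicant data, weights, and thresholds here are illustrative, not from the slide:

```python
applicants = [
    {"name": "A", "meets_min_quals": True,  "abilities": [70, 80, 75, 65],
     "work_sample": 85, "interview": 78, "drug_screen_pass": True},
    {"name": "B", "meets_min_quals": True,  "abilities": [60, 55, 58, 62],
     "work_sample": 50, "interview": 90, "drug_screen_pass": True},
    {"name": "C", "meets_min_quals": False, "abilities": [90, 92, 88, 85],
     "work_sample": 95, "interview": 95, "drug_screen_pass": True},
]

# Hurdle 1: application blank -- minimum qualifications.
candidates = [a for a in applicants if a["meets_min_quals"]]

# Hurdle 2: rational weighting of four ability tests plus the work sample.
def weighted(a):
    return sum(0.15 * s for s in a["abilities"]) + 0.40 * a["work_sample"]

finalists = [a for a in candidates if weighted(a) >= 60]  # illustrative cutoff

# Compensatory stage: unit-weight the interview with the earlier composite,
# then rank order the finalists top-down.
offers = sorted(finalists, key=lambda a: weighted(a) + a["interview"],
                reverse=True)

# Final hurdle: drug screen / final interview before hiring.
hires = [a["name"] for a in offers if a["drug_screen_pass"]]
print(hires)
```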
Top Down Selection (Rank) vs. Cutoff scores
1. Is the predictor linearly related to performance?
2. How reliable are the tests?
Top-down method – Rank order
Minimum cutoffs – Passing scores
Final Decision
Random Selection
Ranking
Grouping
Role of Discretion or “Gut Feeling”