Transcript Document

Testing hypotheses: Continuous variables

Hypothesis: Lower income (L) → Higher murder rate (H)
[Figure: each city in the sample coded L (low) or H (high) on median income and on murder rate]
Frequencies:

              High Murder   Low Murder
Low Income         3             1
High Income        2             4
Percentages:

              High Murder   Low Murder
Low Income        75%           25%
High Income       33%           67%
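A minimal sketch of how the percentage table follows from the frequency table (Python is an assumption here – any tool works): go to each category of the independent variable (income) and percentage it across the dependent variable (murder rate).

    # Frequencies from the table above: rows are categories of the IV (income),
    # columns are categories of the DV (murder rate).
    freqs = {
        "Low Income":  {"High Murder": 3, "Low Murder": 1},
        "High Income": {"High Murder": 2, "Low Murder": 4},
    }

    # Percentage each row of the IV across the DV.
    for income, row in freqs.items():
        total = sum(row.values())
        pcts = {murder: round(100 * n / total) for murder, n in row.items()}
        print(income, pcts)
    # Low Income  {'High Murder': 75, 'Low Murder': 25}
    # High Income {'High Murder': 33, 'Low Murder': 67}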
Correlation and Regression

• Correlation: measure of the strength of an association (relationship) between continuous variables
• Regression: predicting the value of a continuous dependent variable (y) based on the value of a continuous independent variable (x)
Hypothesis: Lower income → Higher murder rate

[Figure: distribution of cities by median income and distribution of cities by murder rate; the IV and DV are plotted for each case (city) on a “scattergram,” with median income on the horizontal axis and murder rate on the vertical axis (two cities detailed). Analysis later…]
Correlation statistic - r

• Values of r range from –1 to +1 (a computational sketch follows this list)
• –1 is a perfect negative association (correlation), meaning that as the scores of one variable increase, the scores of the other variable decrease at exactly the same rate
• +1 is a perfect positive association, meaning that both variables go up or down together, in perfect harmony
• Intermediate values of r (close to zero) indicate weak or no relationship
• An r of zero (never seen in real life) means no relationship – the variables do not change or “vary” together, except as might happen through chance alone
• Remember that “negative” doesn’t mean “no” relationship. A negative relationship is just as much a relationship as a positive relationship.
+1  Perfect positive relationship
 0  No relationship
–1  Perfect negative relationship
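A minimal sketch of how r could be computed from paired scores (Python and the made-up scores are assumptions for illustration; the formula is the standard Pearson correlation):

    import math

    def pearson_r(xs, ys):
        """Correlation coefficient r for paired scores xs and ys."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        # Cross-products of deviations from the two means...
        sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        # ...divided by the square root of the product of the squared deviations.
        sxx = sum((x - mean_x) ** 2 for x in xs)
        syy = sum((y - mean_y) ** 2 for y in ys)
        return sxy / math.sqrt(sxx * syy)

    print(pearson_r([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))   # +1.0, perfect positive
    print(pearson_r([1, 2, 3, 4, 5], [5, 4, 3, 2, 1]))   # -1.0, perfect negative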
Two “scattergrams” – each with a “cloud” of dots

[Figure: two scattergrams, one with r = +1 and one with r = –1]

NOTE: The dependent variable (Y) is always placed on the vertical axis.
NOTE: The independent variable (X) is always placed on the horizontal axis.
Can changes in one variable be predicted by changes in the other?

[Scattergram: r = 0]

As X changes in value, does Y move correspondingly, either in the same or opposite direction? Here there seems to be no connection between X and Y. One cannot predict values of Y from values of X.
Can changes in one variable be predicted by changes in the other?

[Scattergram: r = +1]

Here as X changes in value by one unit, Y also changes in value by one unit. Knowing the value of X one can predict the value of Y. X and Y go up and down together, meaning a positive relationship.
Can changes in one variable be predicted by changes in the other?

[Scattergram: r = –1]

Here as X changes in value by one unit, Y also changes in value by one unit. Knowing the value of X one can predict the value of Y. X and Y go up and down in opposite directions, meaning a negative relationship.
Computing r using the “Line of best fit”

• To arrive at a value of “r” a straight line is placed through the cloud of dots (the actual, “observed” data)
• This line is placed so that the cumulative distance between itself and the dots is minimized
• The smaller this distance, the higher the r
• r’s are normally calculated with computers (a computational sketch follows below). Paired scores (each X/Y combination) and the means of X and Y are used to compute:
  – a, where the line crosses the Y axis
  – b, the slope of the line
• When relationships are very strong or very weak, one can estimate the r value by simply examining the graph

[Scattergram with a line of best fit drawn through the cloud of dots, its intercept (a) and slope (b) labeled]
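A minimal sketch of how a (the Y-intercept) and b (the slope) of the line of best fit can be computed by least squares (Python; the paired scores are made up for illustration):

    def best_fit_line(xs, ys):
        """Least-squares line of best fit: returns (a, b) for y = a + b*x."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        # Slope b: cross-products of deviations over squared deviations of X.
        b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
        # Intercept a: where the line crosses the Y axis.
        a = mean_y - b * mean_x
        return a, b

    a, b = best_fit_line([1, 2, 3, 4, 5], [2.1, 2.4, 3.6, 3.9, 5.0])
    print(a, b)          # intercept and slope of the fitted line
    print(a + b * 3.4)   # the Y value the line predicts for X = 3.4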
“Line of best fit”

• The line of best fit predicts a value for one variable given the value of the other variable
• There will be a difference between these estimated values and the actual, known (“observed”) values. This difference is called a “residual” or an “error of the estimate.”
• As the error between the known and predicted values decreases – as the dots cluster more tightly around the line – the absolute value of r (whether + or –) increases

[Scattergram with a line of best fit; annotations read “if y = 5, x = 3.4” and “if x = .5, y = 2.3”]
A perfect fit: Line of best fit goes “through” each dot

[Two scattergrams: r = +1.0, a perfect fit; and r = –1.0, a perfect fit]
Moderate cumulative distance between line of best fit and “cloud” of dots

[Scattergram: r = +.65 – an intermediate fit yields an intermediate value of r]
2
Large cumulative distance
between line of best fit and “cloud” of dots
2
r = - .19
1
3
4
5
6
Y
A poor fit yields
a low value of r
X
1
2
3
4
5
R-squared (R2), the coefficient of determination

• Proportion of the change in the dependent variable (also known as the “effect” variable) that is accounted for by change in the independent variable (also known as the “predictor” variable)
• Obtained by squaring the correlation coefficient (r), as in the sketch below
• “Little” r squared (r2) depicts the explanatory power of a single independent/predictor variable
• “Big” R squared (R2) combines the effects of multiple independent/predictor variables. It is the more commonly used of the two.
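A minimal sketch of the single-predictor case (Python, reusing the pearson_r() helper sketched earlier; the paired scores are made up). “Big” R2 for multiple predictors would come from a multiple-regression routine instead.

    xs = [1, 2, 3, 4, 5]
    ys = [2.1, 2.4, 3.6, 3.9, 5.0]

    r = pearson_r(xs, ys)   # correlation coefficient for a single predictor
    r2 = r ** 2             # proportion of change in Y accounted for by X
    print(round(r, 2), round(r2, 2))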
Hypothesis: Lower income → higher murder rate

How to “read” a scattergram

• Move along the IV. Do the values of the DV change in a consistent direction?
• Look across the IV. Does knowing the value of the IV help you predict the value of the DV?
• Place a straight line through the cloud of dots, trying to minimize the overall distance between the line and the dots. Is the line at a pronounced angle?

To the extent that you can answer “yes” to each of these, there is a relationship.

[Scattergram: R = –.6, R2 = .36]

Change in the IV accounts for thirty-six percent of the change in the DV. A moderate-to-strong relationship, in the hypothesized direction – hypothesis confirmed!
Class exercise

Hypothesis 1: Height → Weight
Hypothesis 2: Age → Weight

• Use this data to build a scattergram for each hypothesis (a plotting sketch follows the data table below)
• Be sure that the independent variable is on the X axis, smallest value on the left, largest on the right, just like when graphing any distribution
• Be sure that the dependent variable is on the Y axis, smallest value on the bottom, largest on top
• Place a dot representing a case at the intersection of its values on X and Y
• Place a STRAIGHT line where it minimizes the overall distance between itself and the cloud of dots
• Use this overall distance to estimate a possible value of r, from –1 (perfect negative relationship), to 0 (no relationship), to +1 (perfect positive relationship)
• Remember that “negative” doesn’t mean “no” relationship. Negative relationships are just as much a relationship as positive relationships.
Height (inches)   Weight   Age
62                130      23
62                167      26
64                145      30
64                150      28
68                145      28
60                122      26
63                125      31
66                125      20
69                236      40
62                115      20
69                150      21
64                115      23
64                175      22
65                150      29
68                208      40
66                190      26
63                150      28
74                230      25
67                150      34
64                117      27
71                195      21
71                230      24
65                175      26
69                180      27
69                220      28
70                150      20
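A minimal sketch of how the Height → Weight scattergram and its straight line of best fit could be drawn from this data (Python with matplotlib is an assumption – a hand-drawn graph works just as well; best_fit_line() is the helper sketched earlier):

    import matplotlib.pyplot as plt

    # Data from the exercise table, one entry per case.
    heights = [62, 62, 64, 64, 68, 60, 63, 66, 69, 62, 69, 64, 64,
               65, 68, 66, 63, 74, 67, 64, 71, 71, 65, 69, 69, 70]
    weights = [130, 167, 145, 150, 145, 122, 125, 125, 236, 115, 150, 115, 175,
               150, 208, 190, 150, 230, 150, 117, 195, 230, 175, 180, 220, 150]

    # Scattergram: IV (height) on the X axis, DV (weight) on the Y axis.
    plt.scatter(heights, weights)

    # Straight line of best fit, placed by least squares.
    a, b = best_fit_line(heights, weights)
    x_ends = [min(heights), max(heights)]
    plt.plot(x_ends, [a + b * x for x in x_ends])

    plt.xlabel("Height (inches)")
    plt.ylabel("Weight")
    plt.show()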
Impact of extreme scores

[Scattergram, all cases: r = .35, r2 = .12 – a weak to moderate positive relationship]
[Scattergram, less extreme cases only: r = –.17, r2 = .03 – a very weak negative relationship]

Extreme scores can be produced by measurement errors or other circumstances (here, it could be chronic illness or a hereditary disorder). To prevent confusion, such cases are often dropped, but notice should always be given.
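A minimal sketch of dropping extreme cases and recomputing r (Python, reusing the lists and the pearson_r() helper sketched earlier; the weight cutoff of 200 is purely illustrative, and this does not attempt to reproduce the exact values shown above):

    # Keep only the less extreme cases (cutoff is illustrative), then recompute r.
    kept = [(h, w) for h, w in zip(heights, weights) if w < 200]
    kept_heights = [h for h, w in kept]
    kept_weights = [w for h, w in kept]

    print("cases dropped:", len(heights) - len(kept))   # always give notice
    print("r, all cases:        ", round(pearson_r(heights, weights), 2))
    print("r, less extreme only:", round(pearson_r(kept_heights, kept_weights), 2))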
Changing the level of measurement from continuous to categorical

[Scattergram of height by weight, with the axes cut into SHORT/TALL and LIGHT/HEAVY regions so that the continuous cases regroup into four categorical cells (cell counts of 3, 7, 12, and 4)]
Spring ’15 p.s.: r = –.26, r2 = .07 – a weak negative relationship (larger circles denote multiple cases at the same values)
Warning: know your data!

Age → Height. People get taller as they age, right?

r = .04
r2 = .00

No relationship. In this sample, age has no relationship with height. Why? Because the range for age is severely restricted: each case is already an adult!
Exploring data with r

Why are we so polarized? Could part of the reason be a poor economy? The r statistic can be used to explore such questions, not just for a small group but for the whole country!

But unless we go in with a hypothesis, backed by a literature review, it’s basically a fishing expedition. Remember that there are lots of variables changing all the time, so finding substantial correlations isn’t unusual. Theorizing after the fact is always hazardous. Remember the story about lunar cycles and homicide?
Other correlation techniques

• “Spearman’s r”
  – Correlation technique for ordinal categorical variables
• Partial correlation
  – Using a control variable to assess its potential influence on a bivariate (two-variable) relationship when all variables are continuous
  – Analogous to using first-order partial tables for categorical variables
  – Instead of height → weight, is it possible that a variable related to height – age – is the real cause of changes in weight?
Zero-order correlations

           HEIGHT   WEIGHT    AGE
HEIGHT       1.00      .72    .04
WEIGHT        .72     1.00    .34
AGE           .04      .34   1.00

First-order partial correlations, controlling for AGE

           HEIGHT   WEIGHT
HEIGHT       1.00      .75
WEIGHT        .75     1.00
Nope - when age is “controlled for” - meaning its possible influence on the
relationship between height and weight is removed - the original, “zero-order”
relationship between height and weight, .72, barely changes (.75). (Age displays
no relationship with height because each case in this sample is an adult.)
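A minimal sketch of the standard first-order partial correlation formula applied to the zero-order values above (Python; it comes out near .75, consistent with the table):

    import math

    def partial_r(r_xy, r_xz, r_yz):
        """First-order partial correlation of X and Y, controlling for Z."""
        return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

    # Zero-order correlations from the table above.
    r_height_weight = .72
    r_height_age = .04
    r_weight_age = .34

    # Height-weight correlation with age controlled for.
    print(round(partial_r(r_height_weight, r_height_age, r_weight_age), 2))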
Some parting thoughts

• If we did not use probability sampling
  – Our results apply only to the cases we coded
  – Accounting for the influence of other variables can be tricky
  – r and related statistics are often unimpressive; describing what they mean can be tricky
• If we used probability sampling
  – Our results can be extended to the population
  – But, since samples are just that – samples – we cannot assume that the statistics a sample yields (e.g., r, R2) hold true for the population
  – Techniques we’ll discuss later allow us to estimate the magnitude of the difference between sample statistics and the corresponding population parameters
  – This process will also let us interpret our results with far greater clarity and precision than is possible without probability sampling
Exam preview

1. You will be given a hypothesis and data from a sample. There will be two variables – the dependent variable and the independent variable. Both will be categorical, and each will have two levels (e.g., low/high, etc.)
   A. You will build a table containing the frequencies (number of cases), just like we did in class and in this slide show. For consistency, place the categories of the independent variable in rows, just like in the slide shows.
   B. You will build another table with the percentages. Remember to go to one category of the independent variable and percentage it across the dependent variable. Then go to the other category of the independent variable and do the same.
   C. You will analyze the results. Are they consistent with the hypothesis?

2. You will be given the same data as above, broken down by a control variable. It will also be categorical, with two levels.
   A. You will build first-order partial tables, one with frequencies (number of cases), the other with percentages, for each level of the control variable. Remember that these tables will look exactly like the zero-order table. The hypothesis, the independent and dependent variables, and their categories stay the same.
   B. You will be asked whether introducing the control variable affects your assessment of the hypothesized zero-order relationship. This requires that you separately compare the results for each level of the control variable to the zero-order table. Does introducing the control variable tell us anything new?

3. You will be given another hypothesis and data. There will be two variables – the dependent variable and the independent variable. Both are continuous variables.
   A. You will build a scattergram and draw in a line of best fit. Remember that the independent variable must go on the X (horizontal) axis, and the dependent variable must go on the Y (vertical) axis. Also remember that the line of best fit must be a straight line, placed so as to minimize its overall distance from the dots, which represent the cases.
   B. You will estimate the r (correlation coefficient) and state whether the scattergram supports the hypothesis. Be careful! First, is there a relationship between variables? Second, is it in the same direction (positive or negative) as the hypothesized relationship?