Basic Statistics
Michael Hylin
Scientific Method






Start w/ a question
Gather information and resources (observe)
Form hypothesis
Perform experiment and collect data
Analyze data
Interpret data & draw conclusions, form new hypotheses
Retest (frequently done by other scientists), i.e. replicate & extend
Example


Pavlov noticed that when the dogs saw the lab tech they salivated the same as when they saw meat powder (Observation)
Predicted that other stimuli could elicit this response when paired w/ meat powder (Hypothesis)
Example



Pavlov found that when a bell was paired w/ the presence of meat powder an association occurred (Experimentation)
Concluded that pairing of the US w/ the CS could lead to a CR (Interpretation)
Research since Pavlov has demonstrated the mechanism of how classical conditioning works (e.g. in Aplysia)
Basics of Experimental Design



Types of Variables
Types of Comparisons
Types of Groups
Types of Variables

Independent Variable: manipulated by the experimenter; may have several
Dependent Variable: dependent upon the IV; the data
IV → DV
Types of Comparisons

Between-subjects: comparing one group to another
Within-subjects: comparing a subject's results at one point to another point; usually referred to as repeated-measures
Types of Groups

Experimental Group: receives the experimental manipulation
Control Group: "controls" for the effect of the manipulation
Example


A researcher has a new drug (M100) that improves semantic memory in normal individuals.
The researcher decides to test M100's effectiveness by giving the drug to participants and testing their ability to memorize a list of words. Other participants are given a sugar pill and told to memorize the list as well.
Example

What is the IV? The DV? Any additional IVs & DVs?
What was the control?
What type of comparison was being done? Could it be different?
What about statistics?

Why do we need statistics?




Cannot rely solely upon anecdotal evidence
Make sense of raw data
Describe behavioral outcomes
Test hypotheses
Measures of Central Tendency

Mode: the most common 'score' (highest frequency)
Median: the point at or below which 50% of the scores fall when the data are arranged in numerical order; typically used w/ non-normal distributions
Mean (often expressed as X̄): the sum of the scores divided by the number of scores
Mean
$\bar{X} = \frac{\sum X}{n}$, where $\sum X = X_1 + X_2 + \dots + X_n$
Example

Data for number of words recalled: 8, 14, 17, 10, 8
Mode = 8
Median = 10 (8, 8, 10, 14, 17)
Mean = (8 + 14 + 17 + 10 + 8) / 5 = 57 / 5 = 11.4
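A minimal sketch of the same three measures with Python's standard library; recall_scores is just an illustrative name for the data above.

```python
from statistics import mean, median, mode

# Number of words recalled by each participant (data from the example above)
recall_scores = [8, 14, 17, 10, 8]

print("Mode:", mode(recall_scores))      # 8   (most common score)
print("Median:", median(recall_scores))  # 10  (middle score once sorted)
print("Mean:", mean(recall_scores))      # 11.4 (sum / n)
```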
Measures of Variability

Range: difference between the highest and lowest scores
Variance (s²)
Standard Deviation (s)
Standard Error of the Mean (S.E.M.)
Variance

Equation for Variance:
$s^2 = \frac{\sum (X - \bar{X})^2}{n - 1}$

Where: $\sum (X - \bar{X})^2 = (X_1 - \bar{X})^2 + (X_2 - \bar{X})^2 + \dots + (X_n - \bar{X})^2$
Variance

Another Equation for Variance:
$s^2 = \frac{\sum X^2 - \frac{(\sum X)^2}{n}}{n - 1}$

Where: $\sum X^2 = X_1^2 + X_2^2 + \dots + X_n^2$ and $(\sum X)^2 = (X_1 + X_2 + \dots + X_n)^2$
Standard Deviation

Equation for Standard Deviation:
$s = \sqrt{\frac{\sum (X - \bar{X})^2}{n - 1}}$

Or:
$s = \sqrt{\frac{\sum X^2 - \frac{(\sum X)^2}{n}}{n - 1}}$
Example

Data for number of words recalled: 8, 14, 17, 10, 8
Range = 17 - 8 = 9
Variance = 15.8
Standard Deviation = 3.97
Example
Variance (X̄ = 11.4)

X     X - X̄             (X - X̄)²
8     8 - 11.4 = -3.4    11.56
14    14 - 11.4 = 2.6    6.76
17    17 - 11.4 = 5.6    31.36
10    10 - 11.4 = -1.4   1.96
8     8 - 11.4 = -3.4    11.56

$\sum (X - \bar{X})^2 = 63.2$
$s^2 = \frac{63.2}{4} = 15.8$

Standard Deviation:
$s = \sqrt{\frac{63.2}{4}} = 3.97$
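The sample variance and standard deviation above (n - 1 denominator) can be checked the same way; recall_scores is again just an illustrative name for the data.

```python
from statistics import variance, stdev

recall_scores = [8, 14, 17, 10, 8]

# Sample variance and standard deviation use the (n - 1) denominator,
# matching the formulas above.
print("Range:", max(recall_scores) - min(recall_scores))  # 9
print("Variance:", variance(recall_scores))               # 15.8
print("Std. deviation:", round(stdev(recall_scores), 2))  # 3.97
```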
Mean & Standard Deviation
Null Hypothesis

Start w/ a research hypothesis: the "manipulation" has an effect (e.g. students given study techniques have a higher GPA)
Set up the null hypothesis: the "manipulation" has NO effect (e.g. students w/ techniques are no different from those w/o techniques)
Null Hypothesis


Does the manipulation have an effect?
Use a critical value to test our hypothesis (significance level usually α = 0.05)
$H_0: \mu_{\text{treat}} = \mu_{\text{control}}$
$H_1: \mu_{\text{treat}} \neq \mu_{\text{control}}$
Hypothesis Testing
                True State of the World
Decision        H0 True                         H0 False
Reject H0       Type I Error (p = α)            Correct decision (p = 1 - β = Power)
Accept H0       Correct decision (p = 1 - α)    Type II Error (p = β)
Hypothesis Testing


We are not truly 'proving' our hypothesis. In reality we are setting up a situation where there is no relationship between the variables and then testing whether or not we can reject this (the null hypothesis).
Independent T-Test

Tests whether our samples come from the same population or different populations (comparing $\bar{X}_1$ and $\bar{X}_2$)
Equation for Independent T-Test:
$t = \dfrac{\bar{X}_1 - \bar{X}_2}{\sqrt{\left(\dfrac{\left(\sum X_1^2 - \frac{(\sum X_1)^2}{n_1}\right) + \left(\sum X_2^2 - \frac{(\sum X_2)^2}{n_2}\right)}{n_1 + n_2 - 2}\right)\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}}$
Group 1 (study techniques) GPA    Group 2 (no techniques) GPA
3.41                              2.54
3.16                              3.10
2.98                              2.10
2.95                              2.40
3.26                              2.80

$\sum X_1 = 15.76$, $\sum X_1^2 = 49.80$, $\bar{X}_1 = 3.15$, $n_1 = 5$
$\sum X_2 = 12.94$, $\sum X_2^2 = 34.07$, $\bar{X}_2 = 2.58$, $n_2 = 5$
$t = \dfrac{3.15 - 2.58}{\sqrt{\left(\dfrac{\left(49.80 - \frac{(15.76)^2}{5}\right) + \left(34.07 - \frac{(12.94)^2}{5}\right)}{5 + 5 - 2}\right)\left(\dfrac{1}{5} + \dfrac{1}{5}\right)}}$

$t = \dfrac{0.57}{\sqrt{\left(\dfrac{\left(49.80 - \frac{248.38}{5}\right) + \left(34.07 - \frac{167.44}{5}\right)}{8}\right)(0.2 + 0.2)}}$

$t = \dfrac{0.57}{\sqrt{\left(\dfrac{(49.80 - 49.68) + (34.07 - 33.49)}{8}\right)(0.4)}}$

$t = \dfrac{0.57}{\sqrt{\left(\dfrac{0.12 + 0.58}{8}\right)(0.4)}} = \dfrac{0.57}{\sqrt{\left(\dfrac{0.70}{8}\right)(0.4)}}$

$t = \dfrac{0.57}{\sqrt{(0.0875)(0.4)}} = \dfrac{0.57}{\sqrt{0.035}} = \dfrac{0.57}{0.187}$

$t = 3.04$





Since our observed t = 3.04 is greater than the critical value of 2.306 (df = 8, α = 0.05, two-tailed), we can reject the null hypothesis.
Therefore the probability of observing a difference this large when the null hypothesis is true is less than 0.05 (5%).
As a result, our effect is likely due to the training.
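If SciPy is available, the same comparison can be sketched as below; the group lists are the GPAs from the table above, and equal_var=True selects the pooled-variance Student's t-test that matches the formula on the slides. On the unrounded data this gives t ≈ 2.95; the hand calculation reaches 3.04 because it rounds the sums and means along the way.

```python
from scipy import stats

# GPA data from the worked example above
study_group = [3.41, 3.16, 2.98, 2.95, 3.26]     # Group 1: study techniques
no_study_group = [2.54, 3.10, 2.10, 2.40, 2.80]  # Group 2: no techniques

# equal_var=True gives the pooled-variance (Student's) independent t-test.
# On the unrounded data t ≈ 2.95 (p ≈ 0.02); the slide's hand calculation
# rounds intermediate sums and means, which is why it lands at 3.04.
t_stat, p_value = stats.ttest_ind(study_group, no_study_group, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```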
Degrees of Freedom

6, 8, 10: Mean = 8
If we change two of the numbers, the other is determined if we want to keep Mean = 8
e.g. 6 → 7 & 10 → 13, then the final number must be 4:
$\frac{7 + 13 + Y}{3} = 8 \;\Rightarrow\; 20 + Y = 24 \;\Rightarrow\; Y = 4$
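A tiny sketch of the same constraint: once the mean and all but one of the values are fixed, the remaining value is forced (target_mean and known_values are illustrative names, not from the slides).

```python
# Once the mean is fixed, only n - 1 values are free to vary:
# the last value is determined by the others.
target_mean = 8
n = 3
known_values = [7, 13]  # the two values we chose freely

last_value = target_mean * n - sum(known_values)
print(last_value)  # 4, exactly as in the worked example
```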
IV with more than two levels


Sometimes we want to compare more than just two groups
Cannot just do multiple t-tests (this inflates alpha, i.e. the Type I error rate)
Simple analysis of variance: 1-way ANOVA (see the sketch below)
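A minimal 1-way ANOVA sketch with SciPy; the three groups and their recall scores below are hypothetical, made up purely to illustrate the call.

```python
from scipy import stats

# Hypothetical recall scores for three study-technique groups
# (made-up numbers, only to illustrate a 1-way ANOVA call).
group_a = [8, 14, 17, 10, 8]
group_b = [12, 15, 11, 14, 13]
group_c = [6, 9, 7, 10, 8]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```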
Multiple IVs

Factorial ANOVA: allows for comparison of more than one IV
IVs can be between or within; if both, it's called a mixed ANOVA (repeated measures on one factor)
Tests the interaction of IVs
E.g. a 2x2 ANOVA: IV1 = Study group (no study vs. study), IV2 = Time at testing (pre vs. post); see the sketch below
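One way to run this kind of 2x2 mixed design (one between-subjects factor, one repeated measure) in Python is the third-party pingouin package; this is a tooling assumption rather than anything from the lecture, and the data frame below is entirely hypothetical.

```python
import pandas as pd
import pingouin as pg  # third-party package, assumed available

# Hypothetical long-format data for a 2x2 mixed design:
# 'group' is between-subjects (study vs. no_study),
# 'time' is within-subjects (pre vs. post), 'gpa' is the DV.
data = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":   ["study"] * 6 + ["no_study"] * 6,
    "time":    ["pre", "post"] * 6,
    "gpa":     [2.9, 3.4, 3.0, 3.5, 2.8, 3.3,
                3.1, 3.0, 2.9, 2.9, 3.0, 3.1],
})

# Mixed ANOVA: one within factor (time) and one between factor (group).
aov = pg.mixed_anova(data=data, dv="gpa", within="time",
                     subject="subject", between="group")
print(aov)
```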
Example
[Figure: GPA (scale 0-4) for Study vs. No Study groups at Pre and Post test]
ANOVA Table
Source        Sum of Squares   df   Mean Square   F          Sig.
Test          0.5445           1    0.5445        36.3       0.00
Test * Group  0.8405           1    0.8405        56.03333   0.00
Error(Test)   0.12             8    0.015
Group         0.7605           1    0.7605        5.827586   0.04
Error         1.044            8    0.1305
Example
[Figure: GPA (scale 0-4) for Study vs. No Study groups at Pre and Post test]
ANOVA Table
Source        Sum of Squares   df   Mean Square   F          Sig.
Test          0.002            1    0.002         0.148148   0.710342
Test * Group  0                1    0             0          1
Error(Test)   0.108            8    0.0135
Group         0.002            1    0.002         0.017241   0.898775
Error         0.928            8    0.116
F-Score Equation
$F = \dfrac{MS_{\text{group}}}{MS_{\text{error}}}$
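As a quick check against the first ANOVA table above, each F ratio is just the effect's mean square divided by its error term's mean square (the within-subject effects use Error(Test); the between-subject effect uses Error).

```python
# F ratios recomputed from the first ANOVA table above.
ms_test, ms_test_by_group, ms_error_test = 0.5445, 0.8405, 0.015
ms_group, ms_error = 0.7605, 0.1305

print(round(ms_test / ms_error_test, 1))           # 36.3     (Test)
print(round(ms_test_by_group / ms_error_test, 5))  # 56.03333 (Test * Group)
print(round(ms_group / ms_error, 6))               # 5.827586 (Group)
```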
What about further group comparisons?

Significant main effects with more than 2 levels: post hoc comparisons (see the sketch below)
Significant interactions: simple effects
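One common post hoc option (a choice of mine, not specified on the slides) is Tukey's HSD; the sketch below uses statsmodels' pairwise_tukeyhsd on the same hypothetical three-group data as the 1-way ANOVA example.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Same hypothetical three-group recall data as in the 1-way ANOVA sketch.
scores = np.array([8, 14, 17, 10, 8,
                   12, 15, 11, 14, 13,
                   6, 9, 7, 10, 8])
groups = np.array(["A"] * 5 + ["B"] * 5 + ["C"] * 5)

# Pairwise post hoc comparisons with Tukey's HSD at alpha = 0.05.
result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(result)
```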