Applying an Action Research Model to
Assess Student Understanding of the
Central Limit Theorem in Post-Calculus
Probability and Statistics Courses
JSM, Minneapolis
August 7, 2005
M. Leigh Lunsford, Longwood University
Ginger Holmes Rowell, Middle TN State University
Tracy Goodson-Espy, Appalachian State University
1
Action (Classroom) Research
• Use our classes as “mini-labs” in which we can
gather data and conduct experiments that will
(hopefully!) improve our teaching.
– NOT a randomized study!
• Research done in post-calculus mathematical
probability and statistics courses.
• Building on previous researchers' work
– Not reinventing the wheel!
– Comparison of our results to previous results.
• Conclusions and conjectures based on our data
and classroom observations may be interesting to
other educators.
Our Ultimate Goal: To Improve Our Teaching!
2
The Big Picture
• Work done via our NSF DUE A&I Collaborative Research
Award*:
– “Adaptation & Implementation of Activity & Web-Based Materials in
Post-Calculus Introductory Probability & Statistics Courses”
• Materials for A&I:
– “A Data-Oriented, Active Learning, Post-Calculus Introduction to
Statistical Concepts, Methods, and Theory (SCMT),” A. Rossman, B.
Chance, K. Ballman, NSF DUE-9950476
– “Virtual Laboratories in Probability and Statistics (VLPS),” K. Siegrist,
NSF DUE-9652870
Use of activities and simulation (in-class and out-of-class) throughout
the semester to improve teaching and student understanding in our
post-calculus introductory probability and statistics courses
*This project was partially supported by the National Science Foundation. The project
started in June 2002 and continued through August 2004.
3
Today’s Topic!
Coverage and Assessment of Student Understanding
of Sampling Distributions and the CLT in
“Math 300 – Introduction to Probability”
• First semester of the typical post-calculus
mathematical probability and statistics sequence
– Prereq: Calculus II (through sequences and series)
• Student Population:
– Approx 50% Computer Science majors, 20% Engineering, 10%
Mathematics, 20% Other Science majors and Graduate
Students
• Traditional Text:
– Probability and Statistical Inference, 6th Edition, Hogg and Tanis
4
Sampling Distributions and CLT
Coverage
• Less time spent on multivariate distributions (in order to
get to CLT and its applications sooner).
• Text Supplemented with:
– In-class Activity using Sampling SIM* software.
• Activity modified from an early SCMT activity*.
– Written report based on extension of Activity (including use of
Sampling SIM outside of class).
• Supplemental materials placed more emphasis on
graphical understanding of sampling distributions and
the CLT.
• Also briefly covered (via the textbook) an application of the
CLT to confidence intervals and sample sizes for
proportions.
*2001 - delMas, R. Sampling SIM.
Online at http://www.gen.umn.edu/research/stat_tools/
5
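For readers without access to Sampling SIM, here is a minimal sketch (in Python, not the actual activity code) of the kind of simulation the in-class activity is built around: draw 500 samples of size n = 4 and n = 25 and compare the histograms of the sample means to the population. The choice of an exponential (skewed) population and all plotting details are illustrative assumptions.

```python
# Illustrative sketch of a Sampling SIM-style demonstration (not the actual activity code).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
num_samples = 500                                        # 500 samples, as in the assessment item
population = rng.exponential(scale=1.0, size=100_000)    # assumed skewed population

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].hist(population, bins=40)
axes[0].set_title("Population (skewed)")

for ax, n in zip(axes[1:], (4, 25)):
    # Each sample mean is the average of n independent draws from the population.
    means = rng.choice(population, size=(num_samples, n)).mean(axis=1)
    ax.hist(means, bins=25)
    ax.set_title(f"Sample means, n = {n}")

plt.tight_layout()
plt.show()
```

Even at n = 4 the distribution of sample means is noticeably less skewed and less variable than the population; at n = 25 it is close to normal with standard deviation near one fifth of the population's, which is exactly what the assessment items that follow probe.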
Assessment Tools
Quantitative:
– Used assessment tool developed by delMas et al.* that
measures student understanding and reasoning regarding
sampling distributions and the CLT.
– Assessment items roughly characterized as
• Graphical (the focus of this presentation)
• Fact (Theory) Recollection/Computational (please see our paper)
– Same assessment tool used as both a pre-test and post-test
given before (week 11) and after (week 14) coverage of Sampling
Distributions and CLT.
– Data exploration of student responses to examine student
understanding of the CLT and related concepts.
Qualitative:
– Students’ self-perception of learning of concepts, use of
technology, etc.
– Survey administered at beginning and at end of semester.
*Sampling Distributions Post Test - 2002 - delMas, R., Garfield, J., and Chance, B.
6
Assessment Tool Graphical
Item Example*
[Shown on the slide: a graph of the irregular population and five candidate histograms labeled A-E.]
1*. Which graph (A-E) best represents a distribution of sample means for 500 samples of size n = 4?
2(a)*. What do you expect for the shape of the sampling distribution?
  - More like a normal dist.
  - More like the population.
2(b)*. I expect the sampling distribution to have (choose one):
  less / the same / more variability than/as the population.
• Same questions for n = 25 (3* and 4*).
• Similar questions asked for a “skewed” population.
*Please see your handout!
7
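The facts this item probes can be stated directly (a standard statement of the results, added here for reference rather than taken from the handout). For a population with mean mu and standard deviation sigma, the mean of n independent observations satisfies

$$
\mathrm{E}[\bar{X}_n] = \mu, \qquad
\mathrm{SD}(\bar{X}_n) = \frac{\sigma}{\sqrt{n}}, \qquad
\frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}} \;\xrightarrow{d}\; N(0,1) \ \text{as } n \to \infty.
$$

So for n = 4 the sampling distribution already has half the population's standard deviation but may retain some of its shape, while for n = 25 it has one fifth of the population's standard deviation and is typically close to normal; the correct histogram choices reflect exactly this combination of spread and shape.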
Graphical Measures from
Assessment Item
– Percent of students with correct identification of sampling
distribution for large and small n (sample size).
– Distribution of students into “reasoning” categories by
considering answer pairs for choices of sampling
distribution histogram (answer for n=4, answer for n=25).
Measures reasoning about the variability of the sampling
distribution and the CLT effect on shape as n increases
(delMas et al.*).
– Percent of students who showed “consistent graphical
reasoning” regarding their stated belief about shape and
variance of the sampling distribution versus their choice
of sampling distribution.
Measures taken for both “irregular” and “skewed” populations
8
Correct Identification of Sampling
Distribution
[Bar chart: percent correct identification of the sampling distribution, Math 300 pre-test vs. post-test (N=18), for Skew (n=4), Skew (n=16), Irreg. (n=4), and Irreg. (n=25).]
Irregular Sampling Distribution: % Correct Identification (18 students)

             n=4 (Question 1*)            n=25 (Question 3*)
             Pre-Test     Post-Test       Pre-Test     Post-Test
% Correct    5.6% (1)     33.3% (6)       5.6% (1)     77.8% (14)
*Please see your handout!
9
Classification of Student Response
Pairs into Reasoning Categories
Irregular Population, Post-Test

Answer Pair (n=4, n=25)   Percent (number) of 18 Students   Reasoning Category*
(C, E)                    27.8% (5)                         Correct
(B, E)                    11.1% (2)                         Good (L-S Normal*)
(A, E)                    38.9% (7)                         L-S Normal
(A, B)                    5.6% (1)                          L-S Population
(E, D) or (E, C)          11.1% (2)                         S-L
(C, D)                    5.6% (1)                          Other

*2002 – delMas, R., Garfield, J., and Chance, B.
10
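A short sketch of how the answer-pair classification can be scored mechanically. The pair-to-category mapping is copied from the table above; treating any pair not listed there as "Other" is our assumption rather than part of the delMas et al. scheme, and the example responses are hypothetical.

```python
# Classify (n=4 answer, n=25 answer) pairs into the reasoning categories shown above.
from collections import Counter

CATEGORIES = {
    ("C", "E"): "Correct",
    ("B", "E"): "Good (L-S Normal)",
    ("A", "E"): "L-S Normal",
    ("A", "B"): "L-S Population",
    ("E", "D"): "S-L",
    ("E", "C"): "S-L",
    ("C", "D"): "Other",
}

def classify(pair):
    # Pairs not listed on the slide default to "Other" (an assumption on our part).
    return CATEGORIES.get(pair, "Other")

# Hypothetical post-test responses, tallied by category.
responses = [("C", "E"), ("A", "E"), ("A", "E"), ("B", "E"), ("E", "D")]
print(Counter(classify(p) for p in responses))
```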
A Few Observations
• Students classified as having correct, good, or L-S Normal reasoning for the
irregular population increased from 11% pre-test to 78% post-test (N=18).
Consistent with results of delMas et al.*
• Majority of students fell in the L-S Normal category, mainly because they missed the
sampling distribution for n=4.
• Responses for sampling distribution when n=4:
A (8 or 44.4%), B (2 or 11.1%), C (6 or 33.3%), E (2 or 11.1%)
• Responses for sampling distribution when n=25:
B (1 or 5.6%), C (2 or 11.1%), D (2 or 11.1%), E (14 or 77.8%)
WHY???
• Incorrect graphical interpretation (i.e., confusing variation with frequency,
improper knowledge of histogram shape)????
• Incorrect knowledge of CLT (averaging effect on shape)????
• Incorrect understanding of variability of sampling distribution of sample
means????
*2002 - delMas et al.
11
Measure of Consistent Graphical
Reasoning
A student was considered to have “consistent graphical reasoning”
if the sampling distribution chosen was consistent with the stated
variance and shape (regardless of correctness!).
Irregular Population Sampling Distribution

                              n=4                          n=25
                              Pre-Test     Post-Test       Pre-Test     Post-Test
% Correct (18 students)       5.6% (1)     33.3% (6)       5.6% (1)     77.8% (14)
% Consistent (18 students)    16.7% (3)    77.8% (14)      11.1% (2)    83.3% (15)
12
Details of Consistent Graphical
Reasoning
Answer Pair (n=4, n=25) (Questions 1 and 3*) | Percent (number) of 18 Students | n=4 shape more like (Question 2(a)*) | n=4 variability compared to population (Question 2(b)*) | n=4 graph chosen consistent with answers
(C, E)           | 27.8% (5) | Normal - 5          | Less - 5                      | 5
(B, E)           | 11.1% (2) | Pop - 2             | Less - 2                      | 2
(A, E)           | 38.9% (7) | Pop - 7             | Same - 4, Less - 3            | 4
(A, B)           | 5.6% (1)  | Normal - 1          | More - 1                      | 0
(E, D) or (E, C) | 11.1% (2) | Normal - 2          | Less - 2                      | 2
(C, D)           | 5.6% (1)  | Normal - 1          | Less - 1                      | 1
Totals           |           | Normal - 9, Pop - 9 | Less - 13, Same - 4, More - 1 | 14 (77.8%)
*Please see your handout!
13
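A minimal sketch of how the consistency check can be automated, assuming each answer-choice histogram has been characterized by its shape and spread. The descriptions of choices A, B, C, and E below are inferred from the consistency counts in the table above; the entry for D is a placeholder guess, and none of these descriptions are taken from the actual delMas et al. instrument.

```python
# Check whether a chosen histogram matches the student's stated shape and variability
# (the "consistent graphical reasoning" definition from the previous slide,
# regardless of whether the choice is correct).

# Shape/spread descriptions of the five answer choices: A, B, C, E inferred from the
# consistency counts above; D is a placeholder guess.
GRAPHS = {
    "A": {"shape": "population", "variability": "same"},
    "B": {"shape": "population", "variability": "less"},
    "C": {"shape": "normal",     "variability": "less"},
    "D": {"shape": "population", "variability": "more"},
    "E": {"shape": "normal",     "variability": "less"},
}

def is_consistent(choice, stated_shape, stated_variability):
    graph = GRAPHS[choice]
    return graph["shape"] == stated_shape and graph["variability"] == stated_variability

# Example from the (A, E) row above: "population shape, same variability" is consistent
# with choosing graph A, while "population shape, less variability" is not.
print(is_consistent("A", "population", "same"))   # True
print(is_consistent("A", "population", "less"))   # False
```

Under these assumed descriptions the check reproduces the 4-of-7 split in the (A, E) row of the table above.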
Conclusions/Conjectures
Our students are generally displaying “consistent graphical
reasoning.” So where are they going wrong?
• Not recognizing that averaging effect of CLT on shape occurs
quickly (waiting for the magic number n=30).
• Not able to graphically estimate magnitude of standard deviation.
Some may be confusing variability with frequency.
• May be confusing the limiting shape result of the CLT with the fixed variability
result for the sampling distribution obtained via mathematical expectation
(made explicit in the note after this slide).
• Cannot necessarily expect upper-level students to extend their
computational and/or theoretical knowledge to the graphical realm.
• Qualitative results show students generally like using technology
and believe that group work and activities contribute to their
learning.
14
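The distinction flagged in the third bullet can be made explicit (again a standard statement, not taken from the slides): the expectation and variance results hold exactly for every sample size, while only the shape statement is a limit.

$$
\mathrm{E}[\bar{X}_n] = \mu
\quad\text{and}\quad
\mathrm{Var}(\bar{X}_n) = \frac{\sigma^2}{n}
\ \text{ for every } n,
\qquad\text{whereas}\qquad
\sqrt{n}\,\frac{\bar{X}_n - \mu}{\sigma} \;\xrightarrow{d}\; N(0,1)
\ \text{ only as } n \to \infty.
$$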
What should be done next,
based on what was learned?
• Focus more on graphical estimation and interpretation
skills earlier in the semester.
• Improve lectures, activity, and homework to make a
clearer distinction between fixed variance result and
limiting shape result for sampling distributions of sample
means.
• Continue to use technology wisely (especially computer
simulations) to enhance teaching.
• Modify assessment questions to easily target where and
how students are reasoning incorrectly/correctly.
• Compare our Math 300 students with our Math 400
students via the same assessment tool (see our paper*).
*2005 – Lunsford, Rowell, and Goodson-Espy, in preparation.
15
More Results!
We have shown a small portion of our results!
Please see our soon-to-be-submitted paper* for:
• More graphical results with more detail
• Results from other parts of the assessment tool
(fact recollection/computational)
• Comparison of our Math 300 and Math 400
students using the same assessment tool (both
courses taught in Spring ’04)
• More observations and conjectures from our
data!
*2005 – Lunsford et al., in preparation.
16
References
1. delMas, R., Garfield, J., and Chance, B. (1999), “A Model of Classroom Research in Action:
Developing Simulation Activities to Improve Students' Statistical Reasoning,” Journal of
Statistics Education, v7, n3, http://www.amstat.org/publications/jse/secure/v7n3/delmas.cfm.
2. delMas, R. (2001), Sampling SIM. Online at
http://www.gen.umn.edu/faculty_staff/delmas/stat_tools/
3. delMas, R., Garfield, J., and Chance, B. (2002), “Assessment as a Means of Instruction,”
presented at the 2002 Joint Mathematics Meetings, online at
http://www.gen.umn.edu/research/stat_tools/jmm_2002/assess_instruct.html
4. Hollins, E. R. (1999), “Becoming a Reflective Practitioner,” in Pathways to Success in School:
Culturally Responsive Teaching, eds. E. R. Hollins and E. I. Oliver, Mahwah, NJ: Lawrence
Erlbaum Associates.
5. Hopkins, D. (1993), A Teacher’s Guide to Classroom Research, Buckingham: Open
University Press.
6. Lunsford, M. L., Rowell, G. H., and Goodson-Espy, T. J. (2005), “Classroom Research:
Assessment of Student Understanding of Sampling Distributions of Means and the Central
Limit Theorem in Post-Calculus Probability and Statistics Classes,” in preparation.
7. Noffke, S., and Stevenson, R. (eds.) (1995), Educational Action Research, New York: Teachers
College Press.
8. Rowell, G. H., Lunsford, M. L., and Goodson-Espy, T. J. (2003), “An Application of the Action
Research Model for Assessment: Preliminary Report,” for the Joint Statistical Meeting
Proceedings, Summer 2003.
17
Contact Information
M. Leigh Lunsford:
[email protected]
Ginger Holmes Rowell:
[email protected]
18