Transcript slides

ENSURING STATISTICAL LITERACY FOR PRESERVICE EDUCATION MAJORS
Stacy M. Bjorkman, Ph.D., NCSP
Walden University
&
Kelly H. Summers
Northern Illinois University
WHY IS THIS IMPORTANT?
• In almost every state there has been a shift in the way teachers are evaluated.
• Most states are including student achievement data as part of the teacher evaluation process.
• Teachers' livelihoods depend on knowing about assessments.
• Most education degree programs do not include coursework in statistics.
ENSURING STATISTICAL LITERACY
• Determine the role of assessments in the teacher evaluation process in your state.
• Build a pre-service workshop that discusses the role of assessments in teacher evaluations.
• Design the workshop so the focus is on statistical concepts with real-world educational data.
• Include key components of assessment and statistical literacy in the workshop.
• Be sure to include frequent checks for understanding.
POSSIBLE CONCEPTS TO INCLUDE IN A PRE-SERVICE WORKSHOP
• Summative vs. formative assessments
• The concept of reliability
• The concept of validity
• Measures of central tendency and variability
• Reading various types of graphs
• Types of test scores
KEY DIFFERENCES IN FORMATIVE & SUMMATIVE ASSESSMENTS

Reasons for assessing
  Summative: Document individual or group achievement or mastery of standards; measure achievement status at a point in time for purposes of reporting.
  Formative: Increase achievement; help students meet more standards; support ongoing student growth.

To inform
  Summative: Others about students.
  Formative: Students about themselves.

Focus of assessment
  Summative: Achievement standards for which schools, teachers, and students are held accountable.
  Formative: Specific achievement targets selected by teachers that enable students to build toward standards.

Driving force
  Summative: Accountability.
  Formative: Improvement.

Place in time
  Summative: An event after learning is supposed to have happened.
  Formative: A process during learning.
Chappuis, J., & Chappuis, S. (2002). Understanding school assessment: A parent and community guide to helping students learn. Portland, OR: Assessment Training Institute.
THE CONCEPT OF RELIABILITY
• What do we mean by reliable?
  • The consistency of a measurement tool
  • Not good or bad, just consistent
• Want to minimize "measurement error"
  • Test-taker variables
    • Hot, cold, noise, lack of sleep, lack of food, etc.
  • Test administration
    • Lack of standardization, either knowingly or unwittingly
  • Test scoring and interpretation
    • Is the test more subjective? Then there will likely be more measurement error.
TYPES OF RELIABILITY TO CONSIDER
• Inter-rater reliability: consistency between scorers
• Test-retest reliability: variability of one person's score over time
• Internal consistency: how well the items on a test measure the same construct
WHAT EXACTLY IS VALIDITY?
• What do we mean by a valid measure?
  • A judgment or estimate of how well a test measures what it purports to measure in a particular context
• Reliability is solely a statistical measure
• Validity is determined statistically as well as through common-sense judgments
CENTRAL TENDENCY & VARIABILITY
• Mean, median, mode
• Range
• Standard deviation
• Consider the impact of…
  • Small sample sizes
  • Range restriction & outliers
  • High- and low-achieving students
WHAT'S IN A SCORE?
• Raw scores
  • How many items a test taker got correct
• Standard scores
  • A raw score that has been converted from one scale to another scale
• Percentiles
  • Divide performance into 100 equal parts
• Percentage correct
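[Workshop example] The relationships among these score types can be made concrete with one worked conversion. The norm-group scores below are invented for illustration: a raw score is converted to a z-score, rescaled to a T-score (a common standard-score scale with mean 50 and SD 10), and compared against a percentile rank and percentage correct.

```python
# Converting one raw score to other score types (hypothetical norm group).
from statistics import mean, pstdev

norm_scores = [12, 15, 18, 18, 20, 22, 25, 27, 28, 30]  # raw scores, 30-item test
raw = 25                                                 # one student's raw score

# z-score: distance from the norm-group mean in standard-deviation units.
z = (raw - mean(norm_scores)) / pstdev(norm_scores)

# Standard score on a new scale, e.g. a T-score (mean 50, SD 10).
t_score = 50 + 10 * z

# Percentile rank: percent of the norm group scoring below this raw score.
percentile = 100 * sum(s < raw for s in norm_scores) / len(norm_scores)

# Percentage correct: raw score over number of items.
pct_correct = 100 * raw / 30

print(f"z={z:.2f}  T={t_score:.0f}  percentile={percentile:.0f}  "
      f"%correct={pct_correct:.1f}")
```

Note the contrast this makes visible: the same raw score of 25 is 83% correct but only the 60th percentile in this group, which is exactly the distinction teachers need when reading score reports.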
THANK YOU! PLEASE CONTACT
US WITH QUESTIONS
[email protected]
&
[email protected]