
Does Student-Teacher
Interaction Matter in
Distance Education?
WAYNE FREEMAN
DOUGLAS GLASS
ST. ANDREWS UNIVERSITY
A BRANCH OF WEBBER INTERNATIONAL UNIVERSITY
Overview
Problem
Literature Review
Research Question
Methods
Analysis
Discussion
Conclusions
Previous Presentation
A previous version of this research was recently presented at the SoTL Conference in Savannah, GA.
The Problem
How can student outcomes best be
improved in distance education (DE)
and at what ‘costs’?
Literature Review
• “Distance education (DE) can be much better and
also much worse than classroom instruction (CI)
based on measured academic outcomes”
• Research methodologies in DE are “woefully
inadequate and poorly reported”
• Research should focus on what makes DE effective
or ineffective – not on comparing CI and DE.
(Bernard et al., 2009)
Literature Review
• Student-Teacher interaction highly valued & course
was more satisfying (Nichols, 2011)
• Interaction an integral component of DE (Holden &
Westfall, 2006)
• Asynchronous DE courses showed more positive outcomes than synchronous DE courses when each was compared to classroom instruction (Bernard et al., 2004)
Interaction is Important
MODES OF INTERACTION
Student <-> Teacher
Student <-> Student
Student <-> Content
(Anderson, 2003)
Interaction Equivalency Theorem
Thesis 1 - Quality
[Diagram: the three modes of interaction - Student-Teacher, Student-Student, Student-Content - with the question "Any one of them?"; deep, meaningful learning is supported as long as any one mode is at a high level]
(Anderson, 2003; Miyazoe & Anderson, 2011)
Thesis 2 - Quantity
Increased interaction = higher learning quality? But at more cost and time.
Research Question
How does a low level of student-teacher interaction impact student satisfaction and achievement when student-content interaction is high?
The Pilot Study
Methods
Research Design
The quality of the quantitative literature of distance education (DE) is poor:
• lack of experimental control
• lack of procedures for randomly selecting participants
• lack of random assignment to treatment groups
• poorly designed dependent measures
• failure to account for a variety of variables related to the attitudes of students and instructors
(Bernard et al., 2010)
Methods
Research Design of Present Study
Quasi-Experimental
• Sample – undergraduate and graduate students at a small liberal arts college in the South
• Control Group – students enrolled in an asynchronous tutorial with no facilitator (n = 15)
• Treatment Group – students enrolled in an asynchronous tutorial with a facilitator (n = 20)
Methods
Instrumentation
Pre-Tutorial
• Student Background Survey – demographics and self-efficacy for online learning (Artino)
• Test of APA Knowledge
Post-Tutorial
• Test of APA Knowledge
• Student Satisfaction Survey
Methods
Data Collection/Preparation
Collection
• Invitation to Participate developed
• Outreach to potential participants (67 students
agreed to participate)
Preparation
• Data consolidation from SurveyMonkey and
Moodle
• Missing values handled via multiple imputation
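The slides do not detail the imputation procedure; as a minimal sketch of the multiple-imputation idea, the following Python/numpy snippet fills each missing entry with a draw from the observed values (a simple hot-deck style approach, using hypothetical quiz scores rather than the study's data):

```python
import numpy as np

def multiply_impute(values, m=5, seed=0):
    """Create m completed copies of a 1-D array containing np.nan,
    filling each missing entry with a random draw from the observed
    values (a simple hot-deck style imputation sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float)
    observed = x[~np.isnan(x)]
    copies = []
    for _ in range(m):
        filled = x.copy()
        missing = np.isnan(filled)
        filled[missing] = rng.choice(observed, size=missing.sum())
        copies.append(filled)
    return copies

# Hypothetical pre-test scores with two missing responses:
scores = [72, np.nan, 85, 90, np.nan, 78, 88]
completed = multiply_impute(scores, m=5)
# Pool the point estimate across the m completed data sets:
pooled_mean = np.mean([c.mean() for c in completed])
```

The analysis is then run on each completed data set and the estimates are pooled, so the uncertainty introduced by the missing values carries into the final results.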
Analysis
Descriptive Statistics
Variable                Control                           Treatment
Gender                  62% Female / 38% Male             49% Female / 51% Male
Race                    70% White, 27% Black, 3% Other    67% White, 17% Black, 16% Other
GPA                     3.1-3.5                           3.1-3.5
Age                     26 years                          24 years
Online Self-Efficacy    4.7 out of 7                      5.4 out of 7
Online Experience       1.4 courses                       1.4 courses
Analysis
Correlation (with SATISFACTION and POSTTEST)
Significant: ONLINEEXP, ONLINESE, GENDER, AGE
Not significant: GROUP, GPA
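The significance flags above come from testing each Pearson r against a t distribution with n − 2 degrees of freedom; a minimal numpy sketch of the statistic a package computes, using hypothetical self-efficacy and satisfaction ratings (not the study's data):

```python
import numpy as np

def pearson_r_and_t(x, y):
    """Pearson correlation and its t statistic,
    t = r * sqrt(n - 2) / sqrt(1 - r^2),
    which is tested against a t distribution with n - 2 df."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt(n - 2) / np.sqrt(1 - r * r)
    return r, t

# Hypothetical: online self-efficacy vs. satisfaction for 8 students
self_efficacy = [3, 4, 5, 5, 6, 6, 7, 7]
satisfaction = [2, 3, 3, 4, 5, 4, 6, 7]
r, t = pearson_r_and_t(self_efficacy, satisfaction)
```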
Analysis
Regression
Model 6 (R Square = .346); dependent variable: POSTQUIZ

Predictor     B         Std. Error   Beta     t        Sig.
(Constant)     49.993    7.611                 6.568    .000
GENDER        -13.903    2.399       -.369    -5.795    .000
SATIS           2.721     .569        .286     4.783    .000
ONLINEEXP       2.446     .548        .281     4.466    .000
PREQUIZ          .349     .100        .214     3.492    .001
GROUP           7.789    2.271        .206     3.430    .001
RACE           -5.502    1.709       -.199    -3.219    .002
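Each t statistic in a coefficient table is simply the unstandardized coefficient divided by its standard error; recomputing them from the reported B and Std. Error values confirms the row pairings (small discrepancies reflect rounding in the table):

```python
# Reported unstandardized coefficients and standard errors (POSTQUIZ model):
coefficients = {
    # name: (B, std_error)
    "GENDER":    (-13.903, 2.399),
    "SATIS":     (2.721, 0.569),
    "ONLINEEXP": (2.446, 0.548),
    "PREQUIZ":   (0.349, 0.100),
    "GROUP":     (7.789, 2.271),
    "RACE":      (-5.502, 1.709),
}

# t = B / SE for each predictor
t_stats = {name: round(b / se, 3) for name, (b, se) in coefficients.items()}
```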
Analysis
Analysis of Covariance (ANCOVA)
No significant difference in post-test scores between the Control and Treatment groups.
GROUP: F = 1.453, Sig. = .230
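The ANCOVA itself was presumably run in a statistics package; conceptually, the F for GROUP compares a full model (covariate plus group indicator) against a covariate-only model. A self-contained numpy sketch of that model comparison, with hypothetical data (not the study's):

```python
import numpy as np

def ancova_f(post, group, covariate):
    """One-way ANCOVA F test for a group effect: compare a full OLS model
    (intercept + covariate + group) against a reduced model
    (intercept + covariate). Returns (F, df_num, df_den)."""
    post = np.asarray(post, dtype=float)
    n = len(post)
    full = np.column_stack([np.ones(n), covariate, group])
    reduced = np.column_stack([np.ones(n), covariate])

    def rss(X):
        # Residual sum of squares of the least-squares fit of post on X
        beta, *_ = np.linalg.lstsq(X, post, rcond=None)
        resid = post - X @ beta
        return resid @ resid

    rss_full, rss_reduced = rss(full), rss(reduced)
    df_num = 1                      # one group indicator dropped
    df_den = n - full.shape[1]      # n minus parameters in the full model
    F = ((rss_reduced - rss_full) / df_num) / (rss_full / df_den)
    return F, df_num, df_den

# Hypothetical post-test scores, pre-test covariate, and group labels:
post = [10, 12, 14, 17, 20, 21, 24, 26]
pre = [1, 2, 3, 4, 1, 2, 3, 4]
group = [0, 0, 0, 0, 1, 1, 1, 1]
F, df1, df2 = ancova_f(post, group, pre)
```

With the reported F(1, df) = 1.453 and Sig. = .230, the adjusted group difference in the study did not reach significance.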
Limitations
• Small sample size/Low statistical power
• Convenience sample
• Self-reported data
• Limited to tutorial, not full course
• Measurement of satisfaction
Discussion/Conclusions
• Statistical significance in regression
• Singular pedagogy tested
• Student Motivations/Attitudes
  - Learning styles (see-hear-do)
  - 'in' vs. 'at' college
  - Task value
• Validity/ reliability across disciplines
References
Anderson, T. (2003). Modes of interaction in distance education: Recent developments and research questions. In M. Moore (Ed.), Handbook of Distance Education. Mahwah, NJ: Lawrence Erlbaum.
Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, C. A., Tamim, R. M., Surkes, M. A., & Bethel, E. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79(3), 1243-1289.
Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., & Wozney, L. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 260-277.
Holden, J. T., & Westfall, P. J.-L. (2006). An instructional media selection guide for distance learning. Boston: United States Distance Learning Association.
Mayer, R. (2001). Multimedia learning. Cambridge, UK: Cambridge University Press.
Miyazoe, T., & Anderson, T. (2011). The interaction equivalency theorem: Research potential and its application to teaching. 27th Annual Conference on Distance Teaching & Learning, Madison, WI, 1-6.
Miyazoe, T., & Anderson, T. (2010). The interaction equivalency theorem. Journal of Interactive Online Learning, 9(2), 94-104.
Nichols, J. (2011). Comparing educational leadership course and professor evaluations in online and traditional instructional formats: What are the students saying? College Student Journal, 45(4), 862-868.
Russell, T. L. (1999). The no significant difference phenomenon. Chapel Hill: Office of Instructional Telecommunications, NC State University.
Contact Information
Wayne Freeman - [email protected]
Douglas Glass – [email protected]