Course evaluation for practice, educational research


Course evaluation for practice
and educational research
Professor Elena Frolova
St. Petersburg, Russia
Department of Family Medicine, MAPS
Educational Research Task Force group, Tallinn, 2011
May 6th
Background
• A new e-learning course as part of the RESPECT project of the Family Medicine Department and the Katholieke Universiteit Leuven (project leaders: Prof. Degryse, Prof. Kuznetsova)
• To teach spirometry, and then to conduct research on the prevalence of COPD
Why evaluate?
• Evaluation is central to ensuring the
quality of academic practice
• e-learning involves high levels of
investment that need to be justified
• To demonstrate the effectiveness and cost-effectiveness of learning technologies
Who are stakeholders?
• We as teachers, to extract lessons learned and improve our practice
• Our students, or future cohorts
• Other lecturers, departments
• Senior managers, to disseminate lessons
learned and promote good practice by others
• QAA
• Funding body, finance officer
(www2.warwick.ac.uk/services/ldc)
Why did we decide to evaluate?
• New way of learning: e-learning
• New subject of learning
• Spirometry in primary care
• Research in education
• Too many challenges!
• Money, money
When to evaluate?
• Diagnostic – learning from the
potential users; to inform plans to
change practice;
• Formative - learning from the
process; to inform practice;
• Summative - learning from the
product; to inform others about
practice
When did we decide to evaluate?
• When the cook tastes the soup, it is
formative evaluation;
• When the dinner guest tastes the soup, it
is summative evaluation
(Jen Harvey, “Evaluation cookbook”)
• Diagnostic, formative, summative
What do we evaluate?
• e-pedagogy?
• E-learning facilitates new forms of
resources, communication and
collaboration and new patterns of
study and group behaviors
• E-learning may demand new
attitudes and skills
The objects of evaluation
• Do not compare the “e” group with a
previous “control” group
• It is difficult to separate the e-learning intervention from the complex interplay of cultural and social influences
• Not only the pedagogical aims and the teaching process
• But also the technology itself and the support surrounding it
“If you don't have a question you
don't know what to do (to observe
or measure), if you do then that
tells you how to design the study”
(Draper, 1996).
Ask yourself
For the top management of my company, university, government, or institution, what is the single most important measure of success?
Focus on what?
• On e-learning materials?
• On ‘content’?
• On issues concerning screen design?
• On navigation?
• This focus is probably quite superficial in terms of pedagogical impact
Focus on what?
• The ways in which students interact with
electronic content
• How the e-resources are introduced into
the learning design
• The ways in which the technology
supports interactions between learners
• The form of communication between
students
Comparing traditional and e-learning
methods is quite difficult
• Students like it because it’s new
• Students hate it because it’s unfamiliar
• Is it possible to isolate the effect of the
new medium?
• Is any change in scores the result of having a different cohort of students?
Evaluating technology-supported
higher-order learning
A key evaluation question is whether
the approach resulted in students
achieving the intended learning
outcomes
Evaluating cost-effectiveness
• To determine the costs associated with using
e-learning
• The outcomes may include pedagogical, social and economic benefits, which cannot always be converted into market or monetary terms
• The benefits of using e-learning are difficult
to quantify, but may be of high social value
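To make "determining the costs" concrete, here is a back-of-the-envelope sketch (not from the original talk; every figure is hypothetical): a one-off development cost is amortised over the expected number of cohorts and added to the recurring per-cohort delivery cost.

# Back-of-the-envelope cost per student; every figure is hypothetical.
development_cost = 12_000.0  # one-off cost of building the e-course
delivery_cost = 1_500.0      # recurring cost per cohort (hosting, tutor time)
expected_cohorts = 3         # cohorts over which development is amortised
cohort_size = 60             # students per cohort

cost_per_student = (development_cost / expected_cohorts + delivery_cost) / cohort_size
print(f"approximate cost per student: {cost_per_student:.2f}")  # ~91.67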
Question structures
Level 1
• Does it work?
• Do students like it?
• How effective is it?
Question structures
Level 2
• How cost-effective is the approach?
• How scalable is the approach?
Question structures
• Is there sufficient need for the innovation to make it
worth developing?
• What are the best ways of using e-learning resource
X?
• Will this approach be an effective way to resolve
problem X?
• How does this approach need to be modified to suit its
purpose?
• Do you think it would be better if ...(an alternative
mode of engagement) was carried out?
• What advice would you give another student who was
about to get involved in a similar activity?
(Bristol LTSS guidelines)
Question structures
• How do students actually use the learning
system/materials?
• How usable is the e-learning tool or material?
• What needs are being met by them?
• What types of users find it most useful?
• Under what conditions is it used most effectively?
• What features of the support materials are most useful to students?
• What changes result in teaching practices and in learning/study practices?
• What effects does it have on tutor planning, delivery
and assessment activities?
• What are the changes in time spent on learning?
Tavistock (1998) evaluation guide
How evaluation should be planned
• A preliminary analysis of aims, questions,
tasks, stakeholders, timescales and
instruments/methods
• Time and budget
• Sampling and randomisation
• The questions should change as the aim of the evaluation changes
Methods of gathering information
• A pre-task questionnaire
• Confidence logs after each kind of activity
• A learning test (quiz)
• Access to subsequent exam (assessment) performance on one or more relevant questions
• Post-task questionnaire
• Interviews of a sample of students (also: focus groups)
• Observation and/or videotaping of one or more individuals (also: peer observation/co-tutoring)
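As one illustration of how such instruments can be captured for analysis (a sketch added here, not part of the original slides; all field names and values are hypothetical), confidence logs can be stored as one record per student per activity:

# Minimal sketch: one record per student per activity, assuming
# confidence is self-rated on a 1-5 scale before and after each
# activity. All names and values are illustrative, not from the course.
from dataclasses import dataclass

@dataclass
class ConfidenceLog:
    student_id: str
    activity: str         # e.g. "spirometry-module-1"
    pre_confidence: int   # 1 = not at all confident .. 5 = very confident
    post_confidence: int

logs = [
    ConfidenceLog("s01", "spirometry-module-1", 2, 4),
    ConfidenceLog("s02", "spirometry-module-1", 3, 3),
]

# Average confidence gain across the logged activity
gains = [log.post_confidence - log.pre_confidence for log in logs]
print(f"mean confidence gain: {sum(gains) / len(gains):.2f}")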
How should the data be analyzed?
• Quantitative data analysis
• Qualitative data analysis
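For the quantitative side, a common minimal analysis is a paired comparison of pre- and post-course quiz scores for the same students. The sketch below is illustrative only: the scores are invented, and it assumes SciPy is available.

# Minimal sketch of a quantitative analysis: a paired t-test on
# pre- and post-course quiz scores for the same six students.
from scipy import stats

pre = [55, 60, 48, 72, 65, 58]    # quiz scores before the e-course
post = [68, 66, 55, 80, 70, 63]   # quiz scores after the e-course

res = stats.ttest_rel(post, pre)  # paired test: same students twice
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")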
"Research is the process of going
up alleys to see if they are blind."
Marston Bates, 1967
Design
• Samples
• Control group
• Intervention group
• Skills and attitudes
• Spirometry skills
• Performance?
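The control/intervention split above presupposes some allocation procedure. Here is a minimal sketch of simple random allocation (not from the original talk; student IDs, seed and group sizes are hypothetical):

# Minimal sketch of simple random allocation to the two study arms.
import random

students = [f"s{i:02d}" for i in range(1, 21)]  # 20 illustrative IDs
random.seed(42)        # fixed seed so the allocation can be reproduced
random.shuffle(students)

half = len(students) // 2
control = students[:half]        # e.g. traditional spirometry teaching
intervention = students[half:]   # e.g. the e-learning course
print(len(control), len(intervention))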
What happened to the students
• Randomized studies found no difference
between groups
• “no particular method of teaching is
measurably to be preferred over another
when evaluated by student examination
performances” (Dubin and Taveggia, 1968)
• Methods like Problem Based Learning are
implemented very differently in different
institutions
What do we test?
• Students may compensate for educational interventions
• Hawthorne effect
• Van der Blij effect
• A design with an educational method as the independent variable and a curricular test as the dependent variable is usually too simple
• We have to know the learners' behavior
What do we expect from better learning?
• Better student grades?
• Better patients?
• Better world?
• Better doctors?
• Or, finally, better learning behavior?
Conclusion
Do we still want to taste this dinner?
Resources used and recommended
• http://www2.warwick.ac.uk/services/ldc
• http://www.fastrak-consulting.co.uk/tactix/ (Evaluating online learning)
• "Evaluation Cookbook", ed. Jen Harvey (http://www2.warwick.ac.uk/services/ldc/)
• Olle ten Cate. What Happens to the Student? The Neglected Variable in Educational Outcome Research. Advances in Health Sciences Education 6: 81–88, 2001.