PowerPoint Transcript

Evaluation Research
Pierre-Auguste Renoir, La Grenouillère, 1869
Evaluation Research
Introduction
• Evaluation research refers to a research purpose
rather than to a specific method.
• Evaluation research can include many different
types of methods aimed at understanding the
effectiveness of a social program that is intended
to bring about desired change.
• This form of research helps sociologists complete
the tasks of identifying social problems and
assessing the efficacy and consequences of social
change programs.
Evaluation Research
Evaluation research includes:
1. Needs assessment studies.
• Determine the existence, extent, and
awareness of social problems.
2. Cost-benefit studies.
• Assess the extent to which the outcomes of
social change programs justify their costs
(see the sketch after this list).
3. Monitoring studies.
• Provide information about ongoing social
problems.
Evaluation Research
Evaluation research includes:
4. Program evaluation (outcome assessment).
• Determine the extent to which social programs
are reducing social problems.
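
The cost-benefit item above can be made concrete with a simple calculation. Below is a minimal sketch, using hypothetical dollar figures and an assumed 3% discount rate (none of these values come from the slides), of how discounted benefits might be weighed against a program's cost.

```python
# Minimal cost-benefit sketch. All dollar amounts, the 5-year horizon, and
# the 3% discount rate are hypothetical values chosen only for illustration.

def present_value(amount, rate, year):
    """Discount a future amount back to its present value."""
    return amount / (1 + rate) ** year

program_cost = 250_000.0      # paid up front (assumed)
annual_benefit = 70_000.0     # accrues each year for 5 years (assumed)
discount_rate = 0.03

discounted_benefits = sum(
    present_value(annual_benefit, discount_rate, year) for year in range(1, 6)
)

net_benefit = discounted_benefits - program_cost
benefit_cost_ratio = discounted_benefits / program_cost

print(f"Discounted benefits: ${discounted_benefits:,.0f}")
print(f"Net benefit:         ${net_benefit:,.0f}")
print(f"Benefit-cost ratio:  {benefit_cost_ratio:.2f}")
```

A ratio above 1 (or a positive net benefit) would suggest that the program's outcomes justify its costs under these assumptions.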
Formulating the Problem
1. Issues of Measurement
• One cannot measure efficacy and desired
outcomes unless one knows specifically which
outcomes a social program or policy is
expected to produce within a certain time frame.
• Sometimes, goals are not initially
well-defined, or they change or broaden in
scope over time.
• Sometimes, intended outcomes require a
long time to materialize, but funding
guidelines require early evaluation of
programs or policies.
Formulating the Problem
2. Specifying Outcomes
• The response variable, or outcome, must be
clearly defined.
• Sometimes, outcomes are defined by the
guidelines of the funding agency.
• Ideally, definitions of outcomes are specified
prior to the implementation of the program
or policy being evaluated.
• But, things change….
Formulating the Problem
3. Measuring Experimental Contexts
• Obviously, to assess the efficacy of a program
or policy, one needs to know and be able to
measure its characteristics.
• Sometimes, characteristics are easy to
identify (e.g., hours of contact, labor hours,
funding, time, guidelines for behavior).
• In some cases, characteristics are more
difficult to identify (e.g., quality of contact,
expertise of labor, timing of funding,
flexibility in guidelines).
Formulating the Problem
4. Specifying Interventions
• Evaluation research often does not enjoy the
level of control available in a laboratory
experiment.
• Thus, specifying the independent variables, the
“interventions,” is not necessarily a
straightforward task.
• People participate differentially in programs.
• People come and go within programs.
• Program delivery varies over time and
space.
Formulating the Problem
5. Specifying the Population
• Specifying the participants in a program is not
always straightforward.
• People vary in the characteristics they bring
into a social change program.
• People vary in the extent to which they have
adopted and adapted to the desired
changes of the program.
Formulating the Problem
6. New versus Existing Measures
• Specifying new or existing measures affects
the validity and reliability of the evaluation
(see the reliability sketch after this slide).
• The use of new or existing measures also can
affect the extent of acceptance of an evaluation
by funding agencies, the public, and the
community of scholars.
• Standardized measures, often specified by
funding agencies, can have advantages and
disadvantages for evaluation of programs
and policies.
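
Because this slide ties the choice of measures to reliability, here is a minimal sketch of one common internal-consistency check, Cronbach's alpha, applied to hypothetical responses on a four-item measure. The data, the 1-5 response scale, and the choice of alpha as the reliability check are illustrative assumptions rather than anything specified in the slides.

```python
# Minimal Cronbach's alpha sketch for the internal consistency (reliability)
# of a multi-item measure. The responses below are hypothetical.
from statistics import variance

# Rows = respondents, columns = items rated on a 1-5 scale (illustrative data).
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]

k = len(responses[0])                      # number of items
items = list(zip(*responses))              # item-wise columns
total_scores = [sum(row) for row in responses]

sum_item_variances = sum(variance(item) for item in items)
alpha = (k / (k - 1)) * (1 - sum_item_variances / variance(total_scores))
print(f"Cronbach's alpha: {alpha:.2f}")    # values near 1 indicate consistent items
```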
Formulating the Problem
7. Operationalizing Success and Failure
• Specifying what constitutes success or failure
can be challenging.
• How much change is success?
• What types of change are success?
• Are unanticipated changes success?
• When should success happen, immediately
or over a long period of time?
• Which measures indicate success?
• What happens when some measures
indicate success and others indicate failure?
(A minimal sketch follows this list.)
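
One way to confront these questions is to write the success rule down explicitly before the results come in. The sketch below operationalizes "success" as a pre-specified minimum change on each of several outcome measures; the measure names, thresholds, and observed changes are hypothetical and serve only to show how mixed results can arise.

```python
# Hypothetical operationalization of "success": each outcome measure must
# change by at least its pre-specified threshold within the follow-up window.
# Measure names, thresholds, and observed changes are illustrative assumptions.

success_criteria = {
    "recidivism_rate": -0.10,        # must drop by at least 10 points
    "employment_rate": 0.05,         # must rise by at least 5 points
    "self_reported_wellbeing": 0.0,  # must not decline
}

observed_change = {
    "recidivism_rate": -0.12,
    "employment_rate": 0.03,
    "self_reported_wellbeing": 0.04,
}

def meets_criterion(measure, change):
    threshold = success_criteria[measure]
    # Negative thresholds mean "must decrease at least this much".
    return change <= threshold if threshold < 0 else change >= threshold

results = {m: meets_criterion(m, c) for m, c in observed_change.items()}
print(results)                                # some measures succeed, one fails
print("overall success:", all(results.values()))
```

In this hypothetical case two measures meet their thresholds and one does not, which is exactly the mixed-results situation the last question describes.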
Types of Evaluation Research Designs
1. Experimental Designs
• Typically, evaluation research involves
assessments of programs and policies in field
(i.e., natural) experiments.
• One does not have the level of control
available within the laboratory.
• Unless evaluation is planned within the context
of social change programs and policies, one
might not be able to conduct a classical
experiment.
Types of Evaluation Research Designs
2. Quasi-Experimental Designs
• Subjects are not randomly assigned to
experimental and control conditions.
• Assessments do not occur at both Time 1 and
Time 3 (i.e., a pretest and a posttest for all
subjects).
Types of Evaluation Research Designs
2. Quasi-Experimental Designs (Continued)
• Time-Series Designs: If time-series evaluations
do not involve classical experiments, it can be
challenging to infer an effect of the treatment.
• Consider this situation:
• An instructor introduces the use of
“controversial discussion topics” midway
through the semester, and then observes
the level of classroom participation.
• Which of the following patterns of classroom
participation support a treatment effect?
Types of Evaluation Research Designs
2. Quasi-Experimental Designs (Continued)
• Pattern One:
• Classroom participation is low at the
beginning of the semester, but steadily
increases at a constant rate throughout the
semester.
• Pattern Two:
• Classroom participation has a random
pattern of low and high levels of interaction
throughout the semester.
Types of Evaluation Research Designs
2. Quasi-Experimental Designs (Continued)
• Pattern Three:
• Classroom participation is low and relatively
stable during the first half of the semester,
then increases sharply immediately after the
introduction of the controversial discussion
topics and remains at the higher level.
Types of Evaluation Research Designs
2. Quasi-Experimental Designs (Continued)
• Time-Series Designs
• In observing Pattern 1, the researcher might
conclude that participation increases
throughout the semester, regardless of the
introduction of a treatment.
• In observing Pattern 2, the researcher might
conclude that participation is erratic and not
related to the introduction of a treatment.
• Pattern 3 indicates a treatment effect (a
minimal sketch of this comparison follows).
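
As a rough illustration of this reasoning, the sketch below generates the three hypothetical participation patterns and applies a simple interrupted time-series check: mean participation after the midpoint intervention minus mean participation before it. The weekly values and the 14-week semester are assumptions for illustration.

```python
# Three hypothetical participation patterns and a simple interrupted
# time-series check (post-intervention mean minus pre-intervention mean).
# All values are illustrative assumptions.
from statistics import mean
import random

random.seed(1)
weeks = list(range(1, 15))                    # 14-week semester, intervention after week 7

pattern_one = [2 + w for w in weeks]          # steady rise across the whole semester
pattern_two = [random.randint(2, 16) for _ in weeks]   # erratic, no clear trend
pattern_three = [3] * 7 + [12] * 7            # abrupt jump right after week 7

def pre_post_change(series, cut=7):
    """Mean participation after the intervention minus the mean before it."""
    return mean(series[cut:]) - mean(series[:cut])

for name, series in [("Pattern 1", pattern_one),
                     ("Pattern 2", pattern_two),
                     ("Pattern 3", pattern_three)]:
    print(name, "pre/post change:", round(pre_post_change(series), 1))

# Note: Pattern 1 also shows a large pre/post difference because it rises all
# semester, so the pre-intervention trend must be inspected as well. Only
# Pattern 3 pairs a flat pre-intervention series with a level shift at the
# intervention, which is what supports a treatment effect.
```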
Types of Evaluation Research Designs
2. Quasi-Experimental Designs (Continued)
• Nonequivalent Control Groups
• Researchers seek naturally occurring
control groups with characteristics similar
to those of the experimental group.
• Multiple Time-Series Designs
• Comparison of trends across naturally
occurring groups, wherein only one group
receives the treatment (see the sketch below).
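
A minimal sketch of the multiple time-series logic, assuming hypothetical participation scores for a treated class and a naturally occurring comparison class: each group's pre/post change is computed, and the difference between those changes nets out influences (such as end-of-semester pressures) that affect both classes.

```python
# Multiple time-series sketch with two naturally occurring classes, only one
# of which receives the midterm intervention. All scores are hypothetical.
from statistics import mean

treated = {"pre": [3, 4, 3, 4, 3, 4, 3], "post": [10, 11, 12, 11, 12, 11, 12]}
control = {"pre": [3, 3, 4, 3, 4, 3, 4], "post": [4, 5, 4, 5, 4, 5, 5]}

def change(group):
    """Post-intervention mean minus pre-intervention mean for one group."""
    return mean(group["post"]) - mean(group["pre"])

effect_estimate = change(treated) - change(control)
print("treated change:   ", round(change(treated), 1))
print("comparison change:", round(change(control), 1))
print("difference in changes:", round(effect_estimate, 1))
```

Because the comparison class did not receive the intervention, a much larger change in the treated class is harder to attribute to factors shared by both groups.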
Types of Evaluation Research Designs
3. Qualitative Evaluations
• Qualitative methods can be as effective as
quantitative methods in evaluating programs
and policies.
• The most effective evaluation research often
uses both quantitative and qualitative methods.
The Social Context
1. Logistical Problems
• Evaluation research implies an assessment of
employee performance.
• Employees of organizations and agencies
being evaluated, therefore, often are reluctant
to reveal problems with a program or policy.
• Motivating personnel to participate fully in an
evaluation can be a challenge.
• Administrators, in particular, might feel
threatened by evaluation research.
• Administrators might hinder the quality of the
evaluation research.
The Social Context
2. Ethical Issues
• Evaluation research implies becoming involved
in the programs being conducted. Hence, the
evaluator might disturb the normal functioning
of the program.
• The results of an evaluation sometimes reveal
a need for immediate change to protect human
subjects. But the aims of the evaluation argue
for nonintervention to best complete the
evaluation.
The Social Context
3. Use of Research Results
• Evaluation research sometimes is funded with
the goal of applauding or discrediting a
program or policy.
• When the purposes are biased, the research
itself is more likely to become biased.
• When the results of evaluation research do not
support biased goals, they might be criticized
or suppressed.
Social Indicators Research
1. Social Indicators
• Social indicators are aggregated statistics that
reflect various forms of societal well-being.
• Consumer price index
• Poverty levels
• Levels of illiteracy
• Infant mortality statistics
• Divorce rates
• Although such indicators provide only rough
approximations of societal health, they are
widely used in practice.
Social Indicators Research
2. Computer Simulation
• High-speed, large-capacity computers allow for
complex simulations that use many indicators of
societal conditions to forecast trends or predict
the outcomes of proposed programs or policies
(a minimal sketch follows this slide).
• Simulations are restricted by knowledge of
current technologies and conditions, which
might change dramatically over the course of
the simulation period.
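
As a rough illustration of this idea, the sketch below projects one hypothetical indicator forward under two assumed scenarios. The baseline value, the annual change rates, and the ten-year horizon are illustrative assumptions, and, as the slide notes, such projections hold current conditions fixed.

```python
# Minimal forecasting sketch: project a hypothetical social indicator (e.g.,
# a poverty rate in percent) under two assumed scenarios. The starting value,
# annual change rates, and horizon are illustrative assumptions.

def project(start_value, annual_change, years):
    """Apply a constant proportional annual change over the given horizon."""
    values = [start_value]
    for _ in range(years):
        values.append(values[-1] * (1 + annual_change))
    return values

baseline = project(start_value=12.0, annual_change=0.01, years=10)       # status quo
with_program = project(start_value=12.0, annual_change=-0.02, years=10)  # assumed program effect

for year, (b, p) in enumerate(zip(baseline, with_program)):
    print(f"year {year:2d}: no program {b:5.2f}%   with program {p:5.2f}%")
```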