
The Demands of Causal Analysis: Perplexities and Prospects
Martyn Hammersley
The Open University
Personal background
The changing fortunes of causal analysis
amongst qualitative researchers:
From positions claiming to provide forms of causal analysis that are more effective than statistical methods (notably, analytic induction; see Znaniecki 1934 and Lindesmith 1947), to widespread rejection by qualitative researchers of the possibility or desirability of causal analysis.
Two caricatures?
• In qualitative work, researchers often deny
that they are engaging in causal analysis,
but do so anyway; and in a fashion which
assumes that identifying causes is
relatively straightforward.
• In dealing with quantitative data,
researchers usually insist that correlation is
not causation, but frequently proceed as if it
were – often (in effect) using significance
tests as an indicator of causal relations.
Issues to be addressed
• Is causal analysis possible?
• Empiricist misconceptions
• Inference and theoretical models
• Proof and evidence
• The pragmatic character of explanation
• The role of value-relevance frameworks
Arguments that causal analysis is
impossible
• Causation does not operate in the social
world, or perhaps even elsewhere.
• Causes are metaphysical entities, therefore
beyond the scope of empirical inquiry.
• In the social world, it is meanings or reasons,
not causes, that shape action.
• Human beings have agency, their behaviour
is not determined.
• Causal accounts are constructions: the same
scene can be narrated in conflicting ways.
An example of dismissal
‘The conventional approach of the social scientist
is to pursue a causal line of inquiry; to ask what
is the cause of the shift or change which
stimulates the investigation […] Foucault rejects
the preoccupation with causes [because this
tends] to presume that social life is subject to
linear and evolutionary change. [And because]
the quest for causes tends to introduce
assumptions about the role of human intentions,
that outcomes are the result of human desires
and plans.’ (Hunt and Wickham, 1994, p6)
Dismissal from another direction
‘The law of causality, I believe, like much that
passes muster among philosophers, is a relic of
a bygone age, surviving, like the monarchy,
only because it is erroneously supposed to do
no harm.’ (Russell, 1913, p. 1).
Later view (1948:part vi, ch v): the standard
philosophical concept of causation involves the
law of universal determinism – every event is
both a cause and an effect – whereas scientific
laws propose functional relations that are not
necessarily deterministic.
The opposite extreme:
causal analysis as unproblematic
• The attribution of causes is not a difficult task:
we routinely attribute causes in everyday life,
much of the time quite successfully. Moreover,
even the discovery of error itself tells us that
we can succeed in causal attribution.
• The case of historiography: historians
successfully document the causes of past and
current events.
• So, no problem!
Empiricist illusions
• The idea that experimental methods, for example RCTs, can demonstrate causal relationships (on this, see Worrall 2002 and 2007; Cartwright 2007).
• The assumption that causes can be found by
calculating probabilities within a representative
data set. (See Turner 1948)
• The view that causation is equivalent to the
reasons that people give for their actions.
• The belief that causal relations can be
identified via direct observation.
Do people know what causes their
own behaviour?
The answer is yes, and no.
• They have access to information about some
causal processes affecting them to which
others will not have access. Some of this, as
regards their own intentions and decisions,
may be virtually immediate and error-free.
• However, they do not have privileged access
to information about many causal processes
that shape their behaviour, including what
they think and feel.
An example of the direct
observation view
‘In field work […] general relations are
often discovered in vivo; that is, the field
worker literally sees them occur. This
aspect of the “real life” character of field
work deserves emphasis […]’.
(Glaser and Strauss 1967:40)
Inference from signs
An ancient concept (Allen 2001).
A sophisticated version:
Peirce’s account of the process of scientific
inference:
abduction (the development of explanatory models),
deduction (the derivation of implications,
including those that can be tested),
induction (inference back from evidence to
conclusions about the likely validity of the
explanatory model).
Explanatory models
The centrality of theoretical (not statistical)
models of causal mechanisms or
processes.
These specify causal relations amongst
factors, held to operate in the world.
These relations may be synchronic and/or
diachronic.
A simple example of functional variation:
predisposing and trigger factors.
More complex relations among the
components of explanatory models
• Causal configurations.
• Non-linear relations between two or more
variables.
• Causal relations in which factors affect the character of one another (so the effect of the two combined is not additive; see the sketch below).
• Feedback mechanisms: evolutionary causation
(van Parijs 1981).
• Emergence of new systemic causal factors from
lower level interactions (see Elder-Vass 2010).
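To make the idea of predisposing and trigger factors combining non-additively concrete, here is a toy numerical sketch; the factor names and outcome values are invented purely for illustration and are not drawn from any study.

```python
# Toy illustration (invented values) of a non-additive causal relation:
# the "trigger" has a large effect only where the "predisposing" factor
# is present, so the joint effect exceeds the sum of the separate effects.

def outcome(predisposed: bool, triggered: bool) -> int:
    """Hypothetical outcome strength on an arbitrary 0-10 scale."""
    if predisposed and triggered:
        return 9   # trigger acting on a predisposition: strong effect
    if triggered:
        return 2   # trigger alone: weak effect
    if predisposed:
        return 1   # predisposition alone: little effect
    return 0

baseline = outcome(False, False)
sum_of_separate_effects = (outcome(True, False) - baseline) + (outcome(False, True) - baseline)
joint_effect = outcome(True, True) - baseline

print(sum_of_separate_effects, joint_effect)  # 3 versus 9: the relation is not additive
```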
Signs: the truth in empiricism
• Contiguity of putative cause and effect, and the
identification of mediating factors: process
tracing (see Hage and Meeker 1988; Roberts
1996; George and Bennett 2005)
• Regularities: discovering recurrent sequences
• Counterfactual evidence and the comparative
method
All are relevant kinds of evidence, but none can
be absolutely conclusive: judgment is required.
An exemplary case of process
tracing
In investigating the effects of streaming,
and of differentiation of students by
teachers in terms of academic criteria more
generally, Lacey (1970) used both
qualitative and quantitative data to trace
processes of differentiation over a two-year
period, showing how a polarisation of
attitude towards school developed between
students at the top and bottom of the
academic rankings.
Strategies for finding regularities
and/or making counterfactual checks
• Experimental methods, including RCTs.
• Correlational approaches: estimating the
predictability of an outcome given the presence
of some feature, or the level of a property, in
individual cases, while controlling for (some)
other variables
• Qualitative Comparative Analysis (QCA) (Ragin 2008; Rihoux and Ragin 2008; Cooper et al forthcoming): discovering which combinations of factors produce some type of outcome (see the sketch below).
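As a rough indication of what crisp-set QCA involves, the following minimal Python sketch groups hypothetical cases by their configuration of binary conditions and reports how consistently each configuration is associated with the outcome. The condition names and codings are invented; a real analysis would use dedicated QCA software and a Boolean minimisation step (Ragin 2008).

```python
# Minimal, hypothetical sketch of the truth-table step in crisp-set QCA.
# Cases and condition names are invented for illustration only.

from collections import defaultdict

# Each case is (conditions, outcome), all coded 0/1.
cases = [
    ({"streamed": 1, "anti_school_culture": 1, "resources_low": 0}, 1),
    ({"streamed": 1, "anti_school_culture": 1, "resources_low": 1}, 1),
    ({"streamed": 1, "anti_school_culture": 0, "resources_low": 0}, 0),
    ({"streamed": 0, "anti_school_culture": 1, "resources_low": 1}, 1),
    ({"streamed": 0, "anti_school_culture": 0, "resources_low": 0}, 0),
    ({"streamed": 0, "anti_school_culture": 0, "resources_low": 1}, 0),
]

# Build the truth table: one row per observed configuration of conditions.
rows = defaultdict(list)
for conditions, outcome in cases:
    key = tuple(sorted(conditions.items()))
    rows[key].append(outcome)

# Report, for each configuration, how consistently it is linked to the outcome.
for key, outcomes in rows.items():
    consistency = sum(outcomes) / len(outcomes)
    label = ", ".join(f"{name}={value}" for name, value in key)
    print(f"{label}  ->  n={len(outcomes)}, consistency={consistency:.2f}")
```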
Witness accounts
Can we draw evidence from witness
accounts?
Of course, but:
• Witnesses may not have reliable
knowledge
• They may have reason to deceive or lie
• Their explanations may not be framed in
the same way as those of the social
scientist.
The problem of cogency
threshold
• Not a matter of proof or demonstration.
• The responsibility of researchers is to produce
conclusions whose likely validity is significantly
greater, on average, than those from other
sources.
• In this sense, the threshold of cogency that
must be met is audience-relative.
• But it is not a matter of whether people are
convinced but whether they should be: ideal
research community (Hammersley 2011a)
The pragmatic character of
explanation
When Willie Sutton was in prison, a priest who was trying to reform him asked him why he robbed banks. ‘Well’, Sutton replied, ‘that’s where the money is’. (Garfinkel 1981:21)
An explication
The explanatory frame behind the priest’s question
was:
Why does Sutton rob banks [rather than earning
money legitimately]?
The bank robber’s answer operates within a different
frame:
Why does he rob banks [rather than robbing other
sorts of establishment]?
The importance of knowing what question is being addressed.
A further complication:
an infinite number of causes
The selection of causes from an infinite
number of candidates:
[Diagram: candidate causes C1 to C12, and so on, all pointing to a single outcome O]
How are causes to be
selected?
• Partly, no doubt, in terms of relative
causal power, but this is not the whole
story.
• Some debates about what caused what
are really about which causes are
relevant or most significant, in the sense
of who is to be blamed or what policy is
required. ‘Importance’ here is not
judged solely in terms of causal power.
An example
from Gewirtz and Cribb (2006)
Black boys are more likely than other
categories of student to be excluded from
school or to leave school with low or no
formal academic qualifications.
Is this a product of:
• Institutional racism in schools (Gillborn)?
or
• The prevalence of an anti-school peer-group
culture amongst these students (Sewell)?
Mackie (1974) on INUS conditions
INUS condition = an Insufficient but Non-redundant part of a set of causes, this set being itself Unnecessary but nevertheless Sufficient for the occurrence of the effect.
For example, a short circuit can be an INUS
condition for a house burning down: it, plus the
proximity of flammable material, taken together,
are unnecessary but sufficient for this result.
(Unnecessary, since other sets of factors could
also have done this. Note, sufficiency is always
relative to some assumed causal background.)
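A minimal sketch of the logic of this definition, using the short-circuit example: the causal background is modelled as an invented boolean function over four hypothetical factors, and a set of factors counts as sufficient if the fire occurs whenever all of them hold, however the remaining factors are set.

```python
# Minimal sketch of Mackie's INUS idea. The causal background (the fire
# function) and the factor names are invented purely to illustrate the
# definition, not to model any real case.

from itertools import product

FACTORS = ["short_circuit", "flammable_material", "arson", "lightning"]

def fire(situation):
    """Assumed causal background: the house burns down if a short circuit
    meets flammable material, or if there is arson or a lightning strike."""
    return (situation["short_circuit"] and situation["flammable_material"]) \
        or situation["arson"] or situation["lightning"]

def sufficient(factor_set):
    """A set of factors is sufficient if the fire occurs whenever all of
    them hold, however the remaining factors are set."""
    others = [f for f in FACTORS if f not in factor_set]
    return all(
        fire({**dict.fromkeys(factor_set, True), **dict(zip(others, values))})
        for values in product([False, True], repeat=len(others))
    )

pair = {"short_circuit", "flammable_material"}
print(sufficient(pair))                      # True: the set is sufficient
print(sufficient({"short_circuit"}))         # False: the part alone is insufficient
print(sufficient(pair - {"short_circuit"}))  # False: so the part is non-redundant within the set
print(sufficient({"arson"}))                 # True: another set suffices, so the pair is unnecessary
```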
Value-relevance
• If all explanation is pragmatic, how
should explanations in educational
research be framed: how should what is
to be explained be determined, and how
should causal factors be selected?
• The answer, I think, is by means of value-relevance frameworks.
• But these are not frameworks of value-commitments; they are frameworks adopted for working purposes.
An example of a frequently used
value-relevance framework
Explanations of social class inequalities in
educational or occupational achievement typically
assume that (Hammersley 2011b):
1. Pursuit of high-level educational qualifications, or of service-class jobs, constitutes a (in fact, the most) worthwhile goal in life.
2. Failure to achieve perfect educational equality between social classes mainly results from discrimination and/or other barriers generated by the social system.
Nothing wrong with using
value-relevance frameworks,
so long as:
• No framework is presented as if it were the
only legitimate one for studying the
phenomena concerned.
• It is not claimed, or implied, that research can
justify, on its own, the adoption of one
framework rather than another.
• Evaluations and/or recommendations are not
presented as if they were facts, rather than
value-judgments (involving factual elements).
Are there general theories in
social science?
• The distinction between explaining and
theorising.
• The problems facing theorising about the
social world.
• The idiographic approach of Max Weber (see Turner and Factor 1984; Ringer 1997; Agevall 1999).
• Russell (1948) on causal lines. Types of
structures or systems: but not universally
operating, deterministic, or eternal.
Conclusion
• Causal analysis is an essential task in social and
educational research.
• It is important to recognise that it involves identifying
causal processes/mechanisms in the world, and we
need to use explicit models of these processes.
• So, naïve empiricism must be resisted, but it points to
the means by which we can gain evidence for
constructing and checking the validity of our models.
• All explanations are pragmatic, and rely on value-relevance frameworks.
• There is (for me) an open question about what the
intended product of causal analysis in social science
can/should be: explanations or theories?
References
Abbott, A. (2001) Time Matters, Chicago, University of Chicago Press.
Agevall, O. (1999) A Science of Unique Events: Max Weber’s methodology of the cultural sciences, Uppsala,
Uppsala University.
Allen, J. (2001) Inference from Signs: ancient debates about the nature of evidence, Oxford, Oxford University
Press.
Cartwright, N. D. (2007) 'Are RCTs the Gold Standard?', BioSocieties, 2, pp11-20. Available at (accessed 12.03.09): http://personal.lse.ac.uk/cartwrig/Papers%20on%20Evidence.htm
Cooper, B. et al (eds.) (forthcoming) Challenging the Qualitative-Quantitative Divide, London, Continuum.
Elder-Vass, D. (2010) The Causal Power of Social Structures, Cambridge, Cambridge University Press.
Garfinkel, A. (1981) Forms of Explanation, New Haven CT, Yale University Press.
George, A. and Bennett, A. (2005) Case Studies and Theory Development in the Social Sciences, Cambridge MA, MIT Press.
Gewirtz, S. and Cribb, A. (2006) ‘What to Do about Values in Social Research: The Case for Ethical Reflexivity in
the Sociology of Education’, British Journal of Sociology of Education, 27, 2, pp141-55.
Glaser, B. and Strauss, A. (1967) The Discovery of Grounded Theory, Chicago, Aldine.
Hage, J. and Meeker, B. (1988) Social Causality, London, Unwin Hyman.
Hammersley, M. (2011a) Methodology, Who Needs It?, London, Sage.
Hammersley, M. (2011b) ‘Can social science tell us whether Britain is a meritocracy? A Weberian critique’,
unpublished paper.
Hedstrom, P. and Swedberg, R. (eds.) (1998) Social Mechanisms: an analytical approach to social theory, Cambridge, Cambridge University Press.
Hunt, A. and Wickham, G. (1994) Foucault and Law, London, Pluto Press.
Kaplan, D. (2009) ‘Causal inference in non-experimental education policy research’, in Sykes, G., Schneider, B.,
and Plank, D. (eds.) Handbook of Education Policy Research, London, Routledge.
Lacey, C. (1970) Hightown Grammar, Manchester, Manchester University Press.
Lieberson, S. (1985) Making it Count: The improvement of social research and theory, Berkeley, University of
California Press.
Lieberson, S. (1991) ‘Small Ns and big conclusions: an examination of the reasoning based on a small number of
cases’, Social Forces, 70, pp307-20.
References Contd.
Lindesmith, A. (1947) Opiate Addiction, Evanston IL, Principia Press.
Mackie, J. L. (1974) The Cement of the Universe, Oxford, Oxford University Press.
McKim, V. R. and Turner, S. P. (eds.) (1997) Causality in Crisis? Statistical methods and the search for causal knowledge in the social sciences, South Bend IN, University of Notre Dame Press.
Ragin, C. C. (2008) Redesigning Social Inquiry: Fuzzy sets and beyond, Chicago, University of
Chicago Press.
Rescher, N. (1978) Peirce's Philosophy of Science: critical studies in his theory of induction and scientific method, South Bend IN, University of Notre Dame Press.
Rihoux, B. and Ragin, C. (eds.) (2008) Configurational Comparative Methods, Thousand Oaks CA,
Sage.
Ringer, F. (1997) Max Weber’s Methodology, Cambridge MA, Harvard University Press.
Roberts, C. (1996) The Logic of Historical Explanation, University Park PA, Pennsylvania State
University Press.
Russell, B. (1913) ‘On the Notion of Cause’, Proceedings of the Aristotelian Society 13: 1-26.
Russell, B. (1948) Human Knowledge, New York, Simon and Schuster.
Turner, R. H. (1948) ‘Statistical logic in sociology’, Sociology and Social Research, 32, pp697-704.
Turner, S. P. and Factor, R. A. (1984) Max Weber and the Dispute over Reason and Value,
London, Routledge and Kegan Paul.
Van Parijs, P. (1981) Evolutionary Explanation in the Social Sciences, London, Tavistock.
Worrall, J. (2002) ‘What evidence in evidence-based medicine?’, Philosophy of Science, 69,
ppS316-S330.
Worrall, J. (2007) 'Why there's no cause to randomize', British Journal for the Philosophy of
Science, 58, 3, pp451-488.
Znaniecki, F. (1934) The Method of Sociology, New York, Farrar and Rinehart.