Personhood
What is a person?
Why does it matter?
“Human” rights: do you have to be human to deserve human rights?
Restricted rights? Rights of protection, etc., granted to children, the severely mentally ill, and others who are not granted full human rights
The right to be counted in utilitarian calculations, i.e. to have one’s pleasure and suffering matter morally
1) A member of the species Homo sapiens
People vs. “persons”
Brain-dead, talking pig, aliens, apes, computers, robots
Cyborgs (part human, part machine): how much of a human being can be replaced by machinery or artificial parts before personhood is lost?
2) A certain level of intelligence?
How to define? Merely quantitative, or qualitative, e.g. understanding concepts, having intentionality?
Not necessary? Baby, mentally disabled
Not sufficient? Intelligent but not sentient computer, Deep Blue, “zombie”
3) Being conscious and/or having feelings
Lower animals, e.g. rabbits, chickens
Does consciousness come in degrees? Is a certain degree of consciousness necessary?
How to determine consciousness? The Problem of Other Minds
4) Moral agenthood
Kant: “…rational beings are called persons inasmuch as their nature already marks them out as ends in themselves” (1785)
Kant’s requirements of a moral agent: rationality, autonomy, the ability to understand moral judgments and choose to act morally, and free will
Problem cases:
Babies, mentally deficient people, apes, dolphins,
computers, robots
Morally responsible vs. morally considerable
• Even if only “persons” are moral agents (hence, morally responsible), “non-persons” may be morally considerable
5) Some combination?
Having sufficient intelligence, being a moral agent, being conscious, having free will, (being Homo sapiens)?
What combination would you choose?
Are the criteria too strict?
Can fulfilling some criteria be sufficient, e.g. either being Homo sapiens or being sufficiently intelligent and conscious?
Could AI fit all the criteria (except being Homo sapiens)?
Could computers:
• Be (truly) intelligent, e.g. understanding concepts, having intentionality
• Be conscious
• Have feelings
• Be moral agents
• Have free will (at least to the same degree as we do)
Is it important to be biological?
The Turing Test
Proposed by Alan Turing in 1950
The Chinese Room
Thought experiment invented by John Searle in 1980
Aliens, animals, computers surely don’t have to be exactly like humans (mentally) to be “persons”, or to be moral agents, or (at least) to be morally considerable.
Will there come a time when we have to make moral
judgments regarding how computers or robots are
treated?
Are people just biological computers? If so, could
non-biological computers be mentally similar?
If people are something more, e.g. a spirit or special
type of substance (e.g. non-physical substance) that
has become attached to a biological machine, could
a similar spirit or special type of substance become
attached to a non-biological machine?
Asimov, Isaac (1976), The Bicentennial Man (on reserve in the Philosophy Dept. Office)
Searle, John R. (1990), “Is the Brain’s Mind a Computer Program?”, Scientific American, 262, pp. 20–25 (in main library)
Churchland, Paul, and Patricia Smith Churchland (1990), “Could a Machine Think?”, Scientific American, 262, pp. 26–31 (in main library)