Ethics in a World of Robots


Medical ethics of the future:
What will we do when all things are possible?
ETHICS IN A WORLD OF ROBOTS
ROBOTIC POTENTIAL: MASSIVE!
Contemporary Examples
 Social Impact: Foxconn International makes components for iPhones, iPads, etc. It plans to buy enough robots to replace 1.2 million workers in China.
 Military and Surveillance: Internet surveillance.
 e.g., Gmail monitored in the US by the CIA.
 e.g., Israel's "Iron Dome" defensive system
 e.g., national airspace monitoring systems
Current Unmanned Surveillance Vehicle: the Drone
 Over 30,000 drones are forecast for US airspace alone: border patrol, forest-fire location, etc.
ROBOTIC POTENTIAL: MASSIVE!
Amazing Medical Advances
 Stomach (gastric) cancer is the second leading cause of cancer deaths worldwide and is particularly common in East Asia. SNU's Ho and Phee built a crab-like robot that enters through the mouth to 'eat' cancer in the stomach.
 Robots for surgery, internal examination, organ modification (e.g., artery clearing), behavioural modification (implanted in the brain), physical assistance, etc.
SINGULARITY AND THE ETHICAL ISSUES: THE DEBATE
 Proposition: Even if a super-robot were to control all medical systems in the future, with unlimited possibilities to manipulate the human, so long as the word 'human' applies there must be the presumption of an ethical awareness, an available intentionality to express the self meaningfully, and some sense of legitimate 'choice'.
 Pro: So long as the statement "x is better for humans" has relevance, ethical evaluation will define the human. Even if we adopt Zadeh's (1988) argument for fuzzy logic, we simply have no means of relating to entities that do not exhibit the minimal elements noted above (see the sketch below).
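The Pro argument leans on Zadeh's fuzzy logic, in which membership in a category is a matter of degree rather than all-or-nothing. The following is a minimal, hypothetical sketch, not anything from Zadeh (1988) or the slides: the three 'minimal elements' of the Proposition are treated as graded scores, and a fuzzy degree of "counting as human for ethical purposes" is contrasted with a crisp yes/no test.

```python
# Illustrative sketch only: hypothetical predicates and thresholds, not Zadeh's own example.
# A crisp classifier forces a yes/no answer; a fuzzy one returns a degree in [0, 1].

def fuzzy_membership(ethical_awareness: float,
                     intentionality: float,
                     capacity_for_choice: float) -> float:
    """Degree to which an entity exhibits the 'minimal elements' named in the
    Proposition, each scored on [0, 1]. Uses min as the fuzzy conjunction,
    one common choice in fuzzy logic."""
    return min(ethical_awareness, intentionality, capacity_for_choice)

def crisp_membership(ethical_awareness: float,
                     intentionality: float,
                     capacity_for_choice: float,
                     threshold: float = 0.5) -> bool:
    """All-or-nothing version: every element must clear the threshold."""
    return all(score >= threshold
               for score in (ethical_awareness, intentionality, capacity_for_choice))

if __name__ == "__main__":
    # A hypothetical post-Singularity system that partially exhibits each element.
    print(fuzzy_membership(0.9, 0.6, 0.3))   # 0.3 -- humanness as a matter of degree
    print(crisp_membership(0.9, 0.6, 0.3))   # False -- the forced yes/no answer
```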
SINGULARITY AND THE ETHICAL ISSUES: THE DEBATE
 Con: Singularity may change the very definition of the human.
 Already the line is blurring between the machine and the human.
 Most current technology is already beyond the intelligence of most humans.
 No one human institution has control over machines or their development.
 Wide belief that machines do it better.
ROBOTS BRING A HOST OF ETHICAL ISSUES
 Should robots only be developed that are 'sensitive' to human values?
 Will humans accept their replacement?
 Human modification technology is no longer in the future: pacemakers? Hearing aids? Motion?
 Can we build a robot with interpersonal skills?
 Haven't we always had technological developments? The wheel, the boat, writing, the telephone, etc.
ETHICS IN A WORLD OF ROBOTS
 Ethical reasoning is a contested area in current human existence, and just about any area of medical procedure has ethical dilemmas.
 The literature on all aspects of futurism bristles with ethical challenges.
 The relationship of ethics to current human meaning is critical, but its foundation is rooted in religion… a very difficult field to destroy.
UNPACKING THE ISSUES:
Some Important Studies
 NANOTECHNOLOGY AND THE ETHICS OF
FORECASTING: David Sanford Horner
 TRANSCENDING BIOLOGY: Ray Kurzweil
 THE HUMAN-NOT WELCOME IN THE
FUTURE: William Joy
 ETHICAL ISSUES IN AI: Richard Mason
UNPACKING THE ISSUES:
 NANOTECHNOLOGY: Horner
 “‘nanomedicine’ devoted not merely to ameliorative
medical treatment but to the improvement of human
performance”
 “a forecast may only be properly made if it is made on
the basis of sufficient knowledge, experience and
evidence”
 “if the outcomes are beyond our knowledge and
control then we can’t be held responsible for them.
But it is a central plank of moral theory that moral
agency and judgement must be immune to luck”
 “Ergo: the need for nano-ethics”
UNPACKING THE ISSUES
 TRANSCENDING BIOLOGY: Kurzweil
 "human life will be irreversibly transformed… Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself"
 “There will be no distinction, post-Singularity,
between human and machine or between physical
and virtual reality”
UNPACKING THE ISSUES
 TRANSCENDING BIOLOGY: Kurzweil
 “six historical epochs that are driven, in a law-like
manner (‘the law of accelerating returns’), by the
exponential growth of information and
technology”
 “‘a theory of technological evolution’ as
justification of the shape of future human society”
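Kurzweil's "law of accelerating returns" quoted above is, at bottom, a claim of exponential growth. Here is a minimal sketch of what such a model looks like, using placeholder numbers (the starting capability and a two-year doubling period are assumptions for illustration, not Kurzweil's figures):

```python
# Toy "accelerating returns" model: capability doubles every fixed period.
# The initial value and doubling period below are placeholders, not Kurzweil's figures.

def capability(years_from_now: float,
               initial: float = 1.0,
               doubling_period_years: float = 2.0) -> float:
    """Capability after t years if it doubles every `doubling_period_years` years."""
    return initial * 2 ** (years_from_now / doubling_period_years)

if __name__ == "__main__":
    for t in (0, 10, 20, 40):
        print(t, capability(t))  # 1, 32, 1024, ~1.05 million: the growth compounds
```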
UNPACKING THE ISSUES
 THE HUMAN-NOT WELCOME IN THE
FUTURE: William Joy
 “genetic engineering, robotics, and
nanotechnology (GNR)—will extinguish human
beings as we now know them”
 Joy's "big fish eat little fish" argument quotes robotics pioneer Hans Moravec: "Biological species almost never survive encounters with superior competitors."
UNPACKING THE ISSUES
 THE HUMAN-NOT WELCOME IN THE
FUTURE
 Self-replication amplifies the danger of GNR: "A bomb is blown up only once—but one bot can become many, and quickly get out of control."
 21st-century technologies "are widely within the reach of individuals or small groups… knowledge alone will enable the use of them," i.e., "knowledge-enabled mass destruction (KMD)."
 "we are on the cusp of the further perfection of extreme evil…"
UNPACKING THE ISSUES
 THE HUMAN-NOT WELCOME IN THE FUTURE
 “It seems far more likely that a robotic existence would
not be like a human one in any sense that we
understand, that the robots would in no sense be our
children… that on this path our humanity may well be
lost.”
 “this is the first moment in the history of our planet
when any species by its voluntary actions has become a
danger to itself.”
 “The only realistic alternative I see is relinquishment: to
limit development of the technologies that are too
dangerous, by limiting our pursuit of certain kinds of
knowledge.”
UNPACKING THE ISSUES
 ETHICAL ISSUES IN AI: Richard Mason
 Fundamental assumption: “[e]very aspect of
learning or any other feature of intelligence can in
principle be so precisely described that a machine
can be made to simulate it.” (McCarthy, 1956)
 “Approaches based on this assumption are called
symbolic or symbol-processing AI.”
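McCarthy's assumption is what licenses symbolic (symbol-processing) AI: intelligence rendered as explicit rules operating over symbols. Below is a minimal, hypothetical Python sketch of symbol-processing in this sense (a toy forward-chaining rule engine with made-up medical facts), offered only to illustrate the term, not anything Mason or McCarthy specifies.

```python
# Toy symbol-processing sketch: facts and if-then rules over symbols,
# applied by forward chaining until no new facts can be derived.
# The facts and rules are invented for illustration.

facts = {"has_fever", "has_cough"}
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

def forward_chain(facts: set[str], rules: list[tuple[set[str], str]]) -> set[str]:
    """Fire any rule whose premises are all present, add its conclusion,
    and repeat until the set of derived facts stops growing."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'has_fever', 'has_cough', 'possible_flu', 'recommend_rest'}
```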
UNPACKING THE ISSUES
 ETHICAL ISSUES IN AI: Richard Mason
 Wiener observed: “It has long been clear to me that the
modern ultra-rapid computing machine was in principle an
ideal central nervous system to an apparatus for automatic
control…this new development has unbounded possibilities
for good and evil.”
 "Physically they will be silicon-based rather than carbon-based; but they will be able to think, feel, have moods, be emotional, interact socially with others, draw on common sense, and have a 'soul.' Thus, AI-based systems will become the next stage in the evolution of life, emerge as our successors, and create a future society populated and governed by computers"
UNPACKING THE ISSUES
 ETHICAL ISSUES IN AI: Richard Mason
 “the question of granting personhood to an AI
machine or robot depends on where the line is
drawn between persons and inanimate objects”
 “The overarching criterion is displaying some form
of cognitive capacity—being conscious, having
perceptions, feeling sensations”

UNPACKING THE ISSUES
 ETHICAL ISSUES IN AI: Richard Mason
 Turing also predicted at mid-century that “in
about fifty years’ time, it will be possible to
programme computers … to make them play the
imitation game so well that an average
interrogator will not have more than a seventy
percent chance of making the right identification
after five minutes questioning.”
 Turing concluded, “We may hope that machines
will eventually compete with men in all purely
intellectual fields.”
UNPACKING THE ISSUES
 ETHICAL ISSUES IN AI: Richard Mason
 “While the possibility of a machine being granted
moral status is the most compelling ethical issue
raised by AI, there are others, determined largely by
the uses to which AI programs are actually put. These
ethical considerations have evolved as AI research and
development has progressed. AI programs form
relationships with other entities. They are used, for
example, to advise human users, make decisions, and
in the case of intelligent software agents to chat with
people, search for information, look for news, find
jobs, and shop for goods and locate the best prices.
Their role in these relationships engenders moral
responsibility."
SUMMING UP:
 Widespread embrace of technology by humans.
 No guidelines for developing entities more intelligent than we are.
 Massive human dislocation/destruction could be a result (atom bomb?).
 Ultimately, human ethics will have to grapple with outcomes.
 Can there be a "higher ethics"?
A New High-Tech Exercise Machine
BIBLIOGRAPHY
 Grunwald, Armin. 2005. "Nanotechnology – A New Field of Ethical Inquiry?" Science and Engineering Ethics 11(2): 187-201.
 Horner, D.S. 2007a. "Forecasting Ethics and the Ethics of Forecasting: The Case of Nanotechnology." In: T.W. Bynum, K. Murata, and S. Rogerson, eds. Globalisation: Bridging the Global Nature of Information and Communication Technology and the Local Nature of Human Beings. ETHICOMP 2007, Vol. 1, Meiji University, Tokyo, Japan, 27-29 March 2007. Tokyo: Global e-SCM Research Centre, Meiji University, pp. 257-267.
BIBLIOGRAPHY
 Joy, William. 2000. "Why the Future Doesn't Need Us." Wired Magazine.
 Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. London: Duckworth.
 Mason, Richard. 2004. "Ethical Issues in Artificial Intelligence."
http://www.sciencedirect.com/science/article/pii/B0122272404000642#mc0473
 Zadeh, L.A. 1988. "Fuzzy Logic." Computer 21(4): 83-93.
CONTACT INFORMATION
Dr. Earle Waugh
Centre for the Cross-Cultural Study of
Health and Healing
Department of Family Medicine
University of Alberta
901 College Plaza
Edmonton, AB T6G 2C8
Ph: 780 492-6424
Email: [email protected]