
ETHICBOTS "crude" questions concerning human-machine integration

• (How) is ICT monitoring and use of personal data to be regulated?
• Who is responsible for actions carried out by human-robot hybrid teams?
• Can bionic implants be used to enhance physical and intellectual capabilities?

Three forms of human-machine integration:

• Human-softbot integration, as achieved by AI research on information and communication technologies;
• Human-robot, non-invasive integration, as achieved by robotic research on autonomous systems inhabiting human environments;
• Human-robot invasive integration, as achieved by bionic research.
ETHICBOTS: strategic objectives

• Raising awareness and deepening understanding of these techno-ethical issues (conceptual analysis);
• Ethical monitoring of ICT, robotic, and bionic technologies for enhancing human mental and physical capacities;
• Fostering integration between Science and Society, by
  • promoting responsible research,
  • providing input to EU and national committees for ethical monitoring, warning, and opinion generation,
  • improving communication between scientists, citizens, and special groups.
Multiple-actor enterprise

• Ordinary citizens
• Legal experts
• Roboticists
• Computer scientists
• Philosophers
• Sociologists
• Theologians
• …
Conceptual analysis by experts

• Conceptual analysis on the basis of specialized knowledge:
  • triaging techno-ethical issues,
  • deepening our understanding of the higher-ranked issues,
  • identifying ethical motivations opening new research perspectives,
  • dispelling misconceptions.
Triaging: identifying potential impact categories

We need a set of Potential Impact Categories (PICs) as a basis for triaging emerging techno-ethical issues. Examples:

• imminence,
• novelty,
• social pervasiveness of technologies.
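One hypothetical way to operationalise such a triage is to score each issue against the PICs and rank by a weighted sum. The sketch below is illustrative only: the weights, scores, and issue names are assumptions, not taken from the ETHICBOTS project.

```python
# Hypothetical triage sketch: rank techno-ethical issues by scoring them
# on Potential Impact Categories (PICs). All weights and scores (0-10)
# are illustrative assumptions, not project data.

PIC_WEIGHTS = {"imminence": 0.5, "novelty": 0.2, "pervasiveness": 0.3}

issues = {
    "ICT monitoring of personal data": {"imminence": 9, "novelty": 4, "pervasiveness": 9},
    "hybrid-team responsibility":      {"imminence": 6, "novelty": 7, "pervasiveness": 5},
    "bionic enhancement":              {"imminence": 3, "novelty": 9, "pervasiveness": 2},
}

def triage_score(scores):
    """Weighted sum of an issue's PIC scores."""
    return sum(PIC_WEIGHTS[pic] * value for pic, value in scores.items())

# Higher-ranked issues get the deeper conceptual analysis first.
ranked = sorted(issues, key=lambda name: triage_score(issues[name]), reverse=True)
for name in ranked:
    print(f"{triage_score(issues[name]):4.1f}  {name}")
```

A real triage would of course debate the weights themselves; the point of the sketch is only that PICs make the ranking explicit and contestable.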
General Ethical Themes

• Personal integrity and identity
• Responsibility
• Autonomy
• Fair access
Deepening our understanding: learning machines and responsibility

Designers, manufacturers, and operators cannot fully predict the behaviour of many learning machines based on:

• symbolic learning,
• neural network learning,
• evolutionary algorithms.

Traditional concepts of responsibility ascription fail!
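The point can be illustrated with a toy example (a sketch of my own, not from the slides): the same evolutionary algorithm, run with different random seeds, can converge to different behaviours, so the designer cannot read the evolved "policy" off the program text.

```python
# Toy illustration: an evolutionary algorithm whose outcome depends on
# its random seed. The fitness function has two equally good optima
# (all zeros, all ones), so different runs can evolve opposite solutions.
import random

def fitness(bits):
    ones = sum(bits)
    return max(ones, len(bits) - ones)  # rewards either extreme

def evolve(seed, length=20, pop_size=30, generations=60):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(length)] ^= 1  # single point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

a, b = evolve(seed=1), evolve(seed=2)
print("run 1:", "".join(map(str, a)))
print("run 2:", "".join(map(str, b)))
```

Both runs reliably find *some* near-optimal behaviour, but which one they find is an artifact of chance, not of the designer's intent, which is exactly the responsibility gap the slide names.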
Deepening our understanding: being cautious about precautionary principles

• Should one enforce an exceptionless "human-in-the-control-loop" requirement?

No! Machines can take decisions which humans should not override (e.g., to prevent accidents).
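A hypothetical sketch of this point: an automatic emergency braking controller that, inside a narrow safety envelope, acts even when the human has not. The threshold and names below are illustrative assumptions, not any real automotive standard.

```python
# Hypothetical sketch: a machine decision the human should not override.
# Below a hard time-to-collision threshold the system brakes regardless
# of driver input; above it, the driver stays in the control loop.
TTC_CRITICAL_S = 0.8  # illustrative threshold, not a real standard

def brake_command(time_to_collision_s, driver_braking):
    if time_to_collision_s < TTC_CRITICAL_S:
        return "full_brake"  # machine overrides: no time left for a human
    return "brake" if driver_braking else "coast"

print(brake_command(0.5, driver_braking=False))  # → full_brake
print(brake_command(3.0, driver_braking=False))  # → coast
```

The exception is deliberately narrow: outside the critical envelope the human remains in the loop, which is precisely why an *exceptionless* human-in-the-control-loop rule is too strong.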
Ethically motivated research

• Improving machine learning standards
• Practising cooperative design
• Providing machines with explanation & justification facilities
Explanation and justification

Accountability, autonomy, trust, social anxiety.

Machines should become increasingly capable of explaining and justifying their courses of action.

• Antecedents in knowledge-based decision support systems and expert systems
• Future developments: machine introspective and reflective capacities
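In the spirit of those expert-system antecedents, here is a minimal rule-based sketch (the rules and facts are invented for illustration) in which the inference engine records which rules fired, so the machine can answer a "why?" question about its conclusion:

```python
# Minimal expert-system-style sketch: forward chaining over if-then
# rules, logging each inference so the system can explain itself.
# Rules and facts are illustrative, not from any deployed system.

RULES = [
    # (premise_1, premise_2_or_None, conclusion)
    ("obstacle_ahead", "speed_high", "risk_of_collision"),
    ("risk_of_collision", None, "apply_brakes"),
]

def infer(initial_facts):
    facts = set(initial_facts)
    trace = []  # (conclusion, premises) pairs: the explanation log
    changed = True
    while changed:
        changed = False
        for a, b, conclusion in RULES:
            premises = [p for p in (a, b) if p is not None]
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                trace.append((conclusion, premises))
                changed = True
    return facts, trace

def explain(trace, conclusion):
    """Answer 'why <conclusion>?' from the inference trace."""
    for concl, premises in trace:
        if concl == conclusion:
            return f"{conclusion} because " + " and ".join(premises)
    return f"{conclusion}: given as a fact, or not derived"

facts, trace = infer({"obstacle_ahead", "speed_high"})
print(explain(trace, "apply_brakes"))
```

This is the classic "why" facility of knowledge-based systems; the slide's future developments (introspection, reflection) would extend such traces to the machine's own reasoning processes.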
Dispelling misconceptions

"The machine will do exactly what we programmed it to do…"

• Do we fully understand the robots we make and theorize about?
• Can we fully predict and control robot behaviour?
Misconceptions at war

The American military is working on a new generation of soldiers, far different from the army it has. "They don't get hungry," said Gordon Johnson of the Joint Forces Command at the Pentagon. "They're not afraid. They don't forget their orders. They don't care if the guy next to them has just been shot. Will they do a better job than humans? Yes." The robot soldier is coming.

Front-page article by T. Weiner, The New York Times, 16 Feb. 2005.
Robo-soldiers & AI-complete problems

• Open-context interpretation
• Recognizing surrender gestures
• Telling bystanders from foes