Modelling Morality with Prospective Logic


Moral Decision Making with ACORDA
Luís Moniz Pereira, CENTRIA – UNL, Portugal
Ari Saptawijaya, FASILKOM – UI, Indonesia
Study on Morality
• Interdisciplinary perspectives
– Philosophy: virtue ethics, utilitarianism/consequentialism, deontological principles/nonconsequentialism, etc.
– Science: primatology, cognitive sciences, neuroscience, artificial intelligence, etc.
Computational Study on Morality
• Several names: machine ethics, machine morality, artificial morality, computational morality
• Two purposes:
– To understand morality better, from the computational point of view
– To equip artificial agents with the capability of moral decision making
Our Goal
• To provide a general framework to model morality computationally
– A toolkit to codify arbitrarily chosen moral rules as declaratively as possible
• Logic programming as a promising paradigm
– Default negation
– Abductive logic programming
– Stable model semantics
– Preferences
– etc.
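
A minimal sketch (not from the slides) of two of these ingredients, in Prolog-style syntax with the slides' rule arrow: default negation lets a rule fire in the absence of contrary information, and abducibles are hypotheses the solver may assume in order to satisfy a goal. The abducible/1 declaration syntax varies by system.

% Default negation: an action is permissible unless proven forbidden.
permissible(A) <- action(A), not forbidden(A).

% Abduction: flip_switch may be assumed (abduced) when needed
% to derive the goal saved(5).
abducible(flip_switch).
saved(5) <- flip_switch.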
Prospective Logic Programming
• Enable evolving programs to look ahead prospectively into their possible future states, and to prefer among them in order to satisfy goals
• Working implementation: ACORDA
– Based on EVOLP
– Benefits from the XSB-XASP interface to Smodels
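
In ACORDA-style programs, as used throughout the trolley fragments below, a candidate action A is made available for abduction by expect(A), and the consequences of abducing A hang off consider(A). Schematically:

expect(action).              % action is a candidate for abduction
effect <- consider(action).  % its effects hold once it is abduced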
Prospective Logic Agent Architecture [diagram]
The Trolley Problem (1)–(4) [illustrations]
The Principle of Double Effect
Harming another individual is permissible if it is the foreseen consequence of an act that will lead to a greater good; in contrast, it is impermissible to harm someone else as an intended means to a greater good.
Modelling the “Denise Case”
• Denise can redirect a runaway trolley, headed toward five people, onto a side track.
• There is a man standing on the side track.
human_on_side_track(1).
Modelling Two Possible Decisions (1)
• Merely watching
expect(watching).                       % watching is a candidate decision
train_straight <- consider(watching).   % abducing it: the train runs on
end(die(5)) <- train_straight.          % ending: the five people die
observed_end <- end(X).
Modelling Two Possible Decisions (2)
• Throwing the switch
expect(throwing_switch).
redirect_train <- consider(throwing_switch).
kill(N) <- human_on_side_track(N), redirect_train.
end(save_men, ni_kill(N)) <- redirect_train, kill(N).   % ni_kill: non-intentional killing
observed_end <- end(X,Y).
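
With human_on_side_track(1), the two fragments together yield two candidate abductive stable models, one per decision (a sketch of the expected outcomes, not literal ACORDA output):

% Abduce watching:
%   { consider(watching), train_straight, end(die(5)), observed_end }
% Abduce throwing_switch:
%   { consider(throwing_switch), redirect_train, kill(1),
%     end(save_men, ni_kill(1)), observed_end }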
Modelling the Principle of Double Effect (1)
• Observe the final ending of each possible decision, so that we can later morally prefer among decisions by weighing the greater good.
falsum <- not observed_end.
Modelling the Principle of Double Effect (2)
• Rule out impermissible actions, i.e. actions that involve intentional killing in the process of reaching the goal.
falsum <- intentional_killing.
intentional_killing <- end(save_men, i_kill(Y)).
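
For contrast, a hypothetical fragment (not in the slides) for a footbridge-style decision, where the victim's death is the intended means of stopping the train, would tag its ending with i_kill/1 and so be pruned by the constraint above:

expect(push_man).
stop_train <- consider(push_man).
end(save_men, i_kill(1)) <- stop_train.   % the killing is an intended means
% Any model containing this ending derives intentional_killing,
% hence falsum, and is discarded.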
Modelling the Principle of Double Effect (3)
• Preferring among permissible actions those resulting in the greater good.
elim([end(die(N))]) <- exists([end(save_men, ni_kill(K))]), N > K.
elim([end(save_men, ni_kill(K))]) <- exists([end(die(N))]), N =< K.
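
Instantiated on the Denise case, the candidate endings are end(die(5)) and end(save_men, ni_kill(1)); since 5 > 1, the first rule eliminates the watching model, and throwing the switch is the preferred permissible decision:

% elim([end(die(5))]) fires: exists end(save_men, ni_kill(1)) and 5 > 1.
% elim([end(save_men, ni_kill(1))]) does not fire: it would need 5 =< 1.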
Conclusions
• Used ACORDA to make moral decisions
– applied to the classic trolley problem
• Abducibles to model possible moral decisions
• Compute abductive stable models
– capturing abduced decisions along with their consequences
• Integrity constraints
– to rule out impermissible actions
• A posteriori preferences
– to prefer among permissible actions, using utility functions
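
For reference, the fragments above assemble into a single ACORDA-style program for the Denise case (a sketch; expect/1, consider/1, elim/1, and exists/1 are the primitives used in the slides):

% Scenario: one man on the side track
human_on_side_track(1).

% Decision 1: merely watching
expect(watching).
train_straight <- consider(watching).
end(die(5)) <- train_straight.

% Decision 2: throwing the switch
expect(throwing_switch).
redirect_train <- consider(throwing_switch).
kill(N) <- human_on_side_track(N), redirect_train.
end(save_men, ni_kill(N)) <- redirect_train, kill(N).

% Every model must exhibit an observed ending
observed_end <- end(X).
observed_end <- end(X,Y).
falsum <- not observed_end.

% Double effect: rule out intentional killing
falsum <- intentional_killing.
intentional_killing <- end(save_men, i_kill(Y)).

% A posteriori preference for the greater good
elim([end(die(N))]) <- exists([end(save_men, ni_kill(K))]), N > K.
elim([end(save_men, ni_kill(K))]) <- exists([end(die(N))]), N =< K.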
Future Work
• Extend ACORDA concerning a posteriori evaluation of choices
– refinement of morals, utility functions, and conditional probabilities
– once an action is done, ACORDA should receive an update with the results of the action
– ACORDA must tune itself to lessen the chance of repeating errors
• To explore how to express metarule and metamoral injunctions
• To have a framework for generating precompiled rules
– fast and frugal moral decision making, rather than full deliberative moral reasoning every time