Transcript: EECS 690
EECS 690
May 4
LIDA
• Most of this chapter concerns specific
ways in which a system like LIDA might be
an accurate model of human psychology.
• This sounds like a promising project; it would be interesting to see it built and developed.
Moral Mimicry
• The authors point out that they do not mean to replicate human psychology wholesale, but they do not spend much time exploring to what degree such mimicry of human moral psychology is necessary for AMAs. (We don't want to specifically build AMAs that are more generous after finding a dime, but such foibles may be unavoidable in such a complex system.)
• Questions of how people come to understand
morality are called questions of moral
epistemology.
Moral Facts
• In order to talk about coming to understand morality, it is
necessary to ask what there is to be understood.
• Anti-realists about morality contend that moral language corresponds to no recognizable category of facts.
• Realists, by contrast, associate moral language with one (or more) of a number of factual categories, a few of which are:
– sociological belief
– human well-being
– the outcomes of rational procedures
– interpersonal conventions
– human well-feeling
– natural science
– theology
Human moral psychology
• Notice that the specifics of human psychology are related to only some of those categories, and are related in different ways. Remember that the question here is not whether these things influence human behavior, but whether what is moral is determined, at least in part, by these things.
– sociological belief
– human well-being
– the outcomes of rational procedures
– interpersonal conventions
– human well-feeling
– natural science
– theology
Recognition problems
• Because there is no widespread agreement on many aspects of what moral facts are, or even on which behaviors are moral, Wallach and Allen leave unaddressed the question of success conditions for LIDA. After all, it does not follow that something with a cognitive architecture similar to ours would have moral behavior similar to ours, or that we would even want it to.
Complexity of AMAs
• I have several times raised the question of whether, to make a McRobot (a machine that works at McDonald's) that is sensitive to certain ethical concerns, it would be most productive to first make a machine that could enroll in college, order at a fancy restaurant, learn a new game, criticize a movie, and write a passable limerick, and then teach it to work fast food.
• Could we instead settle for a simpler form of ethical sensitivity tailored to specific tasks? Would a better goal be AMAs that are as ethically sensitive as well-trained dogs?