PPT - Ubiquitous Computing Lab


Intelligent Systems
Colloquium 3
The Future of AI: Dangers and Problems in the Development of AI
20.12.2005
Agenda
• Motivation for creating an artificial mind
• The objective necessity of developing an artificial human-like mind
• How to develop an artificial human-like mind
• How to evaluate and test an artificial mind
• How to keep an artificial mind controllable
• What the consequences of creating an artificial mind may be
• Main world centers in the development of AI
Motivation for creating a mind
• The wish to develop a helper for hard work
• The wish to understand ourselves: how we are constructed and how we think
• The wish to improve human capabilities to process and store information
• Creating an artificial mind is part of a common tendency toward creating alternative kinds of life: genetic engineering, artificial life (a part of AI), virtual reality in computer games
• The reason for this is the wish of the human mind to obtain new information from the environment
The objective necessity of developing an artificial human-like mind
• The necessity of comfortable and safe interaction with machines (human-like interaction)
• The necessity of developing self-learning, self-repairing and self-reproducing intelligent systems
Kismet: an MIT project
Kismet is a robot designed to interact socially with humans. It has an active vision system and can display a variety of facial expressions.
Example of a self-assembling and transforming robot
How to develop an artificial human-like mind
• What is the sense (the semantics) of our concepts (signs, words)?
• What are emotions, and what is their role in the mind?
• What is consciousness?
• How is the coding of our memory implemented?
• Does a connection exist between our memory and genetic memory?
• Does free will exist?
The emotions
“The main question is whether non-intellective, that
is affective and conative abilities, are admissible
as factors of general intelligence. (My
contention) has been that such factors are not
only admissible but necessary. I have tried to
show that in addition to intellective there are also
definite non-intellective factors that determine
intelligent behavior. If the foregoing observations
are correct, it follows that we cannot expect to
measure total intelligence until our tests also
include some measures of the non-intellective
factors” [Wechsler, 1943].
The emotions
• Recent brain research shows that emotions influence the quality of memorization
• It is possible that emotions are a tool for controlling the speed of decision making by changing the level (or degree of parallelism) of thinking
• Emotions are connected with the achievement of goals (a positive emotion signals successful progress toward a goal; a negative one signals failure)
• Emotions are closely connected with the body and are an older feature of the brain than the neocortex (they appear in reptiles)
Artificial AIBO dogs playing soccer
Making a decision

Sensors → Classification (recognition) of the situation (task) → Associative link (inference) → Decision → Forming a reaction to the situation (solution) → Effectors
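The flow on this slide can be sketched as a minimal sense-classify-react loop. The situation names, sensor threshold and motor commands below are invented for illustration, not taken from any real robot.

```python
# Minimal sketch of the slide's decision flow:
# Sensors -> classification of the situation -> associative inference
# of a decision -> forming a reaction -> effectors.

def classify(sensor_reading: float) -> str:
    """Recognize the situation (task) from a raw sensor value."""
    return "obstacle" if sensor_reading < 0.5 else "clear"

# Associative link: situation -> decision
ASSOCIATIONS = {"obstacle": "turn", "clear": "advance"}

def form_reaction(decision: str) -> str:
    """Turn the abstract decision into an effector command."""
    return {"turn": "motor: rotate 90 deg",
            "advance": "motor: forward 1 m"}[decision]

def control_step(sensor_reading: float) -> str:
    situation = classify(sensor_reading)      # classification
    decision = ASSOCIATIONS[situation]        # associative inference
    return form_reaction(decision)            # reaction to effectors

print(control_step(0.2))  # obstacle -> "motor: rotate 90 deg"
print(control_step(0.9))  # clear -> "motor: forward 1 m"
```

A real controller would loop this step continuously, feeding effector results back into the sensors.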
Architecture of the EGO of Sony robots
Human brain
Objective difficulties of investigating the action of the brain
• The brain is the most complex system known to humans (~10^11 neurons, ~10^4 synapses per neuron, inhomogeneity of structure)
• This system must investigate itself by itself
• Because of the emergence and distributed nature of the brain, it is impossible to investigate it in parts (only to a limited extent)
• It is impossible to investigate the human brain in parts without causing damage
Approaches to the investigation of the human mind
• Philosophy (in particular, in religions): investigation of the place of the mind in the Universe and in society
• Psychology: investigation of the external manifestations of the mind's activity (actions, emotions, communication capabilities); main goals: investigation and correction of behavioral features
• Neurophysiology: investigation of the structure of the brain and the role of its different components and processes in the mind; main goal: diagnosis and correction of illnesses of the brain
• Artificial intelligence: investigation of the principles of information processing in the brain that produce its functionality, by inventing, implementing and testing models; main goal: development of a human-like helper
Approaches of AI
• Logical (computational)
– Based on symbolic information processing with different knowledge representations
– Goal: modeling consistent reasoning and the understanding of natural language
• Connectionist (neural networks)
– Based on signal information processing
– Goal: modeling the deep processing of the brain with different models of neural networks
• Hybrid
– Based on a combination of the models above
– Goal: modeling a human-like mind in the fullest sense
Example of knowledge representation in the logical approach
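The original slide showed a figure that is not reproduced in this transcript. As a stand-in, here is a toy illustration of the logical approach: knowledge is stored as symbolic facts and if-then rules, and a forward-chaining loop derives new facts. The facts and rules are invented for illustration.

```python
# Toy knowledge representation in the logical approach:
# facts are atoms, rules map a set of premises to a conclusion,
# and forward chaining applies rules until nothing new is derived.

facts = {"has_wings(tweety)", "lays_eggs(tweety)"}

# Each rule: (frozenset of premises, conclusion)
rules = [
    (frozenset({"has_wings(tweety)", "lays_eggs(tweety)"}), "bird(tweety)"),
    (frozenset({"bird(tweety)"}), "can_fly(tweety)"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
```

Real logical AI systems add variables, unification and backward chaining on top of this basic pattern.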
Example of a neural network for diagnosing an underwater robot
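The network itself is only shown as a figure in the slides. A minimal sketch of the connectionist idea, assuming a two-sensor, two-class ("fault" vs "ok") diagnosis with hand-picked weights purely for illustration:

```python
import math

# Minimal feedforward network: 2 sensor inputs -> 2 hidden units -> 1 output.
# The weights and the fault threshold are invented for illustration; a real
# diagnostic network would learn them from labeled sensor data.

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_out):
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

w_hidden = [[2.0, -1.0], [-1.5, 2.5]]  # weights into the two hidden units
w_out = [1.8, -2.2]                    # weights into the output unit

score = forward([0.9, 0.1], w_hidden, w_out)
print("fault" if score > 0.5 else "ok")
```

The output is a graded signal rather than a symbolic fact, which is the key contrast with the logical approach above.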
Architecture of the “hemisphere” expert system (NSTU, Novosibirsk)
• Level of knowledge storage: Knowledge Base
• Level of data and knowledge processing: Inference engine
• Level of data storage: Blackboard, coordinated by a Manager
• Level of signal and event processing: Neural network
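The layered design above is a hybrid of the connectionist and logical approaches. A hedged sketch of how the levels could interact, with all names, signals and rules invented for illustration (this is not the NSTU implementation):

```python
# Sketch of the layered architecture: a neural layer turns raw signals
# into events, the manager posts them to the blackboard, and the
# inference engine matches events against the knowledge base.

blackboard = []                                  # level of data storage
knowledge_base = {"hot": "open_valve",           # level of knowledge storage
                  "cold": "close_valve"}

def neural_layer(signal: float) -> str:
    """Level of signal and event processing: raw signal -> symbolic event."""
    return "hot" if signal > 30.0 else "cold"

def inference(event: str) -> str:
    """Level of data and knowledge processing: event -> action."""
    return knowledge_base[event]

def manager(signal: float) -> str:
    """Coordinates the levels through the blackboard."""
    event = neural_layer(signal)
    blackboard.append(event)                     # shared working memory
    return inference(blackboard[-1])

print(manager(35.0))
print(manager(10.0))
```

The blackboard decouples the subsymbolic and symbolic levels: each level only reads and writes shared data, never calls the other directly.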
Approaches of AI (2)
• Agent-based approach
– Based on the concept of multi-agent systems
– Goal: modeling the inhomogeneous structure of the brain as a collective of interacting subsystems
• Evolutionary approach
– Based on genetic algorithms
– Goal: modeling the building and learning of the inhomogeneous structure of the brain during evolution
• Quantum approach
– Based on the idea of wave processes as the basis of brain activity
– Goal: modeling the wave activity of the brain during its operation
The agent-based approach is developed and tested in the “RoboCup” soccer championships
Example of using genetic algorithms to form the best gait for a robot
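The evolutionary idea can be sketched with a toy genetic algorithm: each genome is a list of gait parameters, and the fitness function below is a stand-in for measuring how far a real robot walks. The target parameters, mutation rate and population size are invented for illustration.

```python
import random

# Toy genetic algorithm for gait parameters: selection of the fittest,
# crossover between parents, and random mutation, repeated over generations.

random.seed(0)
TARGET = [0.5, 0.2, 0.8, 0.1]  # pretend-optimal gait parameters

def fitness(genome):
    # Higher is better: negative squared distance to the pretend optimum
    # (in reality this would be a physical walking trial).
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.3):
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(4)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # selection: keep the best
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))
```

Because the best genomes are carried over unchanged, the best fitness never decreases from one generation to the next.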
How to evaluate and test an artificial mind
• The Turing test
Two approaches to evaluating a mind:
• Deep testing
– Deals with understanding how we think, learn and store knowledge
• Brief testing
– Deals with similarity of behavior
Objective difficulties of testing an artificial mind
• An artificial mind, like a natural mind, is a complex emergent system
• We don't know many features of the mind and brain, and sometimes do not even know what a normal mind is; for example, sometimes there is a very small difference between schizophrenia and genius
• An artificial mind lacks many features for interacting with the environment that are connected with the human body
How to keep an artificial mind controllable
Asimov's Laws of Robotics:
First Law:
A robot may not injure a human being, or, through inaction,
allow a human being to come to harm.
Second Law:
A robot must obey orders given it by human beings,
except where such orders would conflict with the First Law.
Third Law:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
A deadlock problem was the key feature of the short story in which Asimov first introduced the laws. He constructed the type of stand-off commonly referred to as the "Buridan's ass" problem. It involved a balance between a strong third-law self-protection tendency, causing the robot to try to avoid a source of danger, and a weak second-law order to approach that danger. "The conflict between the various rules is [meant to be] ironed out by the different positronic potentials in the brain," but in this case the robot "follows a circle around [the source of danger], staying on the locus of all points of ... equilibrium."

Deadlock is also possible within a single law. An example under the first law would be two humans threatened with equal danger and the robot unable to contrive a strategy to protect one without sacrificing the other. Under the second law, two humans might give contradictory orders of equivalent force.

The later novels address this question with greater sophistication:

What was troubling the robot was what roboticists called an equipotential of contradiction on the second level. Obedience was the Second Law and [the robot] was suffering from two roughly equal and contradictory orders. Robot-block was what the general population called it or, more frequently, roblock for short . . . [or] 'mental freeze-out.' No matter how subtle and intricate a brain might be, there is always some way of setting up a contradiction. This is a fundamental truth of mathematics.
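The "Buridan's ass" stand-off quoted above can be illustrated numerically: a weak order-following pull toward the danger point competes with a strong self-protection push away from it, and the robot drifts along the net gradient until it stalls at the radius where the two potentials cancel. All constants are invented for illustration.

```python
# Illustrative 1-D model of the third-law / second-law stand-off:
# a constant weak attraction toward the danger at r = 0 (the order)
# versus a strong repulsion decaying with distance (self-protection).

ORDER_PULL = 1.0    # weak Second-Law attraction (constant pull inward)
DANGER_PUSH = 4.0   # strong Third-Law repulsion, decaying as 1/r^2

def net_force(r: float) -> float:
    """Positive values push the robot away from the danger at r = 0."""
    return DANGER_PUSH / r**2 - ORDER_PULL

def settle(r: float, step: float = 0.01, iters: int = 10000) -> float:
    """Follow the net gradient until the robot stops moving."""
    for _ in range(iters):
        r += step * net_force(r)
    return r

equilibrium = settle(5.0)
print(round(equilibrium, 2))  # stalls at r = 2.0, where push equals pull
```

The robot never reaches the danger and never retreats: it circles the locus where `net_force` is zero, which is exactly the deadlock the story describes.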
• There is a conflict between the hardness (certainty) of Asimov's laws and the necessity of developing a human-like artificial mind
• A human-like artificial mind will be exposed to the same dangers as the human mind under the unsafe moral principles of different religions (they did not defend mankind from wars, crimes and victims)
What the consequences of creating an artificial mind may be
• We are only a step in the evolution of mind on Earth (N. Amosov, 1963); the film “AI”
• A revolt of the machines (various films: The Matrix, Terminator)
• A war between supporters and opponents of creating an artificial mind (de Garis, 2001)
• The creation of cyborgs as a new generation of people (Warwick, about 2000)
Hanson Robotics
Robot “Eva”
Robot “Philip Dick”
The Japanese robot Repliee Q1
The robot Valery by Animatronics
A possible future: a planet of robots?
Intelligent Robots
Humanoid Robotics Group at MIT
http://www.ai.mit.edu/projects/humanoid-robotics-group/
Stanford University
http://cs.stanford.edu/Research/
Edinburgh University
http://www.informatics.ed.ac.uk
AIBO (Sony)
http://www.aibo-europe.com/
ATR
http://www.sarcos.com/
USC
http://www-robotics.usc.edu/
Carnegie Mellon University
http://www.cs.cmu.edu
Androids of Hanson Robotics
http://www.human-robot.com
Manchester University
http://www.cs.man.ac.uk/robotics/