What is an intelligent agent (cont.)


1. Introduction
Rachel Ben-Eliyahu-Zohary
What is Artificial Intelligence?
Descartes (1596-1650)
Dualism vs. Materialism
“If there were machines which bore a resemblance to
our bodies and imitated our actions as closely as
possible for all practical purposes, we should still have
two very certain means of recognizing that they were
not real men. The first is that they could never use
words, or put together signs, as we do in order to
declare our thoughts to others… Secondly, even though
some machines might do some things as well as we do
them, or perhaps even better, they would inevitably
fail in others, which would reveal that they are acting
not from understanding, …”
Turing Test
• http://www.robitron.com/TuringHub/
• Test proposed by Alan Turing in 1950
• The computer is asked questions by a human interrogator. It passes the test if the interrogator cannot tell whether the responses come from a person or from a computer.
• Required capabilities: natural language processing, knowledge representation, automated reasoning, learning, ...
• No physical interaction
• Chinese Room (J. Searle, 1980)
Chinese Room
• A hypothetical system that runs a program that passes the Turing test
• But clearly, the program does not understand anything of its inputs and outputs
• Conclusion: Running the right program is not a sufficient condition for being a mind
Can Machines Act/Think Intelligently?
• Yes, if intelligence is narrowly defined as information processing. In fact, AI has made impressive achievements showing that tasks initially assumed to require intelligence can be automated.
• Probably not, if intelligence is not separated from the rest of “human nature”.
Central goals of Artificial Intelligence
Understanding the principles that make intelligence possible (in humans, animals, and artificial agents)
Developing intelligent machines or agents (no matter whether they operate like humans or not)
Formalizing knowledge and mechanizing reasoning in all areas of human endeavor
Making working with computers as easy as working with people
History of Artificial Intelligence
Stone age (1943-1956)
•Early work on neural networks and logic.
•The Logic Theorist (Allen Newell and Herbert Simon)
•Birth of AI: Dartmouth workshop - summer 1956
•John McCarthy’s name for the field: Artificial Intelligence
History of Artificial Intelligence
Early enthusiasm, great expectations (1952-1969)
•McCarthy (1958)
•defined Lisp
•invented time-sharing
•Advice Taker
•Learning without knowledge
•Neural modeling
•Evolutionary learning
•Samuel’s checkers player: learning
•Robinson’s resolution method.
•Minsky: the microworlds (e.g. the blocks world).
•Many small demonstrations of “intelligent” behavior.
•Simon’s over-optimistic predictions.
History of Artificial Intelligence
Dark ages (1966-1973)
AI did not scale up: combinatorial explosion
The fact that a program can find a solution in principle
does not mean that the program contains any of the
mechanisms needed to find it in practice.
Failure of the natural language translation approach based on simple grammars and word dictionaries.
The famous retranslation English → Russian → English of
“the spirit is willing but the flesh is weak” into
“the vodka is good but the meat is rotten”.
Funding for natural language processing stopped.
History of Artificial Intelligence
Renaissance (1969-1979)
Change of problem solving paradigm:
from search-based problem solving to
knowledge-based problem solving
expert systems:
•Dendral: infers molecular structure from the
information provided by a mass spectrometer
•Mycin: diagnoses blood infections
History of Artificial Intelligence
Industrial age (1980-present)
•The first successful commercial expert systems.
•Many AI companies.
•Exploration of different learning strategies (explanation-based learning, case-based reasoning, genetic algorithms, neural networks, etc.)
History of Artificial Intelligence
The return of neural networks (1986-present)
The reinvention of the back-propagation learning algorithm for neural networks, first discovered by Bryson and Ho in 1969.
Many successful applications of neural networks.
History of Artificial Intelligence
Maturity (1987-present)
Change in the content and methodology of AI
research:
• build on existing theories rather than propose
new ones;
• base claims on theorems and experiments
rather than on intuition;
• show relevance to real-world applications rather
than toy examples.
History of Artificial Intelligence
Intelligent agents (1995-present)
The realization that the previously isolated subfields of
AI (speech recognition, planning, robotics, computer
vision, machine learning, knowledge representation,
etc.) need to be reorganized when their results are to
be tied together into a single agent design.
A process of reintegration of different sub-areas of AI to
build a “whole agent”:
• “agent perspective” of AI
• multi-agent systems;
• agents for different types of applications, web agents.
State of the Art in Artificial Intelligence
Deep Blue defeated Kasparov, the chess world champion.
PEGASUS, a speech understanding system, is able to handle transactions such as finding the cheapest airfare.
MARVEL: a real-time expert system that monitors the stream of data from the Voyager spacecraft and signals any anomalies.
A robotic system drives a car at 55 mph on a public highway.
A diagnostic expert system corrects the diagnoses of a reputable expert.
http://www.youtube.com/watch?v=jZmNc-rshWw
http://www.youtube.com/watch?v=hS0ZRZ0odTE
Four robotic vehicles finished a Pentagon-sponsored race across the Mojave desert on Saturday (Oct 8, 2005) and achieved a technological milestone by conquering steep drop-offs, obstacles and tunnels over a rugged 132-mile course without a single human command.
What is an intelligent agent
An intelligent agent is a system that:
• perceives its environment (which may be the physical
world, a user via a graphical user interface, a collection of
other agents, the Internet, or other complex environment);
• reasons to interpret perceptions, draw inferences, solve
problems, and determine actions; and
• acts upon that environment to realize a set of goals or
tasks for which it was designed.
[Diagram: the intelligent agent receives input from the user/environment through its sensors and acts back on the user/environment through its effectors/output.]
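To make this perceive-reason-act cycle concrete, here is a minimal Python sketch of such an agent loop. It is an illustration only: the environment's observe/apply interface and the simple goal-driven decision rule are assumptions, not part of the original material.

```python
# Minimal sketch of the perceive-reason-act cycle described above.
# The environment interface (observe/apply) and the goal-driven
# decision rule are illustrative assumptions, not a prescribed design.

class IntelligentAgent:
    def __init__(self, goals):
        self.goals = goals      # the tasks the agent was designed for
        self.beliefs = {}       # internal model built from perceptions

    def perceive(self, environment):
        # Sensors: update beliefs from whatever the environment exposes.
        self.beliefs.update(environment.observe())

    def reason(self):
        # Interpret perceptions and choose an action serving the goals.
        for goal in self.goals:
            if not self.beliefs.get(goal, False):
                return ("work_toward", goal)
        return ("idle", None)

    def act(self, environment, action):
        # Effectors: carry the chosen action out in the environment.
        environment.apply(action)

    def step(self, environment):
        self.perceive(environment)
        self.act(environment, self.reason())
```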
What is an intelligent agent (cont.)
Humans, with multiple, conflicting drives, multiple
senses, multiple possible actions, and complex
sophisticated control structures, are at the
highest end of being an agent.
What is an intelligent agent (cont.)
At the low end of being an agent is a
thermostat. It continuously senses the room
temperature, starting or stopping the heating
system each time the current temperature is
out of a pre-defined range.
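A thermostat is simple enough to write down completely. The sketch below shows the whole sense-act loop; the comfort range, units, and interface are illustrative assumptions.

```python
# A thermostat as a minimal agent: sense the temperature, switch the heater.
# The pre-defined range and the Celsius units are illustrative assumptions.

LOW, HIGH = 19.0, 22.0   # pre-defined temperature range

def thermostat_step(current_temperature, heater_on):
    """One sense-act cycle: return the new heater state."""
    if current_temperature < LOW:
        return True          # too cold: start the heating system
    if current_temperature > HIGH:
        return False         # too warm: stop the heating system
    return heater_on         # in range: leave the heater as it is
```

Everything the thermostat "knows" fits in two numbers, which is what places it at the low end of the agent spectrum.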
What is an intelligent agent (cont.)
The intelligent agents we are concerned with are
in between. They are clearly not as capable as
humans, but they are significantly more
capable than a thermostat.
What an intelligent agent can do
An intelligent agent can:
• collaborate with its user to improve the accomplishment of his or her tasks;
• carry out tasks on the user's behalf, employing some knowledge of the user's goals or desires;
• monitor events or procedures for the user;
• advise the user on how to perform a task;
• train or teach the user;
• help different users collaborate.
Characteristic features of intelligent agents
Knowledge representation and reasoning
Transparency and explanations
Ability to communicate
Use of huge amounts of knowledge
Exploration of huge search spaces
Use of heuristics
Reasoning with incomplete or conflicting data
Ability to learn and adapt
Knowledge representation and reasoning
An intelligent agent contains an internal representation of its external
application domain, where relevant elements of the application
domain (objects, relations, classes, laws, actions) are represented
as symbolic expressions.
This mapping allows the agent to reason about the application
domain by performing reasoning processes in the domain model,
and transferring the conclusions back into the application domain.
[Diagram: an ontology in which BOOK, CUP, and TABLE are subclasses of OBJECT, with instances CUP1, BOOK1, and TABLE1. In the model of the domain, CUP1 is ON BOOK1, which is ON TABLE1, mirroring the application domain. A rule captures transitivity: if an object is on top of another object that is itself on top of a third object, then the first object is on top of the third object. Formally: ∀x,y,z ∈ OBJECT, ON(x,y) ∧ ON(y,z) → ON(x,z).]
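As a sketch of how such a model might look in code (a Python rendering I am assuming, not taken from the slides), the fragment below represents the ON facts symbolically and applies the transitivity rule until no new facts can be derived:

```python
# Symbolic model of the example domain: ON facts plus the transitivity rule.
# The set-based representation is an illustrative choice.

on_facts = {("CUP1", "BOOK1"), ("BOOK1", "TABLE1")}   # ON(x, y) pairs

def apply_transitivity(facts):
    """Derive ON(x, z) from ON(x, y) and ON(y, z) until a fixed point."""
    derived = set(facts)
    while True:
        new = {(x, z)
               for (x, y) in derived
               for (y2, z) in derived
               if y == y2 and (x, z) not in derived}
        if not new:
            return derived
        derived |= new

print(apply_transitivity(on_facts))
# The derived set also contains ('CUP1', 'TABLE1'): the conclusion,
# transferred back into the application domain, that the cup is on the table.
```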
Separation of knowledge from control
The problem solving engine implements a general method of interpreting the input problem based on the knowledge from the knowledge base.
The knowledge base (ontology plus rules/cases/methods) contains data structures that represent the objects from the application domain, the general laws governing them, the actions that can be performed with them, etc.
[Diagram: the intelligent agent couples the domain-independent problem solving engine with the domain-specific knowledge base; input/sensors and output/effectors connect the agent to the user/environment.]
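A minimal sketch of this separation, under the assumption that rules are encoded as functions from a set of facts to a set of new facts (an illustrative choice): the control loop below is domain-independent, and all domain content lives in the knowledge base passed to it.

```python
# Generic problem-solving engine: a domain-independent control loop.
# The rule encoding (function from facts to new facts) is an assumption.

def solve(facts, knowledge_base):
    """Fire rules from the knowledge base until no new fact is derived."""
    facts = set(facts)
    while True:
        new = set()
        for rule in knowledge_base:
            new |= rule(facts) - facts
        if not new:
            return facts          # fixed point reached
        facts |= new

# Domain knowledge, kept separate from control: the ON-transitivity rule.
def on_transitivity(facts):
    return {("ON", x, z)
            for (p1, x, y) in facts if p1 == "ON"
            for (p2, y2, z) in facts if p2 == "ON" and y == y2}

kb = [on_transitivity]
print(solve({("ON", "CUP1", "BOOK1"), ("ON", "BOOK1", "TABLE1")}, kb))
```

Retargeting the agent to a different domain means swapping in a different knowledge base; the control loop itself does not change.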
Transparency and explanations
The knowledge possessed by the agent and its reasoning
processes should be understandable to humans.
The agent should have the ability to give explanations of
its behavior, what decisions it is making and why.
Without transparency it would be very difficult to accept,
for instance, a medical diagnosis performed by an
intelligent agent.
Ability to communicate
An agent should be able to communicate with its users
or other agents.
The communication language should be as natural to the human users as possible. Ideally, it should be unrestricted natural language.
The problem of natural language understanding and
generation is very difficult due to the ambiguity of words
and sentences, the paraphrases, ellipses and references
which are used in human communication.
Illustration: Ambiguity of natural language
Words and sentences have multiple meanings
Diamond
• a mineral consisting of nearly pure carbon in crystalline form,
usually colorless, the hardest natural substance known;
• a gem or other piece cut from this mineral;
• a lozenge-shaped plane figure (◊);
• in baseball, the infield or the whole playing field.
Visiting relatives can be boring.
• To visit relatives can be boring.
• The relatives that visit us can be boring.
She told the man that she hated to run alone.
• She told the man: I hate to run alone !
• She told the man whom she hated: run alone !
Other difficulties with natural language processing
Paraphrase: The same meaning may be expressed by many sentences.
Ann gave Bob a cat.
Bob was given a cat by Ann.
What Ann gave Bob was a cat.
Ann gave a cat to Bob.
A cat was given to Bob by Ann.
Bob received a cat from Ann.
Ellipsis: Use of sentences that appear ill-formed because they are
incomplete. Typically the parts that are missing have to be extracted from
the previous sentences.
Bob: What is the length of the ship USS J.F. Kennedy?
John: 1072
Bob: The beam?
John: 130
Reference: Entities may be referred to without giving their names.
Bob: What is the length of the ship USS J.F. Kennedy?
John: 1072
Bob: Who is her commander?
John: Captain Nelson.
Use of huge amounts of knowledge
In order to solve "real-world" problems, an intelligent agent
needs a huge amount of domain knowledge in its memory
(knowledge base).
Example of human-agent dialog:
User: The toolbox is locked.
Agent: The key is in the drawer.
In order to understand such sentences and to respond
adequately, the agent needs to have a lot of knowledge
about the user, including the goals the user might want to
achieve.
Use of huge amounts of knowledge (example)
User: The toolbox is locked.
Agent (reasoning): Why is he telling me this? I already know that the box is locked. I know he needs to open it. Perhaps he is telling me because he believes I can help. To open it requires a key. He knows it and he knows I know it. The key is in the drawer. If he knew this, he would not tell me that the toolbox is locked. So he must not realize it. To make him know it, I can tell him. I am supposed to help him.
Agent: The key is in the drawer.
Exploration of huge search spaces
An intelligent agent usually needs to search huge spaces
in order to find solutions to problems.
Example 1: A search agent on the internet
Example 2: A checkers playing agent
Exploration of huge search spaces: illustration
Determining the best move with minimax:
[Figure: a game tree alternating my moves and the opponent's moves; terminal positions are labeled win, lose, or draw, and these values are backed up the tree to determine the best move.]
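A compact sketch of minimax over such a tree. The game-state interface (is_terminal, result, successors) is an assumed abstraction; any two-player, perfect-information game can supply it.

```python
# Minimax sketch: back up win/lose/draw values from terminal positions.
# The state interface (is_terminal, result, successors) is assumed.

VALUE = {"win": 1, "draw": 0, "lose": -1}   # from my point of view

def minimax(state, my_turn):
    if state.is_terminal():
        return VALUE[state.result()]
    values = [minimax(s, not my_turn) for s in state.successors()]
    # I pick the best value for me; the opponent picks the worst for me.
    return max(values) if my_turn else min(values)
```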
Exploration of huge search spaces: illustration
The tree of possibilities is far too large to be fully generated and
searched backward from the terminal nodes, for an optimal move.
Size of the search space
A complete game tree for checkers has been estimated as having 10^40 nonterminal nodes. If one assumes that these nodes could be generated at a rate of 3 billion per second, the generation of the whole tree would still require around 10^21 centuries!
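To see where the 10^21 figure comes from (taking a year as roughly $3 \times 10^{7}$ seconds):

$$
\frac{10^{40}\ \text{nodes}}{3 \times 10^{9}\ \text{nodes/s}}
\approx 3.3 \times 10^{30}\ \text{s}
\approx 1.1 \times 10^{23}\ \text{years}
\approx 10^{21}\ \text{centuries}.
$$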
Checkers is far simpler than chess which, in turn, is generally
far simpler than business competitions or military games.
Use of heuristics
Intelligent agents generally attack problems for which
no algorithm is known or feasible, problems that require
heuristic methods.
A heuristic is a rule of thumb, strategy, trick, simplification,
or any other kind of device which drastically limits the
search for solutions in large problem spaces.
Heuristics do not guarantee optimal solutions. In fact they
do not guarantee any solution at all.
A useful heuristic is one that offers solutions which are good
enough most of the time.
Use of heuristics: illustration
1. Generate a partial game tree rooted at the node corresponding to the current board situation.
2. Estimate the values of the leaf nodes by using a static evaluation function.
3. Back-propagate the estimated values.
Heuristic function for board position evaluation: w1·f1 + w2·f2 + w3·f3 + …, where the wi are real-valued weights and the fi are board features (e.g. …).
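As a sketch of such a weighted evaluation function for checkers (the particular features, weights, and board attributes are illustrative assumptions; Samuel's actual player used many more features, with weights adjusted by learning):

```python
# Static evaluation sketch: w1*f1 + w2*f2 + ... over board features.
# The features, weights, and board attributes are illustrative assumptions.

WEIGHTS = [1.0, 1.5]    # w_i: relative importance of each feature

def features(board):
    """f_i: numeric board features (the board attributes are assumed)."""
    return [board.my_men - board.opponent_men,      # piece advantage
            board.my_kings - board.opponent_kings]  # king advantage

def evaluate(board):
    """Heuristic value of a position, from my point of view."""
    return sum(w * f for w, f in zip(WEIGHTS, features(board)))
```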
Reasoning with incomplete data
The ability to provide some solution even if not all the
data relevant to the problem is available at the time a
solution is required.
Examples:
The reasoning of a physician in an intensive care unit: “If the EKG test results are not available, but the patient is suffering chest pains, I might still suspect a heart problem.”
Planning a military course of action.
Reasoning with conflicting data
The ability to take into account data items that are more
or less in contradiction with one another (conflicting
data or data corrupted by errors).
Example:
The reasoning of a military intelligence analyst who has to cope with the deception actions of the enemy.
Ability to learn
The ability to improve its competence and efficiency.
An agent is improving its competence if it learns to
solve a broader class of problems, and to make fewer
mistakes in problem solving.
An agent is improving its efficiency if it learns to solve
more efficiently (for instance, by using less time or space
resources) the problems from its area of competence.
Illustration: concept learning
Learn the concept of an ill cell by comparing examples of ill cells with examples of healthy cells, and by creating a generalized description of the similarities between the ill cells:
Concept examples:
+ ((1 light) (2 dark))
+ ((1 dark) (2 dark))
+ ((1 dark) (1 dark))
- ((1 light) (2 light))
- ((1 dark) (2 light))
Learned concept: ((1 ?) (? dark))
Ability to learn: classification
The learned concept is used to diagnose other cells:
“Ill cell” concept: ((1 ?) (? dark))
Is this cell ill? ((1 light) (1 light)) → No
Is this cell ill? ((1 dark) (1 light)) → Yes
This is an example of reasoning with incomplete information.
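Below is a sketch of both steps in Python. The representation (each cell a list of (count, shade) parts, with "?" introduced wherever the positive examples disagree) follows the slides; the classification semantics (each concept part must be matched by some part of the cell) is one plausible reading that reproduces the No/Yes answers above, and everything else is an illustrative assumption.

```python
# Concept learning and classification sketch for the cell example.
# The ((count, shade), ...) representation follows the slides; the
# matching semantics is one plausible reading, not a definitive one.

WILD = "?"

def generalize(positive_examples):
    """Keep a value where all positive examples agree; otherwise use '?'."""
    concept = [tuple(part) for part in positive_examples[0]]
    for example in positive_examples[1:]:
        concept = [tuple(c if c == e else WILD for c, e in zip(cp, ep))
                   for cp, ep in zip(concept, example)]
    return concept

def part_matches(concept_part, cell_part):
    return all(c in (WILD, v) for c, v in zip(concept_part, cell_part))

def is_ill(concept, cell):
    """Each concept part must be matched by some part of the cell."""
    return all(any(part_matches(cp, part) for part in cell)
               for cp in concept)

positives = [[(1, "light"), (2, "dark")],
             [(1, "dark"),  (2, "dark")],
             [(1, "dark"),  (1, "dark")]]
concept = generalize(positives)
print(concept)                                        # [(1, '?'), ('?', 'dark')]
print(is_ill(concept, [(1, "light"), (1, "light")]))  # False: not ill
print(is_ill(concept, [(1, "dark"),  (1, "light")]))  # True: ill
```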
Extended agent architecture
The learning engine implements methods
for extending and refining the knowledge
in the knowledge base.
[Diagram: the extended agent adds a learning engine alongside the problem solving engine; both operate over the knowledge base (ontology plus rules/cases/methods), with input/sensors and output/effectors connecting the agent to the user/environment.]
Sample tasks for intelligent agents
Planning: Finding a set of actions that achieve a certain goal.
Example: Determine the actions that need to be performed in order to
repair a bridge.
Critiquing: Expressing judgments about something according to certain
standards.
Example: Critiquing a military course of action (or plan) based on the
principles of war and the tenets of army operations.
Interpretation: Inferring situation description from sensory data.
Example: Interpreting gauge readings in a chemical process plant to infer
the status of the process.
Sample tasks for intelligent agents (cont.)
Prediction: Inferring likely consequences of given situations.
Examples:
Predicting the damage to crops from some type of insect.
Estimating global oil demand from the current geopolitical world situation.
Diagnosis: Inferring system malfunctions from observables.
Examples:
Determining the disease of a patient from the observed symptoms.
Locating faults in electrical circuits.
Finding defective components in the cooling system of nuclear reactors.
Design: Configuring objects under constraints.
Example: Designing integrated circuit layouts.
Sample tasks for intelligent agents (cont.)
Monitoring: Comparing observations to expected outcomes.
Examples:
Monitoring instrument readings in a nuclear reactor to detect accident
conditions.
Assisting patients in an intensive care unit by analyzing data from the
monitoring equipment.
Debugging: Prescribing remedies for malfunctions.
Examples:
Suggesting how to tune a computer system to reduce a particular type of
performance problem.
Choosing a repair procedure to fix a known malfunction in a locomotive.
Sample tasks for intelligent agents (cont.)
Instruction: Diagnosing, debugging, and repairing student behavior.
Examples:
Teaching students a foreign language.
Teaching students to troubleshoot electrical circuits.
Control: Governing overall system behavior.
Example:
Managing the manufacturing and distribution of computer systems.
Any useful task:
Information fusion.
Travel planning.
Email management.
How are agents built
[Diagram: the domain expert and the knowledge engineer interact through dialog; the knowledge engineer programs the knowledge base behind the inference engine, and the expert reviews the agent's results.]
A knowledge engineer attempts to understand how a subject
matter expert reasons and solves problems and then encodes
the acquired expertise into the agent's knowledge base.
The expert analyzes the solutions generated by the agent
(and often the knowledge base itself) to identify errors, and
the knowledge engineer corrects the knowledge base.
Why it is hard
The knowledge engineer has to become a kind of subject matter expert in order to properly understand the expert's problem-solving knowledge. This takes time and effort.
Experts express their knowledge informally, using natural
language, visual representations and common sense, often
omitting essential details that are considered obvious. This
form of knowledge is very different from the one in which
knowledge has to be represented in the knowledge base
(which is formal, precise, and complete).
This transfer and transformation of knowledge, from the domain expert through the knowledge engineer to the agent, is long, painful and inefficient (and is known as “the knowledge acquisition bottleneck” of the AI systems development process).
Why are intelligent agents important
Humans have limitations that agents may alleviate (e.g. memory for details that isn't affected by stress, fatigue or time constraints).
Humans and agents could engage in mixed-initiative
problem solving that takes advantage of their
complementary strengths and reasoning styles.
Why are intelligent agents important (cont)
The evolution of information technology makes
intelligent agents essential components of our future
systems and organizations.
Our future computers and most of the other systems
and tools will gradually become intelligent agents.
We have to be able to deal with intelligent agents either
as users, or as developers, or as both.
Intelligent agents: Conclusion
Intelligent agents are systems which can perform
tasks requiring knowledge and heuristic methods.
Intelligent agents are helpful, enabling us to do our
tasks better.
Intelligent agents are necessary to cope with the
increasing challenges of the information society.
Main Areas of AI
• Search, especially heuristic search (puzzles, games)
• Knowledge representation (including formal logic)
• Planning
• Reasoning with uncertainty, including probabilistic reasoning
• Learning
• Agent architectures
• Robotics and perception
• Natural language processing
[Diagram: the agent at the center, surrounded by the main areas of AI: search, knowledge representation, constraint satisfaction, planning, reasoning, learning, perception, robotics, natural language, expert systems, ...]