Intelligent Agents - The Computer Science Department


Intelligent Agents
Russell and Norvig: AI: A Modern Approach
Mike Wooldridge: An Introduction to MAS
Outline
• Agents and environments
• Rationality
• PEAS (Performance measure,
Environment, Actuators, Sensors)
• Environment types
• Agent types
Agents
• An agent is anything that can be viewed
as perceiving its environment through
sensors and acting upon that environment
through actuators.
• The agent function maps from percept histories to actions:
f: P* → A
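As a minimal sketch (the Python names below are my own illustration, not from the slides), the agent function is just a mapping from a percept history to an action:

# An agent function maps a percept history (a sequence in P*) to an action in A.
from typing import Callable, Sequence

Percept = str   # placeholder percept type
Action = str    # placeholder action type
AgentFunction = Callable[[Sequence[Percept]], Action]

def latest_percept_agent(history: Sequence[Percept]) -> Action:
    # One trivial agent function: act on the most recent percept only.
    return "act-on:" + history[-1]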
A Semantic Framework
Vacuum-cleaner world
• Percepts: location and contents, e.g., [A,Dirty]
• Actions: Left, Right, Suck
• Function-table (table look-up agent)

Percept       Action
[A, Clean]    Right
[A, Dirty]    Suck
[B, Clean]    Left
[B, Dirty]    Suck
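A minimal runnable sketch of this table look-up agent (Python is my choice here; the table entries come from the slide above):

# Table look-up vacuum agent: the percept-to-action table from the slide.
TABLE = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def table_lookup_agent(percept):
    # percept is a (location, contents) pair, e.g. ("A", "Dirty")
    return TABLE[percept]

print(table_lookup_agent(("A", "Dirty")))  # -> Suck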
Agency
• Autonomy
• Reactivity
• Proactiveness
• Social ability
Reactivity
• If a program's environment is guaranteed to be fixed, the program need never worry about its own success or failure: it just executes blindly
– Example of a fixed environment: a compiler
• The real world is not like that: things change, and information is incomplete. Many (most?) interesting environments are dynamic
• A reactive system is one that
– maintains an ongoing interaction with its environment, and
– responds to changes that occur in it
Proactiveness
• Reacting to an environment is easy (e.g., stimulus → response rules)
• But we generally want agents to do things for us
• Hence goal-directed behavior
• Pro-activeness = generating and attempting to achieve goals; not driven solely by events; taking the initiative
Social Ability
• The real world is a multi-agent environment: we cannot go around attempting to achieve goals without taking others into account
• Some goals can only be achieved with the cooperation of others
• Social ability in agents is the ability to interact with other agents (and possibly humans) via some kind of agent-communication language
Rational agents
• An agent should strive to "do the right
thing", based on what it can perceive and
the actions it can perform. The right action
is the one that will cause the agent to be
most successful.
• Performance measure: An objective
criterion for success of an agent's
behavior.
Rational agents
• Rational Agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
• What is rational at any given time depends on:
– The performance measure that defines the criterion of success
– The agent's prior knowledge of the environment
– The actions that the agent can perform
– The agent's percept sequence to date
Rational agents
• Rationality is distinct from omniscience (all-knowing with infinite knowledge)
• Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, planning)
• An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt)
Specify the setting for intelligent agent design:
PEAS Description
• Performance measure
• Environment
• Actuators
• Sensors
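A minimal sketch of a PEAS description as a data structure (my own illustration; the taxi-driver values follow the standard Russell & Norvig example, not these slides):

# PEAS description as a simple record.
from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance_measure: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

# The classic automated-taxi example (values from Russell & Norvig):
taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS"],
)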
Environment Types
Fully observable vs. partially observable
• A fully observable environment is one in which the agent can obtain complete, accurate, up-to-date information about the environment's state.
• Most moderately complex environments (including, for example, the everyday physical world and the Internet) are partially observable.
• The more accessible an environment is, the simpler it is to build agents to operate in it.
– In a fully observable environment, the agent need not maintain internal state to keep track of the world.
Environment Types
Deterministic vs. non-deterministic
• A deterministic environment is one in which any
action has a single guaranteed effect — there is no
uncertainty about the state that will result from
performing an action.
• In deterministic environments, agents do not worry
about uncertainty.
• Non-deterministic environments present greater
problems for the agent designer.
Environment Types
Episodic vs. sequential
• In an episodic environment, the performance of an agent depends on a number of discrete episodes, with no link between the agent's performance in different episodes.
• In an episodic environment, agents do not need to think ahead.
Environment Types
Static vs. dynamic
• A static environment is one that can be assumed to remain unchanged except by the performance of actions by the agent.
– The agent need not keep observing the environment while making decisions.
• A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent's control.
Environment Types
Discrete vs. continuous
• An environment is discrete if there is a fixed, finite number of actions and percepts in it.
• Continuous environments are harder to map onto digital computer systems, which must approximate them with discrete representations.
Agent types
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
(listed in order of increasing generality)
Simple reflex agents
[Diagram: percepts flow from the environment (ENV) into an inference engine that applies reflex rules and outputs actions.]
Use condition-action rules to map the agent's perceptions directly to actions.
Decisions are made using the current percept only; see the sketch below.
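A minimal sketch of a simple reflex agent for the vacuum world (the rule bodies are my own phrasing of the table shown earlier):

# Simple reflex agent: condition-action rules over the current percept only.
def simple_reflex_vacuum_agent(percept):
    location, contents = percept
    if contents == "Dirty":   # rule: dirty square -> clean it
        return "Suck"
    if location == "A":       # rule: square A clean -> move right
        return "Right"
    return "Left"             # rule: square B clean -> move left

print(simple_reflex_vacuum_agent(("B", "Dirty")))  # -> Suck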
Model-based reflex agents
[Diagram: percepts from ENV update an internal world model; rules applied to the model produce a decision, which is executed as actions.]
Have an internal model (state) of the external environment.
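A minimal sketch of a model-based reflex agent (class and attribute names are my own, assuming the two-square vacuum world from earlier):

# Model-based reflex agent: keeps internal state about unseen parts of ENV.
class ModelBasedVacuumAgent:
    def __init__(self):
        # World model: last known contents of each square (None = unknown).
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, contents = percept
        self.model[location] = contents  # update the world model
        if contents == "Dirty":
            return "Suck"
        other = "B" if location == "A" else "A"
        if self.model[other] == "Clean":
            return "NoOp"                # everything known to be clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Dirty")))  # -> Suck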
Goal-based agents
[Diagram: percepts from ENV update the world model; the model triggers and prioritizes goals/tasks; the agent selects goals/tasks, then selects problem-solving methods and actions.]
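A minimal sketch of goal-based action selection (function names and the one-step lookahead are my own simplification; real goal-based agents use search and planning):

# Goal-based selection: choose an action whose predicted outcome meets the goal.
def goal_based_agent(state, goal, actions, predict):
    # predict(state, action) is the agent's world model: the expected next state.
    for action in actions:
        if predict(state, action) == goal:
            return action
    return None  # no single action reaches the goal; search/planning needed

print(goal_based_agent(
    state=0, goal=1, actions=[-1, +1],
    predict=lambda s, a: s + a,
))  # -> 1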
Utility-based agents
[Diagram: as in the goal-based agent, but a utility function guides the selection among goals/tasks and among methods/actions.]
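A minimal sketch of utility-based action selection (names are my own; contrast with the goal test above, which only distinguishes goal from non-goal states):

# Utility-based selection: score each predicted outcome, pick the best action.
def utility_based_agent(state, actions, predict, utility):
    return max(actions, key=lambda a: utility(predict(state, a)))

# Toy usage, purely illustrative: prefer states near 10.
print(utility_based_agent(
    state=0,
    actions=[-1, +1],
    predict=lambda s, a: s + a,
    utility=lambda s: -abs(s - 10),
))  # -> 1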
Task of Software Agents
• Interacting with human users
– Personal assistants (email processing)
– Information/Product search
– Sales
– Chat room host
– Computer generated characters in games
• Interacting with other agents
– Facilitators.
– Brokers.
Intelligent Behavior of Agents
• Learning about users
• Learning about information sources
• Learning about categorizing information
• Learning about similarity
• Constraint satisfaction algorithms
• Reasoning using domain-specific knowledge
• Planning
Technologies of Software Agents
• Machine learning
• Information retrieval
• Agent communication
• Agent coordination
• Agent negotiation
• Natural language understanding
• Distributed objects
Multi-Agent Systems
• What are MAS
• Objections to MAS
• Agents and objects
• Agents and expert systems
• Agent communication languages
• Application areas
What are Multi-Agent Systems?
MultiAgent Systems: A Definition
• A multiagent system is one that consists of a number of agents, which have different goals and interact with one another
• To successfully interact, they will require the ability to cooperate, compete, and negotiate with each other, much as people do
MultiAgent Systems: A Definition
• Two key problems:
– How do we build agents capable of independent,
autonomous action, so that they can successfully
carry out tasks we delegate to them? (agent
design)
– How do we build agents that are capable of
interacting (cooperating, coordinating, negotiating)
with other agents in order to successfully carry out
those delegated tasks, especially when the other
agents cannot be assumed to share the same
interests/goals? (society design)
Multi-Agent Systems
• The field addresses questions such as:
– How can cooperation emerge in societies of self-interested agents?
– What kinds of communication languages can agents use?
– How can self-interested agents recognize conflict, and how can they (nevertheless) reach agreement?
– How can autonomous agents coordinate their activities so as to cooperatively achieve goals?
• These questions are all addressed in part by other disciplines (notably economics and the social sciences).
• Agents, however, are computational, information-processing entities.
Objections to MAS
• Isn't it all just Distributed/Concurrent Systems?
There is much to learn from this community, but:
• Agents are assumed to be autonomous, capable of making independent decisions, so they need mechanisms to synchronize and coordinate their activities at run time.
Objections to MAS
• Isn’t it all just AI?
• We don’t need to solve all the problems of
artificial intelligence (i.e., all the
components of intelligence).
• Classical AI ignored social aspects of
agency. These are important parts of
intelligent activity in real-world settings.
Objections to MAS
• Isn't it all just Economics/Game Theory?
These fields also have a lot to teach us in multiagent systems (like rationality), but:
• Insofar as game theory provides descriptive concepts, it doesn't always tell us how to compute solutions; we're concerned with computational, resource-bounded agents.
Objections to MAS
• Isn’t it all just Social Science?
• We can draw insights from the study of
human societies, but again agents are
computational, resource-bounded
entities.
Agents and Objects
• Are agents just objects by another
name?
• Object:
– encapsulates some state
– communicates via message passing
– has methods, corresponding to
operations that may be performed on this
state
Agents and Objects
• Main differences:
– agents are autonomous:
they decide for themselves whether or not to
perform an action on request from another agent
– agents are smart:
capable of flexible (reactive, pro-active, social)
behavior, and the standard object model has
nothing to say about such types of behavior
– agents are active:
a multi-agent system is inherently multi-threaded, in
that each agent is assumed to have at least one
thread of active control
Objects do it for free…
• agents do it because they want to
• agents do it for money
Agents and Expert Systems
• Expert systems are typically disembodied 'expertise' about some (abstract) domain of discourse (e.g., blood diseases)
• Example: MYCIN knows about blood diseases in humans
– It has a wealth of knowledge about blood diseases, in the form of rules
– A doctor can obtain expert advice about blood diseases by giving MYCIN facts, answering questions, and posing queries
Agents and Expert Systems
• Main differences:
– agents are distributed
– agents are situated in an environment: MYCIN is not aware of the world; the only information it obtains is by asking the user questions
– agents act: MYCIN does not operate on patients
Agent Communication
• speech acts
• KQML
Speech Acts
• Searle (1969) identified various different types of speech
act:
– representatives:
such as informing, e.g., ‘It is raining’
– directives:
attempts to get the hearer to do something e.g., ‘please make the
tea’
– commissives:
which commit the speaker to doing something, e.g., 'I promise to…'
– expressives:
whereby a speaker expresses a mental state, e.g., ‘thank you!’
– declarations:
such as declaring war or christening
KQML
• Knowledge Query and Manipulation Language
– A language for the “message structure” of agent communication
• It describes the “speech act” of the message using a set
of performatives (communicative verbs).
• Each performative has required and optional arguments.
• The content language of the message is not part of
KQML, but can be specified by KQML performatives.
An Example
(stream-all
  :content "(PRICE ?my-portfolio ?price)"
  :receiver stock-server
  :language PROLOG
  :ontology NYSE
)
The stream-all performative asks for a set of answers to be returned as a stream of replies.
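A rough sketch of assembling such a message programmatically (the helper function is my own, not part of any KQML library; only the field values come from the example above):

# Build a KQML-style message string from a performative and its arguments.
def kqml_message(performative, **params):
    fields = " ".join(f":{key} {value}" for key, value in params.items())
    return f"({performative} {fields})"

msg = kqml_message(
    "stream-all",
    content='"(PRICE ?my-portfolio ?price)"',
    receiver="stock-server",
    language="PROLOG",
    ontology="NYSE",
)
print(msg)
# (stream-all :content "(PRICE ?my-portfolio ?price)" :receiver stock-server :language PROLOG :ontology NYSE)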
KQML Performatives
• A performative describes the speech act of the message.
• It specifies the communication protocol to be used.
• Performatives are classified into 7 categories.
Categories of Performatives
• Basic query: evaluate, ask-if, ask-about, ask-one, ask-all
• Multiple-response query: stream-about, stream-all
• Response: reply, sorry
• Generic information: tell, achieve (ask other agents to create a goal), cancel, untell (undo tell), unachieve (forget the previous goal)
• Generator: standby, ready, next, rest, discard
• Capability-definition: advertise, recommend, subscribe, monitor, import, export
• Networking: register, unregister, forward, broadcast, route
Application Areas
• Agents are usefully applied in domains where
autonomous action is required.
• Main application areas:
– Distributed Systems
– Networks
– Human-Computer Interfaces
Domain 1: Distributed Systems
• In this area, the idea of an agent is seen
as a natural metaphor, and a development
of the idea of concurrent object
programming.
• Example domains:
– air traffic control (Sydney airport)
– business process management
– power systems management
– distributed sensing
– factory process control
Domain 2: Networks
• There is currently a lot of interest in mobile agents, which can move themselves around a network (e.g., the Internet), operating on a user's behalf
• This kind of functionality is achieved in the TELESCRIPT language developed by General Magic for remote programming
• Applications include:
– hand-held PDAs with limited bandwidth
– information gathering
Domain 3: HCI
• One area of much current interest is the use of agents in interfaces
• The idea is to move away from the direct manipulation paradigm that has dominated for so long
• Agents sit 'over' applications, watching, learning, and eventually doing things without being told, taking the initiative
• Pioneering work at the MIT Media Lab (Pattie Maes):
– news readers
– web browsers
– mail readers