Intelligent Agents
Intelligent agents
5.1 Characteristics of an intelligent agent
5.2 Agents and objects
5.3 Agent architectures
5.3.1 Logic-based architectures
5.3.2 Emergent behavior architectures
5.3.3 Knowledge-level architectures
5.3.4 Layered architectures
5.4 Multiagent systems
5.4.1 Benefits of a multiagent system
5.4.2 Building a multiagent system
5.4.3 Communication between agents
5.5 Summary
What are agents?
What is an agent?
“An over-used term” (Patti Maes, MIT Labs, 1996)
“Agent” can be considered a theoretical concept from AI.
Many different definitions exist in the literature…
Agent Definition (1)
An agent is an entity which is:
Autonomous (independent): it can act without direct intervention from humans or other software processes, and has control over its own actions and internal state.
Flexible, which means:
Responsive (reactive): agents should perceive their environment and respond to changes that occur in it;
Proactive: agents should not simply act in response to their environment; they should be able to exhibit opportunistic, goal-directed behavior and take the initiative when appropriate;
Social: agents should be able to interact with humans or other artificial agents.
“A Roadmap of agent research and development”,
N. Jennings, K. Sycara, M. Wooldridge (1998)
Agent Definition (2)
American Heritage Dictionary:
agent: “… one that acts or has the power or authority to act … or represent another”
Agent Definition (3)
"An agent is anything that can be
viewed as perceiving its environment
through sensors and acting upon that
environment through effectors."
Russell & Norvig
Agent Definition (4)
"Autonomous agents are computational
systems that inhabit some complex
dynamic environment, sense and act
autonomously in this environment”
Pattie Maes
Agent Definition (5)
“Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions.”
Barbara Hayes-Roth
Agents & Environments
The agent takes sensory input from its
environment, and produces as output
actions that affect it.
[Figure: the agent receives sensor input from the environment and produces action output that affects it]
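This sense-act loop can be sketched in a few lines, using the classic two-square vacuum world (the kind of example Russell & Norvig use); the square names and actions below are illustrative, not from any particular framework:

```python
# Sketch of the sense-act loop: the agent perceives its location and
# whether it is dirty, and acts on the environment through effectors.

def vacuum_agent(percept):
    """Simple reflex agent: maps a percept to an action."""
    location, dirty = percept
    if dirty:
        return "suck"
    return "right" if location == "A" else "left"

def run(world, location, steps):
    """world maps square name -> dirty?; returns the final world state."""
    for _ in range(steps):
        action = vacuum_agent((location, world[location]))  # sense, decide
        if action == "suck":
            world[location] = False                         # act: clean
        elif action == "right":
            location = "B"                                  # act: move
        elif action == "left":
            location = "A"
    return world

print(run({"A": True, "B": True}, "A", 4))  # both squares end up clean
```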
Agent Characterisation
An agent is responsible for satisfying specific goals.
There can be different types of goals such as achieving a
specific status, maximising a given function (e.g., utility),
etc.
[Figure: an agent with beliefs, knowledge, and multiple goals]
The state of an agent includes state of its internal
environment + state of knowledge and beliefs about its
external environment.
Examples of agents
Control systems
e.g. Thermostat
Software daemons
e.g. Mail client
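The thermostat above is the simplest kind of control-system agent; it can be sketched as a single condition-action rule (the setpoint, hysteresis band, and action names here are illustrative assumptions):

```python
# Hypothetical thermostat agent: senses the room temperature and
# switches the heater on or off around a setpoint.

def thermostat(temperature, setpoint=20.0, hysteresis=0.5):
    """Return 'heater_on' below the band, 'heater_off' above it,
    and 'no_change' inside the hysteresis band."""
    if temperature < setpoint - hysteresis:
        return "heater_on"
    if temperature > setpoint + hysteresis:
        return "heater_off"
    return "no_change"

print(thermostat(18.0))  # heater_on
print(thermostat(22.0))  # heater_off
```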
But… are they known as Intelligent Agents?
What is “intelligence”?
What are intelligent agents?
“An intelligent agent is one that is capable of
flexible autonomous action in order to
meet its design objectives, where flexible
means three things:
reactivity: agents are able to perceive their environment, and respond in a timely fashion to changes that occur in it, in order to satisfy their design objectives;
pro-activeness: intelligent agents are able to exhibit goal-directed behavior by taking the initiative, in order to satisfy their design objectives;
social ability: intelligent agents are capable of interacting with other agents (and possibly humans) in order to satisfy their design objectives.”
Wooldridge & Jennings
Agent Definition (9)
[Wikipedia: (The free Encyclopedia),
http://www.wikipedia.org ]
In computer science, an intelligent agent (IA) is a software agent that exhibits some form of artificial intelligence, assisting the user and acting on their behalf in performing non-repetitive computer-related tasks. While the working of software agents used for operator assistance or data mining is often based on fixed pre-programmed rules, "intelligent" here implies the ability to adapt and learn.
Intelligent Agent [IBM]
Intelligent Agents
Software entities that carry out some
set of operations on behalf of a user or
another program with some degree of
independence or autonomy, and in so
doing employ some knowledge or
representation of a user’s goals or
desires.
IBM, Intelligent Agent Definition
Internal and External Environment of an Agent
External Environment:
user, other humans, other agents,
applications, information sources,
their relationships, platforms,
servers, networks, etc.
Balance
Internal Environment:
architecture, goals, abilities, sensors,
effectors, profile, knowledge,
beliefs, etc.
Agent Definition (6) [Terziyan, 1993, 2007]
An Intelligent Agent is an entity that is able to continuously keep balance between its internal and external environments, in such a way that in the case of unbalance the agent can:
• change the external environment to be in balance with the internal one … OR
• change the internal environment to be in balance with the external one … OR
• find out and move to another place within the external environment where balance occurs without any changes … OR
• closely communicate with one or more other agents (human or artificial) to create a community whose internal environment will be able to be in balance with the external one … OR
• configure its sensors by filtering the set of features acquired from the external environment, to achieve balance between the internal environment and the deliberately distorted pattern of the external one. I.e., “if you are not able either to change the environment or to adapt yourself to it, then just try not to notice the things which make you unhappy”.
Agent Definition (6) [Terziyan, 1993]
The above means that an agent:
1) is goal-oriented, because it has at least one goal: to continuously keep balance between its internal and external environments;
2) is creative, because of the ability to change the external environment;
3) is adaptive, because of the ability to change the internal environment;
4) is mobile, because of the ability to move to another place;
5) is social, because of the ability to communicate and create a community;
6) is self-configurable, because of the ability to protect its “mental health” by sensing only a “suitable” part of the environment.
Uses of intelligent agents
To search the web for a specific piece of information: the agent consults a selection of search engines, filters the web pages, and returns only the two or three pages that precisely match the user's needs.
In trading on stock exchanges, profit comes from rapid reaction to minor price fluctuations, and this is handled well by agents. (For a human, by the time a decision is made, the opportunity would have been lost.)
Large and complex software systems are hard to maintain centrally, or to design and test against every eventuality.
Such software can be modularised by turning modules into autonomous agents.
The system is then self-managing: it is provided with knowledge of how to cope in particular situations, rather than being explicitly programmed to handle every foreseeable eventuality.
Characteristics of agents
Autonomy
Refers to an agent's ability to make its own decisions based on its own experience and circumstances.
Persistence (constancy)
Refers to an agent's ability to control its own internal state and behavior, implying that an agent functions continuously within its environment, i.e., it is persistent over time.
Interaction with the environment
Agents are situated, i.e., they are responsive to the demands of their environment and are capable of acting upon it.
Interaction with a physical environment requires perception through sensors, and action through actuators or effectors.
Interaction with a purely software environment is more straightforward, requiring only access to and manipulation of data and programs.
Characteristics of Intelligent Agents
reactive,
goal-directed,
adaptable,
socially capable.
Reactive
An agent reacts in response to events in its environment.
Example: an agent whose only role is to place a warning on your computer screen when the printer has run out of paper.
Goal Directed
Modules of conventional computer code are goal-directed only in a limited sense: they have been programmed to perform a specific task regardless of their environment.
An intelligent agent decides its own goals and chooses its own actions to pursue them. It must also be able to respond to unexpected changes in its environment.
Adaptable
An agent has to balance reactive and goal-directed behavior, typically through a mixture of problem solving, planning, searching, decision making, and learning through experience.
Social capability
Refers to the ability to cooperate and negotiate with other agents (or humans), which forms the basis of multiagent systems.
Overall
An intelligent agent balances reactive and goal-directed behavior through problem solving, planning, searching, decision making, and learning.
A mobile agent can travel to remote computers, carry out a task, and return home with the task completed (e.g., determining a person's travel plan). There is potential for malicious mobile agents, so security is a prime consideration for sites that accept them.
Agents vs. Objects
Objects
Allow complex problems to be broken down into simpler constituents while maintaining the integrity of the overall system. Objects are viewed as obedient servants (they cannot say "No").
Agents
Intelligent agents can be seen as independent beings, referred to as autonomous agents (they can say "No").
When an agent receives a request to perform an action, it will make its own decision, based on its beliefs and in pursuit of its goals.
An agent behaves more like an individual with his or her own personality.
Agent-based systems are analogous to human societies or organizations.
Agents and Objects

Autonomy
Objects: Autonomy is not required. An object performs a task to achieve the developer's overall goal, and declares a method as public, allowing other objects to use that method.
Agents: Autonomy is required. An agent can only request the actions of another agent; what action to take rests with the receiver of the message.

Intelligence
Objects: Intelligence is not required.
Agents: Intelligence is required.

Persistence
Objects: Objects could be made to persist from one run of a program to another. Single thread of control, sequential.
Agents: Agents persist in the sense that they are constantly "switched on" and operate concurrently. Multiple threads of control.
Tutorial
In groups (maximum 4):
Consider that you want to develop intelligent agents for a job-finder system. There are two parties: the applicant and the job advisor (who is looking for applicants).
How many agents are needed?
What are the goals of each agent?
Describe the communication processes between them.
Describe how each of them could behave intelligently.
Agent Architectures
Agent Architecture gives the internal representation
(and reasoning capabilities) of an agent.
There are four different schools of thought about agent architecture, each balancing reactive and goal-directed behaviour:
Logic based.
Emergent behavior.
Knowledge level.
Layered.
Logic based
Logical deduction based on a symbolic representation of the environment.
Elegant (clear) and rigorous (accurate).
Relies on the environment remaining unchanged during the reasoning process.
It is difficult to symbolically represent the environment and to reason about it.
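As a rough illustration (not a full theorem prover), a logic-based agent can be sketched as forward-chaining over a symbolic fact base; the facts and rule below are made up:

```python
# Toy logic-based agent: the environment is a set of symbolic facts,
# and behaviour is derived by forward-chaining over deduction rules.

facts = {"dirty(room1)"}
rules = [
    # (premise, conclusion): if the premise holds, conclude the action.
    ("dirty(room1)", "do(clean, room1)"),
]

def deduce(facts, rules):
    """Forward-chain until no new conclusions appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# The agent's chosen actions are the deduced do(...) terms.
actions = [f for f in deduce(facts, rules) if f.startswith("do(")]
print(actions)  # ['do(clean, room1)']
```

Note how the deduction assumes the fact base does not change while the loop runs, which is exactly the limitation the slide points out.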
Emergent Behaviour (1)
Based on the argument that logical deduction about the environment is too detailed and time-consuming.
E.g., in an emergency (such as a heavy object falling on you), the priority should be to move out of the way rather than to analyze and prove the observation.
Agents have only a set of reactive responses to circumstances; intelligent behaviour emerges from the combination of such responses.
Agents are reactive (they have no symbolic world model and no ability to perform complex symbolic reasoning).
E.g., Brooks' subsumption architecture (containing behavior modules that link actions to observed situations, without any reasoning at all).
Example Emergent Behaviour :
Brooks Subsumption Architecture
The behaviours are arranged into a hierarchy; low-level behaviour has precedence over higher-level goal-oriented behaviours.
Simple and practical (also highly effective).
Drawback: the emphasis placed on the local environment may lead to a lack of awareness of the bigger picture.
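A minimal sketch of the idea, with made-up behaviours and percept fields; Brooks' real architecture wires behaviour modules together as augmented finite-state machines, but the priority ordering is the essence:

```python
# Sketch of a subsumption-style controller: behaviours are tried in
# priority order; a low-level survival behaviour pre-empts (subsumes)
# higher-level goal-oriented ones. No world model, no reasoning.

behaviours = [
    # (name, condition on the percept, action), highest priority first
    ("avoid",    lambda p: p["obstacle"],     "turn_away"),
    ("recharge", lambda p: p["battery"] < 10, "go_to_charger"),
    ("explore",  lambda p: True,              "wander"),  # default
]

def subsumption_act(percept):
    for name, condition, action in behaviours:
        if condition(percept):
            return action

print(subsumption_act({"obstacle": True,  "battery": 5}))   # turn_away
print(subsumption_act({"obstacle": False, "battery": 5}))   # go_to_charger
print(subsumption_act({"obstacle": False, "battery": 90}))  # wander
```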
Knowledge level Architecture
Uses knowledge-level agents, where the agent is a knowledge-based system (a deliberative agent).
It represents a symbolic model of the world and makes decisions via logical reasoning, based on pattern matching and symbolic manipulation.
A deliberative agent's knowledge determines its behavior in accordance with Newell's Principle of Rationality:
If an agent has knowledge that one of its actions will lead to one of its goals, then the agent will select that action.
Example approach: Beliefs–desires –intentions
(BDI) Architecture
BELIEFS: knowledge of the environment. DESIRES: overall goals. Together, these shape the INTENTIONS (the selected options that the system commits itself to achieving).
The intentions persist as long as they remain both consistent with the desires and achievable according to the beliefs.
DELIBERATION: determining what to do (what the desires or goals are).
MEANS-ENDS ANALYSIS: determining how to do it.
The architecture needs to balance reactivity and goal-directedness (between reconsidering intentions frequently and infrequently).
The cautious approach is best in a rapidly changing environment, and the bold approach is best in a slowly changing environment.
[Figure: BDI architecture]
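The deliberation and means-ends steps can be sketched as follows; the beliefs, desires, and plan rules are illustrative assumptions, not a full BDI interpreter:

```python
# Sketch of a BDI-style cycle: beliefs + desires -> an intention
# (deliberation), then means-ends reasoning picks a plan of actions.

beliefs = {"door_closed": True, "have_key": True}
desires = ["be_outside"]

def deliberate(beliefs, desires):
    """Commit to the first desire achievable under current beliefs."""
    for goal in desires:
        if goal == "be_outside" and beliefs["have_key"]:
            return goal          # this becomes the intention
    return None

def means_ends(intention, beliefs):
    """Return a plan (action list) believed to achieve the intention."""
    if intention == "be_outside":
        plan = []
        if beliefs["door_closed"]:
            plan += ["unlock_door", "open_door"]
        return plan + ["walk_out"]
    return []

intention = deliberate(beliefs, desires)
print(intention, means_ends(intention, beliefs))
# be_outside ['unlock_door', 'open_door', 'walk_out']
```

In a real BDI system the cycle repeats, and the agent must decide how often to re-run `deliberate` (the cautious vs. bold trade-off noted above).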
Layered Architecture
Adopts both positions, balancing reactive and goal-directed behaviour.
Example: Touring Machines (an application in which autonomous drivers of vehicles negotiate crowded streets).
Three specific layers: a REACTIVE layer, a PLANNING layer for goal-directed behaviour, and a MODELING layer for modelling the environment.
Problem: ensuring the layers are balanced (an intelligent control subsystem can ensure that each layer has an appropriate share of power).
Example: Touring Machines
REACTIVE layer
PLANNING layer - goal-directed behavior
MODELING layer - modeling the environment.
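The three layers can be sketched as a control loop; the layer contents are illustrative, and the real Touring Machines architecture mediates between layers with control rules, which is simplified here to "the reactive layer wins":

```python
# Sketch of a three-layer (Touring Machines-style) controller: a
# control subsystem arbitrates between reactive, planning and
# modeling layers.

def reactive_layer(percept):
    """Immediate responses to danger; None if nothing is urgent."""
    return "brake" if percept.get("obstacle_ahead") else None

def planning_layer(percept):
    """Goal-directed behaviour: follow the current route."""
    return "follow_route"

def modeling_layer(percept, model):
    """Update the world model (used by planning on later cycles)."""
    model["last_percept"] = percept

def control(percept, model):
    modeling_layer(percept, model)    # always keep the model fresh
    action = reactive_layer(percept)  # reactive layer has priority
    return action if action else planning_layer(percept)

model = {}
print(control({"obstacle_ahead": True}, model))   # brake
print(control({"obstacle_ahead": False}, model))  # follow_route
```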
Just a piece of software, but...
It senses its environment.
It understands what it senses.
It changes the environment based on its understanding.
Understand?
Inference is key.
It can apply problem-solving strategies.
More than "data-driven" processing.
Multiagent System
A team of agents working together.
Distributed artificial intelligence (DAI), a branch of AI, attempts to mimic a society of humans working together.
Multiagent systems (MASs), also called agent-oriented or agent-based systems, and blackboard systems are important approaches to DAI.
Blackboard systems: an artificial intelligence application based on the blackboard architectural model, in which a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state.
"The solution is the sum of its parts."
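The blackboard cycle described above can be sketched as follows; the "problem" (uppercasing a word one letter at a time) and the single knowledge source are deliberately trivial:

```python
# Sketch of a blackboard system: specialist knowledge sources extend
# a shared partial solution whenever their preconditions match the
# blackboard state.

blackboard = {"problem": "abc", "solution": ""}

def ks_next_letter(bb):
    """Knowledge source: fires while the solution is incomplete."""
    done = len(bb["solution"])
    if done < len(bb["problem"]):
        bb["solution"] += bb["problem"][done].upper()
        return True          # contributed a partial solution
    return False             # preconditions no longer match

knowledge_sources = [ks_next_letter]

# Control loop: keep invoking knowledge sources until none can fire.
while any(ks(blackboard) for ks in knowledge_sources):
    pass

print(blackboard["solution"])  # ABC
```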
A multiagent system is a system in which several interacting, intelligent agents pursue a set of individually held goals or perform a set of individual tasks.
What are the benefits of a MAS?
How do agents interact?
How do agents pursue goals and perform tasks?
Main Benefits of MAS
Can handle complex problems
(problems that are large and cannot be solved by a single hardware or software system).
Intelligence in agents can handle a variety of circumstances; well-designed agents will ensure that every circumstance is handled in an appropriate manner, even if it was not explicitly anticipated.
Can handle distributed problems
(data/information exists in different locations or at different times, or is clustered into groups requiring different processing methods or semantics).
Such problems require a distributed solution, which can be provided by agents running concurrently, each with its own thread of control.
Other Benefits of MAS
More natural intelligence.
Fast and efficient, because agents run concurrently.
Robust and reliable, because agents can take over from one another.
Scalable, by adding agents.
Granular: agents operate at an appropriate level of detail.
Ease of development, through encapsulation and reuse.
Cheaper cost.
MASs are thus well suited to the design and construction of complex, distributed software systems.
[Figure: agent levels of abstraction]
Building MAS
Key design decisions: when, how, and with whom should agents interact?
Cooperative models: several agents try to combine their efforts to accomplish as a group what the individuals cannot.
Competitive models: each agent tries to get what only some of them can have.
In either type of model, agents are generally assumed to be honest.
A further design decision is whether to work bottom-up or top-down.
Bottom-up: agents are built with sufficient capabilities (such as communication protocols) to enable them to interact effectively (start from the mechanism level).
Top-down: rules (or societal norms) are applied at the group level to define how agents should interact (start from the social level).
A MAS represents computer models of human functional roles with some interaction structure:
Hierarchical control structure: one agent is the superior of other, subordinate agents.
Peer-group relations, as in a team-based organization.
There are three models for managing agent interaction:
Contract Nets.
Cooperative Problem Solving (CPS).
Shifting Matrix Management (SMM).
Contract Net
The manager agent generates tasks and monitors their execution. The manager has agreements with contractor agents that will execute the tasks; each agent has roles that can be taken on dynamically.
(a) The manager advertises a task to other agents.
(b) Potential contractors bid for the task.
(c) The manager evaluates the bids and awards the contract to the most appropriate agent.
(d) The manager and contractor are linked by the contract and communicate privately while it is executed: the manager supplies task information, and the contractor reports progress and the final result.
The negotiation process may recur if a contractor subdivides its task and awards contracts to other agents, for which it is the manager.
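Steps (a) to (c) can be sketched as follows; the agent names, the task, and the lowest-cost award criterion are illustrative assumptions:

```python
# Sketch of the contract-net protocol: the manager announces a task,
# contractors bid, and the manager awards the contract to the best
# (here: lowest-cost) bid.

def contract_net(task, contractors):
    """contractors: dict name -> bid function (task -> cost or None)."""
    # (a) advertise the task; (b) collect bids from interested agents
    bids = {}
    for name, bid in contractors.items():
        cost = bid(task)
        if cost is not None:           # the agent chose to bid
            bids[name] = cost
    if not bids:
        return None                    # no agent was interested
    # (c) award the contract to the cheapest bidder
    return min(bids, key=bids.get)

contractors = {
    "agent1": lambda task: 10 if task == "deliver" else None,
    "agent2": lambda task: 7 if task == "deliver" else None,
    "agent3": lambda task: None,       # not interested in anything
}
print(contract_net("deliver", contractors))  # agent2
```

Step (d), the private manager-contractor communication, would follow once the award is made.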
Cooperative Problem-Solving (CPS) Framework
A top-down model with four stages.
Stage 1: recognition. Some agents recognize the potential for cooperation with an agent that is seeking assistance, possibly because it has a goal it cannot achieve in isolation.
Stage 2: team formation. An agent that recognized the potential for cooperative action at Stage 1 solicits further assistance. If successful, this stage ends with a group having a joint commitment to collective action.
Stage 3: plan formation. The agents attempt to negotiate a joint plan that they believe will achieve the desired goal.
Stage 4: team action. The newly agreed plan of joint action is executed. By adhering to an agreed social convention, the agents maintain a close-knit relationship throughout.
Shifting Matrix Management (SMM)
Inspired by Mintzberg's model of organizational structures (in the diagram, the nodes represent people).
Allows multiple lines of authority, reflecting the multiple functions expected of a flexible workforce.
Lines of authority are regarded as temporary, typically changing as different projects start and finish.
Stage 1: goal selection. Agents select the tasks they want to perform, based on their initial mental states.
Stage 2: individual planning. Agents select a way to achieve their goals. In particular, an agent that recognizes that its intended goal is common to other agents must decide whether to pursue the goal in isolation or in collaboration with other agents.
Stage 3: team formation. Agents that are seeking cooperation attempt to organize themselves into a team. The establishment of a team requires an agreed code of conduct, a basis for sharing resources, and a common measure of performance.
Stage 4: team planning. The workload is distributed among team members.
Stage 5: team action. The team plan is executed by the members under the team's code of conduct.
Stage 6: shifting. The last stage of the cooperation process, which marks the disbanding of the team, involves shifting agents' goals, positions, and roles. Each agent updates its probability of team-working with each other agent, depending on whether or not the completed team-working experience with that agent was successful. This updated knowledge is important, as iteration through the six stages takes place until all the tasks are accomplished.
Agent Communications
How do agents communicate with each other?
Synchronous communication is rather like a conversation — after sending a message, the
sending agent awaits a reply from the recipient.
Asynchronous communication is more akin to sending an email or a letter — although
you might expect a reply at some future time, you do not expect the recipient to read or act
upon the message immediately.
Message structure
The structure is standard between agents, regardless of the domain in which they operate.
A message should be understandable by all agents regardless of their domain, even if they do not understand its content.
Thus, the structure needs to be standardized so that domain-specific content is self-contained within it. Only specialist agents need to understand the content, but all agents need to be able to understand the form of the message.
Structures for achieving this are called agent communication languages (ACLs), such as the Knowledge Query and Manipulation Language (KQML) and FIPA-ACL, by the Foundation for Intelligent Physical Agents (FIPA).
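A message in this style separates a standardized envelope from the domain-specific content. The sketch below models the fields as a Python dict; the field names follow the KQML/FIPA-ACL convention, but the values are made up:

```python
# Sketch of an agent-communication-language message: the envelope
# fields are standardized (every agent can parse them), while the
# content is domain-specific and only specialist agents understand it.

message = {
    "performative": "ask-one",        # the intended speech act
    "sender": "stock-client",
    "receiver": "stock-server",
    "language": "prolog",             # language of the content field
    "ontology": "nyse-ticks",         # vocabulary the content assumes
    "reply-with": "q1",
    "content": "price(ibm, Price)",   # domain-specific payload
}

def envelope(msg):
    """Any agent can read the envelope without parsing the content."""
    return {k: v for k, v in msg.items() if k != "content"}

print(envelope(message)["performative"])  # ask-one
```

An agent that does not understand Prolog or the stock ontology can still route, acknowledge, or reply to this message using the envelope alone.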
Q&A