Chapter 19
Intelligent Agents
Chapter 19 Contents (1)
Intelligence
Autonomy
Ability to Learn
Other Agent Properties
Reactive Agents
Utility-Based Agents
Utility Functions
Interface Agents
Mobile Agents
Chapter 19 Contents (2)
Information Agents
Multiagent Systems
Subsumption Architecture
BDI Architectures
Horizontal and Vertical Architectures
Accessibility
Learning Agents
Robotic Agents
Braitenberg Vehicles
Intelligence
An agent is a tool that carries out tasks on
behalf of a human user.
An intelligent agent possesses domain
knowledge and the ability to use that
knowledge to solve its problems more
efficiently.
Intelligent agents are often able to learn,
and have other properties that we will look
at in the following slides.
Autonomy
Autonomy is the ability to act independently of
the human user’s instructions.
Hence, a buying agent that needs to make a
quick decision about an increased bid can do
so autonomously, without wasting time
consulting a human.
Autonomy is an important feature of many
intelligent agents, but is not seen in many
other Artificial Intelligence techniques.
Ability to Learn
Many agents can learn from their
environments and from their success or
failure at solving problems.
Agents can learn from a user or from other
agents.
When a human tells an agent it has solved
a problem poorly, it can learn from this
feedback and avoid making the same
mistakes in the future.
Other Agent Properties
Co-Operation: interaction between agents.
Versatility: ability to carry out a range of
different tasks.
Benevolence: helpfulness to other agents
and people.
Veracity: tendency to tell the truth.
Mobility: ability to move about the
Internet or another network (or the real
world).
Reactive Agents
Also known as reflex agents.
A reactive agent uses a production system
to determine what action to carry out,
based on its current inputs (see the
sketch below).
Example: a spam mail filter.
Reactive agents do not perform well when
the environment changes, and do not deal
well with unexpected events.
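As a minimal sketch of the idea (in Python; the keywords,
actions, and rule order are invented for illustration, not
taken from the chapter), a reflex agent's production system
can be a list of condition -> action rules in which the
first matching rule fires:

    # Production rules for a toy spam filter: each rule maps a
    # condition on the current input to an action.
    RULES = [
        (lambda msg: "win a prize" in msg.lower(), "move_to_spam"),
        (lambda msg: "click here" in msg.lower(), "move_to_spam"),
        (lambda msg: True, "keep_in_inbox"),  # default rule fires last
    ]

    def react(message):
        """Return the action of the first rule whose condition matches."""
        for condition, action in RULES:
            if condition(message):
                return action

    print(react("WIN A PRIZE today!"))    # -> move_to_spam
    print(react("Meeting moved to 3pm"))  # -> keep_in_inbox

The agent consults only its current input: it keeps no memory
and no model of the environment, which is why it copes badly
with change and surprises.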
Utility-Based Agents
Agents that attempt to achieve some
specified goal, usually using search or
planning methods.
An agent, for example, might have the goal
of finding interesting web pages.
The agent would have various actions it
could perform such as fetching web pages
and examining them.
Utility Functions (1)
More sophisticated goal-based agents
have utility functions to decide which
goals to accept.
The agent is always attempting both to
achieve its goals and to maximise some
utility function.
Hence, the web researching agent would
have a utility function that measured how
interesting web pages were, and would
attempt to find the most interesting page it
could.
Utility Functions (2)
A utility function maps the set of states to
the set of real numbers.
Hence, an agent with a utility function can
determine how “happy” it is in any given
state.
Example: Static board evaluators used in
playing games.
A rational agent is one that will always try
to maximise its utility function (see the
sketch below).
This is true even if doing so results in
seemingly bizarre behaviour.
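A minimal sketch of a rational, utility-maximising choice
(the pages, scores, and function names below are invented
for illustration):

    # A utility function maps states to real numbers; a rational
    # agent picks the action whose resulting state has the highest
    # utility.
    def utility(state):
        scores = {"sports_page": 0.3, "ai_page": 0.9, "ads_page": 0.1}
        return scores.get(state, 0.0)

    def result(state, action):
        """Toy transition model: each action leads directly to a page."""
        return action

    def rational_choice(state, actions):
        return max(actions, key=lambda a: utility(result(state, a)))

    print(rational_choice("start", ["sports_page", "ai_page", "ads_page"]))
    # -> ai_page

A static board evaluator is exactly this kind of function: it
maps a board position (a state) to a number saying how good
the position is.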
Interface Agents
An interface agent is a personal
assistant.
Example: a tool used to help a user
learn to use a new software package.
Interface agents observe a user’s
behaviour and make recommendations
accordingly.
Mobile Agents
Mobile agents can move from one location
to another.
This can mean physical locations (for
robots) or network locations.
A computer virus is a kind of mobile agent.
Viruses are usually autonomous but not
intelligent.
Mobile agents are efficient, but can pose a
severe security risk.
Mobile agents can be combined to produce
a distributed computing architecture.
Information Agents
Also known as Internet agents.
Information agents gather information
from the Internet (or other sources of data).
Can be static or mobile.
Can be taught by example: “find me more
information like this”.
Information agents need to be
sophisticated to deal with the “dirty”
nature of much of the data on the Internet.
Multiagent Systems (1)
A multiagent system consists of a number
of agents.
Each agent has incomplete information
and cannot solve the problem on its own.
By cooperating, all the agents together can
solve the problem.
Similar to the way in which ant colonies
work.
Multiagent Systems (2)
Agents in multi-agent systems usually
have the ability to communicate and
collaborate with each other.
Learning multi-agent systems can be
developed, for example to control the
individual limbs of a robot.
An agent team is a group of agents that
cooperate to achieve some common goal –
such as arranging the various components
of a trip: flight, train, taxi, hotel, etc.
Subsumption Architecture (1)
Architecture for intelligent agents –
invented by Brooks in 1985.
Consists of a set of inputs, outputs, and
modules arranged in layers.
Each module is an AFSM (Augmented
Finite State Machine), based on
production rules of the form input ->
action.
Subsumption Architecture (2)
The rules are situated action rules, as they
determine what the agent will do in given
situations.
Such an agent is said to be situated.
An AFSM triggers when its input exceeds a
threshold.
The layers in the architecture act
asynchronously, but can affect each other:
one layer can suppress the outputs of
some layers, while taking into account
output from others (see the sketch below).
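The suppression idea can be sketched with two invented
behaviour layers; a real subsumption controller is a network
of AFSMs running asynchronously, so this sequential sketch
only approximates the control flow:

    # Two-layer subsumption-style controller. Layer names, sensor
    # fields, and thresholds are illustrative assumptions.
    def avoid_obstacles(sensors):
        """Reflexive collision avoidance; triggers past a threshold."""
        if sensors["obstacle_distance"] < 0.5:
            return "turn_away"
        return None  # layer stays quiet

    def wander(sensors):
        """Default exploratory behaviour."""
        return "move_forward"

    def control(sensors):
        """A triggered higher-priority layer suppresses those below it."""
        for layer in (avoid_obstacles, wander):  # highest priority first
            action = layer(sensors)
            if action is not None:
                return action

    print(control({"obstacle_distance": 0.2}))  # -> turn_away
    print(control({"obstacle_distance": 2.0}))  # -> move_forward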
BDI Architectures
Belief-Desire-Intention architectures.
Beliefs: statements about the environment.
Desires: goals.
Intentions: plans for how to achieve the goals.
The agent considers the options available,
and commits to one.
This option becomes the agent’s intention.
Agents can be bold (carry out their
intentions no matter what) or cautious
(constantly reassess their intentions) –
see the sketch below.
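A minimal sketch of the deliberation cycle, assuming
dict-based beliefs, a list of desires, and invented goal and
plan names; the bold agent runs its plan to completion, while
the cautious one re-checks its commitment after every step:

    def deliberate(beliefs, desires):
        """Commit to one achievable desire, with a plan for it."""
        if "take_trip" in desires and beliefs.get("funds", 0) >= 100:
            return "take_trip", ["search_fares", "select_fare", "pay"]
        return None, []

    def still_valid(beliefs, desires, goal):
        return goal in desires and beliefs.get("funds", 0) >= 100

    def run(beliefs, desires, cautious=False):
        goal, plan = deliberate(beliefs, desires)  # this is the intention
        for step in plan:
            print("executing", step)
            beliefs["funds"] -= 60                 # acting changes beliefs
            if cautious and not still_valid(beliefs, desires, goal):
                print("reconsidered: intention dropped")
                return

    run({"funds": 250}, ["take_trip"])                 # bold: all 3 steps
    run({"funds": 150}, ["take_trip"], cautious=True)  # stops after step 1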
Horizontal and Vertical Architectures
The subsumption architecture and
TouringMachines are examples of
horizontal architectures:
Layers act in parallel and all contribute to an
overall output.
InteRRaP is an example of a vertically
layered architecture:
Outputs are passed from one layer to the
next, until the last layer produces the final
output (see the sketch below).
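The difference can be sketched in a few lines (the layer
functions are invented; real layered architectures pass
richer messages than numbers):

    layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

    def horizontal(x):
        """Every layer sees the raw input; their outputs are combined."""
        return sum(layer(x) for layer in layers)  # one way to merge votes

    def vertical(x):
        """Each layer transforms the previous layer's output."""
        for layer in layers:
            x = layer(x)
        return x

    print(horizontal(5))  # (5+1) + (5*2) + (5-3) = 18
    print(vertical(5))    # ((5+1)*2) - 3 = 9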
Accessibility
Some agents operate in accessible environments, where
all relevant facts are available to the agent.
Most agents must operate in inaccessible environments
where some information is unavailable.
For example, chess playing is accessible, poker playing
is inaccessible.
Additionally, environments can be deterministic or
stochastic.
Markov Decision Processes are useful for dealing with
stochastic, accessible environments (see the sketch below).
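As a sketch of why MDPs fit this setting, here is a tiny
two-state MDP solved by value iteration; the states,
transition probabilities, and rewards are invented for
illustration:

    # T[state][action] = list of (probability, next_state);
    # R[state] = reward for being in that state.
    T = {
        "cool": {"fast": [(0.5, "cool"), (0.5, "hot")],
                 "slow": [(1.0, "cool")]},
        "hot":  {"slow": [(1.0, "cool")]},
    }
    R = {"cool": 2.0, "hot": -1.0}
    GAMMA = 0.9  # discount factor

    V = {s: 0.0 for s in T}
    for _ in range(100):  # value iteration
        V = {s: R[s] + GAMMA * max(sum(p * V[s2] for p, s2 in T[s][a])
                                   for a in T[s])
             for s in T}

    print({s: round(v, 2) for s, v in V.items()})

The environment is accessible (the agent always knows which
state it is in) but stochastic (the "fast" action may land in
either state): exactly the MDP setting.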
Learning Agents
Agents learn using mechanisms such as
neural networks and genetic algorithms.
Learning enables an agent to solve
problems it has not previously faced, and
to learn from past experience.
Multi-agent learning can produce much
more impressive results than learning by
a single agent.
Such learning can be centralized or
decentralized – agents learn individually,
or contribute to the learning of the whole
group.
Robotic Agents
Unlike software agents, robotic agents
exist in the real world.
Robots operate in a stochastic,
inaccessible environment, and must also
be able to deal with large numbers of other
agents (such as humans) and other
complicating factors.
It is important for robotic agents to deal
with change and uncertainty well.
Braitenberg Vehicles
Simple robotic agents that can exhibit
complex behavior.
There are 14 classes of vehicles.
Class 1: simply moves faster the more
light there is.
Class 2: two configurations – one moves
towards light, the other away (see the
sketch below).
These can be thought of as being bold and
timid.
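A sketch of the two class-2 wirings (the sensor readings are
illustrative): each vehicle has a left and a right light
sensor driving two wheel motors, and the behaviour flips
depending on whether the wiring is straight or crossed:

    def wheel_speeds(left_sensor, right_sensor, crossed):
        """Return (left_wheel, right_wheel) speeds for a class-2 vehicle."""
        if crossed:
            # crossed wiring: the brighter side drives the far wheel,
            # so the vehicle turns towards the light (bold)
            return right_sensor, left_sensor
        # straight wiring: the brighter side drives the near wheel,
        # so the vehicle turns away from the light (timid)
        return left_sensor, right_sensor

    # light is stronger on the left (0.9 vs 0.2)
    print(wheel_speeds(0.9, 0.2, crossed=False))  # veers right, away
    print(wheel_speeds(0.9, 0.2, crossed=True))   # veers left, towards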