Intelligent Agent Technology and Application


Course overview and what is intelligent agent
©Intelligent Agent Technology and Application, 2006, Ai Lab NJU
Before we start
 Assoc. Prof., Dr. Gao Yang
 [email protected]
 AI Lab, CS Dept., NJU
 83686586 (O)
 Room 403-A, Mengminwei Building
 Http://cs.nju.edu.cn/gaoy
 Courseware can be found on my homepage.
Motivation
 Agents: the next paradigm for software?
 Agent-oriented taking over from object-oriented?
 Agents crucial for open distributed systems?
 Agents the most natural entity in e-business?
 Agents and peer-to-peer technology inseparable?
 What is the killer application of agent technology?
What will you learn from this course?
 Upon completing this course a student should:
• Know what an agent and an agent system is.
• Have a good overview of important agent issues:
  • Agent negotiation, coordination and communication.
  • Micro and macro agent architectures.
  • Agent learning.
  • Agent model and theory.
  • Agent communication.
  • Agent application.
• Get valuable hands-on experience in developing intelligent systems.
Lectures
 2006.9.18   Course overview and what is intelligent agent
 2006.9.25   Negotiation in MAS (i)
 2006.10.10  Negotiation in MAS (ii)
 2006.10.17  Agent learning (i)
 2006.10.24  Agent learning (ii)
 2006.10.31  Agent communication language (i)
 2006.11.7   Agent communication language (ii)
 2006.11.14  RoboCup
 2006.11.21  Agent architectures (micro) (i)
 2006.11.28  Agent architectures (micro) (ii)
 2006.12.5   Agent model and theory (i)
 2006.12.12  Agent model and theory (ii)
 2006.12.19  Mobile agent (i)
 2006.12.26  Mobile agent (ii)
 2007.1.2    Mobile agent (iii)
 2007.1.9    Summary of this course

Other issues: architectures of multi-agent systems (macro), coordination in MAS, agent-oriented software engineering, agent-oriented programming, agents and P2P computing, agents and Grid computing, classification of agents and their applications.
Recommended books
 G. Weiss, editor. "Multiagent Systems". MIT Press, 1999.
 J. Ferber. "Multi-Agent Systems". Addison-Wesley, 1999.
 G. M. P. O'Hare and N. R. Jennings, editors. "Foundations of Distributed AI". Wiley Interscience, 1996.
 M. Singh and M. Huhns. "Readings in Agents". Morgan Kaufmann Publishers, 1997.
 Shi Zhong-zhi. "Intelligent Agent and Its Application" (in Chinese). Science Press, 2000.
 Michael Wooldridge. "An Introduction to Multiagent Systems". John Wiley & Sons, 2002. (Chinese translation by Shi Chun-yi et al.)
 And other selected papers and websites.
Assessment
 Lecture            10%
 Paper reading      10%
 Experiments        20%
 Final exam (open)  60%
What is an intelligent agent?
 Fields that inspired the agent field:
– Artificial Intelligence: agent intelligence and the micro-agent
– Software Engineering: the agent as an abstracted entity
– Game Theory and Economics: agent negotiation
– Distributed Systems and Computer Networks: agent architecture, MAS, coordination
 There are two kinds of definitions of agent:
– Often quite narrow
– Extremely general
General definitions
 American Heritage Dictionary
– "... One that acts or has the power or authority to act ... or represent another."
 Russell and Norvig
– "An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors."
 Maes, Pattie
– "Autonomous agents are computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed."
Agent: more specific definitions
 Smith, Cypher and Spohrer
– "Let us define an agent as a persistent software entity dedicated to a specific purpose. 'Persistent' distinguishes agents from subroutines; agents have their own ideas about how to accomplish tasks, their own agendas. 'Special purpose' distinguishes them from multifunction applications; agents are typically much smaller."
 Hayes-Roth
– "Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions."
Agent: industrial definitions
 IBM
– "Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in doing so, employ some knowledge or representation of the user's goals or desires."
Agent: weak notions
 Wooldridge and Jennings
– "An agent is a hardware or (more commonly) software-based computer system that enjoys the following properties:
 Autonomy: agents operate without the direct intervention of humans or others, and have some kind of control over their actions and internal state;
 Pro-activeness: agents do not simply act in response to their environment; they are able to exhibit goal-directed behavior by taking the initiative;
 Reactivity: agents perceive their environment and respond in a timely fashion to changes that occur in it;
 Social ability: agents interact with other agents (and possibly humans) via some kind of agent-communication language."
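As a small illustration (not from the lecture), the four weak-notion properties can be mapped onto an agent skeleton; the class and method names below are assumptions, not part of any standard agent API.

# Illustrative Python sketch only: the weak-notion properties as a skeleton.
# All names are assumptions made for this example.
import abc

class WeakAgent(abc.ABC):
    @abc.abstractmethod
    def perceive(self, environment):
        """Reactivity: observe changes in the environment."""

    @abc.abstractmethod
    def deliberate(self):
        """Pro-activeness: choose actions that pursue the agent's own goals."""

    @abc.abstractmethod
    def act(self, environment):
        """Autonomy: decide and act without direct outside intervention."""

    @abc.abstractmethod
    def send(self, message, recipient):
        """Social ability: communicate via an agent communication language."""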
Agent: strong notions
 Wooldridge and Jennings
– The weak notion, plus:
 Mobility: the ability of an agent to move around a network;
 Veracity: an agent will not knowingly communicate false information;
 Benevolence: agents do not have conflicting goals and always try to do what is asked of them;
 Rationality: an agent will act in order to achieve its goals and will not act in such a way as to prevent its goals being achieved.
Summary of agent definitions
 An agent acts on behalf of a user or another entity.
 An agent has the weak agent characteristics (autonomy, pro-activeness, reactivity, social ability).
 An agent may have the strong agent characteristics (mobility, veracity, benevolence, rationality).
Dear child gets many names…
 Many synonyms of the term "intelligent agent":
– Robots
– Software agents or softbots
– Knowbots
– Taskbots
– Userbots
– ...
Why the buzz around the agents?
 Lack of a programming paradigm for distributed systems.
 Tries to address the problems of the "closed world" assumption in object-orientation.
 "Agent" is a frequently used term to describe software in general (due to its vague definition).
 Massive media hype in the era of the dot-coms.
Autonomy is the key feature of agent
 Examples
– Thermostat
 Control / regulator
 Any control system
– Software daemon
 Print server
 HTTP server
 Most software daemons
 [Figure: agent–environment loop — the agent sends action output to the environment and receives sensor input from it.]
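A minimal sketch (illustrative only, not lecture code) of such a closed perceive–act loop, using a thermostat as the agent; the names and numbers below are assumptions.

# Illustrative thermostat agent: one pass around the agent-environment loop.
# All names and thresholds are assumptions for this example.
class ThermostatAgent:
    def __init__(self, setpoint=20.0):
        self.setpoint = setpoint

    def perceive(self, environment):
        # Sensor input: read the current temperature from the environment.
        return environment["temperature"]

    def act(self, temperature):
        # Action output: switch the heater based on the local decision rule.
        return "heater_on" if temperature < self.setpoint else "heater_off"

def run_step(agent, environment):
    # One cycle of the loop shown in the figure above.
    return agent.act(agent.perceive(environment))

print(run_step(ThermostatAgent(), {"temperature": 18.5}))  # -> heater_on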
Type of environment
 An agent will not have complete control over its environment, but has partial control, in that it can influence it.
– Scientific computing or MIS in traditional computing.
 Classification of environment properties [Russell 1995, p49]:
– Accessible vs. inaccessible
– Deterministic vs. non-deterministic
– Episodic vs. non-episodic
– Static vs. dynamic
– Discrete vs. continuous
Accessible vs. inaccessible
 Accessible vs. inaccessible
– An accessible environment is one in which the agent can obtain complete, accurate, up-to-date information about the environment's state (also: fully observable vs. partially observable).
– Accessible: the sensors give the complete state of the environment.
– In an accessible environment, the agent need not keep track of the world through its internal state.
Deterministic vs. non-deterministic
 Deterministic vs. non-deterministic
– A deterministic environment is one in which any action has a single guaranteed effect; there is no uncertainty about the state that will result from performing an action.
– That is, the next state of the environment is completely determined by the current state and the action selected by the agent.
– Non-deterministic: a probabilistic model may be available.
Episodic vs. non-episodic
 Episodic vs. non-episodic
– In an episodic environment, the performance of an agent depends on a number of discrete episodes, with no link between the agent's performance in different episodes; it need not reason about the interaction between the current and future episodes (such as a game of chess).
– In an episodic environment, the agent does not need to remember the past and does not have to think ahead to the next episode.
Static vs. dynamic
 Static vs. dynamic
– A static environment is one that can be assumed to remain unchanged except by the performance of actions by the agent.
– A dynamic environment is one that has other processes operating on it, and hence changes in ways beyond the agent's control.
Discrete vs. continuous
 Discrete vs. continuous
– An environment is discrete if there is a fixed, finite number of actions and percepts in it.
Why classify environments
 The type of environment largely determines the design of the agent.
 Classifying the environment can help guide the agent's design process (like systems analysis in software engineering).
 The most complex general class of environments:
– inaccessible, non-deterministic, non-episodic, dynamic, and continuous.
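As a small illustration (not from the slides), the five dichotomies can be recorded as a simple profile and checked against the hardest class; the class name, field names, and example values below are assumptions.

# Illustrative sketch: Russell & Norvig's five environment dichotomies as a
# profile record. All names and example values are assumptions.
from dataclasses import dataclass

@dataclass
class EnvironmentProfile:
    accessible: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool

    def is_hardest_class(self) -> bool:
        # Most complex class: inaccessible, non-deterministic, non-episodic,
        # dynamic, and continuous (i.e. all five flags are False).
        return not any([self.accessible, self.deterministic, self.episodic,
                        self.static, self.discrete])

example_env = EnvironmentProfile(accessible=False, deterministic=False,
                                 episodic=False, static=False, discrete=False)
print(example_env.is_hardest_class())  # -> True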
Environment discussion: Gripper
 Gripper is a standard example of a probabilistic planning model:
– The robot has three possible actions: paint (P), dry (W) and pickup (U).
– The state has four binary features: block painted, gripper dry, holding block, gripper clean.
– Initial state:
– Goal state:
Environment discussion: Gripper (continued)
 [Figure: Gripper state-transition diagram — states s1 to s12 connected by arcs labeled (action, probability, reward), e.g. (U, 0.95, 1), (P, 0.9, -0.1), (W, 0.2, -0.1).]
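A minimal sketch (not lecture code) of how such a probabilistic transition model can be written down. The state names and label format follow the figure, but the particular arcs and numbers chosen below are assumptions for illustration.

# Illustrative Gripper-style probabilistic transition model.
# Keys: (state, action) -> list of (next_state, probability, reward).
# The specific entries are assumptions, not the exact arcs of the figure.
import random

transitions = {
    ("s1", "U"): [("s2", 0.95, 1.0), ("s1", 0.05, -0.1)],   # pickup usually succeeds
    ("s1", "P"): [("s3", 0.9, -0.1), ("s1", 0.1, -1.0)],    # paint may fail
    ("s3", "W"): [("s4", 0.8, -0.1), ("s3", 0.2, -0.1)],    # dry is stochastic
}

def step(state, action):
    # Sample the next state and reward according to the probabilistic model.
    outcomes = transitions[(state, action)]
    next_states, probs, rewards = zip(*outcomes)
    i = random.choices(range(len(outcomes)), weights=probs)[0]
    return next_states[i], rewards[i]

print(step("s1", "U"))  # e.g. ('s2', 1.0)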
Intelligent agent vs. agent
 An intelligent agent is one that is capable of flexible autonomous action in order to meet its design objectives, where flexibility means three things:
– Pro-activeness: the ability to exhibit goal-directed behavior by taking the initiative.
– Reactivity: the ability to perceive the environment and respond in a timely fashion to changes that occur in it.
– Social ability: the ability to interact with other agents (including humans).
Pro-activeness
 Pro-activeness
– In a functional system, pre-conditions and post-conditions can be used to realize goal-directed behavior.
– But in a non-functional (dynamic) system, the goal must remain valid at least until the action completes.
– An agent blindly executing a procedure without regard to whether the assumptions underpinning the procedure are valid is a poor strategy (see the sketch below), because:
 the environment is observed incompletely;
 the environment is non-deterministic;
 other agents can affect the environment.
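A minimal sketch (illustrative only, not from the slides) of that point: the agent re-checks that its goal is still valid before each step rather than blindly running the procedure to completion; all names are assumptions.

# Illustrative sketch: re-check the goal before every step instead of
# blindly executing the whole procedure. All names are assumptions.
def execute_plan(plan, goal_still_valid, perform):
    for step in plan:
        if not goal_still_valid():
            # The world changed (incompletely observed, non-deterministic,
            # or altered by other agents): abandon the procedure.
            return "aborted"
        perform(step)
    return "done"

# Hypothetical usage: the goal check would consult the current environment.
result = execute_plan(
    plan=["go_to_printer", "fetch_paper", "return"],
    goal_still_valid=lambda: True,            # e.g. the user still wants the paper
    perform=lambda step: print("doing", step),
)
print(result)  # -> done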
Reactivity
 Reactivity
– An agent must be responsive to events that occur in its environment.
– Building a system that achieves an effective balance between goal-directed and reactive behavior is hard.
Social ability
 Social ability
– Agents must negotiate and cooperate with others.
Agent vs. object
 Object
– Objects are defined as computational entities that encapsulate some state, are able to perform actions, or methods, on this state, and communicate by message passing:
 they are computational entities;
 they encapsulate some internal state;
 they are able to perform actions, or methods, to change this state;
 they communicate by message passing.
Agent and object
 Differences between agents and objects
– An object can be thought of as exhibiting autonomy over its state: it has control over it. But an object does not exhibit control over its behavior.
– Other objects invoke an object's public methods; an agent can only request other agents to perform actions.
– "Objects do it for free, agents do it for money."
– (Can agents be implemented using object-oriented technology? Think about it.)
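A small illustrative sketch (not from the lecture) of the "do it for money" point: an object's method simply runs when invoked, while an agent decides for itself whether to honor a request; all names are assumptions.

# Illustrative contrast between invoking an object and requesting an agent.
# All names are assumptions for this example.
class PrinterObject:
    def print_document(self, doc):
        # An object's public method always runs when invoked.
        return f"printed {doc}"

class PrinterAgent:
    def __init__(self):
        self.busy = False

    def request_print(self, doc):
        # An agent decides autonomously whether to satisfy the request.
        if self.busy:
            return "request refused"
        return f"agreed to print {doc}"

print(PrinterObject().print_document("report.pdf"))   # object: just does it
print(PrinterAgent().request_print("report.pdf"))     # agent: may refuse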
Agent and object
– The standard object model has nothing whatsoever to say about how to build systems that integrate reactive, pro-active and social behavior.
– Each agent has its own thread of control, whereas in the standard object model there is a single thread of control in the system.
– (An agent is similar to an active object.)
– In summary:
 agents embody a stronger notion of autonomy than objects;
 agents are capable of flexible behavior;
 a multi-agent system is inherently multi-threaded.
Agent and expert system
 Expert system
– An expert system is one that is capable of solving problems or giving advice in some knowledge-rich domain.
 The most important distinctions:
– An expert system is disembodied, rather than situated.
– It does not interact with any environment; it gives feedback or advice to a third party.
– It is not required to interact with other agents.
Example of agents
[Figure: an example agent system — mobile customers served by a network of interconnected agents (peers).]
Distributed Artificial Intelligence (DAI)
 DAI is a sub-field of AI.
 DAI is concerned with problem solving where agents solve (sub-)tasks (macro level).
 Main areas of DAI:
– Distributed problem solving (DPS)
 Centralized control and distributed data (massively parallel processing)
– Multi-agent systems (MAS)
 Distributed control and distributed data (coordination crucial)
 Some history
DAI is concerned with…
 Agent granularity (agent size)
 Agent heterogeneity (agent type)
 Methods of distributing control (among agents)
 Communication possibilities
 MAS
– Coarse agent granularity
– High-level communication
 [Figure: Distributed AI at the intersection of Distributed Computing and Artificial Intelligence, comprising Distributed Problem Solving and Multi-Agent Systems.]
DAI is not concerned with…
 Issues of coordination of concurrent processes at the problem-solving and representational level.
 Parallel computer architectures, parallel programming languages, or distributed operating systems.
 No semaphores, monitors, threads, etc.
 Higher semantics of communication (speech-act level).
Motivation behind MAS
 To solve problems too large for a centralized agent
– e.g. web crawling
 To allow interconnection and interoperation of multiple legacy systems
– e.g. financial systems
 To provide a solution to inherently distributed systems
 To provide a solution where expertise is distributed
 To provide conceptual clarity and simplicity of design
Benefits of MAS
 Faster problem solving
 Decreasing communication
– Higher semantics of communication (speech-act level)
 Flexibility
 Increasing reliability
Heterogeneity degrees in MAS
 Low
– Identical agents, different resources
 Medium
– Different agent expertise
 High
– Share only an interaction protocol (e.g. FIPA or KQML)
Cooperative and self-interested MAS
 Cooperative
– Agents designed by interdependent designers
– Agents act for the increased good of the system (i.e. the MAS)
– Concerned with increasing the system's performance, not the individual agents'
 Self-interested
– Agents designed by independent designers
– Agents have their own agenda and motivation
– Concerned with the benefit of each agent ('individualistic')
– Is the latter more realistic in an Internet setting?
Our categories of MAS
 Cooperation
– All agents have a common objective.
 Competitive
– Each agent has a different objective, and the objectives are contradictory.
 Semi-competitive
– Each agent has a different, conflicting objective, but the total system has one explicit (or implicit) objective.
 The first category is now known as TEAMWORK.
Distributed AI perspectives
[Figure: mind-map of Distributed AI perspectives — agent theory; languages; reactive, deliberative and hybrid architectures; specific design approaches; cooperation; coordination; negotiation; planning; coherent group behavior; applications; testbeds; methods; design; tools; analysis.]
Our Thinking in MAS
 Individual benefit vs. collective benefit
 No need for central control
 Social intelligence vs. individual intelligence
 Self-organizing systems
– Self-forming, self-evolving
 Intelligence is emergent, not innate
 ...
Conclusions of lecture
 Agents have general, weak and strong definitions.
 Classification of the environment.
 Differences between agent and intelligent agent, agent and object, agent and expert system.
 Multi-agent systems are the macro-level issues of agent systems.
Coursework
 1. Give other examples of agents (not necessarily intelligent) that you know of. For each, define as precisely as possible:
– (a) the environment that the agent occupies, the states that this environment can be in, and the type of environment;
– (b) the action repertoire available to the agent, and any pre-conditions associated with these actions;
– (c) the goal, or design objectives, of the agent – what it is intended to achieve.
Coursework
 2. If a traffic light (together with its control system) is considered as an intelligent agent, which of the agent properties should it employ? Illustrate your answer with examples.
Coursework
 3. Determine the type of each environment:

                 Chess | Poker | Minesweeper | E-shopping
Accessible?
Deterministic?
Episodic?
Static?
Discrete?
References
 [Russell 1995] S. Russell and P. Norvig. "Artificial Intelligence: A Modern Approach". Prentice-Hall, 1995.