Lecture 1: Introduction
SIF8072
Distributed Artificial Intelligence and Intelligent Agents
http://www.idi.ntnu.no/~agent/
Lecturer: Sobah Abbas Petersen
Email: [email protected]
Lecture Outline
1. Practical Information
2. Definition of an Agent
3. Distributed Artificial Intelligence and Multiagent Systems
4. Agent Typology
5. Summary and references
Practical Information - I
Course-related information:
Web-page – http://www.idi.ntnu.no/~agent/
Lectures: Thursdays, 15:00-17:00, room R4
Web-page – http://www.idi.ntnu.no/~agent/lectures/
Exercises: Mondays, 17:00-19:00, room R4
Web-page – http://www.idi.ntnu.no/~agent/exercises/
Written Exam: Wednesday, 14th May
Web-page – http://www.idi.ntnu.no/~agent/exam/ (past exam papers)
Practical Information - II
Curriculum:
• ”An Introduction to MultiAgent Systems” by Michael Wooldridge
available from TAPIR, price: NOK 375
(http://www.csc.liv.ac.uk/~mjw/pubs/imas/)
• Additional Articles
– List available from http://www.idi.ntnu.no/~agent/curriculum/
Exercises and Project:
4 mandatory exercises and 1 mandatory project
Questions regarding Exercises and Project:
Teaching Assistant: Peep Kungas
Email: [email protected]
Lecture Plan
No.  Date        Lecture
1    16.01.2003  Introduction, Overview and Technology
2    23.01.2003  Multi-agent Interactions
3    30.01.2003  Negotiation
4    06.02.2003  Coordination
5    13.02.2003  Agent Communication Languages
6    20.02.2003  Agent Architectures
7    27.02.2003  Multi-agent Systems Architectures
8    06.03.2003  Agent Theory
9    13.03.2003  Mobile Agents
10   20.03.2003  Agent-oriented Software Engineering
11   27.03.2003  Agent-mediated Electronic Commerce
12   03.04.2003  Summary
Example 1
”When a space probe makes its long flight from
Earth to outer planets, a ground crew is usually
required to continue to track its progress and
decide how to deal with unexpected eventualities.
This is costly and, if decisions are required
quickly, it is simply not practical. For these
reasons, organisations like NASA are seriously
investigating the possibility of making the probes
more autonomous – giving them richer decision
making capabilities and responsibilities.”
Example 2
”Searching the Internet for the answer to a specific
query can be a long and tedious process. So, why
not allow a computer program – an agent – to do
searches for us? The agent would typically be
given a query that would require synthesizing
information from various different internet
information sources.”
Example 3
”After a wet and cold winter, you are in need of a
last minute holiday somewhere warm. After
specifying your requirements to your Personal
Digital Assistant (PDA), it converses with a
number of different web sites which sell services
such as flights and hotel rooms. After hard
negotiation on your behalf with a range of sites,
your PDA presents you with a package holiday.”
Overview 1
• Five ongoing trends have marked the history of computing:
  1. Ubiquity
     • Reduction in the cost of computing capability
  2. Interconnection
     • Computer systems are networked into large distributed systems
  3. Intelligence
     • The complexity of tasks that can be automated and delegated to computers
  4. Delegation
     • The judgement of computer systems is frequently accepted
  5. Human-orientation
     • Use of concepts and metaphors that reflect how we understand the world
Overview 2
• These trends present major challenges to software developers, e.g.:
  – Delegation – act independently.
  – Intelligence – act in a way that represents our best interests while interacting with other humans or systems.
We need systems that can act effectively on our behalf.
• Systems must have the ability to cooperate and reach agreements with other systems.
New field: Multi-agent Systems
Overview 3
• An agent is a system that is capable of
independent action on behalf of its user or owner.
• A multi-agent system is one that consists of a
number of agents which interact with one another.
• In order to interact successfully, agents need the ability to cooperate, coordinate and negotiate.
Two Key Problems
1. How do we build agents that are capable of independent,
autonomous action in order to successfully carry out the
tasks that we delegate to them? (Micro aspects)
2. How do we build agents that are capable of interacting
(cooperating, coordinating, negotiating) with other agents
in order to successfully carry out the tasks we delegate to
them? (Macro aspects)
Fields that inspired agents
• Artificial Intelligence
– Agent intelligence, micro aspects
• Software Engineering
– Agent as an abstraction
• Distributed systems and Computer Networks
– Agent architectures, multi-agent systems, coordination
There are many definitions of agents – often too narrow or
too general.
Definitions of Agents 1
American Heritage Dictionary:
”... One that acts or has the power or authority to act ... or represent another”
Russell and Norvig:
”An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through effectors.”
Pattie Maes:
”Autonomous Agents are computational systems that inhabit some complex
dynamic environment, sense and act autonomously in this environment, and by
doing so realize a set of goals or tasks for which they are designed”.
Definitions of Agents 2
IBM:
”Intelligent agents are software entities that carry out some
set of operations on behalf of a user or another program
with some degree of independence or autonomy, and in
doing so, employ some knowledge or representations of
the user’s goals or desires”.
Definitions of Agents 3
• An agent is autonomous: capable of acting
independently, exhibiting control over its internal
state.
An agent is a computer system capable of
autonomous action in some environment.
(Figure: the agent as a system receiving input from its environment and producing output that acts upon that environment.)
Definitions of Agents 4
• Examples of trivial/non-interesting agents are:
  – Thermostat, UNIX daemon, e.g. biff
An intelligent agent is a computer system capable of flexible autonomous action in some environment.
- By flexible we mean:
  - Reactive
  - Pro-active
  - Social
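As a concrete illustration of a "trivial" agent, the thermostat named above can be sketched as a single condition-action rule. The sketch below is not part of the lecture material; the Thermostat class and its names are purely illustrative.

```python
# A thermostat as a "trivial" agent: it senses one value and applies one
# condition-action rule, with no goals, learning or social ability.
# All names here are illustrative, not taken from the lecture.

class Thermostat:
    def __init__(self, target=20.0):
        self.target = target

    def act(self, temperature):
        """Map the percept (current temperature) directly to an action."""
        return "heater_on" if temperature < self.target else "heater_off"

thermostat = Thermostat(target=21.0)
print(thermostat.act(18.5))   # -> "heater_on"
print(thermostat.act(22.0))   # -> "heater_off"
```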
Properties of Agents 1
• Autonomous
– Capable of independent action without our interference
• Reactive
– Maintains an ongoing interaction with its environment and responds to changes that occur in it (in time for the response to be useful).
• Pro-active
– Generating and attempting to achieve goals; not driven solely by
events; taking the initiative.
• Social
– The ability to interact with other agents (and possibly humans) via
some kind of agent communication language and perhaps
cooperate with others.
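A rough sketch of how the reactive and pro-active properties might coexist in one control loop. This is an illustration only, not code from the curriculum; the perceive/run_agent names and the event strings are invented for the example.

```python
# Illustrative only: one agent loop that is reactive (responds to events in
# time) and pro-active (keeps pursuing its own goal when nothing is happening).

def perceive(environment):
    """Return the next pending event from the environment, or None."""
    return environment.pop(0) if environment else None

def run_agent(environment, goal_steps=3):
    progress = 0
    while progress < goal_steps:
        event = perceive(environment)
        if event == "obstacle":
            # Reactive: respond promptly to a change in the environment.
            print("avoiding obstacle")
        else:
            # Pro-active: take the initiative and work towards the goal.
            progress += 1
            print(f"working towards goal ({progress}/{goal_steps})")

# Example run with a couple of unexpected events in the environment.
run_agent(["obstacle", None, "obstacle"])
```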
Properties of Agents 2
• Mentalistic notions, such as beliefs and intentions, are often referred to as properties of strong agents.
• Other properties are:
– Mobility: the ability of an agent to move around a network.
– Veracity: agent will not knowingly communicate false information.
– Benevolence: agents do not have conflicting goals and always try to do what is asked of them.
– Rationality: an agent will act in order to achieve its goals and will
not act in such a way as to prevent its goals being achieved.
Agents and Objects 1
• Are agents just objects by another name?
• Objects do it for free…
• Agents do it because they want to!
• Agents do it for money!
Agents and Objects 2
Main differences:
– Agents are autonomous: agents embody a stronger notion of
autonomy than objects, in particular, agents decide for themselves
whether or not to perform an action.
– Agents are smart: capable of flexible (reactive, pro-active, social) behaviour; standard object models do not have such behaviour.
– Agents are active: a multi-agent system is inherently multi-threaded in that each agent is assumed to have at least one thread of active control.
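The contrast "objects do it for free, agents do it because they want to" can be sketched roughly as follows. The code is purely illustrative (PrinterObject, PrinterAgent and their goals are invented), but it shows the key point: the agent, not the caller, decides whether the action is performed.

```python
# Illustrative contrast: an object executes whatever method is invoked on it,
# while an agent decides for itself whether a request serves its own goals.

class PrinterObject:
    def print_document(self, doc):
        return f"printed {doc}"          # no choice: the caller decides

class PrinterAgent:
    def __init__(self, goals):
        self.goals = goals               # e.g. {"serve_department": "sales"}

    def request_print(self, doc, requester):
        # The agent itself decides whether to perform the action.
        if requester == self.goals.get("serve_department"):
            return f"printed {doc}"
        return "request declined"

obj = PrinterObject()
agent = PrinterAgent(goals={"serve_department": "sales"})
print(obj.print_document("report.pdf"))               # always executes
print(agent.request_print("report.pdf", "marketing")) # may refuse
```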
Let’s take a minute……
• Discuss with your neighbour what you think of
this definition.
• Try to come up with a few examples of agents that
you know.
Why agents?
• Today, we have a distributed environment that
cannot be completely specified – open
environments.
• Former paradigms, such as OOP, cannot
completely satisfy our needs:
– They were designed for constructing systems in a
completely specified environment - a closed world.
How can we work in an Open Environment?
• By copying human behaviour:
  – Perceive the environment
  – Affect the environment
  – Have a model of behaviour
  – Have intentions and motivations to be fulfilled by implementing corresponding goals
(Figure: an agent interacting with its environment.)
Distributed Artificial Intelligence (DAI)
• DAI is a sub-field of AI
• DAI is concerned with problem solving where agents solve (sub-)tasks (macro level)
• Main areas of DAI:
  1. Multi-Agent Systems (MAS)
  2. Distributed Problem Solving (DPS)
(Figure: Distributed AI drawing on Distributed Computing and Artificial Intelligence, with Multi-Agent Systems and Distributed Problem Solving as its main areas.)
Reference: B. Moulin, B. Chaib-draa. ”An Overview of Distributed Artificial Intelligence”. In: G. M. P. O'Hare, N. R. Jennings (eds). Foundations of Distributed Artificial Intelligence, John Wiley & Sons, 1996, pp. 3-56.
DAI Concerns
• DAI is concerned with:
– Agent granularity
– Heterogeneity of agents
– Methods of distributing control among agents
– Communication possibilities
• DAI is not concerned with:
– Issues of coordination of concurrent processes at the problem
solving and representational level
– Parallel Computer Architectures, Parallel Programming Languages
or Distributed Operating Systems
DPS and MAS
• DPS considers how the task of solving a particular
problem can be divided among a number of
modules that cooperate in dividing and sharing
knowledge about the problem and its evolving
solution(s).
• MAS is concerned with the behaviour of a
collection of autonomous agents aiming to solve a
given problem.
Decentralisation
• An important concept in DAI and MAS
– No central control; control is distributed
– Knowledge or information sources may also be
distributed
Multi-agent Systems (MAS)
A multi-agent system contains a number of agents which interact with one another through communication. The agents are able to act in an environment, where each agent will act upon or influence different parts of the environment.
Reference: Wooldridge, An Introduction to Multiagent Systems, p. 105
(Figure: the typical structure of a multi-agent system – agents with overlapping spheres of influence acting in a shared environment and interacting with one another.)
Motivation for MAS
• To solve problems that are too large for a centralized agent
• To allow interconnection and interoperation of multiple
legacy systems
• To provide a solution to inherently distributed problems
• To provide solutions which draw from distributed
information sources
• To provide solutions where expertise is distributed
• To offer conceptual clarity and simplicity of design
Benefits of MAS
• Faster problem solving
• Decrease in communication
• Flexibility
• Increased reliability
Cooperative and Self-interested MAS
• Cooperative
– Agents designed by interdependent designers
– Agents act for increased good of the system
– Concerned with increasing the performance of the system
• Self-interested
– Agents designed by independent designers
– Agents have their own agenda and motivation
– Concerned with the benefit and performance of the individual
agent
More realistic in an Internet setting?
Interaction and Communication in MAS
• To interact successfully, agents need the ability to cooperate, coordinate and negotiate.
• This requires communication:
  – Plan/message passing
  – Information exchange using shared repositories
• Important characteristics of communication:
  – Relevance of the information
  – Timeliness
  – Completeness
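A minimal sketch of the message-passing style of interaction listed above: agents communicate through inboxes rather than direct method calls. All names (Agent, send, step, the buyer/seller dialogue) are invented for illustration.

```python
# Illustrative sketch of message passing between agents: each agent has an
# inbox, and interaction happens by exchanging messages rather than by
# calling each other's methods directly. Names are invented for this example.

from collections import deque

class Agent:
    def __init__(self, name):
        self.name = name
        self.inbox = deque()

    def send(self, other, content):
        other.inbox.append({"sender": self.name, "content": content})

    def step(self):
        """Handle at most one pending message per step."""
        if self.inbox:
            msg = self.inbox.popleft()
            print(f"{self.name} received {msg['content']!r} from {msg['sender']}")

buyer, seller = Agent("buyer"), Agent("seller")
buyer.send(seller, "request price for flight OSL-TRD")
seller.step()
seller.send(buyer, "price: 900 NOK")
buyer.step()
```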
Let’s take a minute……
• Discuss with your neighbour:
– A problem that can be solved by a MAS
– Advantages and disadvantages of using a MAS for your
particular problem
Agent Typology 1
• One of the most frequently referred-to typologies is given by Nwana, BT Research Labs
  – Reference: H. S. Nwana. ”Software Agents: An Overview”, Knowledge Engineering Review, Vol. 11, No. 3, 1996, 40 pages
• Several dimensions of typology:
  – Mobility - mobile or static.
  – Deliberative or reactive.
  – Primary attributes, such as autonomy, learning and cooperation.
Agent Typology 2
• A partial view of an agent typology
(Figure: three overlapping circles – Autonomous, Learning and Cooperate – within the space of software systems; Collaborative Agents sit in the Autonomous/Cooperate overlap, Interface Agents in the Autonomous/Learning overlap, and Smart Agents where all three overlap.)
Agent Typology 3
Nwana identified the following seven types of agents:
1. Collaborative agents - autonomous and cooperate.
2. Interface agents - autonomous and learn.
3. Mobile agents - able to move around a network.
4. Information/Internet agents - manage information on the Internet.
5. Reactive agents - stimulus-response behaviour.
6. Hybrid agents - a combination of two or more agent philosophies.
7. Smart agents - autonomous, learn and cooperate.
• Criticisms of this Typology
  – Confuses agents with what they do (e.g. information search) and the technology (e.g. reactive, mobile).
Agent Typology 4
• Collaborative agents
  – Hypothesis/Goal: The capabilities of the collection of agents are greater than those of any of its members.
  – Main Motivation: To solve problems that are too large for a single agent.
• Interface agents
  – Hypothesis/Goal: A personal assistant that collaborates with the user.
  – Main Motivation: To eliminate humans performing several manual sub-operations.
  – Example: A personal assistant that finds a suitable package holiday for the user.
Agent Typology 5
• Mobile agents
  – Hypothesis/Goal: Agents need not be stationary!
  – Main Motivation: To reduce communication costs
  – Example: Aglets
• Information/Internet agents
  – Hypothesis/Goal: To reduce the information overload problem
  – Main Motivation: The need for tools to manage the information explosion
  – Example: agents that reside on servers and access the distributed on-line information on the Internet
Agent Typology 6
• Reactive agents
  – Hypothesis/Goal: The physical grounding hypothesis – representations grounded in the physical world.
• Hybrid agents
  – Definition: a combination of two or more agent philosophies (e.g. deliberative & reactive).
  – Hypothesis/Goal: The gains from combining philosophies greatly exceed the gains from any single philosophy.
Agent Typology 7
Heterogeneous agents
• Definition: System of agents of several types
  – e.g. mobile and interface agents in the same system
• Realistic in an Internet (open-system) setting
• Motivation: Interoperability is plausible
• Requires standards for communication among the agents:
  – Agent Communication Languages and protocols
  – Cooperation conventions
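To make the need for communication standards concrete, here is a rough sketch of an agent message loosely modelled on KQML-style performatives (ask-one, tell). The dictionary encoding and the travel example are illustrative assumptions, not a real ACL implementation.

```python
# Rough sketch of an agent-communication-language message, loosely modelled
# on KQML-style performatives. The dictionary encoding is illustrative only;
# real ACLs define their own concrete syntax and semantics.

def make_message(performative, sender, receiver, content,
                 language="Prolog", ontology="travel"):
    return {
        "performative": performative,   # e.g. "ask-one", "tell"
        "sender": sender,
        "receiver": receiver,
        "language": language,           # how the content is expressed
        "ontology": ontology,           # shared vocabulary both agents agree on
        "content": content,
    }

query = make_message("ask-one", "interface-agent", "flight-agent",
                     "price(flight(osl, trd), P)")
reply = make_message("tell", "flight-agent", "interface-agent",
                     "price(flight(osl, trd), 900)")
print(query["performative"], "->", reply["content"])
```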
Other Types of Agents
Some of these may not exhibit any agent properties as discussed earlier.
• Desktop Agents, e.g.
  – Operating System agents – interact with the OS to perform tasks on behalf of the user.
  – Application agents – e.g. Wizards
• Web search agents – act as information brokers between information suppliers (e.g. websites) and information consumers (e.g. users)
Operating System Agents
(Figure: an operating system agent acts on behalf of the user alongside the application and GUI shell, calling the OS API with its memory management, file management and process management services.)
Web Search Agents
(Figure: the user sends a query from a web browser to the search engine's query server and receives a response; the query server answers from an index database that a web robot builds by crawling the Web.)
Information Filtering Agents
(Figure: a filtering agent uses an indexing engine and a user profile to select indexed articles gathered from a news server and other media on the Web; the user reads the filtered articles through a web browser.)
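As an invented illustration of the filtering idea in the figure above, a simple filtering agent might score incoming articles against keywords in the user profile and pass on only the relevant ones. The profile contents and articles below are made up for the example.

```python
# Illustrative sketch of an information filtering agent: score each article
# against the user's profile keywords and keep only the relevant ones.

def filter_articles(articles, profile, threshold=1):
    """Return articles whose keyword score meets the threshold, best first."""
    selected = []
    for article in articles:
        words = article.lower().split()
        score = sum(words.count(keyword) for keyword in profile)
        if score >= threshold:
            selected.append((score, article))
    return [article for _, article in sorted(selected, reverse=True)]

profile = {"agents", "auctions", "negotiation"}
articles = [
    "Mobile agents and negotiation protocols",
    "Gardening tips for a wet winter",
    "Online auctions run by software agents",
]
print(filter_articles(articles, profile))
```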
Let’s take a minute……
• Discuss with your neighbour the main points in
this lecture.
Summary
• An agent is a system that is capable of
independent action on behalf of its user or owner.
• A multi-agent system is one that consists of a
number of agents which interact with one another.
• In order to interact successfully, agents need the ability to cooperate, coordinate and negotiate.
Definition of Agents - Summary
• An agent acts on behalf of another user or entity
• An agent has the weak agent properties:
  – autonomy, pro-activity, reactivity and social ability
• An agent may have strong agent properties:
  – mentalistic notions such as beliefs and desires
• Other properties discussed in the context of agents:
  – mobility, veracity, benevolence and rationality
References
• Curriculum: Wooldridge, ”An Introduction to MultiAgent Systems”
  – Chapters 1 & 2
• Article: Agent Typology
  – H. S. Nwana. ”Software Agents: An Overview”, Knowledge Engineering Review, Vol. 11, No. 3, 1996, 40 pages
• Recommended Reading (not curriculum)
  – B. Moulin, B. Chaib-draa. ”An Overview of Distributed Artificial Intelligence”. In: G. M. P. O'Hare, N. R. Jennings (eds). Foundations of Distributed Artificial Intelligence, John Wiley & Sons, 1996, pp. 3-56.
Next Lecture: Multi-agent Interactions
Will be based on ”Multi-agent Interactions”, Chapter 6 in Wooldridge: ”An Introduction to MultiAgent Systems”
FBI Agents Ordering Pizza
FBI agents conducted a raid of a psychiatric hospital in San Diego that
was under investigation for medical insurance fraud. After hours of
reviewing thousands of medical records, the dozens of agents had
worked up quite an appetite. The agent in charge of the investigation
called a nearby pizza parlour with delivery service to order a quick
dinner for his colleagues. The following telephone conversation took
place and was recorded by the FBI because they were taping all
conversations at the hospital.
Source: http://jewel.morgan.edu/~salimian/humor/humor_094.html
FBI Agents Ordering Pizza, contd.
• Agent: Hello. I would like to order 19 large pizzas and 67 cans of soda.
• Pizza Man: And where would you like them delivered?
• Agent: We're over at the psychiatric hospital.
• Pizza Man: The psychiatric hospital?
• Agent: That's right. I'm an FBI agent.
• Pizza Man: You're an FBI agent?
• Agent: That's correct. Just about everybody here is.
• Pizza Man: And you're at the psychiatric hospital?
• Agent: That's correct. And make sure you don't go through the front doors. We have them locked. You will have to go around to the back to the service entrance to deliver the pizzas.
• Pizza Man: And you say you're all FBI agents?
• Agent: That's right. How soon can you have them here?
• Pizza Man: And everyone at the psychiatric hospital is an FBI agent?
• Agent: That's right. We've been here all day and we're starving.
• Pizza Man: How are you going to pay for all of this?
• Agent: I have my checkbook right here.
• Pizza Man: And you're all FBI agents?
• Agent: That's right. Everyone here is an FBI agent. Can you remember to bring the pizzas and sodas to the service entrance in the rear? We have the front doors locked.
• Pizza Man: I don't think so. Click.