Do software agents know what they talk about?

Agents and Ontology
Dr. Patrick De Causmaecker,
Nottingham, March 7-11, 2005
Overview






Agents
Ontology
Communication
RDF
Semantic Web
Sample Implementations
Agents








Examples
Definitions
Properties
Why Agents
Pitfalls
Models
Architectures
Standards
Examples






Tim Berners-Lee's example of negotiating agents
Agents in a Route Planning Application
Tele Truck
Planning of Lab Sessions
Personal Assistant
Intelligent Room Project
Tim Berners-Lee's example

The entertainment system was belting out the
Beatles' "We Can Work It Out" when the phone rang.
When Pete answered, his phone turned the sound
down by sending a message to all the other local
devices that had a volume control. His sister, Lucy,
was on the line from the doctor's office: "Mom needs
to see a specialist and then has to have a series of
physical therapy sessions. Biweekly or something. I'm
going to have my agent set up the appointments."
Pete immediately agreed to share the chauffeuring.
Tim Berners-Lee's example

At the doctor's office, Lucy instructed her Semantic
Web agent through her handheld Web browser. The
agent promptly retrieved information about Mom's
prescribed treatment from the doctor's agent, looked
up several lists of providers, and checked for the
ones in-plan for Mom's insurance within a 20-mile
radius of her home and with a rating of excellent or
very good on trusted rating services. It then began
trying to find a match between available appointment
times (supplied by the agents of individual providers
through their Web sites) and Pete's and Lucy's busy
schedules. (The emphasized keywords indicate terms
whose semantics, or meaning, were defined for the
agent through the Semantic Web.)
Tim Berners-Lee's example

In a few minutes the agent presented them with a
plan. Pete didn't like it—University Hospital was all
the way across town from Mom's place, and he'd be
driving back in the middle of rush hour. He set his
own agent to redo the search with stricter
preferences about location and time. Lucy's agent,
having complete trust in Pete's agent in the context
of the present task, automatically assisted by
supplying access certificates and shortcuts to the
data it had already sorted through.
Tim Berners-Lee's example

Almost instantly the new plan was presented: a much
closer clinic and earlier times—but there were two
warning notes. First, Pete would have to reschedule a
couple of his less important appointments. He
checked what they were—not a problem. The other
was something about the insurance company's list
failing to include this provider under physical
therapists: "Service type and insurance plan status
securely verified by other means," the agent
reassured him. "(Details?)"
Tim Berners-Lee's example

Lucy registered her assent at about the same
moment Pete was muttering, "Spare me the details,"
and it was all set. (Of course, Pete couldn't resist the
details and later that night had his agent explain how
it had found that provider even though it wasn't on
the proper list.)
Agents in a route planning
application

The problem:




Mobile nurses travel from patient to patient during the day.
They have to meet some patients within certain time windows.
They have to finish within a fixed number of hours.
They want to be near their home at lunch time, dislike certain patients, and so on.
Agents in a route planning
application

The procedure:

The dispatching center calculates the routes for the day.
It tries to minimise the travel time and to equalise the workload over the routes.
The nurses are assigned a route according to some criteria.
The nurses may negotiate and exchange routes.
Agents in a route planning
application

Agents?



Agents representing the nurses effectively do the negotiation.
They use a measure of sympathy reflecting the agents' interrelationships.
This sympathy allows for some memory in the system.
Agents in a route planning
application

Negotiation:

Agents switch between 3 states:

Enquiring: agents with an unsatisfactory route (personal cost > 30)
Listening: agents with a satisfactory route (personal cost <= 30)
Occupied: agents involved in a discussion

Sympathy





In the course of a negotiation, an agent may accept a worsening of its own loop cost.
It calculates cost = loopcost - sympathy/2.
A cost outside [0,80] is rejected.
Within [0,80] the chance of rejection is cost/81.
If it accepts, the cost is sent to the requesting agent.

Sympathy

The accepting agent accepts if the cost is less than 20, or else with a probability of 1 - (cost - 19)/61.
If the offer is accepted, the requesting agent increases its sympathy for the offering agent by the cost, and vice versa for the offering agent.
Negotiation stops when all agents are in the listening state. (A small code sketch of these rules follows below.)
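These rules translate almost directly into code. The following Python sketch is illustrative only: the constants, function names and the way the agents are organised are assumptions of ours, not taken from the actual system, but the formulas mirror the slides above.

import random

SATISFACTION_BOUND = 30   # personal cost > 30 -> enquiring, otherwise listening
COST_LIMIT = 80           # costs outside [0, 80] are rejected outright

def state(personal_cost, in_discussion=False):
    # the three negotiation states from the previous slide
    if in_discussion:
        return "occupied"
    return "enquiring" if personal_cost > SATISFACTION_BOUND else "listening"

def offering_agent_accepts(loopcost, sympathy):
    # the offering agent may accept a worsening of its own loop cost
    cost = loopcost - sympathy / 2
    if cost < 0 or cost > COST_LIMIT:
        return None                      # cost outside [0, 80]: rejected
    if random.random() < cost / 81:      # chance of rejection is cost/81
        return None
    return cost                          # accepted: the cost is sent to the requester

def requesting_agent_accepts(cost):
    # accept cheap offers outright, others with decreasing probability
    if cost < 20:
        return True
    return random.random() < 1 - (cost - 19) / 61

def update_sympathy(sympathy_between, cost):
    # on acceptance both agents raise their mutual sympathy by the cost
    return sympathy_between + cost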
Fig. 3: Diagram of the algorithms used to solve the mobile nursing service problem. Patient data feed the Clarke-Wright heuristic, which generates trajectories; Tabu Search fixes the trajectories; a draft assignment allocates the trajectories to nurses; finally, negotiations (trajectory swaps that lead to sympathy level changes, using the agent data) yield the result: trajectories assigned to nurses.
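As a rough, hypothetical skeleton of this pipeline (every function below is a placeholder we introduce for illustration; the real Clarke-Wright, Tabu Search, assignment and negotiation components are not shown), the stages could be chained like this:

def clarke_wright(patient_data):
    # would merge single-patient trips into longer trajectories using savings
    return [[patient] for patient in patient_data]

def tabu_search(trajectories):
    # would iteratively improve the trajectories while forbidding recent moves
    return trajectories

def draft_assignment(trajectories, nurses):
    # would assign each fixed trajectory to a nurse according to some criteria
    return dict(zip(nurses, trajectories))

def negotiate(assignment, agent_data):
    # would let nurse agents swap trajectories, updating sympathy levels
    return assignment

def solve_mobile_nursing(patient_data, nurses, agent_data):
    trajectories = clarke_wright(patient_data)             # generating trajectories
    trajectories = tabu_search(trajectories)               # fixed trajectories
    assignment = draft_assignment(trajectories, nurses)    # draft assignment
    return negotiate(assignment, agent_data)               # result: trajectories - nurses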
Tele Truck
Two levels of scheduling using agents in transportation:

1. Shipment contracting between firms
2. Effectively planning the transport using trucks

Tele Truck mainly concentrated on the second issue:

Trucks consist of a driver, a carrier and an engine.
They may be in different places and must be brought together for a certain job.
Tele Truck uses a bidding scheme based on the contract net protocol (CNP); a sketch of such a bidding round is given below.
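For illustration, a single contract-net-style bidding round could look like the Python sketch below. The names and the toy cost model are our own assumptions, not the actual Tele Truck implementation; the point is only the announce-bid-award pattern of the CNP.

def truck_cost(truck, job):
    # toy cost model: distance to the pickup point plus the length of the job
    return abs(truck["position"] - job["pickup"]) + job["length"]

def announce(job, trucks):
    # call for proposals: every truck agent bids; the cheapest bid wins the contract
    bids = [(truck_cost(truck, job), truck) for truck in trucks]
    cost, winner = min(bids, key=lambda bid: bid[0])
    return winner, cost

trucks = [{"name": "truck-1", "position": 10}, {"name": "truck-2", "position": 3}]
job = {"pickup": 5, "length": 12}
winner, cost = announce(job, trucks)   # truck-2 wins: it is closer to the pickup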
Personal assistant

A personal assistant is a software agent
following your actions and trying to help




It can follow your surfing behaviour and decide to
track certain pages for you.
It can read over your shoulder and try to find
related documents on your hard disk.
It can be a helping paperclip.
It can follow links on the website you are reading
and point at nearby sites of interest to you.
Personal assistant

In order to perform well, such an agent
must





Act autonomously
Be able to learn
Be able to build your profile

Techniques of AI must be used for learning.
Data mining can reveal patterns in your behaviour.
Intelligent Room Project






Room behaves as a person
Reasons about what happens in the
room
Tries to anticipate
Has a lot of sensory inputs
Is equipped with gigantic computing power
http://www.ai.mit.edu/people/mhcoen/
Reference


"An Introduction to Multiagent Systems", Michael Wooldridge, Department of Computer Science, University of Liverpool, UK, John Wiley & Sons, Ltd, 2002, ISBN 0-471-49691-X.
Links
Links



http://allserv.kahosl.be/~patdc/Agents/
http://www.csc.liv.ac.uk/~mjw
http://www.csc.liv.ac.uk/~mjw/links/
Introduction to Agent Based
Systems




What
Vision
Viewpoints
Criticising MAS
What

Five trends have dominated the history of computing:





Ubiquity
Interconnection
Intelligence
Delegation
Human-orientation
Ubiquity (omnipresence)

The decreasing cost of computing power makes it possible to introduce computers in unexpected environments:

Electrical devices
On-board computers
Mobiles
…

E.g. http://ingenieur.kahosl.be/projecten/amobe/
Interconnectivity



Computers are networked (the Internet).
Distributed systems are no longer considered strange beasts of a rare species, hard to handle and understand and beyond human control.
Nowadays we have to think of interaction as the fundamental force of computer science.
Intelligence


The complexity of the tasks that we entrust to computers is increasing every day.
Our ability to build trustworthy systems that can operate in critical situations increases accordingly.
“The A380, which will seat 555 passengers in a typical three-class
interior layout, will enter airline service in 2006.”
Delegation


As a consequence, we trust ever more complex tasks to the computer (navigating an airplane, playing the stock market, …).
Computer systems are reaching a level of control over humans and society previously only heard of in science fiction stories.
Human-orientation



The first computers were programmed through switches. One had to understand all details of the machine to be able to use it.
Afterwards, textual interfaces allowed users to interact with the computer on a line-by-line basis.
From 1980 onwards we have seen graphical user interfaces appearing. The user can manipulate objects such as files, programs and devices through their icons.
Mark 1 Colossus (Christmas 1943)
Paper thin screens
The manipulation paradigm
Mission

The fundamental mission for software developers is:

How do we incorporate those trends in our applications?

E.g.
Ubiquity and interconnection: "global computing", 10^10 processors?!
Delegation: how to build devices that can take on our tasks in our place?
…
Multi agent systems




An agent is a computer system that is able to act as a representative of its owner.
An agent can find out what it needs to realise its design goals.
A multi agent system consists of communicating agents.
Those agents will represent owners with diverse interests and goals. They will have to collaborate, co-ordinate and negotiate.
Multi agent systems:
the problem


How do we build agents that are capable of functioning independently and autonomously in order to perform their tasks? (agent design)
How do we build agents that are able to interact with other agents to successfully perform their tasks, especially when the agents do not share interests and goals? (society design)
Questions




How can collaboration emerge in societies of self-interested agents?
Which languages can agents use in their communication?
How do self-interested agents find out when their goals are conflicting, and how can they reach agreement?
How do autonomous agents co-ordinate their activities?
Vision

Scenario 1

Due to an unexpected system failure, a space probe approaching Saturn loses contact with its Earth-based ground crew and becomes disoriented. Rather than simply disappearing into the void, the probe recognizes that there has been a key system failure, diagnoses and isolates the fault, and correctly re-orients itself in order to make contact with its ground crew.
Autonomous vehicles
Vision

Scenario 2

A key air-traffic control system at the main airport of Ruritania suddenly fails, leaving flights in the vicinity of the airport with no air-traffic control support. Fortunately, autonomous air-traffic control systems in nearby airports recognize the failure of their peer, and cooperate to track and deal with all affected flights. The potentially disastrous situation passes without incident.
Vision

Scenario 3

After the wettest and coldest (UK) winter on record, you are in desperate need of a last-minute holiday somewhere warm and dry. After specifying your requirements to your personal digital assistant (PDA), it converses with a number of different Web sites, which sell services such as flights, hotel rooms, and hire cars. After hard negotiation on your behalf with a range of sites, your PDA presents you with a package holiday.
Some views of the field


Agents as a paradigm for software
engineering.
Agents as a tool for understanding
human societies.
Agents as a paradigm for
software engineering.


Interaction is the key. Programs that process a specific input and produce a specified output are a minority.
In recent years, tools have been designed and developed to build systems of interacting components.
Agents as a tool for
understanding human
societies.


"Psychohistory" allows sociological predictions (Asimov).
Sociologists can use MAS to build simulations. (E.g.: how did social complexity evolve in the Paleolithic?)
Objections to MAS




Just distributed/concurrent
programming?
Just artificial intelligence?
Just game theory?
Just social science?
Distributed/concurrent programming?

Important work has been done in this field since the 70's. Agents build on this work and add a dimension:

Autonomy: synchronisation mechanisms are not hard coded.
Encounters have an economic meaning because of the self-interested nature of the agents. This differs from a situation where components are built to co-operate.
Artificial intelligence?

Agents are sometimes considered a branch of AI, or vice versa:

AI has concentrated on learning, planning, understanding, …; an agent integrates these parts to arrive at decisions. Most agents (99%) use conventional programming and do not incorporate any AI at all.
The social aspect has not been investigated in AI at all. It is an essential constituent of any solution built on agents. It distinguishes humankind from its peer creatures, the animals.
Game theory?


The very same pioneers that founded computer science created game theory and artificial intelligence: von Neumann, Turing.
Game theory is widely applied within MAS, but:

The methods of game theory result in techniques and concepts. MAS use those.
The rational agent from game theory may not have any meaning at all in the real world. Purely self-interested agents cannot contribute sufficiently to social wellbeing even to warrant survival.
Social science?


The domain offers sociology possibilities to experiment.
But agent systems are not at all comparable to real-world societies where complexity is concerned.
Questions on scenario 3



How do you specify your preferences?
How does the agent compare the
different offers?
Which algorithms govern the
negotiations?
AIMA


An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors.
(Russell and Norvig, 1995)
Maes


Software agents differ from
conventional software in that they are
long-lived, semi-autonomous, proactive,
and adaptive.
http://agents.www.media.mit.edu/groups/agents/
KidSim


Let us define an agent as a persistent software entity
dedicated to a specific purpose. "Persistent"
distinguishes agents from subroutines; agents have
their own ideas about how to accomplish tasks, their
own agendas. "Specific purpose" distinguishes them
from entire multifunction applications; agents are
typically much smaller.
http://www.dnai.com/~cypher/Publications/CACM/KidSimCACM.html
Wooldridge and Jennings

Perhaps the most general way in which the term agent is used is to denote a hardware or (more usually) software-based computer system that enjoys the following properties:

Autonomy, social ability, reactivity, pro-activeness
http://www.csc.liv.ac.uk/~mjw/
Further Examples




Agents controlling production cells
Agents for consistency checking in a distributed database system
Agents for assistance in programme previewing
Agents in a framework for power control
Agents controlling production
cells
Diagram: agents (A) sit between the business logic / manufacturing execution system and the cells of the production line.
Agents for consistency checking
in a distributed database system
Diagram: users (User 1 … User i) send goal and task specifications to interface agents (Interface Agent 1 … k) and receive results. The interface agents pass tasks to task agents (Task Agent 1 … j), which perform conflict resolution and collaborative demand processing. The task agents exchange demands for information, proposed solutions and answers with information agents (Information Agent 1 … m), which carry out information integration over the data sources (Data Source 1 … n) via question/answer interactions.
Agents for assistance in programme previewing
Diagram: viewers use a PDA to pass constraints (Constraint 1, Constraint 2) to a view agent, which communicates via XML with the broadcast planning and administration system and the viewing room.
Agents in a framework for power control

Peak levels determine the energy cost.
A management system allows for levelling.
Measuring components can be accessed over an intranet connection.
They can be programmed at a low level.
Agents are used to model the framework (a small sketch follows below).
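As a purely hypothetical illustration of such a framework (the class, threshold and numbers are invented here, not part of the original system), device agents could report their measurements and defer load whenever the peak threshold is exceeded:

PEAK_THRESHOLD = 100.0   # assumed maximum total load before peak tariffs apply

class DeviceAgent:
    def __init__(self, name, load, deferrable):
        self.name = name
        self.load = load              # consumption reported by the measuring component
        self.deferrable = deferrable  # can this device postpone its consumption?

def level_peaks(agents):
    # ask deferrable devices to postpone load until the total stays under the peak
    total = sum(agent.load for agent in agents)
    deferred = []
    for agent in sorted(agents, key=lambda a: a.load, reverse=True):
        if total <= PEAK_THRESHOLD:
            break
        if agent.deferrable:
            total -= agent.load
            deferred.append(agent.name)
    return total, deferred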
Properties
Re-active: on external asynchronous stimuli
Autonomous: controls its own actions
Goal directed: takes initiatives
Persistent: is "living", not "running"
Communicative: with other agents and humans
Learning: adaptiveness based on experience
Mobile: moves among machines in a network
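To make these properties concrete, here is a minimal interface sketch in Python. It is illustrative only: the class and method names are ours and do not come from any particular agent framework.

class Agent:
    def __init__(self, goals):
        self.goals = goals            # goal directed: pursues its own goals
        self.experience = []          # learning: experience drives adaptation
        self.alive = True             # persistent: "living", not "running"

    def perceive(self, stimulus):
        # re-active: responds to external asynchronous stimuli
        self.experience.append(stimulus)

    def act(self):
        # autonomous: controls its own actions and takes initiatives
        return self.goals[0] if self.goals else None

    def tell(self, other, message):
        # communicative: exchanges messages with other agents and humans
        other.perceive(message)

    def migrate(self, host):
        # mobile: (conceptually) moves to another machine in the network
        self.host = host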
Why Agents
Load reduction: a trusted agent travels between server and client and carries a safe protocol.
Delay independence: agents perform in real time, not over uncertain networks.
Protocols: agents encapsulate protocols, for better adaptability and maintainability.
Asynchronousness: together with autonomy, agents can act on their own, without the necessity of continuous interaction.
Why Agents (cont)
Adaptivity: agents react autonomously to changes in the environment.
Heterogeneity: distributed systems are heterogeneous in nature; agents are well suited for system integration.
Robustness: mobile agents can make a distributed system more robust and fault tolerant by deciding autonomously in case of an error.
Agents as a Network
Computing Paradigm

Client Server: the server holds both the data and the know-how; the client only sends requests to it.

Code-on-demand: the know-how is shipped to the client on demand, while the data stay at the server.
Agents as a Network
Computing Paradigm (cont)

Mobile Agents: the agent itself carries the know-how and migrates over the network from host to host.
Multi Agent Systems (MAS)




Standalone agents incorporate all functionalities of an agent.
Multi agent systems define co-operating agents for tasks of which an individual agent is not capable.
Intelligence is an emergent property.
The Internet is a driving factor.
Multi Agent Systems (cont)

Aspects of research
  Aggregation, Communication, Co-ordination, Collaboration, Negotiation
Helping agents (Middle agents)
  Facilitator, Mediator, Broker, Matchmaker, Blackboard
Agent interaction
  Language, Format, Ontology
Where to apply

User interface
  Personal representative for the user, metaphorically enforced image of functionality
Distributed systems
  Asynchronicity, autonomy, unreliable communication
Algorithmic paradigm
  E.g. an optimization problem where agents incorporate separate strategies
  E.g. heat distribution in a plate (see the sketch below)
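A tiny, purely illustrative sketch of the heat-distribution example (grid size, rate and names are assumptions of ours): each cell acts as an agent that repeatedly moves its own temperature toward the average of its neighbours, and the global heat distribution emerges from these local interactions.

def step(cells, rate=0.5):
    # every interior cell agent relaxes toward the average of its two neighbours
    new = cells[:]
    for i in range(1, len(cells) - 1):
        neighbour_avg = (cells[i - 1] + cells[i + 1]) / 2
        new[i] = cells[i] + rate * (neighbour_avg - cells[i])
    return new

plate = [100.0] + [0.0] * 9   # hot left edge, cold everywhere else
for _ in range(50):
    plate = step(plate)       # after enough steps the temperatures level off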
Where to apply (cont)

Software engineering


Agents abstract communication, control, decision making, autonomy, persistence.
They enforce encapsulation, modularity, reusability, concurrency, distributed operation.
Pitfalls








Agents solve everything
Buzzwords are concepts
Forget the main objective: software
Multi agent systems solve all problems in
distributed computing
Related technology is irrelevant
Forget concurrency
Build your own architecture
We have a general architecture
Pitfalls









Agents use too much AI
Agents use too little AI
You see agents everywhere
You have too many agents
You have too few agents
You spend all your time on an infrastructure
Your system is disorganised
Let us start from a tabula rasa
Forget about de facto standards