Intelligent Agents


Introduction to Artificial Intelligence
LECTURE 2: Intelligent Agents
• What is an intelligent agent?
• Structure of intelligent agents
• Environments
• Examples
Introduction to AI. H.Feili, ([email protected])
Intelligent agents: their environment and actions
Ideal rational agents
• For each possible percept sequence, an ideal rational agent should take the action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and its built-in knowledge.
• Key concept: the mapping from percept sequences to actions
• Different architectures realize this mapping
Structure of intelligent agents
• Agent program: a program that implements the mapping from percepts to actions
• Architecture: the platform that runs the program (note: not necessarily the hardware!)
• Agent = architecture + program
• Examples:
  – medical diagnosis
  – satellite image analysis
  – refinery controller
  – part-picking robot
  – interactive tutor
  – flight simulator
Illustrative example: taxi driver
Agent type: taxi driver
Percepts: video cameras, GPS, speedometer, sonar
Actions: steer, accelerate, brake
Goals: safety, speed, legality, profit
Environment: roads, other drivers, traffic, pedestrians, customers
Table-Driven Agents
function Table-Driven-Agent(percept) returns action
  static: percepts, a sequence, initially empty
          table, indexed by percept sequences (given)
  append percept to the end of percepts
  action := LOOKUP(percepts, table)
  return action

• Keeps a list of all percepts seen so far
• The table is far too large to store
• It would take too long to build
• It might not even be available in advance
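The lookup idea is easy to make concrete. A minimal Python sketch (the toy table and percepts are invented for illustration):

```python
# Minimal sketch of a table-driven agent. The table maps entire percept
# *sequences* to actions, which is why it blows up: with |P| possible
# percepts and a horizon of T steps it needs on the order of |P|^T entries.

def make_table_driven_agent(table):
    percepts = []  # the "static" percept sequence, initially empty

    def agent(percept):
        percepts.append(percept)           # append percept to the sequence
        return table.get(tuple(percepts))  # action := LOOKUP(percepts, table)

    return agent

# Usage: a toy two-step vacuum world.
table = {("dirty",): "suck", ("dirty", "clean"): "move"}
agent = make_table_driven_agent(table)
print(agent("dirty"))  # suck
print(agent("clean"))  # move
```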
Simple Reflex Agent (1)
Simple Reflex Agent (2)
function Simple-Reflex-Agent(percept) returns action
  static: rules, a set of condition-action rules
  state := Interpret-Input(percept)
  rule := Rule-Match(state, rules)
  action := Rule-Action[rule]
  return action

• No memory, no planning
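As a concrete illustration, the pseudocode can be sketched in Python; the rule representation and the vacuum-world rules here are assumptions, not part of the slide:

```python
# Sketch of a simple reflex agent: rules are (condition, action) pairs
# whose conditions look only at the current interpreted percept.

def make_simple_reflex_agent(rules, interpret_input):
    def agent(percept):
        state = interpret_input(percept)  # state := Interpret-Input(percept)
        for condition, action in rules:   # rule := Rule-Match(state, rules)
            if condition(state):
                return action             # action := Rule-Action[rule]
        return None                       # no rule matched

    return agent

# Usage: a vacuum-world reflex agent (percepts interpreted as-is).
rules = [
    (lambda s: s == "dirty", "suck"),
    (lambda s: s == "clean", "move"),
]
agent = make_simple_reflex_agent(rules, interpret_input=lambda p: p)
print(agent("dirty"))  # suck
```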
Reflex Agents with States (1)
Reflex Agents with States (2)
function Reflex-Agent-State(percept) returns action
  static: rules, a set of condition-action rules
          state, a description of the current world state
  state := Update-State(state, percept)
  rule := Rule-Match(state, rules)
  action := Rule-Action[rule]
  state := Update-State(state, action)
  return action

• Still no longer-term planning
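A Python sketch of this agent; note the state is updated twice, once with the percept and once with the agent's own (predicted) action. The update function and rules below are invented for illustration:

```python
# Sketch of a reflex agent with internal state.

def make_state_reflex_agent(rules, update_state, initial_state):
    memory = {"state": initial_state}  # persists across calls

    def agent(percept):
        memory["state"] = update_state(memory["state"], percept)  # fold in percept
        action = next(act for cond, act in rules if cond(memory["state"]))
        memory["state"] = update_state(memory["state"], action)   # predict effect
        return action

    return agent

# Usage: remember whether the current square is dirty.
def update(state, event):
    if event == "dirty":
        return {**state, "dirty": True}
    if event == "suck":
        return {**state, "dirty": False}
    return state

rules = [
    (lambda s: s.get("dirty"), "suck"),
    (lambda s: True, "move"),  # default rule
]
agent = make_state_reflex_agent(rules, update, {})
print(agent("dirty"))  # suck
print(agent("clean"))  # move
```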
Goal-based Agents (1)
Goal-based Agents (2)
function Goal-Based-Agent(percept, goal) returns action
  static: rules, a set of condition-action rules
          state, a description of the current world state
  state := Update-State(state, percept)
  rule := Plan-Best-Move(state, rules, goal)
  action := Rule-Action[rule]
  state := Update-State(state, action)
  return action

• Longer-term planning, but what about cost?
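The slide leaves Plan-Best-Move abstract. One way to realize it, assuming a deterministic transition model, is a breadth-first search that returns the first action of a shortest plan to the goal; this particular search is an illustrative choice, not prescribed by the slide:

```python
from collections import deque

# Plan-Best-Move as breadth-first search over a deterministic model:
# return the first action of a shortest action sequence reaching the goal.
def plan_best_move(state, actions, transition, goal_test):
    frontier = deque((transition(state, a), a) for a in actions)
    seen = {state}
    while frontier:
        s, first_action = frontier.popleft()
        if goal_test(s):
            return first_action
        if s not in seen:
            seen.add(s)
            frontier.extend((transition(s, a), first_action) for a in actions)
    return None  # goal unreachable

# Usage: reach position 3 on a number line by stepping left or right.
step = lambda s, a: s + 1 if a == "right" else s - 1
print(plan_best_move(0, ["left", "right"], step, lambda s: s == 3))  # right
```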
Utility-based Agents (1)
Utility-based Agents (2)
• Add a utility evaluation: not only how close an action takes the agent to the goal, but also how useful the resulting state is to the agent
• Note: both goal-based and utility-based agents can plan with constructs other than rules
• Other aspects to be considered:
  – uncertainty in percepts and actions
  – incomplete knowledge
  – environment characteristics
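The difference from a goal-based agent shows up in expected utility: the agent ranks actions by the probability-weighted utility of their outcomes rather than by a yes/no goal test. The outcome model and utility values below are invented for illustration:

```python
# Pick the action with the highest expected utility.
# outcomes(state, action) -> list of (probability, next_state) pairs.
def choose_action(state, actions, outcomes, utility):
    def expected_utility(action):
        return sum(p * utility(s) for p, s in outcomes(state, action))
    return max(actions, key=expected_utility)

# Usage: driving fast risks a crash; driving slow is safe but worth less.
outcomes = lambda s, a: ([(0.8, "arrived"), (0.2, "crash")] if a == "fast"
                         else [(1.0, "arrived late")])
utility = {"arrived": 10, "crash": -100, "arrived late": 5}.get

# EU(fast) = 0.8*10 + 0.2*(-100) = -12, EU(slow) = 5, so the agent slows down.
print(choose_action("start", ["fast", "slow"], outcomes, utility))  # slow
```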
Properties of environments
What is the environment in which the agent acts like?
• Accessible vs. inaccessible: is the state of the world fully known at each step?
• Deterministic vs. nondeterministic: how much is the next state determined by the current state?
• Episodic vs. non-episodic: how much state memory is needed?
• Static vs. dynamic: how much does the environment change on its own?
• Discrete vs. continuous: how clearly are the actions and percepts differentiated?
Examples of environments
• Chess: accessible, deterministic, nonepisodic, static, discrete
• Poker: inaccessible, nondeterministic, nonepisodic, static, discrete
• Satellite image analysis: accessible, deterministic, nonepisodic, semi-static, continuous
• Taxi driving: none of the above!
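These classifications can be written down as a small lookup table; the values below are transcribed from the list above ("semi" marks the semi-static satellite case, and taxi driving has none of the easy properties):

```python
# Environment properties for the example tasks above.
ENVIRONMENTS = {
    "chess": dict(accessible="yes", deterministic="yes",
                  episodic="no", static="yes", discrete="yes"),
    "poker": dict(accessible="no", deterministic="no",
                  episodic="no", static="yes", discrete="yes"),
    "satellite image analysis": dict(accessible="yes", deterministic="yes",
                                     episodic="no", static="semi", discrete="no"),
    "taxi driving": dict(accessible="no", deterministic="no",
                         episodic="no", static="no", discrete="no"),
}
print(ENVIRONMENTS["poker"]["accessible"])  # no
```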
Environment Programs (1)
• A program that runs the individual agents and coordinates their actions, much like an operating system
• Control strategies:
  – sequential: each agent perceives and acts in turn
  – asynchronous: the agents communicate as they run
  – blackboard: post tasks and have agents pick them up
• Agents must not have access to the environment program's internal state!
Environment Programs (2)
function Run-Environment(state, Update-Function, agents, Termination-Test, Performance-Function) returns scores
  repeat
    for each agent in agents do
      Percept[agent] := Get-Percept(agent, state)
    for each agent in agents do
      Action[agent] := Program[agent](Percept[agent])
    state := Update-Function(Action, agents, state)
    scores := Performance-Function(scores, agents, state)
  until Termination-Test(state)
  return scores
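A runnable Python sketch of this loop (the toy world in the usage example is invented). Note that all percepts are gathered before any action is applied, so every agent acts on the same snapshot of the state:

```python
# Sketch of the environment program: a synchronous perceive/act loop.
def run_environment(state, agents, get_percept, update_fn, performance_fn, terminated):
    # agents: {name: program}, where each program maps a percept to an action.
    scores = {name: 0 for name in agents}
    while not terminated(state):
        percepts = {n: get_percept(n, state) for n in agents}                # perceive
        actions = {n: program(percepts[n]) for n, program in agents.items()}  # act
        state = update_fn(actions, state)                                    # world step
        scores = performance_fn(scores, state)                               # evaluate
    return scores

# Usage: each agent adds to a shared counter until it reaches 10.
agents = {"a": lambda p: p + 1, "b": lambda p: p + 2}
scores = run_environment(
    state=0,
    agents=agents,
    get_percept=lambda name, s: s,
    update_fn=lambda acts, s: s + sum(acts.values()),
    performance_fn=lambda sc, s: {n: s for n in sc},
    terminated=lambda s: s >= 10,
)
print(scores)  # {'a': 12, 'b': 12}
```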
Environment Programs: Examples
• Chess
– two agents, take turns to move.
• Electronic stock market bidding
– many agents, asynchronous, blackboard-based.
• Robot soccer playing
– in the physical world, no environment program!
Summary
• Formulate problems in terms of agents, percepts, actions, states, goals, and environments
• Different types of problems arise according to the above characteristics
• Key concepts:
  – generate and search the state space of a problem
  – environment programs as control architectures
  – problem modelling is essential