AI in game (I)
권태경
Fall, 2006
Outline
• AI: definition and taxonomy
• Agents
What is AI?
Views of AI fall into four categories:
Thinking humanly Thinking rationally
Acting humanly Acting rationally
The textbook advocates "acting rationally"
Acting humanly: Turing Test
• Turing (1950) "Computing machinery and intelligence":
• "Can machines think?" "Can machines behave intelligently?"
• Operational test for intelligent behavior: the Imitation Game
• Predicted that by 2000, a machine might have a 30% chance of
fooling a lay person for 5 minutes
• Anticipated all major arguments against AI in following 50 years
• Suggested major components of AI: knowledge, reasoning,
language understanding, learning
Thinking humanly: cognitive modeling
• Need to get inside the actual workings of human minds
• Goal
– Program’s I/O and timing behaviors match corresponding human
behaviors
• How to validate? It requires
1) Predicting and testing behavior of human subjects (top-down)
or
2) Direct identification from neurological data (bottom-up)
• Real “cognitive science” should experiment on actual
humans or animals
Thinking rationally: "laws of thought"
• Aristotle: what are correct arguments/thought
processes?
• Syllogism
• Direct line through mathematics and
philosophy to modern AI
• Problems:
1. Informal knowledge is hard to state formally, e.g. grey
zones, uncertainty
2. Solving a problem in principle vs. doing so in
practice
Acting rationally: rational agent
• A rational agent is one that acts so as to achieve
the best outcome or, when there is uncertainty,
the best expected outcome
– E.g. autonomous control, adapt to change
• Correct inference alone cannot always do the job
– Acting rationally does not necessarily involve
inference
– E.g. a reflex action
• We need learning
– Understanding how the world works helps to generate
more effective strategies to deal with it
• Focus on general principles of rational agents
and on components for constructing them
Rational agents
• An agent is an entity that perceives and acts
• Abstractly, an agent is a function from percept
histories to actions:
[f: P* → A]
• For any given class of environments and tasks,
we seek the agent (or class of agents) with the
best performance
• Percept: the agent’s perceptual inputs at any given instant
• Caveat: computational limitations make perfect
rationality unachievable
Outline
• AI: definition and taxonomy
• Agents
Agents
• An agent is anything that can be viewed as
perceiving its environment through sensors and
acting upon that environment through actuators
• Human agent:
– eyes, ears, and other organs for sensors
– hands, legs, mouth, and other body parts for
actuators
• Robotic agent:
– cameras and infrared range finders for sensors
– various motors for actuators
Agents and environments
• The agent function: an abstract mathematical
description
– The agent function maps from percept histories to
actions:
[f: P* → A]
• The agent program: a concrete implementation
– The agent program runs on the physical architecture to
implement f
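A minimal Python sketch of this distinction (identifiers are my own, not from the textbook): the agent function is defined on whole percept histories, while the agent program consumes one percept per step and accumulates the history itself.

from typing import Sequence

# Abstract view: the agent function f: P* -> A maps a percept history to an action.
def agent_function(history: Sequence[str]) -> str:
    # A trivial f for illustration: react to the most recent percept only.
    return "act" if history[-1] == "stimulus" else "wait"

# Concrete view: the agent program runs step by step on the architecture,
# implementing f by remembering the percepts it has seen.
def make_agent_program():
    history: list[str] = []
    def program(percept: str) -> str:
        history.append(percept)
        return agent_function(history)
    return program

program = make_agent_program()
print(program("noise"))      # 'wait'
print(program("stimulus"))   # 'act'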
Vacuum-cleaner world
• Percepts: location and contents, e.g., [A,Dirty]
• Actions: Left, Right, Suck, NoOp
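A minimal encoding of this two-square world (names are my own; this is a sketch, not AIMA's code):

# Two locations, A and B; each square is Clean or Dirty.
LOCATIONS = ("A", "B")
ACTIONS = ("Left", "Right", "Suck", "NoOp")

# A percept pairs the agent's location with that square's contents.
def percept(world: dict, location: str) -> tuple:
    return (location, world[location])

world = {"A": "Dirty", "B": "Clean"}
print(percept(world, "A"))   # ('A', 'Dirty'), as on the slide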
Rational agents: definition
• Rational Agent: For each possible percept
sequence, a rational agent should select an
action that is expected to maximize its
performance measure, given the evidence
provided by the percept sequence and whatever
built-in knowledge the agent has.
• Performance measure: An objective criterion for
success of an agent's behavior
– E.g., performance measure of a vacuum-cleaner
agent could be amount of dirt cleaned up, amount of
time taken, amount of electricity consumed, amount of
noise generated, etc.
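As a sketch, one way such a measure could be coded for the vacuum world (the weights are illustrative assumptions, not from the slide):

# Reward dirt cleaned; penalize time, electricity, and noise.
def performance(dirt_cleaned, steps, energy, noise,
                w_dirt=10.0, w_step=1.0, w_energy=0.5, w_noise=0.1):
    return (w_dirt * dirt_cleaned
            - w_step * steps - w_energy * energy - w_noise * noise)

print(performance(dirt_cleaned=2, steps=10, energy=4, noise=3))  # 7.7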
Rational agents: issues
• Rationality is distinct from omniscience (all-knowing
with infinite knowledge)
• Agents can perform actions in order to obtain
useful information or to modify future percepts
(information gathering, exploration)
– Not for immediate performance maximization
• An agent is autonomous if its behavior is
determined by its own experience (with ability to
learn and adapt)
– Becomes independent of the prior knowledge provided by its
designer
PEAS: formalization
• PEAS: Performance measure, Environment,
Actuators, Sensors
• Must first specify the setting for intelligent agent
design
• Consider, e.g., the task of designing an
automated taxi driver:
– Performance measure
– Environment
– Actuators
– Sensors
PEAS
• Automated taxi driver:
– Performance measure: Safe, fast, legal,
comfortable trip, maximize profits
– Environment: Roads, other traffic, pedestrians,
customers
– Actuators: Steering wheel, accelerator, brake,
signal, horn
– Sensors: Cameras, sonar, speedometer, GPS,
odometer, engine sensors, keyboard
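A PEAS description is just a structured record; a minimal Python rendering of the taxi example (my own sketch):

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer"],
)
print(taxi.actuators)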
Environment types
• Fully observable (vs. partially observable): An agent's
sensors give it access to the complete state of the
environment at each point in time
• Deterministic (vs. stochastic): The next state of the
environment is completely determined by the current state
and the action executed by the agent
– In the partially observable case, the environment may appear stochastic
– If the environment is deterministic except for the actions of other
agents, then the environment is strategic
• Episodic (vs. sequential): The agent's experience is
divided into atomic "episodes"
– each episode consists of the agent perceiving and then performing
a single action
– the choice of action in each episode depends only on the episode
itself
Environment types
• Static (vs. dynamic): The environment is unchanged while
an agent is deliberating
– The environment is semidynamic if the environment itself does not
change with the passage of time but the agent's performance
score does
• Discrete (vs. continuous): A limited number of distinct,
clearly defined percepts and actions
• Single agent (vs. multiagent): An agent operating by itself
in an environment
– Is entity B an agent or merely a stochastically behaving object?
• B is an agent if it maximizes a performance measure that
depends on agent A’s behavior
– Multiagent settings
• Competitive vs. cooperative
• Communication
• Which combination of the 6 categories is hardest?
Environment types
                     Chess with   Chess without   Taxi
                     a clock      a clock         driving
Fully observable     Yes          Yes             No
Deterministic        Strategic    Strategic       No
Episodic             No           No              No
Static               Semi         Yes             No
Discrete             Yes          Yes             No
Single agent         No           No              No
• The environment type largely determines the agent design
• The real world is (of course) partially observable, stochastic,
sequential, dynamic, continuous, multi-agent
Agent functions and programs
• An agent is completely specified by the
agent function mapping percept
sequences to actions
• One agent function (or a small
equivalence class) is rational
• Aim: find a way to implement the rational
agent function concisely
Table-lookup agent
function Table-Driven-Agent(percept) returns an action
  static: percepts, a sequence  // initially empty
          table, a table of actions  // indexed by percept sequences
  append percept to the end of percepts
  action <- LookUp(percepts, table)
  return action
• Drawbacks:
– Huge table
– Takes a long time to build the table
– No autonomy
– Even with learning, needs a long time to learn the table
entries
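A runnable Python rendering of the pseudocode above (a sketch; the table entries shown are illustrative for the vacuum world and far from complete):

def make_table_driven_agent(table):
    percepts = []                         # the growing percept sequence
    def agent(percept):
        percepts.append(percept)
        return table[tuple(percepts)]     # index the table by the whole history
    return agent

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    # ... an entry for every possible percept sequence, hence the huge table
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))   # 'Right'
print(agent(("B", "Dirty")))   # 'Suck'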
Agent types
• Four basic types in order of increasing
generality:
– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents
• How can these be converted into learning agents?
Simple reflex agents
function Simple-Reflex-Agent(percept) returns an action
  static: rules, a set of condition-action rules
  state <- Interpret-Input(percept)
  rule <- Rule-Match(state, rules)
  action <- Rule-Action(rule)
  return action
• A condition-action rule
– If (condition) then (do a specific action)
• Interpret-Input: generates an abstract description of the current
state from the percept
• The agent’s intelligence is limited
– It works well when environments are fully observable
– If partially observable, problems such as infinite loops can occur
– Randomized actions can escape from infinite loops (see the sketch below)
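A runnable sketch of a simple reflex agent for the vacuum world, including the randomized escape just mentioned (identifiers are mine):

import random

def reflex_vacuum_agent(percept):
    location, status = percept            # Interpret-Input is trivial here
    if status == "Dirty":                 # condition-action rule
        return "Suck"
    return "Right" if location == "A" else "Left"

def randomized_reflex_agent(percept):
    _, status = percept
    if status == "Dirty":
        return "Suck"
    return random.choice(["Left", "Right"])  # random move breaks infinite loops

print(reflex_vacuum_agent(("A", "Dirty")))   # 'Suck'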
Model-based reflex agents
function Model-Based-Reflex-Agent(percept) returns an action
  static: state, a description of the current world
          rules, a set of condition-action rules
          action, the most recent action
  state <- Update-State(state, action, percept)
  rule <- Rule-Match(state, rules)
  action <- Rule-Action(rule)
  return action
• To handle partial observability, agent keeps track of the
part of the world it cannot see now
– Internal state
• Tries to model the world in two ways
– How the world evolves independently of the agent
– How the agent’s actions affect the world
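A minimal sketch of such an agent for the vacuum world (my own code): the internal state remembers what the agent believes about the square it cannot currently see.

def make_model_based_vacuum_agent():
    believed = {"A": "Unknown", "B": "Unknown"}   # internal model of the world

    def agent(percept):
        location, status = percept
        believed[location] = status               # how percepts update the model
        other = "B" if location == "A" else "A"
        if status == "Dirty":
            believed[location] = "Clean"          # model how Suck affects the world
            return "Suck"
        if believed[other] != "Clean":            # travel only if it might be dirty
            return "Right" if location == "A" else "Left"
        return "NoOp"                             # model says everything is clean
    return agent

agent = make_model_based_vacuum_agent()
print(agent(("A", "Dirty")))   # 'Suck'
print(agent(("A", "Clean")))   # 'Right' (B still unknown)
print(agent(("B", "Clean")))   # 'NoOp'  (both squares believed clean)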
(model-based) Goal-based agents
• The agent needs goal information that describes desirable situations
• Considers the future
• Uses search and planning to find action sequences that achieve the
goal (see the sketch below)
• Less efficient but more flexible
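A toy illustration of the idea (my own sketch): breadth-first search for an action sequence reaching the goal "both squares clean" in the vacuum world.

from collections import deque

def successors(state):
    location, dirt = state                     # dirt: frozenset of dirty squares
    yield "Suck", (location, dirt - {location})
    yield "Right", ("B", dirt)
    yield "Left", ("A", dirt)

def plan(start):
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if not state[1]:                       # goal test: no dirty squares left
            return actions
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))

print(plan(("A", frozenset({"A", "B"}))))      # ['Suck', 'Right', 'Suck']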
Utility-based agents
• Goals alone are sometimes not enough to generate high-quality
behavior
– Goals are often a binary distinction, e.g. happy vs. unhappy
• A utility function maps a state (or its sequence) onto a real
number, e.g. the degree of happiness
– Can provide a tradeoff between conflicting goals e.g. speed vs.
security
• With multiple goals, the likelihood of success of each goal can
be weighed against the importance of the goals
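A tiny sketch of the idea (the weights and numbers are illustrative assumptions): a real-valued utility trades trip speed off against risk, something a binary goal cannot express.

def utility(speed, risk, w_speed=1.0, w_risk=5.0):
    return w_speed * speed - w_risk * risk

routes = {"highway": (1.0, 0.12), "back_roads": (0.6, 0.02)}
best = max(routes, key=lambda r: utility(*routes[r]))
print(best)   # 'back_roads': utility 0.5 beats the highway's 0.4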
Learning agents
• So far, we talked about various methods for selecting
actions
– We have not explained how the agent programs come into being
• 4 major components
– Learning element is responsible for making improvements
• Percepts alone give no indication of how well the agent is doing
• Uses feedback from the critic
– Critic tells the learning agent how well the agent is doing
• in terms of performance standard
• Note that performance standard is fixed
– Performance element is responsible for selecting external actions
• This is the agent in the previous slides
• Takes percept and decides on actions
– Problem generator is responsible for suggesting actions that will
lead to new and informative experiences
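A schematic wiring of the four components (all names here are my own illustrative choices, not AIMA code):

import random

class LearningAgent:
    def __init__(self):
        self.rules = {}                           # knowledge used by the performance element

    def performance_element(self, percept):
        return self.rules.get(percept, "NoOp")    # select an external action

    def critic(self, percept, action):
        # Judge the action against the fixed performance standard.
        return 1.0 if percept[1] == "Dirty" and action == "Suck" else 0.0

    def learning_element(self, percept, action, feedback):
        if feedback > 0:
            self.rules[percept] = action          # reinforce what the critic rewarded

    def problem_generator(self):
        # Occasionally suggest a new action to gain informative experience.
        return random.choice(["Left", "Right", "Suck"])

    def step(self, percept):
        action = self.performance_element(percept)
        if random.random() < 0.1:                 # explore 10% of the time
            action = self.problem_generator()
        self.learning_element(percept, action, self.critic(percept, action))
        return action

agent = LearningAgent()
print(agent.step(("A", "Dirty")))   # mostly 'NoOp' at first, 'Suck' once learned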
Reference
• Stuart Russell and Peter Norvig, Artificial Intelligence:
A Modern Approach, Second Edition, Prentice Hall
Potential project ideas
• Realty problem
• Education or Entrance exam problem