Note 1: Introduction
Artificial Intelligence
Information
• Instructor – Johnson Thomas
• 325 North Hall
• Tel: 918 594 8503
• E-mail: [email protected]
• Office hours:
– Monday 5-7pm
– Tuesday 2-4pm
– Alternates between Tulsa and Stillwater
Information
• TA – Zang, Yihong
• E-mail: [email protected]
• Office hours – to be decided
Information
• Book – Artificial Intelligence: A Modern Approach, Russell and Norvig, Prentice Hall
Information
• Practical x 3 – 45 marks (3 x 15)
• Homework x 3 – 30 marks (3 x 10)
• Mid-Term – 30 marks
• Final Examination – 35 marks
Artificial Intelligence
Introduction
• What is AI?
• Many definitions
• Thought processes and reasoning
• Behavior
• Thought and behavior like humans
• Ideal thought and behavior – rationality
– A system is rational if it does the ‘right thing’ given what it knows
Introduction
• Systems that think like humans
– Computers with minds
– Automation of activities that we associate with human thinking, e.g. decision-making, problem solving, learning, …
• Systems that act like humans
– Machines that perform the functions that require
intelligence when performed by people
– Make computers do things at which, at the moment,
people are better
Introduction
• Systems that think rationally
– The study of mental faculties through the use
of computational models
– The study of the computations that make it
possible to perceive, reason, and act
• Systems that act rationally
– Study of the design of intelligent agents
– Concerned with intelligent behavior in artifacts
Introduction
• Acting humanly – Turing test
• The computer passes the test if a human
interrogator, after posing some written
questions, cannot tell whether the written
responses come from a person or not.
Introduction
• Therefore, the computer would need to possess the following:
– Natural language processing
• Communication
– Knowledge representation
• Store knowledge
– Automated reasoning
• Use stored information to answer questions and draw new
conclusions
– Machine learning
• Adaptation and pattern recognition/extrapolation
Introduction
• Turing test – no physical interaction
• Total Turing test
– Interrogator can test subject’s perceptual
abilities
• Need computer vision to perceive objects
– Interrogator can pass physical objects
• Need robotics to manipulate objects and move about
Introduction
• Thinking humanly – cognitive modeling
• Understand the workings of human minds
– Once we have a theory of the mind, we can
express the theory as a computer program
• Cognitive science – brings together
computer models from AI and
experimental techniques from psychology
Introduction
• Thinking rationally – laws of thought
• Logic
• Difficult in practice
• Knowledge can be informal
Introduction
• Acting rationally – rational agents
• Agent – something that acts
– More than just a program
– Can perceive its environment
– Can adapt to change
– Operates under autonomous control
– Persists over a prolonged time
– Can take on another’s goals
• Rational agent – acts to achieve the best outcome possible
History
Subjects which have contributed to AI
• Philosophy – 428 BC to present
• Mathematics – c. 800 to present
• Economics – 1776 to present
• Neuroscience – 1861 to present
• Psychology – 1879 to present
• Computer engineering – 1940 to present
• Control theory and cybernetics – 1948 to present
• Linguistics – 1957 to present
History
• First AI work – McCulloch and Pitts (1943)
– Model of artificial neurons
– Based on
• Physiology and functions of the brain
• Analysis of propositional logic
• Turing’s theory of computation
– Each neuron is ‘on’ or ‘off’
– Neuron switches ‘on’ in response to
stimulation by a sufficient number of
neighboring neurons
History
• Hebb demonstrated an updating rule for
modifying the connection strengths
between neurons
• Minsky and Edmonds built the first neural network computer – 1950
• Turing’s article in 1950 – “Computing Machinery and Intelligence” – introduced the Turing test, machine learning, genetic algorithms, and reinforcement learning
History
• Birthplace of AI – meeting organized by
McCarthy at Dartmouth college in 1956.
• Some early programs
– Logic Theorist (LT) for proving theorems
– General Problem Solver (GPS) – imitates human problem-solving protocols
– Physical symbol system hypothesis – any system exhibiting intelligence must operate by manipulating data structures composed of symbols
History
• Geometric theorem prover (1959)
• Program for checkers
• McCarthy (1958)
– Defined Lisp
– Invented time sharing
– Advice taker program – first program to embody
general knowledge of the world
• Minsky – a number of micro world projects that
require intelligence
– Blocks world – rearrange blocks in a certain way,
using a robot hand that can pick up one block at a
time
History
• Vision project
• Vision and constraint propagation
• Learning theory
• Natural language understanding
• Planning
• Perceptrons
History
• Use of domain specific knowledge that allows
larger reasoning steps
• Dendral program – inferring molecular structure
from the information provided by a mass
spectrometer
• Expert systems for medical diagnosis – MYCIN
– No theoretical framework
– Knowledge acquired by interviewing experts
– Rules reflected uncertainty associated with medical
knowledge
History
• First commercial expert system R1 – DEC
– For configuring computers
• Every major U.S. corporation had expert
systems
• Japanese Fifth Generation project
• US Microelectronics and Computer Technology Corporation (MCC)
• Britain’s Alvey programme
History
• Hopfield – statistical approaches for neural
networks
• Hinton – neural net models of memory
• So-called connectionist models (as opposed to symbolic models promoted earlier)
History
• AI is now a science – based on rigorous
theorems or hard experimental evidence
Applications
• Autonomous planning and scheduling
– NASA’s Remote Agent program
• Controls scheduling of operations for a spacecraft
• Plans are generated from high level goals
specified from the ground.
• Monitors operation of the spacecraft as the plans
are executed
• Game playing
– IBM’s Deep Blue – defeated the world chess champion
Applications
• Autonomous control
– ALVINN computer vision system to steer a car
• Diagnosis
– Medical diagnosis programs based on
probabilistic analysis
• Logistics planning
– Logistics planning and scheduling for transportation in the Gulf War
Applications
• Robotics
• Language understanding and problem
solving
Intelligent agents
Agents
• Agent – anything that can be viewed as
perceiving its environment through
sensors and acting upon that environment
through actuators
• An agent is a function from percept histories to actions:
f : P* → A
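A minimal Python sketch of this view of an agent (the string percept/action types and the sample agent are illustrative assumptions, not from the slides):

from typing import Callable, Sequence

Percept = str   # assumption: percepts modeled as strings
Action = str

# The agent function f : P* -> A maps a whole percept history to an action.
AgentFunction = Callable[[Sequence[Percept]], Action]

# Example: an agent that decides using only its most recent percept.
def last_percept_agent(history: Sequence[Percept]) -> Action:
    return "act-on-" + history[-1]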
Agents
• Human agent
– Ears, eyes, … for sensors
– Hands, legs, … for actuators
• Robotic agent
– Cameras … for sensors
– Motors … for actuators
• Software agents
– Keystrokes, file contents, network packets … as
sensory inputs
– Display on the screen, writing files, sending network
packets …
[Figure: agent–environment schematic – the environment delivers percepts through the agent’s sensors to the agent program (“?”), which chooses actions carried out by the actuators]
Agents
• Percept – refers to the agent’s perceptual inputs at any given instant
• Percept sequence – complete history of
everything the agent has perceived
• An agent’s choice of action at any given instant
can depend on the entire percept sequence
observed to date
• Agent function - maps any given percept
sequence to an action. This function describes
the behavior of an agent
Agents
• Agent function - an abstract mathematical
description
• Agent program – an implementation
Agents
• Vacuum cleaner world
– Two locations: squares A and B
– Vacuum agent perceives which square it is in and whether there is dirt in the square
– Vacuum agent can choose to move left, move right, suck up dirt, or do nothing
– Agent function – if the current square is dirty, then suck; otherwise move to the other square
– Tabulation of agent function
[Figure: the two-square vacuum world, squares A and B]
Agents
• Partial tabulation of simple agent function
Percept sequence                          Action
[A, Clean]                                Right
[A, Dirty]                                Suck
[B, Clean]                                Left
[B, Dirty]                                Suck
[A, Clean], [A, Clean]                    Right
[A, Clean], [A, Dirty]                    Suck
…                                         …
[A, Clean], [A, Clean], [A, Clean]        Right
[A, Clean], [A, Clean], [A, Dirty]        Suck
…                                         …
Agents
• Rational agent – does the right thing, that is,
every entry in the table for the agent function is
filled out correctly
• Performance measure – criterion for success of
an agent’s behavior
– Sequence of actions based on percepts it received
– Sequence of actions causes the environment to go
through a sequence of states
– If the sequence is desirable, the agent has performed well
Agents
• Vacuum cleaner agent – measure performance by the amount of dirt cleaned up in a single eight-hour shift
• Clean, dump dirt, clean, dump dirt, …? – an agent could maximize this measure by repeatedly dumping and re-cleaning the same dirt
• More suitable performance measure -reward
agent for having a clean floor
• Design performance measures according to
what one wants in the environment - not
according to how one thinks the agent should
behave
Agents
• Rationality
• What is rational at any given time depends
on
– Performance measure that defines the criterion of success
– Agent’s prior knowledge of the environment
– Actions that the agent can perform
– Agent’s percept sequence to date
Agents
• Definition
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has
Agents
• Vacuum cleaner agent – cleans the square
if it is dirty and moves to the other square
if it is not
• Is this a rational agent?
Agents
• What is its performance measure?
– Performance measure awards one point for each clean square at each time step, over a lifetime of 1000 time steps
• What is known about the environment?
– Geography of environment is known a priori.
Dirt distribution and initial location of agent
are not known. Clean squares stay clean.
Sucking cleans the current square. Left and
right move the agent
Agents
• What sensors and actuators does it have?
– Available actions – Left, Right, Suck, NoOp
– Agent perceives its location and whether that location contains dirt
• Under these conditions, the agent is
rational
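As a sanity check, here is a hypothetical Python simulation of this task environment; the dynamics follow the assumptions above (clean squares stay clean, Suck cleans the current square, Left and Right move the agent), and one point is scored per clean square per time step:

import random

def simulate(steps: int = 1000) -> int:
    """Run the reflex vacuum agent and return its performance score."""
    dirt = {"A": random.choice([True, False]),   # dirt distribution unknown
            "B": random.choice([True, False])}
    loc = random.choice(["A", "B"])              # initial location unknown
    score = 0
    for _ in range(steps):
        if dirt[loc]:
            dirt[loc] = False                    # Suck cleans current square
        else:
            loc = "B" if loc == "A" else "A"     # move to the other square
        score += sum(not d for d in dirt.values())  # point per clean square
    return score

print(simulate())   # at most 2 * 1000 = 2000 points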
Agents
• Note:
– rationality maximizes expected performance
– Perfection maximizes actual performance
• Rational agent
– Gathers information
– Learns as much as possible from what it
perceives
Agents
• Task of computing the agent function
– When the agent is being designed, some of
the computation is done by its designers
– When it is deliberating on its next action, the
agent does more computation
– The agent learns from experience – it does
more computation as it learns
Agents
• Autonomous – the agent should learn what it can to compensate for partial or incorrect prior knowledge
– Vacuum cleaning agent that learns to foresee
where and when additional dirt will appear will
do better than one that does not
• Agent will have some initial knowledge as
well as an ability to learn. Initially it will
make mistakes.
Agents
• Specifying the task environment
– Vacuum cleaning agent
• Performance measure, Environment, Actuators, Sensors (PEAS) – this is the task environment
– Automated taxi driver
• Performance measure – getting to the correct destination; minimizing fuel consumption, wear and tear, trip time, cost, traffic law violations; maximizing safety, comfort, profits
• Environment – variety of roads, traffic, pedestrians, road works, snow, etc.; interaction with the passengers
Agents
• Actuators – accelerator, steering, braking; output to
a display screen or voice synthesizer;
communicate with other vehicles
• Sensors – controllable cameras, speedometer, odometer, accelerometer, engine and other system sensors, GPS, sensors to detect distances to other cars and obstacles, keyboard or microphone for the passenger to request a destination
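One hypothetical way to record a PEAS description in code, with the taxi’s entries abbreviated from the slides (the PEAS class itself is an invention for illustration):

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

taxi = PEAS(
    performance=["correct destination", "minimize fuel/wear/time/cost",
                 "maximize safety, comfort, profits"],
    environment=["roads", "traffic", "pedestrians", "road works",
                 "passengers", "snow"],
    actuators=["accelerator", "steering", "braking",
               "display/voice output", "vehicle-to-vehicle communication"],
    sensors=["cameras", "speedometer", "odometer", "accelerometer",
             "engine sensors", "GPS", "distance sensors",
             "keyboard/microphone"],
)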
Agents
• Properties of task environments
– Fully observable vs partially observable
• Fully observable – if the agent’s sensors give it access to the complete state of the environment at each point in time
• Partially observable – because of noisy and
inaccurate sensors or because parts of the state
are missing from the sensor data
Agents
– Deterministic vs stochastic
• Deterministic – if the next state of the environment
is completely determined by the current state and
the action executed by the agent
• Otherwise stochastic – taxi driving – cannot predict
traffic
– Episodic vs sequential
• Episodic – the agent’s experience is divided into atomic episodes. An episode consists of the agent perceiving and then performing a single action. The next episode does not depend on the actions taken in previous episodes
• Sequential – current decision can affect future
decisions
Agents
– Static vs dynamic
• Dynamic – if the environment can change while an agent is deliberating
• Static otherwise
– Discrete vs continuous
• Time
• Percepts
• Actions
– Single agent vs multiagent
• Aim is to maximize performance
• Competitive multiagent environment
• Cooperative multiagent environment
• Communication in multiagent environment
Agents
• Crossword puzzle
– Fully observable, deterministic, sequential, static, discrete, single agent
• Medical diagnosis
– Partially observable, stochastic, sequential,
dynamic, continuous, single agent
– Can be multiagent – deal with patients etc.
Agents
• Structure
– So far talked about behavior
– Agent program implements the agent function
mapping percepts to actions
– This program will run on some sort of
computing device with physical sensors and
actuators (architecture)
agent = architecture + program
Agents
• Skeleton of program
– Input : current percept from sensors
– Output : action to actuators
• Agent function takes entire percept history
as input
Agents
• Four basic kinds of agent programs
– simple reflex agents
– Model based reflex agents
– Goal based agents
– Utility based agents
• Can use a table-lookup approach – but not practical
Agents
function TABLE-DRIVEN-AGENT(percept) returns an action
static: percepts, a sequence, initially empty
        table, a table of actions, indexed by percept sequences
append percept to the end of percepts
action ← LOOKUP(percepts, table)
return action
This approach is doomed to failure – the table of all possible percept sequences would be far too large to store or to fill in correctly
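A direct Python rendering of this pseudocode for the vacuum world shows why: the table must contain an entry for every possible percept sequence, and only the first few are written out here (the names are illustrative):

percepts: list[tuple[str, str]] = []    # the 'static' percept sequence
table = {                               # indexed by whole percept sequences
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
    # ... an entry for every possible percept sequence
}

def table_driven_agent(percept: tuple[str, str]) -> str:
    percepts.append(percept)            # append percept to end of percepts
    return table[tuple(percepts)]       # action ← LOOKUP(percepts, table)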
Agents
• Simple reflex agents
– simplest kind of agent
– Select actions on the basis of the current
percept, ignoring the rest of the percept
history
– Example – vacuum cleaning agent
• Decision based on current location and whether
location contains dirt
Agents
• Code for vacuum cleaning agent
function REFLEX-VACUUM-AGENT([location, status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
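The same agent translated into runnable Python (a sketch; the pseudocode above is the reference):

def reflex_vacuum_agent(percept: tuple[str, str]) -> str:
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"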
Agents
• Condition-action rule
– Example
if car-in-front-is-braking then initiate-braking
Agents
[Figure: simple reflex agent schematic – sensors tell the agent “what the world is like now”; condition-action rules determine “what action I should do now”; actuators execute that action in the environment]
Agents
• More general and flexible approach
– Build a general-purpose interpreter for condition-action rules
– Create rule sets for specific task
environments
Agents
function SIMPLE-REFLEX-AGENT(percept) returns an action
static: rules, a set of condition-action rules
state ← INTERPRET-INPUT(percept)
rule ← RULE-MATCH(state, rules)
action ← RULE-ACTION[rule]
return action
Agents
state ← INTERPRET-INPUT(percept)
• This function generates an abstracted description of the current state from the percept
rule ← RULE-MATCH(state, rules)
• This function returns the first rule in the set of rules that matches the given state description
action ← RULE-ACTION[rule]
• This returns the action associated with the matched rule
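Putting the three steps together in Python; the state abstraction and the single braking rule are illustrative assumptions:

rules = [
    # (condition on the abstracted state, action)
    (lambda s: s.get("car_in_front_is_braking"), "initiate-braking"),
]

def interpret_input(percept: dict) -> dict:
    """Generate an abstracted state description from the percept."""
    return dict(percept)

def rule_match(state: dict, rule_set):
    """Return the first rule that matches the given state description."""
    for rule in rule_set:
        condition, _action = rule
        if condition(state):
            return rule
    return None

def simple_reflex_agent(percept: dict):
    state = interpret_input(percept)
    rule = rule_match(state, rules)
    return rule[1] if rule else None    # action ← RULE-ACTION[rule]

# simple_reflex_agent({"car_in_front_is_braking": True}) -> "initiate-braking"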
Agents
• The correct decision can be made on the basis of only the current percept – only if the environment is fully observable
• Suppose vacuum agent is deprived of its location
sensor and has only a dirt sensor
• 2 possible percepts – [Dirty], [Clean]
• It can Suck in response to Dirty
• What about Clean?
• Moving Left fails (forever) if it happens to start in
square A
• Moving Right fails (forever) if it happens to start in
square B
Agents
• Will only work if the environment is fully observable; otherwise infinite loops may occur
• Can exit from infinite loops if the agent
can randomize its actions
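For the location-blind vacuum agent above, randomization is a one-line change (a sketch):

import random

def randomized_reflex_vacuum_agent(status: str) -> str:
    """With no location sensor, flip a coin on Clean so the agent
    cannot get stuck moving forever against a wall."""
    if status == "Dirty":
        return "Suck"
    return random.choice(["Left", "Right"])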
Agents
• Model based reflex agents
• Partial observability – the agent must keep track of the part of the world it cannot see now
• That is, the agent maintains internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state
Agents
• Example – changing lanes – the agent needs to keep track of where the other cars are if it cannot see them all at once
Agents
• Updating internal state information as time goes by
– Needs some information about how the world evolves independently of the agent
• An overtaking car generally will be closer than it was a moment ago
– Needs some information about how the agent’s own actions affect the world
• When the agent turns the steering wheel clockwise, the car turns to the right
• This knowledge about ‘how the world works’ is a model of the world
• An agent that uses such a model is a model-based agent
Agents
function REFLEX-AGENT-WITH-STATE(percept) returns an action
static: rules, a set of condition-action rules
        state, a description of the current world state
        action, the most recent action
state ← UPDATE-STATE(state, action, percept)
rule ← RULE-MATCH(state, rules)
action ← RULE-ACTION[rule]
return action
Agents
state ← UPDATE-STATE(state, action, percept)
• Responsible for creating the new internal state description
– Interprets the new percept in the light of the existing knowledge about the state
– Uses information about how the world evolves to keep track of the unseen parts of the world
– Knows about what the agent’s actions do to the state of the world
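A self-contained Python sketch of this architecture; the rule set is illustrative, and update_state merely merges the percept, standing in for a real model of how the world evolves:

rules = [(lambda s: s.get("dirty"), "Suck")]    # illustrative rule set

def rule_match(state: dict, rule_set):
    for condition, action in rule_set:
        if condition(state):
            return action
    return None

state: dict = {}
last_action = None

def update_state(state: dict, action, percept: dict) -> dict:
    """Combine the old state, the most recent action, and the new
    percept into a new internal state description."""
    new_state = dict(state)
    new_state.update(percept)           # interpret the new percept
    new_state["last_action"] = action   # track what our own actions did
    return new_state

def reflex_agent_with_state(percept: dict):
    global state, last_action
    state = update_state(state, last_action, percept)
    last_action = rule_match(state, rules)
    return last_action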
Agents
• Knowing about the current state of the
environment is not always enough to decide
what to do
• The agent needs some sort of goal information as well
• The agent program combines this with information about the results of possible actions (the same information as was used to update internal state in the reflex agent) to choose actions that achieve the goal
Agents
• Searching and planning are needed to find action sequences that achieve the agent’s goals
Agents
• Certain goals can be reached in different
ways.
– Some are better, have a higher utility.
• Utility function maps a (sequence of)
state(s) onto a real number.
– Describes the associated degree of
‘happiness’
Agents
• Improves on goals:
– Selecting between conflicting goals
• Speed and safety – the utility function specifies the appropriate tradeoff
– Selecting appropriately between several goals based on likelihood of success
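A toy illustration of the speed/safety tradeoff; the utility weights, the predict() model, and the action names are all invented for the example:

def utility(state: dict) -> float:
    """Degree of 'happiness' for a state: safety weighted above speed."""
    return 2.0 * state.get("safety", 0.0) + 1.0 * state.get("speed", 0.0)

def predict(state: dict, action: str) -> dict:
    """Toy model of each action's resulting state."""
    outcomes = {
        "drive-fast":      {"speed": 1.0, "safety": 0.3},
        "drive-carefully": {"speed": 0.5, "safety": 0.9},
    }
    return outcomes[action]

def choose_action(state: dict, actions: list[str]) -> str:
    """Pick the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda a: utility(predict(state, a)))

# choose_action({}, ["drive-fast", "drive-carefully"]) -> "drive-carefully"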
Agents
• All previous agent programs describe methods for selecting actions
– Yet they do not explain the origin of these programs
– Learning mechanisms can be used to perform this task
– Teach them instead of instructing them
– Advantage is the robustness of the program toward initially unknown environments
Agents
• Learning agent – 4 components
– Learning element: introduce improvements in
performance element.
• Critic provides feedback on the agent’s performance based on a fixed performance standard
– Performance element: selecting actions based on
percepts.
• Corresponds to the previous agent programs
– Problem generator: suggests actions that will lead
to new and informative experiences.
• Exploration vs. exploitation
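A skeleton of how the four components might be wired together; all names and the control flow are illustrative, not a standard design:

class LearningAgent:
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # selects actions
        self.learning_element = learning_element        # improves the above
        self.critic = critic                  # feedback vs. a fixed standard
        self.problem_generator = problem_generator      # suggests experiments

    def step(self, percept):
        feedback = self.critic(percept)
        self.learning_element(self.performance_element, feedback)
        # Sometimes explore (run an informative experiment), else exploit.
        experiment = self.problem_generator(percept)
        return experiment if experiment else self.performance_element(percept)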
Agents
• Automated taxi agent
– Performance element – collection of knowledge and
procedures that taxi has for selecting its driving
actions
• Taxi drives using this performance element
– Critic observes the world and passes information to
the learning element
• Taxi makes quick left turn across three lanes of traffic; critic
observes abuse by other drivers.
– Learning element formulates a rule saying this is a
bad action
– Performance element is modified by installing the
new rule
– Problem generator identifies certain areas of behavior in need of improvement and suggests experiments