Transcript PowerPoint

CS 416
Artificial Intelligence
Lecture 9
Logical Agents
Chapter 7
Chess Article
Garry Kasparov reflects on computerized chess
• IBM should have released the contents of Deep Blue to the chess
community to advance research on computation as it relates to chess
• Kudos to Deep Junior for putting its information in the public domain so the
state of the art can advance
• Deep Blue made one good move that surprised Kasparov (though he
thinks a person was in the loop)
• Deep Junior made a fantastic sacrifice that reflects a new
accomplishment for computerized chess
http://www.opinionjournal.com/extra/?id=110003081
Where are we?
We’ve studied classes of search problems
• Find the best sequence of actions
– Uninformed search (BFS, Iterative Deepening)
– Informed search (A*)
• Find the best value of something (possibly a sequence)
– Simulated annealing, genetic algorithms, hill climbing
• Find the best action in an adversarial setting
Logical Agents
What do we mean by “logical”?
• Aren’t search-based chess programs logical?
– Yes, but knowledge is used in a very specific way
▪ Win the game
▪ Not useful for extracting strategies or understanding
other aspects of chess
• We want to develop more general-purpose knowledge
systems that support a variety of logical analyses
Why study knowledge-based agents?
Partially observable environments
• Combine available information (percepts) with general knowledge to
select actions
Natural Language
• Language is complex and ambiguous; problem-solving agents are
impeded by the high branching factor
Flexibility
• Knowledge can be reused for novel tasks. New knowledge can be
added to improve future performance.
Components of knowledge-based agent
Knowledge Base
• Store information
– knowledge representation language
• Add information (Tell)
• Retrieve information (Ask)
• Perform inference
– derive new sentences (knowledge) from existing
sentences
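A minimal sketch of this Tell/Ask interface in Python (the class and function names are illustrative, not from the lecture), with the inference procedure left as a plug-in:

class KnowledgeBase:
    def __init__(self, inference_procedure):
        self.sentences = []                  # stored in some representation language
        self.infer = inference_procedure     # e.g., model checking

    def tell(self, sentence):
        # Add information to the knowledge base
        self.sentences.append(sentence)

    def ask(self, query):
        # Retrieve information: is the query entailed by what we know?
        return self.infer(self.sentences, query)

A knowledge-based agent then alternates: Tell the KB what it perceives, Ask the KB what action to take.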
The wumpus world
A scary world, indeed
• A maze in a cave
• A wumpus who will eat you
• One arrow that can kill the wumpus
• Pits that can entrap you (but not the
wumpus, for it is too large to fall in)
• A heap of gold somewhere
But you have sensing and action
Sensing (each is either on or off – a single bit)
• the wumpus emits a stench in adjacent squares
• pits cause a breeze in adjacent squares
• gold causes a glitter you see when in its square
• walking into a wall causes a bump
• the death of the wumpus can be heard everywhere in the world
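For illustration (not from the slides), one percept can be bundled as five bits:

# One wumpus-world percept: (stench, breeze, glitter, bump, scream), each a single bit
percept = (1, 0, 0, 0, 0)          # e.g., a stench but nothing else
stench, breeze, glitter, bump, scream = percept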
But you have sensing and action
Action
• You can turn left or right 90 degrees
• You can move forward
• You can shoot an arrow in your facing direction
An example
[Wumpus-world walkthrough figures omitted from the transcript]
Our agent played well
• Used inference to relate two different percepts observed from
different locations
• Agent is guaranteed to draw correct conclusions if percepts
are correct
Knowledge Representation
A representation language must have a well-defined syntax and semantics
Syntax
• the formal specification of how information is stored
– a+2=c
(typical mathematical syntax)
– a2y += (not legal syntax for infix (regular math) notation)
Semantics
• the meaning of the information
– a+2=c
(c is a number whose value is 2 more than a)
– a+2=c
(the symbol that comes two after ‘a’ in the alphabet is ‘c’)
Logical Reasoning
Entailment
• one sentence follows logically from another
– a ⊨ b
– the sentence a entails the sentence b
• a entails b (a ⊨ b) if and only if, in every model in which a
is true, b is also true
Model: a description of the world where every
relevant sentence has been assigned truth or
falsehood
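In the textbook's notation (a standard formulation rather than something shown on this slide), entailment can be stated in terms of sets of models:

\[
a \models b \quad\text{if and only if}\quad M(a) \subseteq M(b),
\]

where \(M(a)\) denotes the set of all models in which \(a\) is true.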
An example
After one step in the wumpus
world
• The knowledge base (KB) is
– the set of all game states that are
possible given:
▪ the rules of the game
▪ the percepts
• How does the KB represent the
game?
Building the KB
Consider a KB intended to represent the presence
of pits in a Wumpus world where [1,1] is clear and
[2,1] has a breeze
• There are three cells, each with
two conditions (pit or no pit)
• 2³ = 8 possible models
• According to the percepts and
rules, the KB is well defined
Model Checking
The agent wishes to check all models of the game
in which pits may occupy the three candidate spots
• Enumerate all models
where the three candidate
spots may have pits
• 3 candidate spots, two conditions each
• 2³ = 8 models
Checking entailment
Does the KB entail “a1: There is no pit in [1,2]”?
• Enumerate all models where
a1 is true
• In every model where the KB
is true, a1 is true also
• The KB entails a1
Checking entailment
Does the KB entail “a2: There is no pit in [2,2]”?
• Enumerate all models
where a2 is true
• In the models where the KB
is true, a2 is not always
true
• The KB does not entail a2
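A sketch of this eight-model check in Python (the encoding of the rules and the function names are my own, based on the percepts above: no breeze in [1,1], breeze in [2,1]):

from itertools import product

def kb(p12, p22, p31):
    # Models assign pit / no pit to the three frontier squares [1,2], [2,2], [3,1].
    # No breeze in [1,1] rules out a pit in [1,2]; the breeze in [2,1]
    # requires a pit in [2,2] or [3,1] (the visited squares are known clear).
    return (not p12) and (p22 or p31)

def entailed(query):
    models = [m for m in product([False, True], repeat=3) if kb(*m)]
    return all(query(*m) for m in models)

a1 = lambda p12, p22, p31: not p12      # "There is no pit in [1,2]"
a2 = lambda p12, p22, p31: not p22      # "There is no pit in [2,2]"
print(entailed(a1))                     # True:  the KB entails a1
print(entailed(a2))                     # False: the KB does not entail a2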
Logical inference
Entailment permits logical inference
• we inferred new knowledge from what the KB entails
Inference algorithms
• The method of logical inference we demonstrated is called
model checking because we enumerated all possible models to
find the inference
Inference algorithms
There are many inference algorithms
• If an inference algorithm i can derive a from KB, we write KB ⊢i a
(read: a is derived from KB by i)
Inference Algorithms
Sound
• Algorithm derives only entailed sentences
• An unsound algorithm can derive sentences that are not entailed
Complete
• inference algorithm can derive any sentence that is entailed
– Does this mean the algorithm cannot get caught in an
infinite loop?
Propositional (Boolean) Logic
Syntax of allowable sentences
• atomic sentences
– indivisible syntactic elements
– Use uppercase letters to represent a proposition that can
be true or false
– True and False are predefined propositions where True
means always true and False means always false
Complex sentences
Formed from atomic sentences using connectives
• ~ (or ¬ = not): the negation
• ^ (and): the conjunction
• V (or): the disjunction
• => (or ⇒ = implies): the implication
• <=> (or ⇔ = if and only if): the biconditional
Backus-Naur Form (BNF)
Propositional (Boolean) Logic
Semantics
• given a particular model (situation), what are the rules that
determine the truth of a sentence?
• use a truth table to compute the value of any sentence with
respect to a model by recursive evaluation
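A sketch of that recursive evaluation in Python, assuming sentences are represented as nested tuples (a representation chosen here for illustration):

def truth(sentence, model):
    # Evaluate a propositional sentence against a model (dict mapping symbol -> bool)
    if isinstance(sentence, str):                    # atomic proposition
        return model[sentence]
    op, *args = sentence
    if op == 'not':
        return not truth(args[0], model)
    if op == 'and':
        return truth(args[0], model) and truth(args[1], model)
    if op == 'or':
        return truth(args[0], model) or truth(args[1], model)
    if op == 'implies':
        return (not truth(args[0], model)) or truth(args[1], model)
    if op == 'iff':
        return truth(args[0], model) == truth(args[1], model)

# Example: B1,1 <=> (P1,2 V P2,1) in a model with a pit in [2,1] and a breeze in [1,1]
r2 = ('iff', 'B11', ('or', 'P12', 'P21'))
print(truth(r2, {'B11': True, 'P12': False, 'P21': True}))   # True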
Truth table
Example from wumpus
A square is breezy if and only if a neighboring square has a pit
• B1,1 ⇔ (P1,2 V P2,1)
A square is breezy if a neighboring square has a pit
• (P1,2 V P2,1) => B1,1
The former is more powerful and true to the wumpus rules
A wumpus knowledge base
• Initial conditions
– R1: ~P1,1 (no pit in [1,1])
• Rules of breezes (for a few
example squares)
– R2: B1,1 ⇔ (P1,2 V P2,1)
– R3: B2,1 ⇔ (P1,1 V P2,2 V P3,1)
• Percepts
– R4: ~B1,1
– R5: B2,1
We know: R1 ^ R2 ^ R3 ^ R4 ^ R5
Inference
Does the KB entail a (KB ⊨ a)?
• Is there a pit in [1,2]: P1,2?
• Consider only what we need
– B1,1 B2,1 P1,1 P1,2 P2,1 P2,2 P3,1
– 2⁷ = 128 models to check
• For each model, see if KB is true
• For all KB = True, see if a is true
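A sketch of that brute-force check (reusing the truth() evaluator from the earlier sketch; the sentence encoding is my own), enumerating all 2⁷ = 128 assignments to the seven symbols:

from itertools import product

def tt_entails(kb_sentence, query, symbols):
    # KB entails the query iff the query holds in every model where the KB holds
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if truth(kb_sentence, model) and not truth(query, model):
            return False
    return True

symbols = ['B11', 'B21', 'P11', 'P12', 'P21', 'P22', 'P31']
kb_sentence = ('and', ('not', 'P11'),                                        # R1
               ('and', ('iff', 'B11', ('or', 'P12', 'P21')),                 # R2
                ('and', ('iff', 'B21', ('or', 'P11', ('or', 'P22', 'P31'))), # R3
                 ('and', ('not', 'B11'), 'B21'))))                           # R4 ^ R5
print(tt_entails(kb_sentence, ('not', 'P12'), symbols))   # True: no pit in [1,2]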
Model Checking Inference
[Truth table over the relevant models omitted from the transcript]
Concepts related to entailment
logical equivalence
• Two sentences a and b are logically equivalent if they are true in the same set of
models: a ≡ b
validity (or tautology)
• a valid sentence is true in all models
– P V ~P
– deduction theorem: a entails b if and only if the sentence (a => b) is valid
satisfiability
• a sentence is satisfiable if it is true in some model
• a entails b if and only if (a ^ ~b) is unsatisfiable
Logical Equivalences
Know these equivalences
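The table of equivalences itself did not survive the transcript; the standard propositional equivalences it refers to are (all provable with truth tables):

\begin{align*}
(a \land b) &\equiv (b \land a), \quad (a \lor b) \equiv (b \lor a) && \text{commutativity}\\
((a \land b) \land c) &\equiv (a \land (b \land c)) && \text{associativity (similarly for } \lor\text{)}\\
\lnot(\lnot a) &\equiv a && \text{double negation}\\
(a \Rightarrow b) &\equiv (\lnot b \Rightarrow \lnot a) && \text{contraposition}\\
(a \Rightarrow b) &\equiv (\lnot a \lor b) && \text{implication elimination}\\
(a \Leftrightarrow b) &\equiv ((a \Rightarrow b) \land (b \Rightarrow a)) && \text{biconditional elimination}\\
\lnot(a \land b) &\equiv (\lnot a \lor \lnot b), \quad \lnot(a \lor b) \equiv (\lnot a \land \lnot b) && \text{De Morgan}\\
(a \land (b \lor c)) &\equiv ((a \land b) \lor (a \land c)) && \text{distributivity of } \land \text{ over } \lor\\
(a \lor (b \land c)) &\equiv ((a \lor b) \land (a \lor c)) && \text{distributivity of } \lor \text{ over } \land
\end{align*}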
Reasoning w/ propositional logic
Inference Rules
• Modus Ponens:
– Whenever sentences of the form a => b and a are given,
the sentence b can be inferred
▪ R1: Green => Martian
▪ R2: Green
▪ Inferred: Martian
Reasoning w/ propositional logic
Inference Rules
• And-Elimination
– Any of the conjuncts can be inferred
▪ R1: Martian ^ Green
▪ Inferred: Martian
▪ Inferred: Green
Use truth tables if you want to confirm inference
rules
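For example, a quick truth-table confirmation of Modus Ponens in Python (a sketch, not from the slides):

from itertools import product

# Modus Ponens is sound: in every model where both (a => b) and a are true, b is true
sound = all(b
            for a, b in product([False, True], repeat=2)
            if ((not a) or b) and a)          # premises: a => b, and a
print(sound)   # True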
Example of a proof
[Grid diagram omitted: visited squares marked ~B, B, and ~P; frontier squares marked P? (possible pit)]
Example of a proof
[Grid diagram omitted: after applying inference rules, some P? squares are resolved to ~P (no pit)]
Constructing a proof
Proving is like searching
• Find a sequence of logical inference rules that leads to the desired
result
• Note the explosion of propositions
– Good proof methods ignore the countless irrelevant
propositions
Monotonicity of knowledge base
The set of entailed sentences can only grow as the knowledge base grows
• Adding new sentences to the knowledge base cannot invalidate what it already entails
– If (KB entails a) then ((KB ^ b) entails a)
• This is important when constructing proofs
– A conclusion drawn at one point cannot be invalidated by a
subsequently added sentence