Transcript PowerPoint
CS 416
Artificial Intelligence
Lecture 10
Logic
Chapters 7 and 8
Homework Assignment
First homework assignment is out
Due October 20th
Midterm
October 25th
Up through chapter 9 (excluding chapter 5)
Review
Store information in a knowledge base
• Backus-Naur Form (BNF)
• Use equivalences to isolate what you want
• Use inference to relate sentences
– Modus Ponens
– And-Elimination
Constructing a proof
Proving is like searching
• Find a sequence of logical inference rules that leads to the desired result
• Note the explosion of propositions
– Good proof methods ignore the countless irrelevant propositions
The two inference rules are sound but not complete!
Completeness
Recall that a complete inference algorithm is one that can derive any sentence that is entailed
Resolution is a single inference rule
• Guarantees the ability to derive any sentence that is entailed
– i.e. it is complete
• It must be partnered with a complete search algorithm
Resolution
Unit Resolution Inference Rule
• If m and li are complementary literals, then from the clause (l1 V … V li V … V lk) and the unit clause m, infer (l1 V … V li-1 V li+1 V … V lk), i.e. drop li from the clause
Resolution Inference Rule
Also works with two full clauses: resolve on one pair of complementary literals and keep the rest
But make sure each literal appears only once in the resulting clause
• e.g. resolving (A V B) with (A V ~B) gives (A V A); we really just want the resolution to return A
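As a rough Python sketch of this rule (the set-of-literals clause representation and the helper names negate and resolve are my own, not from the lecture):

# A clause is a frozenset of literals; "~A" is the negation of "A".
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    # Return every clause obtained by resolving c1 with c2 on one pair
    # of complementary literals. Using sets means a repeated literal
    # (e.g. A V A) automatically collapses to a single A.
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

# Resolving (A V B) with (A V ~B) yields frozenset({'A'}), i.e. just A.
print(resolve(frozenset({"A", "B"}), frozenset({"A", "~B"})))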
Resolution and completeness
Any complete search algorithm, applying only the resolution rule, can derive any conclusion entailed by any knowledge base in propositional logic
• More specifically, resolution is refutation complete
– It can confirm or refute any given sentence
– It cannot enumerate all entailed sentences
What about “and” clauses?
Resolution only applies to “or” clauses (disjunctions)
• Every sentence of propositional logic can be transformed into a logically equivalent conjunction of disjunctions of literals
Conjunctive Normal Form (CNF)
• A sentence expressed as a conjunction of disjunctions of literals
CNF
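The slide's worked example is not in the transcript; here is an illustrative conversion of my own, B <=> (A V C), following the standard steps:
• Eliminate the biconditional: (B => (A V C)) ^ ((A V C) => B)
• Eliminate the implications: (~B V A V C) ^ (~(A V C) V B)
• Move ~ inward with de Morgan: (~B V A V C) ^ ((~A ^ ~C) V B)
• Distribute V over ^: (~B V A V C) ^ (~A V B) ^ (~C V B)
The result is a conjunction of disjunctions of literals, i.e. CNF.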
An algorithm for resolution
We wish to prove KB entails a
• That is, will some sequence of inferences create new sentences in the knowledge base equal to a?
• If some sequence of inferences creates ~a, we know the knowledge base cannot also entail a
– a ^ ~a is impossible
An algorithm for resolution
To show KB entails a
• Show that (KB ^ ~a) is unsatisfiable
– There is no model in which KB is true and a is false
– Proof by contradiction
An algorithm for resolution
Algorithm
• (KB ^ ~a) is put in CNF
• Each pair of clauses containing complementary literals is resolved to produce a new clause, which is added to the KB (if novel)
– Cease if there are no new clauses to add (a is not entailed)
– Cease if the resolution rule derives the empty clause (a is entailed)
– If resolution generates a itself, then a and ~a resolve to the empty clause
Example of resolution
Proof that there is no pit in [1,2], i.e. ~P1,2:
• KB ^ P1,2 leads to the empty clause
• Therefore ~P1,2 is true
Formal Algorithm
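The formal pseudocode itself did not survive in this transcript; a minimal Python sketch of the loop just described, reusing the resolve helper sketched above (names and representation are my own):

def pl_resolution(kb_clauses, negated_query):
    # Does KB entail alpha? Resolve (KB ^ ~alpha) to a fixed point;
    # deriving the empty clause means the set is unsatisfiable,
    # so alpha is entailed.
    clauses = set(kb_clauses) | set(negated_query)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                if c1 == c2:
                    continue
                for resolvent in resolve(c1, c2):
                    if not resolvent:       # empty clause: contradiction
                        return True
                    new.add(resolvent)
        if new.issubset(clauses):           # fixed point: alpha not entailed
            return False
        clauses |= new

# KB: (P => Q), i.e. (~P V Q), and P.  Query: Q.
kb = {frozenset({"~P", "Q"}), frozenset({"P"})}
print(pl_resolution(kb, {frozenset({"~Q"})}))  # True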
Horn Clauses
Horn Clause
• Disjunction of literals where at most one is positive
– (~a V ~b V ~c V d)
– (~a V b V c V ~d) is not a Horn clause (two positive literals)
Horn Clauses
Why?
• Every Horn clause can be written as an implication:
– (a conjunction of positive literals) => single positive literal
– Inference algorithms can use forward chaining and backward chaining
– Entailment computation is linear in the size of the KB
Horn Clauses
Can be written as a special implication
• (~a V ~b V c) becomes (a ^ b) => c
– (~a V ~b V c) == (~(a ^ b) V c) … de Morgan
– (~(a ^ b) V c) == ((a ^ b) => c) … implication elimination
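A tiny illustrative sketch of this rewrite (the list-of-strings representation and function name are my own); the (premises, conclusion) pairs it produces are used by the chaining sketches below:

def horn_to_implication(clause):
    # Rewrite a Horn clause, e.g. ['~a', '~b', 'c'], as the
    # implication (premises, conclusion): (['a', 'b'], 'c').
    positives = [l for l in clause if not l.startswith("~")]
    assert len(positives) <= 1, "not a Horn clause"
    premises = [l[1:] for l in clause if l.startswith("~")]
    return premises, (positives[0] if positives else None)

print(horn_to_implication(["~a", "~b", "c"]))  # (['a', 'b'], 'c')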
Forward Chaining
Does KB (Horn clauses) entail q (single symbol)?
• Keep track of known facts (positive literals)
• If the premises of an implication are known
– Add the conclusion to the known set of facts
• Repeat process until no further inferences or q is added
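A minimal sketch of this loop, assuming each Horn clause is stored as a (premises, conclusion) pair as above (the agenda-based version in the text runs in linear time; this naive loop just shows the idea):

def forward_chaining(rules, facts, q):
    # rules: list of (premises, conclusion) pairs; facts: known symbols.
    # Fire any rule whose premises are all known until nothing changes.
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return q in known

print(forward_chaining([(["a", "b"], "c")], {"a", "b"}, "c"))  # True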
Forward Chaining
Properties
• Sound
• Complete
– All entailed atomic sentences will be derived
Data Driven
• Start with what we know
• Derive new info until we discover what we want
Backward Chaining
Start with what you want to know, a query (q)
• Look for implications that conclude q
– Look at the premises of those implications
– Then look for implications that conclude those premises…
Goal-Directed Reasoning
• Can require a less complex search than forward chaining
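A recursive sketch under the same (premises, conclusion) representation, with simple loop avoidance added (this minimal version is my own, not the lecture's):

def backward_chaining(rules, facts, q, seen=frozenset()):
    # q holds if it is a known fact, or some rule concludes q
    # and all of that rule's premises can be established.
    if q in facts:
        return True
    if q in seen:                        # guard against circular rules
        return False
    return any(
        all(backward_chaining(rules, facts, p, seen | {q}) for p in premises)
        for premises, conclusion in rules if conclusion == q
    )

print(backward_chaining([(["a", "b"], "c")], {"a", "b"}, "c"))  # True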
Making it fast
Example
• Problem to solve is in CNF
– Is Marvin a Martian --- M == 1 (true)? --- given:
– Marvin is green --- G = 1
– Marvin is little --- L = 1
– (little and green) implies Martian --- (L ^ G) => M
== ~(L ^ G) V M
== ~L V ~G V M
– Proof by contradiction… are there true/false values for G, L, and M that are consistent with the knowledge base and Marvin not being a Martian?
G ^ L ^ (~L V ~G V M) ^ ~M == empty?
Searching for variable values
Want to determine whether any values satisfy:
G ^ L ^ (~L V ~G V M) ^ ~M
• Randomly consider all true/false assignments to the variables until we exhaust them all or find a match (model checking)
– (G, L, M) = (1, 0, 0)… no
= (0, 1, 0)… no
= (0, 0, 0)… no
= (1, 1, 0)… no
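A brute-force sketch of that enumeration (plain Python; the function name is my own):

from itertools import product

def find_model():
    # Try all eight (G, L, M) assignments against
    # G ^ L ^ (~L V ~G V M) ^ ~M; return the first model found.
    for G, L, M in product([True, False], repeat=3):
        if G and L and ((not L) or (not G) or M) and (not M):
            return (G, L, M)
    return None                # no model: the sentence is unsatisfiable

print(find_model())            # None, so M (Marvin is a Martian) is entailed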
• Alternatively…
Backtracking Algorithm
Davis-Putnam-Logemann-Loveland Algorithm (DPLL)
• Search through possible assignments to (G, L, M) via depth-first search:
(0, 0, 0) to (0, 0, 1) to (0, 1, 0)…
– Each clause of the CNF must be true
– Terminate consideration when a clause evaluates to false
– Use heuristics to reduce repeated computation of propositions:
• Early termination
• Pure symbol heuristic
• Unit clause heuristic
– Cull branches from the tree
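A compact sketch of the DPLL recursion showing early termination and the unit clause heuristic (the pure symbol heuristic is omitted for brevity; clauses are frozensets of literals and negate is reused from the resolution sketch):

def dpll(clauses, assignment=()):
    # clauses: list of frozensets of literals such as "G" or "~G".
    lits = set(assignment)
    simplified = []
    for c in clauses:
        if c & lits:                        # clause already satisfied
            continue
        c = frozenset(l for l in c if negate(l) not in lits)
        if not c:                           # clause false: early termination
            return None
        simplified.append(c)
    if not simplified:
        return lits                         # every clause satisfied: a model
    # Unit clause heuristic: a one-literal clause forces that literal.
    unit = next((c for c in simplified if len(c) == 1), None)
    lit = next(iter(unit)) if unit else next(iter(simplified[0]))
    for choice in (lit, negate(lit)):       # depth-first over both values
        model = dpll(simplified, tuple(lits | {choice}))
        if model is not None:
            return model
    return None

clauses = [frozenset({"G"}), frozenset({"L"}),
           frozenset({"~L", "~G", "M"}), frozenset({"~M"})]
print(dpll(clauses))  # None: unsatisfiable, so M is entailed by the KB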
Searching for variable values
Other ways to search for (G, L, M) assignments satisfying:
G ^ L ^ (~L V ~G V M) ^ ~M
• Simulated Annealing (WalkSAT)
– Start with an initial guess, e.g. (0, 1, 1)
– With each iteration, pick an unsatisfied clause and flip one symbol in the clause
– The evaluation metric is the number of clauses that evaluate to true
– Move “in the direction” of guesses that cause more clauses to be true
– There are many local minima, so use lots of randomness
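A sketch of the WalkSAT loop (the flip probability p and the max_flips cap are illustrative knobs, not values from the lecture):

import random

def walksat(clauses, symbols, p=0.5, max_flips=10000):
    # clauses: list of frozensets of literals; returns a model or None.
    model = {s: random.choice([True, False]) for s in symbols}
    def satisfied(c):
        return any(model[l.lstrip("~")] != l.startswith("~") for l in c)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return model                    # every clause evaluates to true
        clause = random.choice(unsat)       # pick an unsatisfied clause
        if random.random() < p:             # random step: flip any symbol
            sym = random.choice(sorted(l.lstrip("~") for l in clause))
        else:                               # greedy step: flip the symbol
            def flips_satisfied(s):         # that satisfies the most clauses
                model[s] = not model[s]
                count = sum(satisfied(c) for c in clauses)
                model[s] = not model[s]
                return count
            sym = max((l.lstrip("~") for l in clause), key=flips_satisfied)
        model[sym] = not model[sym]
    return None                             # give up after max_flips

kb = [frozenset({"G"}), frozenset({"L"}), frozenset({"~L", "~G", "M"})]
print(walksat(kb, ["G", "L", "M"]))  # e.g. {'G': True, 'L': True, 'M': True}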
WalkSAT termination
How do you know when simulated annealing is done?
• There is no way to know with certainty that an answer is not possible
– Could have been bad luck
– Could be there really is no answer
– Establish a max number of iterations and go with the best answer found by that point
So how well do these work?
Think about it this way
• 16 of 32 possible assignments are models (satisfying assignments) for this sentence
• Therefore, on average two random guesses should find a solution
• WalkSAT and DPLL should work quickly
What about more clauses?
If the number of symbols (variables) stays the same, but the number of clauses increases
• More ways for an assignment to fail (on any one clause)
• More searching through possible assignments is needed
• Let’s create a ratio, m/n, to measure # clauses / # symbols
• We expect large m/n to make solutions slower to find
What about more clauses?
Higher m/n means fewer assignments will work
If fewer assignments work, it is harder for DPLL and WalkSAT
Combining it all
4x4 Wumpus World
• The “physics” of the game, e.g.:
– No pit in the start square: ~P1,1
– A square is breezy iff a neighboring square has a pit: B1,1 <=> (P1,2 V P2,1)
• At least one wumpus on board
– W1,1 V W1,2 V … V W4,4
• At most one wumpus on board (for any two squares, one is free)
– n(n-1)/2 rules like: ~W1,1 V ~W1,2
• Total of 155 sentences containing 64 distinct symbols
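A short sketch of how those at-most-one-wumpus clauses could be generated (the Wx,y symbol naming follows the slide; the code is my own):

from itertools import combinations

squares = [(x, y) for x in range(1, 5) for y in range(1, 5)]  # 4x4 board

# At least one wumpus: one big 16-way disjunction over all squares.
at_least_one = " V ".join("W%d,%d" % sq for sq in squares)

# At most one wumpus: for every pair of squares, at least one is free.
at_most_one = ["~W%d,%d V ~W%d,%d" % (a + b)
               for a, b in combinations(squares, 2)]

print(len(at_most_one))  # 16*15/2 = 120 clauses like ~W1,1 V ~W1,2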
Wumpus World
Inefficiencies as the world becomes large
• The knowledge base must expand if the size of the world expands
It would be preferable to have sentences that apply to all squares
• We brought this subject up last week
• The next chapter addresses this issue
Chapter 8: First-Order Logic
What do we like about propositional logic?
It is:
• Declarative
– Relationships between variables are described
– A method for propagating relationships
• Expressive
– Can represent partial information using disjunction
• Compositional
– If A means foo and B means bar, A ^ B means foo and bar
What don’t we like about propositional logic?
It lacks the expressive power to describe the environment concisely
• Separate rules for every square/square relationship in the Wumpus world
Natural Language
English appears to be expressive
• Squares adjacent to pits are breezy
But natural language is a medium of communication, not a knowledge representation
• Much of the information and logic conveyed by language is dependent on context
• Information exchange is not well defined
• Not compositional (combining sentences may mean something different)
• It is ambiguous
But we borrow representational ideas from natural language
Natural language syntax
• Nouns and noun phrases refer to objects
– People, houses, cars
• Properties and verbs refer to object relations
– Red, round, nearby, eaten
• Some relationships are clearly defined functions, where there is only one output for a given input
– Best friend, first thing, plus
We build first-order logic around objects and relations
Ontology
• A “specification of a conceptualization”
• A description of the objects and relationships that can exist
– Propositional logic had only true/false relationships
– First-order logic has many more relationships
• Different languages make different ontological commitments
– How much can you infer from what you know?
Temporal logic makes additional ontological commitments because of timing constraints