Transcript document

CS 416
Artificial Intelligence
Lecture 10
Reasoning in Propositional Logic
Chapters 7 and 8
Midterm
9 days from now (Mar 2nd)?
Immediately following Spring Break (Mar 14th)?
Curling in the news (Wall Street Journal)
It’s like chess on ice (Roy Sinclair, World Curling Federation)
• The essence of Olympic befuddlement
• Inexplicably, it was a cable TV hit in 2002
• 70 hours on NBC (larger audience than other events)
• US Men go for GOLD on Wednesday
Review
Store information in a knowledge base
• Backus-Naur Form (BNF)
• Use equivalences to isolate what you want
• Use inference to relate sentences
– Modus Ponens
– And-Elimination
Constructing a proof
Proving is like searching
• Find sequence of logical inference rules that lead to desired
result
• Note the explosion of propositions
– Good proof methods ignore the countless irrelevant
propositions
The two inference rules are sound but not complete!
Completeness
Recall that a complete inference algorithm is one
that can derive any sentence that is entailed
Resolution is a single inference rule
• Guarantees the ability to derive any sentence that is entailed
– i.e. it is complete
• It must be partnered with a complete search algorithm
Resolution
Unit Resolution Inference Rule
• If m and l_i are complementary literals
Resolution Inference Rule
Also works with clauses
• But make sure each literal appears only once
– Resolving (A V B) with (A V ~B) should return just A, not (A V A)
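As a sketch (not the lecture's code), the resolution rule can be implemented over clauses represented as Python frozensets of string literals, with "~" marking negation; using sets makes repeated literals collapse automatically, so the result contains each literal only once.

```python
# Hypothetical clause representation (illustrative, not from the lecture):
# a clause is a frozenset of string literals, "~P" negating "P".

def negate(lit):
    """Return the complementary literal: P <-> ~P."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses: for each complementary pair of
    literals, drop the pair and union what remains.  Sets guarantee
    each literal appears only once in the result."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

# Unit resolution: (~B V A) with (B) yields (A)
print(resolve(frozenset({"~B", "A"}), frozenset({"B"})))
# Factoring for free: (A V B) with (A V ~B) yields just (A), not (A V A)
print(resolve(frozenset({"A", "B"}), frozenset({"A", "~B"})))
```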
Resolution and completeness
Any complete search algorithm, applying only the
resolution rule, can derive any conclusion
entailed by any knowledge base in propositional
logic
• More specifically, refutation completeness
– Able to confirm or refute any sentence
– Unable to enumerate all true sentences
What about “and” clauses?
Resolution only applies to “or” clauses (disjunctions)
• Every sentence of propositional logic can be transformed to a
logically equivalent conjunction of disjunctions of literals
(recall a literal is an atomic sentence or its negation)
Conjunctive Normal Form (CNF)
• A sentence expressed as conjunction of disjunction of literals
CNF
An algorithm for resolution
We wish to prove KB entails a
• That is, will some sequence of inferences create new
sentence in knowledge base equal to a?
• Proof by contradiction
– Show that assuming ~a invalidates KB
– Must show (KB ^ ~a) is unsatisfiable → resolution produces the empty clause
An algorithm for resolution
Algorithm
• (KB ^ ~a) is put in CNF
• For each pair of sentences with a pair of clauses containing
complementary literals
– Resolve to produce new clause which is added to KB (if novel)
– Cease if no new clauses to add (a is not entailed)
– Cease if resolution rule derives empty clause
→ If resolution generates a… a ^ ~a = empty clause (a is entailed)
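The loop above can be sketched in Python, reusing the set-based clause representation (illustrative; `pl_resolution` is a hypothetical name for the procedure):

```python
from itertools import combinations

def negate(lit):
    """Return the complementary literal: P <-> ~P."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of string literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def pl_resolution(kb_clauses, alpha_lit):
    """Does KB entail alpha?  Refutation: add ~alpha, resolve to exhaustion.

    kb_clauses: iterable of frozensets of literals (KB already in CNF).
    alpha_lit: a single literal to test for entailment.
    """
    clauses = set(kb_clauses) | {frozenset({negate(alpha_lit)})}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:            # empty clause: contradiction found
                    return True      # so KB entails alpha
                new.add(r)
        if new <= clauses:           # no novel clauses: alpha not entailed
            return False
        clauses |= new

# KB: ~L V ~G V M,  L,  G  --- does it entail M?
kb = [frozenset({"~L", "~G", "M"}), frozenset({"L"}), frozenset({"G"})]
print(pl_resolution(kb, "M"))  # True
```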
Example of resolution
Prove ~P1,2 (there is no pit in square [1,2]):
• KB ^ P1,2 leads to empty clause
• Therefore ~P1,2 is true
Formal Algorithm – It’s “complete”
A complete resolution algorithm
Inference using resolution is complete
• Can be slow (propositional satisfiability was the first problem shown to be NP-complete)
• Real-world knowledge bases contain a restricted type of clause: implications
– (a conjunction of positive literals) => single positive literal
Horn Clauses
Horn Clause
• Disjunction of literals where at most one is positive
– (~a V ~b V ~c V d)
– (~a V b V c V ~d) Not a Horn Clause
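The "at most one positive literal" test is easy to state in code (a sketch using string literals with "~" marking negation, matching the two examples above):

```python
def is_horn(clause):
    """A Horn clause is a disjunction of literals with at most
    one positive (un-negated) literal."""
    positives = [lit for lit in clause if not lit.startswith("~")]
    return len(positives) <= 1

print(is_horn({"~a", "~b", "~c", "d"}))   # True: one positive literal
print(is_horn({"~a", "b", "c", "~d"}))    # False: two positive literals
```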
What’s good about Horn Clauses
• Two new inference algorithms are possible
– Forward chaining
– Backward chaining
• Entailment computation is linear in size of KB
Forward Chaining
Does KB (Horn clauses) entail q (single symbol)?
• Keep track of known facts (positive literals)
• If the premises of an implication are known
– Add the conclusion to the known set of facts
• Repeat process until no further inferences or q is added
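A minimal sketch of this loop, assuming Horn rules are given as (premises, conclusion) pairs; counting down the unknown premises of each rule is what makes forward chaining linear in the size of the KB:

```python
from collections import deque

def forward_chaining(implications, facts, q):
    """Does a Horn KB entail the single symbol q?

    implications: list of (premises, conclusion) pairs, e.g. (["L","G"], "M").
    facts: known positive literals.
    """
    # Remaining unknown premises for each rule.
    count = {i: len(prem) for i, (prem, _) in enumerate(implications)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(implications):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:   # all premises known: add the conclusion
                    agenda.append(concl)
    return False

# (L ^ G) => M, with facts L and G, entails M
print(forward_chaining([(["L", "G"], "M")], ["L", "G"], "M"))  # True
```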
Forward Chaining
Properties
• Sound
• Complete
– All entailed atomic sentences will be derived
Data Driven
• Start with what we know
• Derive new info until we discover what we want
Backward Chaining
Start with what you want to know, a query (q)
• Look for implications that conclude q
– Look at the premises of those implications
· Look for implications that conclude those premises…
Goal-Directed Reasoning
• Can be less complex search than forward chaining
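Goal-directed reasoning can be sketched recursively (illustrative; the `visited` set is a simple guard against cyclic rules, not part of the lecture's description):

```python
def backward_chaining(implications, facts, q, visited=None):
    """Prove q by proving the premises of rules that conclude q.

    implications: list of (premises, conclusion) pairs.
    facts: set of known positive literals.
    """
    if visited is None:
        visited = set()
    if q in facts:
        return True
    if q in visited:          # avoid infinite regress through cycles
        return False
    visited.add(q)
    for prem, concl in implications:
        if concl == q and all(backward_chaining(implications, facts, p, visited)
                              for p in prem):
            return True
    return False

# Work backward from the query M to its premises L and G
print(backward_chaining([(["L", "G"], "M")], {"L", "G"}, "M"))  # True
```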
Making it fast
Example
• Problem to solve is in CNF
– Is Marvin a Martian --- M == 1 (true)?
· Marvin is green --- G = 1
· Marvin is little --- L = 1
· (little and green) implies Martian --- (L ^ G) => M
~(L ^ G) V M
~L V ~G V M
– Proof by contradiction… are there true/false values for G, L, and M that are consistent with the knowledge base and Marvin not being a Martian?
· Is G ^ L ^ (~L V ~G V M) ^ ~M satisfiable?
Searching for variable values
Want to find values such that:
G ^ L ^ (~L V ~G V M) ^ ~M == 1
(if no assignment works, M is entailed)
• Randomly consider all true/false assignments to variables
until we exhaust them all or find match (model checking)
– (G, L, M) = (1, 0, 0)… no
= (0, 1, 0)… no
= (0, 0, 0)… no
= (1, 1, 0)… no
• Alternatively…
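The exhaustive check of all eight assignments can be sketched directly (`marvin_formula` is a hypothetical helper name):

```python
from itertools import product

def marvin_formula(G, L, M):
    """G ^ L ^ (~L V ~G V M) ^ ~M: the KB plus 'Marvin is not a Martian'."""
    return G and L and ((not L) or (not G) or M) and (not M)

# Enumerate all 2^3 assignments; if none satisfies the formula,
# the KB entails that Marvin IS a Martian.
satisfying = [t for t in product([True, False], repeat=3) if marvin_formula(*t)]
print(satisfying)  # [] -- unsatisfiable, so M is entailed
```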
Backtracking Algorithm
Davis-Putnam Algorithm (DPLL)
• Search through possible assignments to (G, L, M) via depth-first search: (0, 0, 0) to (0, 0, 1) to (0, 1, 0)…
– Each clause of CNF must be true
· Terminate consideration when a clause evaluates to false
– Some culling of tree is possible using heuristics/logic
· Variable appears with same “sign” in all clauses (a “pure” symbol)
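A bare-bones DPLL sketch, splitting on one literal at a time (unit-propagation and pure-symbol heuristics omitted for brevity; clauses are frozensets of string literals as before):

```python
def negate(lit):
    """Return the complementary literal: P <-> ~P."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def dpll(clauses):
    """Return True iff the clause set is satisfiable.

    Setting a literal true removes every clause containing it and
    removes its complement from the remaining clauses.
    """
    if not clauses:
        return True                   # every clause satisfied
    if frozenset() in clauses:
        return False                  # an empty clause can't be satisfied
    lit = next(iter(next(iter(clauses))))     # pick a literal to branch on
    for choice in (lit, negate(lit)):         # try both truth values
        simplified = {c - {negate(choice)} for c in clauses if choice not in c}
        if dpll(simplified):
            return True
    return False

# G ^ L ^ (~L V ~G V M) ^ ~M is unsatisfiable
clauses = {frozenset({"G"}), frozenset({"L"}),
           frozenset({"~L", "~G", "M"}), frozenset({"~M"})}
print(dpll(clauses))  # False
```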
Searching for variable values
Other ways to search for (G, L, M) assignments satisfying:
G ^ L ^ (~L V ~G V M) ^ ~M
• Stochastic local search, in the spirit of simulated annealing (WalkSAT)
– Start with initial guess (0, 1, 1)
– With each iteration, pick an unsatisfied clause and flip one symbol in the clause
– Evaluation metric is the number of clauses that evaluate to true
– Move “in direction” of guesses that cause more clauses to be true
– Many local minima, so use lots of randomness
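A WalkSAT-style sketch under the same clause representation; `p` is the probability of a purely random flip versus a greedy flip toward more satisfied clauses (parameter names are illustrative):

```python
import random

def walksat(clauses, p=0.5, max_flips=1000):
    """Return a satisfying assignment (dict) or None after max_flips."""
    symbols = {lit.lstrip("~") for c in clauses for lit in c}
    model = {s: random.choice([True, False]) for s in symbols}

    def lit_true(lit):
        return not model[lit[1:]] if lit.startswith("~") else model[lit]

    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(lit_true(l) for l in c)]
        if not unsat:
            return model                    # all clauses satisfied
        clause = random.choice(unsat)
        if random.random() < p:             # random-walk step
            sym = random.choice([l.lstrip("~") for l in clause])
        else:                               # greedy step: flip the symbol
            def score(s):                   # that satisfies the most clauses
                model[s] = not model[s]
                n = sum(any(lit_true(l) for l in c) for c in clauses)
                model[s] = not model[s]
                return n
            sym = max((l.lstrip("~") for l in clause), key=score)
        model[sym] = not model[sym]
    return None                             # give up; answer unknown

# Satisfiable set: G ^ L ^ (~L V ~G V M) -- forces G = L = M = true
result = walksat({frozenset({"G"}), frozenset({"L"}),
                  frozenset({"~L", "~G", "M"})})
print(result)
```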
WalkSAT termination
How do you know when simulated annealing is
done?
• No way to know with certainty that an answer is not possible
– Could have been bad luck
– Could be there really is no answer
– Establish a max number of iterations and go with the best answer found to that point
Vary the number of clauses (m) and the number of symbols (n)
• Higher m/n means fewer assignments will work
• If fewer assignments work, it is harder for DPLL and WalkSAT
Combining it all
4x4 Wumpus World
• The “physics” of the game
• At least one wumpus on board
– W1,1 V W1,2 V … V W4,4
• At most one wumpus on board (for any two squares, at least one is wumpus-free)
– n(n-1)/2 rules like: ~W1,1 V ~W1,2
• Total of 155 sentences containing 64 distinct symbols
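Generating the n(n-1)/2 “at most one wumpus” clauses for the 4x4 board is a short loop over pairs of squares (a sketch; the symbol strings are illustrative):

```python
from itertools import combinations, product

# "At most one wumpus": for any two squares, at least one is wumpus-free.
# For a 4x4 board this produces 16*15/2 = 120 clauses like ~W1,1 V ~W1,2.
squares = [f"W{r},{c}" for r, c in product(range(1, 5), repeat=2)]
at_most_one = [f"~{a} V ~{b}" for a, b in combinations(squares, 2)]

print(len(squares))       # 16 squares
print(len(at_most_one))   # 120 clauses
print(at_most_one[0])     # ~W1,1 V ~W1,2
```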
Wumpus World
Time is not taken into account
• “Rules” must have a temporal variable
Inefficiencies as world becomes large
• Knowledge base must expand if size of world expands
Prefer to have sentences that apply to all squares
and times
• We brought this subject up last week
• Next chapter addresses this issue
Chapter 8: First-Order Logic
What do we like about propositional logic?
It is:
• Declarative
– Relationships between variables are described
– A method for propagating relationships
• Expressive
– Can represent partial information using disjunction
• Compositional
– If A means foo and B means bar, A ^ B means foo and bar
What don’t we like about propositional logic?
Lacks expressive power to describe the
environment concisely
• Separate rules for every square/square relationship in
Wumpus world
Natural Language
English appears to be expressive
• Squares adjacent to pits are breezy
But natural language is a medium of communication, not
a knowledge representation
• Much of the information and logic conveyed by language is dependent
on context
• Information exchange is not well defined
• Not compositional (combining sentences may mean something different)
• It is ambiguous
But we borrow representational ideas from natural
language
Natural language syntax
• Nouns and noun phrases refer to objects
– People, houses, cars
• Properties and verbs refer to object relations
– Red, round, nearby, eaten
• Some relationships are clearly defined functions where there is only one
output for a given input
– Best friend, first thing, plus
We build first-order logic around objects and relations
Ontology
• a “specification of a conceptualization”
• A description of the objects and relationships that can exist
– Propositional logic had only true/false relationships
– First-order logic has many more relationships
• The ontological commitment of languages is different
– How much can you infer from what you know?
· Temporal logic defines additional ontological commitments because of timing constraints
Formal structure of first-order logic
Models of first-order logic contain:
• Domain: a set of objects
– Alice, Alice’s left arm, Bob, Bob’s hat
• Relationships between objects
– Represented as tuples
· Sibling(Alice, Bob), Sibling(Bob, Alice)
· OnHead(Bob, hat)
· Person(Bob), Person(Alice)
– Some relationships are functions if a given object is related to exactly one object in a certain way
· leftArm(Alice) → Alice’s left arm
First-order logic models
– Unlike in propositional logic, models in FOL are more than just truth assignments to symbols… they define the objects and the relationships among them
First-order logic syntax
Constant Symbols
• A, B, Bob, Alice, Hat
Predicate Symbols
• is, onHead, hasColor, person
Names of things are arbitrary; the knowledge base adds meaning
Function Symbols
• Mother, leftLeg
Each predicate and function symbol has an arity
• A constant that fixes the number of arguments
Semantics
Semantics defines the objects and relationships
• Father (John, Jim) … John is Jim’s father (or Jim is John’s
father)
• This amounts to an interpretation
Number of possible domain elements (objects) is
unbounded… integers for example
• Number of models is unbounded and many interpretations for
each model
– Checking entailment by enumeration is usually impossible
Term
A logical expression that refers to an object
• Constants
– We could assign names to all objects, like providing a
name for every shoe in your closet
• Function symbols
– Replace the need to name all the shoes
– OnLeftFoot(John)
· Refers to a shoe, some shoe
Atomic Sentences
Formed by a predicate symbol followed by parenthesized
list of terms
• Sibling (Alice, Bob)
• Married (Father(Alice), Mother(Bob))
An atomic sentence is true in a given model, under a
given interpretation, if the relation referred to by the
predicate symbol holds among the objects referred to by
the arguments
Complex sentences
We can use logical connectives
• ~Sibling(LeftLeg(Alice), Bob)
• Sibling(Alice, Bob) ^ Sibling (Bob, Alice)
Quantifiers
A way to express properties of entire collections
of objects
• Universal quantification (forall, ∀)
– The power of first-order logic
– ∀x King(x) => Person(x)
– x is a variable
Universal Quantification
∀x P
• P is true for every object x
• ∀x King(x) => Person(x)
• x=
– Richard the Lionheart
– King John
– Richard’s left leg
– John’s left leg
– The crown
Universal Quantification
∀x King(x) => Person(x)
Note that all of these are true
• Implication is true even if the premise is false
• => is the right connective to use with ∀
By asserting a universally quantified sentence, you assert a whole list of individual implications
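The “whole list of implications” reading can be checked over a small domain; non-kings satisfy King(x) => Person(x) vacuously because the premise is false (the domain and sets below are the lecture’s examples, encoded as plain Python data):

```python
# ∀x King(x) => Person(x), checked object by object over a small domain.
domain = ["Richard", "John", "Richards_left_leg", "Johns_left_leg", "the_crown"]
king = {"Richard", "John"}
person = {"Richard", "John"}

def implies(p, q):
    """Material implication: false only when p is true and q is false."""
    return (not p) or q

# Legs and the crown are not kings, so their implications are vacuously true.
print(all(implies(x in king, x in person) for x in domain))  # True
```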
Existential Quantification
There exists, ∃
• ∃x Crown(x) ^ OnHead(x, John): there exists an x such that x is a crown and x is on John’s head
• It is true for at least one object
• ^ is the right connective for ∃