Propositional Logic - University of Victoria

Propositional Logic
Grammatical Complexity
• A natural language is complex. E.g.
– The dog chased the cat.
– The dog that ate the rat chased the cat.
– The cherry blossoms in the spring … sank.
Ambiguity
• A natural language has ambiguity, e.g.:
– There’s a girl in the room with a telescope.
• Which one of the figures does the sentence intend to capture?
• In short, we need to define a precise syntax.
Syntax
Propositional Constants
• Examples:
– raining
– r32aining
– rAiNiNg
– rainingorsnowing
• Non-Examples:
– 324567
– raining.or.snowing
Compound Sentences
• Negations:
¬raining
• Conjunctions:
(raining ∧ snowing)
• The arguments of a conjunction are called conjuncts.
• Disjunctions:
(raining ∨ snowing)
• The arguments of a disjunction are called disjuncts.
Compound Sentences
• Implications:
(raining cloudy)
• The left argument of an implication is the antecedent.
• The right argument of an implication is the consequent.
• Reductions:
(cloudy raining)
• The left argument of a reduction is the consequent.
• The right argument of a reduction is the antecedent.
• Equivalences:
(cloudy raining)
Notice that the constituent sentences within any compound sentence can be simple
sentences, compound sentences, or a mixture of the two.
Parenthesis Removal
• One disadvantage of our notation, as written, is that the parentheses tend to
build up and need to be matched correctly. It would be nice if we could
dispense with parentheses, e.g. simplifying
((pq) r) into
pq r
• Unfortunately, we cannot do without parentheses entirely, since then we
would be unable to render certain sentences unambiguously.
• For example, the sentence shown above could have resulted from dropping
parentheses from either of the following sentences.
((p q)r )
(p (q r ))
Operator Precedence
• The solution to this problem is the use of operator precedence.
The following table gives a hierarchy of precedences for our
operators.

¬          (highest)
∧
∨
⇒, ⇐, ⇔    (lowest)

• The ¬ operator has higher precedence than ∧; ∧ has higher
precedence than ∨; and ∨ has higher precedence than ⇒,
⇐, and ⇔.
Parentheses
• Note that just because precedence allows us to delete
parentheses in some cases, this doesn’t mean that we can
dispense with parentheses entirely.
E.g.
(p ∨ q) ∧ (s ∨ r)
Here, we cannot remove the parentheses: without them, precedence
would group q ∧ s first and change the meaning.
Semantics
Algebra and Logic
• The semantics of logic is similar to the semantics of algebra.
• Algebra is unconcerned with the real-world meaning of variables
like x and y.
– What is interesting is the relationship between the variables
expressed in the equations we write; and
– algebraic methods are designed to respect these relationships, no
matter what meanings or values are assigned to the constituent
variables.
• In a similar way, logic itself is unconcerned with what sentences
say about the world being described.
– What is interesting is the relationship between the truth of simple
sentences and the truth of compound sentences within which the
simple sentences are contained.
– Also, logical reasoning methods are designed to work no matter
what meanings or values are assigned to the logical “variables”
used in sentences.
Interpretation
• Although the values assigned to variables are not crucial in the
sense just described, in talking about logic itself, it is sometimes
useful to make these assignments explicit and to consider
various assignments or all assignments and so forth. Such an
assignment is called an interpretation.
• A propositional logic interpretation (i) is an association
between the propositional constants in a propositional language
and the truth values T or F.
• p iT
• q iF
• r iT
can be written also
pi =T
qi =F
ri =T
Operator Semantics
• The notion of interpretation can be extended to all sentences by
application of operator semantics.
• Negation:

  φ | ¬φ
  T |  F
  F |  T

• For example, if the interpretation of p is F, then the
interpretation of ¬p is T.
• For example, if the interpretation of p is T, then the
interpretation of ¬p is F.
Operator Semantics (continued)
• Conjunction:

  φ ψ | φ ∧ ψ
  T T |   T
  T F |   F
  F T |   F
  F F |   F

• Disjunction:

  φ ψ | φ ∨ ψ
  T T |   T
  T F |   T
  F T |   T
  F F |   F
Operator Semantics (continued)
• Implication:

  φ ψ | φ ⇒ ψ
  T T |   T
  T F |   F
  F T |   T
  F F |   T

• Reduction:

  φ ψ | φ ⇐ ψ
  T T |   T
  T F |   T
  F T |   F
  F F |   T
Evaluation
• We can easily determine for any given interpretation whether or not any
sentence is true or false under that interpretation.
• The technique is simple. We substitute true and false values for the
propositional constants and replace complex expressions with the
corresponding values, working from the inside out.
• We say that an interpretation i satisfies a sentence if and only if it is true
under that interpretation.
• As an example, consider the interpretation i shown below.
pⁱ = true
qⁱ = false
rⁱ = true
• We can see that i satisfies (p ∨ q) ∧ (¬q ∨ r).
(true ∨ false) ∧ (¬false ∨ true)
true ∧ (¬false ∨ true)
true ∧ (true ∨ true)
true ∧ true
true
Evaluation (continued)
• Now consider interpretation j defined as follows.
pj=true
qj =true
rj =false
• In this case, j does not satisfy (pq)(qr).
(true true) (true false)
true(true false)
true ( false false)
truefalse
false
• Using this technique, we can evaluate the truth of arbitrary
sentences in our language.
• The cost is proportional to the size of the sentence.
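The evaluation technique is easy to sketch in code. The following is a minimal illustration (not from the slides): sentences are encoded as nested tuples, and the encoding and function names are assumptions made here for the example.

```python
# Evaluate a sentence under an interpretation by substituting truth values
# for the constants and working from the inside out.

def evaluate(sentence, interpretation):
    """Return the truth value of a sentence under an interpretation (a dict)."""
    if isinstance(sentence, str):                 # a propositional constant
        return interpretation[sentence]
    op, *args = sentence
    if op == "not":
        return not evaluate(args[0], interpretation)
    if op == "and":
        return all(evaluate(a, interpretation) for a in args)
    if op == "or":
        return any(evaluate(a, interpretation) for a in args)
    if op == "implies":                           # φ ⇒ ψ is ¬φ ∨ ψ
        return (not evaluate(args[0], interpretation)) or evaluate(args[1], interpretation)
    raise ValueError(f"unknown operator: {op}")

i = {"p": True, "q": False, "r": True}
sentence = ("and", ("or", "p", "q"), ("or", ("not", "q"), "r"))   # (p ∨ q) ∧ (¬q ∨ r)
print(evaluate(sentence, i))   # True, matching the worked example above
```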
Reverse Evaluation
• Reverse evaluation is the opposite of evaluation.
We begin with one or more compound sentences
and try to figure out which interpretations satisfy
those sentences.
– One way to do this is using a truth table for the
language.
• A truth table for a propositional language is a
table showing all of the possible interpretations
for the propositional constants in the language.
– The interpretations i and j correspond to the third
and second rows of this table, respectively.
• Note that, for a propositional language with n
logical constants, there are n columns in the truth
table and 2ⁿ rows.
p q r
T T T
T T F
T F T
T F F
F T T
F T F
F F T
F F F
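As a minimal sketch (not from the slides), the truth table for a propositional language can be enumerated programmatically; interpretations() is an illustrative helper reused in later sketches.

```python
from itertools import product

def interpretations(constants):
    """Yield every interpretation (as a dict) over the given constants: 2^n rows."""
    for values in product([True, False], repeat=len(constants)):
        yield dict(zip(constants, values))

for row in interpretations(["p", "q", "r"]):
    print(row)   # eight rows, in the same order as the table above
```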
Validity, Satisfiability,
Unsatisfiability
• A sentence is valid if and only if it
is satisfied by every interpretation.
– p p is valid.
• A sentence is satisfiable if and only
if it is satisfied by at least one
interpretation.
– We have already seen several
examples of satisfiable
sentences.
• A sentence is unsatisfiable if and
only if it is not satisfied by any
interpretation.
– p p is unsatisfiable. No
matter what interpretation we
take, the sentence is always
false.
• In one sense, valid sentences and
unsatisfiable sentences are useless.
• Valid sentences do not rule out
any possible interpretations;
• Unsatisfiable sentences rule out
all interpretations; thus they say
nothing about the world.
• On the other hand, they are very
useful in that they serve as the
basis for legal transformations that
we can perform on other logical
sentences.
• Note that we can easily check the
validity, satisfiability, or
unsatisfiability of a sentence by
looking at the truth table for the
propositional constants in the
sentence.
Entailment
• We would like to know, given some sentences, whether other sentences are
or are not logical conclusions.
• This relationship is known as logical entailment.
• A set of sentences Δ logically entails a sentence φ (written Δ |= φ) if and
only if every interpretation that satisfies Δ also satisfies φ.
• E.g.:
– The sentence p logically entails the sentence (p ∨ q). Since a disjunction is
true whenever one of its disjuncts is true, (p ∨ q) must be true whenever p is true.
– On the other hand, the sentence p does not logically entail (p ∧ q). A conjunction
is true if and only if both of its conjuncts are true, and q may be false.
Semantic Reasoning Methods
There are several basic methods for determining whether a
given set of premises propositionally entails a
given conclusion.
Truth Table Method
• One way of determining whether or not a set of premises
logically entails a possible conclusion is to check the truth table
for the logical constants of the language.
• This is called the truth table method and can be formalized as
follows.
– Starting with a complete truth table for the propositional
constants, iterate through all the premises of the problem, for each
premise eliminating any row that does not satisfy the premise.
– Do the same for the conclusion.
– Finally, compare the two tables. If every row that remains in the
premise table, i.e. is not eliminated, also remains in the conclusion
table, i.e. is not eliminated, then the premises logically entail the
conclusion.
Amy’s Example

Simple sentences:
– Amy loves Pat. loves(amy, pat): since we don’t have variables but only
constants, this is the same as saying lovesAmyPat.
– Amy loves Quincy. loves(amy, quincy): same as lovesAmyQuincy.
– It is Monday. ismonday

Premises:
– If Amy loves Pat, Amy loves Quincy.
  lovesAmyPat ⇒ lovesAmyQuincy
– If it is Monday, Amy loves Pat or Quincy.
  ismonday ⇒ lovesAmyPat ∨ lovesAmyQuincy

Question:
– If it is Monday, does Amy love Quincy?
  I.e. is ismonday ⇒ lovesAmyQuincy entailed by the premises?
Truth table for the premises

lovesAmyPat  lovesAmyQuincy  ismonday | lovesAmyPat ⇒ lovesAmyQuincy | ismonday ⇒ lovesAmyPat ∨ lovesAmyQuincy
T            T               T        |              T               |                    T
T            T               F        |              T               |                    T
T            F               T        |              F               |                    T
T            F               F        |              F               |                    T
F            T               T        |              T               |                    T
F            T               F        |              T               |                    T
F            F               T        |              T               |                    F
F            F               F        |              T               |                    T
Crossing out non-sat interpretations

lovesAmyPat  lovesAmyQuincy  ismonday | lovesAmyPat ⇒ lovesAmyQuincy | ismonday ⇒ lovesAmyPat ∨ lovesAmyQuincy
T            T               T        |              T               |                    T
T            T               F        |              T               |                    T
T            F               T        |              F               |                    T      (crossed out)
T            F               F        |              F               |                    T      (crossed out)
F            T               T        |              T               |                    T
F            T               F        |              T               |                    T
F            F               T        |              T               |                    F      (crossed out)
F            F               F        |              T               |                    T
Truth table for the conclusion
ismonday ⇒ lovesAmyQuincy

lovesAmyPat  lovesAmyQuincy  ismonday | ismonday ⇒ lovesAmyQuincy
T            T               T        |             T
T            T               F        |             T
T            F               T        |             F
T            F               F        |             T
F            T               T        |             T
F            T               F        |             T
F            F               T        |             F
F            F               F        |             T
Crossing out non-sat interpretations
ismonday ⇒ lovesAmyQuincy

lovesAmyPat  lovesAmyQuincy  ismonday | ismonday ⇒ lovesAmyQuincy
T            T               T        |             T
T            T               F        |             T
T            F               T        |             F      (crossed out)
T            F               F        |             T
F            T               T        |             T
F            T               F        |             T
F            F               T        |             F      (crossed out)
F            F               F        |             T
Comparing tables
• Finally, in order to make the determination of logical
entailment, we compare the two rightmost tables and notice that
every row remaining in the premise table also remains in the
conclusion table.
– In other words, the premises logically entail the conclusion.
• The truth table method has the merit that it is easy to
understand. It is a direct implementation of the definition of
logical entailment.
• In practice, it is awkward to manage two tables, especially since
there are simpler approaches in which only one table needs to
be manipulated.
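As a minimal sketch (not from the slides), the truth table method can be coded directly by reusing the evaluate() and interpretations() helpers sketched earlier; the short constant names below abbreviate the Amy example and are assumptions of this illustration.

```python
def entails(premises, conclusion, constants):
    """Δ |= φ iff every row surviving the premise table also survives the conclusion table."""
    for i in interpretations(constants):
        if all(evaluate(p, i) for p in premises):      # row not eliminated by any premise
            if not evaluate(conclusion, i):            # but eliminated from the conclusion table
                return False
    return True

# p = lovesAmyPat, q = lovesAmyQuincy, m = ismonday
premises = [("implies", "p", "q"), ("implies", "m", ("or", "p", "q"))]
conclusion = ("implies", "m", "q")
print(entails(premises, conclusion, ["p", "q", "m"]))   # True: the premises entail the conclusion
```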
Validity checking
• One of these approaches is the validity checking method. It goes as follows.
• To determine whether a set of sentences
{φ₁, …, φₙ}
• logically entails a sentence φ, we form the sentence
(φ₁ ∧ … ∧ φₙ ⇒ φ)
• and check that it is valid.
• To see how this method works, consider the problem of Amy and Pat and
Quincy once again. In this case, we write the tentative conclusion as shown
below.
(lovesAmyPat ⇒ lovesAmyQuincy) ∧ (ismonday ⇒ lovesAmyPat ∨ lovesAmyQuincy) ⇒
(ismonday ⇒ lovesAmyQuincy)
• We then form a truth table for our language with an added column for this
sentence and check its satisfaction under each of the possible interpretations
for our logical constants.
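A minimal sketch (not from the slides) of the validity checking method, assuming the same helpers and tuple encoding as before:

```python
def is_valid(sentence, constants):
    """A sentence is valid iff it is satisfied by every interpretation."""
    return all(evaluate(sentence, i) for i in interpretations(constants))

# (premise1 ∧ premise2) ⇒ conclusion, with p, q, m abbreviating the Amy constants.
premises = [("implies", "p", "q"), ("implies", "m", ("or", "p", "q"))]
conclusion = ("implies", "m", "q")
combined = ("implies", ("and", *premises), conclusion)
print(is_valid(combined, ["p", "q", "m"]))   # True: the implication is valid
```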
Unsatisfiability Checking
• The unsatisfiability checking method is another single table
approach. It is almost exactly the same as the validity checking
method, except that it works negatively instead of positively.
• To determine whether a finite set of sentences {φ₁, …, φₙ}
logically entails a sentence φ, we form the sentence
(φ₁ ∧ … ∧ φₙ ∧ ¬φ)
and check that it is unsatisfiable.
• Both the validity checking method and the unsatisfiability
checking method require about the same amount of work as the
truth table method, but they have the merit of manipulating only
one table.
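A minimal sketch (not from the slides) of the unsatisfiability checking method, reusing the premises and conclusion from the previous sketch:

```python
def is_unsatisfiable(sentence, constants):
    """A sentence is unsatisfiable iff no interpretation satisfies it."""
    return not any(evaluate(sentence, i) for i in interpretations(constants))

# premise1 ∧ premise2 ∧ ¬conclusion is unsatisfiable exactly when entailment holds.
negated_goal = ("and", *premises, ("not", conclusion))
print(is_unsatisfiable(negated_goal, ["p", "q", "m"]))   # True
```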
A truth table…

p q r | p ⇒ q | p ⇒ r | p ⇒ r ∨ q | (p ⇒ q) ∧ (p ⇒ r) | (p ⇒ q) ∧ (p ⇒ r) ⇒ (p ⇒ r ∨ q)
T T T |   T   |   T   |     T     |         T          |                T
T T F |   T   |   F   |     T     |         F          |                T
T F T |   F   |   T   |     T     |         F          |                T
T F F |   F   |   F   |     F     |         F          |                T
F T T |   T   |   T   |     T     |         T          |                T
F T F |   T   |   T   |     T     |         T          |                T
F F T |   T   |   T   |     T     |         T          |                T
F F F |   T   |   T   |     T     |         T          |                T

The rightmost column is true under every interpretation, so the sentence is valid
and the entailment holds.
Proofs
Intro
• Semantic methods for checking logical entailment have the merit
of being conceptually simple; they directly manipulate
interpretations of sentences.
• Unfortunately, the number of interpretations of a language grows
exponentially with the number of logical constants.
– When the number of logical constants in a propositional language
is large, the number of interpretations may be impossible to
manipulate.
• Proof methods provide an alternative way of checking and
communicating logical entailment that addresses this problem.
– In many cases, it is possible to create a “proof” of a conclusion
from a set of premises that is much smaller than the truth table for
the language;
– moreover, it is often possible to find such proofs with less work
than is necessary to check the entire truth table.
Schemata
• An important component in the treatment of proofs is the notion of
a schema. A schema is an expression satisfying the grammatical
rules of our language except for the occurrence of metavariables in
place of various subparts of the expression.
• For example, the following expression is a pattern with
metavariables φ and ψ.
φ ⇒ (ψ ⇒ φ)
• An instance of a sentence schema is the expression obtained by
substituting expressions for the metavariables.
• For example, the following is an instance of the preceding schema.
p ⇒ (q ⇒ p)
Rules of Inference
• The basis for proof methods is the use of correct rules of
inference that can be applied directly to sentences to derive
conclusions that are guaranteed to be correct under all
interpretations.
– Since the interpretations are not enumerated, time and space can
often be saved.
• A rule of inference is a pattern of reasoning consisting of:
– one set of sentence schemata, called premises, and
– a second set of sentence schemata, called conclusions.
• A rule of inference is sound if and only if, for every instance,
the premises logically entail the conclusions.
E.g. Modus Ponens
φ ⇒ ψ
φ
ψ

Instances:
p ⇒ (q ∧ r)          raining ⇒ wet          (p ∧ q) ⇒ r          wet ⇒ slippery
p                    raining                p ∧ q                wet
q ∧ r                wet                    r                    slippery

• I.e. we can substitute complex sentences for the metavariables.
• Note that, by stringing together applications of
rules of inference, it is possible to derive
conclusions that cannot be derived in a single
step. This idea of stringing together rule
applications leads to the notion of a proof.
Axiom schemata
• The implication introduction schema (II), together with Modus Ponens, allows us to
infer implications.
φ ⇒ (ψ ⇒ φ)
• The implication distribution schema (ID) allows us to distribute one implication over
another.
(φ ⇒ (ψ ⇒ χ)) ⇒ ((φ ⇒ ψ) ⇒ (φ ⇒ χ))
• The contradiction realization schemata (CR) permit us to infer a sentence if the
negation of that sentence implies some sentence and its negation.
(ψ ⇒ φ) ⇒ ((ψ ⇒ ¬φ) ⇒ ¬ψ)
(¬ψ ⇒ φ) ⇒ ((¬ψ ⇒ ¬φ) ⇒ ψ)
• The equivalence schemata (EQ) capture the meaning of the ⇔ operator.
(φ ⇔ ψ) ⇒ (φ ⇒ ψ)
(φ ⇔ ψ) ⇒ (ψ ⇒ φ)
(φ ⇒ ψ) ⇒ ((ψ ⇒ φ) ⇒ (φ ⇔ ψ))
• The meaning of the other operators in propositional logic is captured in the following
axiom schemata.
(φ ⇐ ψ) ⇔ (ψ ⇒ φ)
(φ ∨ ψ) ⇔ (¬φ ⇒ ψ)
(φ ∧ ψ) ⇔ ¬(¬φ ∨ ¬ψ)
• The axiom schemata in this section are jointly called the standard axiom schemata
for Propositional Logic. They are all valid.
Proofs
• A proof of a conclusion from a set of premises is a sequence of
sentences terminating in the conclusion in which each item is
either (1) a premise, (2) an instance of an axiom schema, or (3)
the result of applying a rule of inference to earlier items in
sequence.
1. p q
Premise
2. q r
Premise
3. (q r)(p (q r))
II
4. p (qr)
MP : 3,2
5. (p (qr)) (( pq) (p r))
ID
6. (p q)(p r )
MP : 5,4
7. p r
MP : 6,1
Proofs (continued)
• If there exists a proof of a sentence jfrom a set Dof premises and the standard
axiom schemata using Modus Ponens, then jis said to be provable from D
(written as D|- j) and is called a theorem of D.
• Earlier, it was suggested that there is a close connection between provability and
logical entailment. In fact, they are equivalent. A set of sentences Dlogically
entails a sentence jif and only if jis provable from D.
• Soundness Theorem: If jis provable from D, then Dlogically entails j.
• Completeness Theorem: If Dlogically entails j, then jis provable from D.
• The concept of provability is important because it suggests how we can
automate the determination of logical entailment.
– Starting from a set of premises D, we enumerate conclusions from this set.
– If a sentence jappears, then it is provable from Dand is, therefore, a logical
consequence.
– If the negation of jappears, then jis a logical consequence of Dand jis
not logically entailed (unless Dis inconsistent).
– Note that it is possible that neither jnor jwill appear.
Resolution
Intro
• Propositional resolution is an extremely powerful rule of
inference for Propositional Logic. Using propositional
resolution (without axiom schemata or other rules of inference),
it is possible to build a theorem prover that is sound and
complete for all of Propositional Logic.
• What's more, the search space using propositional resolution is
much smaller than that of the standard proof method for Propositional Logic.
Clausal Form
• Propositional resolution works only on expressions in clausal form. Before
the rule can be applied, the premises and conclusions must be converted to
this form.
• A literal is either an atomic sentence or a negation of an atomic sentence. For
example, if p is a logical constant, the following sentences are both literals:
p, ¬p
• A clause expression is either a literal or a disjunction of literals. If p and q
are logical constants, then the following are clause expressions:
p, ¬p, p ∨ q
• A clause is the set of literals in a clause expression. For example, the
following sets are the corresponding clauses: {p}, {¬p}, {p, q}
• Note that the empty set {} is also a clause. It is equivalent to an empty
disjunction and, therefore, is unsatisfiable.
Converting to clausal form
1. Implications (I):
   φ₁ ⇒ φ₂   →   ¬φ₁ ∨ φ₂
   φ₁ ⇐ φ₂   →   φ₁ ∨ ¬φ₂
   φ₁ ⇔ φ₂   →   (¬φ₁ ∨ φ₂) ∧ (φ₁ ∨ ¬φ₂)
2. Negations (N):
   ¬¬φ          →   φ
   ¬(φ₁ ∧ φ₂)   →   ¬φ₁ ∨ ¬φ₂
   ¬(φ₁ ∨ φ₂)   →   ¬φ₁ ∧ ¬φ₂
3. Distribution (D):
   φ₁ ∨ (φ₂ ∧ φ₃)   →   (φ₁ ∨ φ₂) ∧ (φ₁ ∨ φ₃)
   (φ₁ ∧ φ₂) ∨ φ₃   →   (φ₁ ∨ φ₃) ∧ (φ₂ ∨ φ₃)
   φ₁ ∨ (φ₂ ∨ φ₃)   →   φ₁ ∨ φ₂ ∨ φ₃
   (φ₁ ∨ φ₂) ∨ φ₃   →   φ₁ ∨ φ₂ ∨ φ₃
4. Operators (O):
   φ₁ ∨ ... ∨ φₙ   →   {φ₁, ..., φₙ}
   φ₁ ∧ ... ∧ φₙ   →   {φ₁}, ..., {φₙ}
Examples

     g ∧ (r ⇒ f)
I    g ∧ (¬r ∨ f)
N    g ∧ (¬r ∨ f)
D    g ∧ (¬r ∨ f)
O    {g}
     {¬r, f}

     ¬(g ∧ (r ⇒ f))
I    ¬(g ∧ (¬r ∨ f))
N    ¬g ∨ ¬(¬r ∨ f)
     ¬g ∨ (¬¬r ∧ ¬f)
     ¬g ∨ (r ∧ ¬f)
D    (¬g ∨ r) ∧ (¬g ∨ ¬f)
O    {¬g, r}
     {¬g, ¬f}
Propositional Resolution
{j1,..., ,...,jm}
{y1,...,,...,yn}
{j1,...,jm,y1,...,yn}
E.g.
{p,q}
{p,r}
{q,r}
The idea of resolution is simple.
• Suppose we know that p is true or q is
true, and suppose we also know that p is
false or r is true.
• One clause contains p, and the other
contains ¬p.
• If p is false, then by the first clause q must
be true.
• If p is true, then, by the second clause, r
must be true.
• Since p must be either true or false, then it
must be the case that q is true or r is true.
• In other words, we can cancel the p
literals.
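A minimal sketch (not from the slides) of the resolution rule on clauses represented as frozensets, matching the clausal form sketch above; complement() and resolvents() are illustrative names.

```python
def complement(literal):
    """Map p to ("not", p) and vice versa."""
    return literal[1] if isinstance(literal, tuple) else ("not", literal)

def resolvents(c1, c2):
    """Every clause obtained by cancelling one complementary pair between c1 and c2."""
    out = set()
    for lit in c1:
        if complement(lit) in c2:
            out.add((c1 - {lit}) | (c2 - {complement(lit)}))
    return out

print(resolvents(frozenset({"p", "q"}), frozenset({("not", "p"), "r"})))
# {frozenset({'q', 'r'})}, i.e. the p literals cancel
```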
Example
1. {p, q}      Premise
2. {¬p, q}     Premise
3. {p, ¬q}     Premise
4. {¬p, ¬q}    Premise
5. {q}         1, 2
6. {¬q}        3, 4
7. {}          5, 6
Two finger (or pointer) method
• We keep two pointers (the slow and the fast);
• On each step we compare the sentences under the pointers. If
we can resolve, we add the new derived sentence at the end of
the list.
• At the start of the inference we initialize slow and fast at the top
of the list.
• As long as the two pointers point to different positions, we
leave the slow where it is and advance the fast.
• When they meet, we move the fast pointer back to the top of the list and
move the slow pointer one position down the list.
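A minimal sketch (not from the slides) of a resolution loop in the spirit of the two finger method, with identical clause elimination; it reuses resolvents() from the sketch above, and the exact pointer bookkeeping is simplified.

```python
def resolution_refutation(premises):
    """Return True iff the empty clause can be derived from the premise clauses."""
    clauses = list(premises)
    slow = 0
    while slow < len(clauses):
        for fast in range(slow):                       # pair the slow clause with each earlier clause
            for r in resolvents(clauses[slow], clauses[fast]):
                if r == frozenset():
                    return True                        # empty clause: the premises are unsatisfiable
                if r not in clauses:                   # identical clause elimination
                    clauses.append(r)                  # new clauses go to the end of the list
        slow += 1
    return False

premises = [frozenset({"p", "q"}), frozenset({("not", "p"), "r"}),
            frozenset({("not", "q"), "r"}), frozenset({("not", "r")})]
print(resolution_refutation(premises))                 # True, as in the example that follows
```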
Example (two finger method)
1. {p, q}      Premise
2. {¬p, r}     Premise
3. {¬q, r}     Premise
4. {¬r}        Premise
5. {q, r}      1, 2
6. {p, r}      1, 3
7. {¬p}        2, 4
8. {¬q}        3, 4
9. {r}         3, 5
10. {q}        4, 5
11. {r}        2, 6
12. {p}        4, 6
13. {q}        1, 7
14. {r}        6, 7
15. {p}        1, 8
16. {r}        5, 8
17. {}         4, 9
TFM With Identical Clause Elimination
1. {p, q}      Premise
2. {¬p, r}     Premise
3. {¬q, r}     Premise
4. {¬r}        Premise
5. {q, r}      1, 2
6. {p, r}      1, 3
7. {¬p}        2, 4
8. {¬q}        3, 4
9. {r}         3, 5
10. {q}        4, 5
11. {p}        4, 6
12. {}         4, 9
Another TFM example
1. {p, q}       Premise
2. {¬p, q}      Premise
3. {p, ¬q}      Premise
4. {¬p, ¬q}     Premise
5. {q}          1, 2
6. {p}          1, 3
7. {p, ¬p}      2, 3
8. {q, ¬q}      2, 3
9. {p, ¬p}      1, 4
9.5 {q, ¬q}     1, 4
10. {¬p}        2, 4
11. {¬q}        3, 4
12. {p}         3, 5
13. {¬p}        4, 5
14. {q}         2, 6
15. {¬q}        4, 6
16. {p, q}      1, 7
17. {¬p, q}     2, 7
18. {p, ¬q}     3, 7
19. {¬p, ¬q}    4, 7
20. {p}         6, 7
continued
21. {p, ¬p}     7, 7
22. {p, ¬p}     7, 7
23. {p, q}      1, 8
24. {¬p, q}     2, 8
25. {p, ¬q}     3, 8
26. {¬p, ¬q}    4, 8
27. {q}         5, 8
28. {q, ¬q}     8, 8
29. {q, ¬q}     8, 8
30. {p, q}      1, 9
31. {¬p, q}     2, 9
32. {p, ¬q}     3, 9
33. {¬p, ¬q}    4, 9
34. {p}         6, 9
35. {p, ¬p}     7, 9
36. {p, ¬p}     9, 9
37. {p, ¬p}     9, 9
38. {q}         1, 10
39. {¬q}        3, 10
40. {}          6, 10
Proof With Identical Clause Elimination
1. {p, q}      Premise
2. {¬p, q}     Premise
3. {p, ¬q}     Premise
4. {¬p, ¬q}    Premise
5. {q}         1, 2
6. {p}         1, 3
7. {q, ¬q}     2, 3
8. {p, ¬p}     2, 3
9. {q, ¬q}     1, 4
10. {¬p}       2, 4
11. {¬q}       3, 4
12. {}         6, 10
Motivation for Tautology Elimination
A tautology is a clause with a complementary pair of literals.
1. {p, q}      Premise
2. {p, ¬p}     Premise
3. {p, q}      1, 2
Resolving with a tautology only reproduces clauses we already have, so tautologies
can safely be deleted.
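A minimal sketch (not from the slides) of tautology detection, using complement() from the resolution sketch above:

```python
def is_tautology(clause):
    """A clause is a tautology if it contains a complementary pair of literals."""
    return any(complement(lit) in clause for lit in clause)

print(is_tautology(frozenset({"p", ("not", "p")})))   # True: can be dropped from the search
print(is_tautology(frozenset({"p", "q"})))            # False
```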
Proof with TE and ICE
1. {p, q}      Premise
2. {¬p, q}     Premise
3. {p, ¬q}     Premise
4. {¬p, ¬q}    Premise
5. {q}         1, 2
6. {p}         1, 3
7. {¬p}        2, 4
8. {¬q}        3, 4
9. {}          6, 7