F22H1
Logic and Proof
Week 7
Clausal Form and Resolution
Isn’t English Great!
We have many ways to say the same thing
If you miss tutorials, you will do badly in the exam
Anyone who doesn’t attend tutorials will do poorly in the module
Doing well in the exam only happens if you attend tutorials
Either you attend tutorials, or you do badly in the course.
Etc …
When we formalise this in predicate logic, things are just the same:
∀x[misstutorials(x) → dobadly(x)]
∀x[¬attendtutorials(x) → dobadly(x)]
∀x[¬dobadly(x) → attendtutorials(x)]
∀x[attendtutorials(x) ∨ dobadly(x)]
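Treating the predicates propositionally for a single fixed individual, we can confirm by truth table that the four formalisations above all say the same thing. A minimal Python sketch (the encoding is ours, not part of the slides; misstutorials is modelled as the negation of attendtutorials):

```python
from itertools import product

def implies(p, q):
    # material implication: p -> q is (not p) or q
    return (not p) or q

# a = attendtutorials(x), d = dobadly(x); misstutorials(x) is modelled as (not a)
forms = [
    lambda a, d: implies(not a, d),   # misstutorials(x) -> dobadly(x)
    lambda a, d: implies(not a, d),   # ~attendtutorials(x) -> dobadly(x)
    lambda a, d: implies(not d, a),   # ~dobadly(x) -> attendtutorials(x)
    lambda a, d: a or d,              # attendtutorials(x) v dobadly(x)
]

# every version agrees with the first on every truth assignment
equivalent = all(f(a, d) == forms[0](a, d)
                 for f in forms
                 for a, d in product([False, True], repeat=2))
print(equivalent)  # True
```

All four reduce to the same truth table as `a or d`, which is exactly the fourth formalisation.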
Isn’t this a pain!
But this is not much fun when you have to work out if one logical
statement is the same as another, or even if one English statement
means the same as another.
Luckily there is a way to convert any predicate logic statement into a
unique form. This means that if we suspect two logical statements
may be equivalent, we can convert each into their unique form –
if they end up the same, then they are equivalent.
This process has various applications; it can be helpful in automated
reasoning (trying to prove things). Also, the unique form is quite
simple, and without quantifiers, so is easier for computer
manipulation.
Clausal Form
The unique form is called Clausal Form (since it involves a
combination of `clauses’). There are various versions, and we will
learn a common version.
A conjunction of disjunctions looks like this:
(A ∨ B) ∧ (C ∨ ¬A ∨ D) ∧ E ∧ (F ∨ ¬G) ∧ ...
I.e. a list of things joined by ANDs, where each thing is either an
`atomic’ predicate (maybe negated) or a list of atomic predicates
joined by ORs.
ANY statement in predicate logic can be converted to such a
conjunction of disjunctions. The result is in what we call:
Conjunctive Normal Form
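Because CNF has so little structure, it is easy to represent and evaluate by machine. A small sketch (our own representation, not from the slides), using one reading of the conjunction-of-disjunctions example above, with `~` marking negation:

```python
# (A v B) ^ (C v ~A v D) ^ E ^ (F v ~G): a list of clauses,
# each clause a set of literal strings, '~' marking negation
cnf = [{'A', 'B'}, {'C', '~A', 'D'}, {'E'}, {'F', '~G'}]

def holds(lit, valuation):
    """Is a single literal true under a dict of truth values?"""
    return not valuation[lit[1:]] if lit.startswith('~') else valuation[lit]

def eval_cnf(cnf, valuation):
    """CNF is true iff every clause contains at least one true literal."""
    return all(any(holds(lit, valuation) for lit in clause) for clause in cnf)

v = dict(A=True, B=False, C=False, D=True, E=True, F=False, G=False)
print(eval_cnf(cnf, v))  # True
```

Note there is no nesting to traverse: checking a valuation is just "every clause has a true literal", which is why CNF is so convenient for computer manipulation.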
A Procedure for Clausal Form
There are six stages to the conversion:
1. Remove implications
2. De Morgan’s to move negation to atomic propositions
3. Skolemizing (gets rid of ∃)
4. `Eliminating’ universal quantifiers
5. Distributing AND over OR
6. Arrange into clauses and maybe reorder
Let’s go through the stages with:
∀x[big(x) → ¬(wet(x) ∧ ∃p[¬blue(p)])] ∧ ∃q[orange(q)]
1. Remove implications (i.e. use the implication equivalence)
∀x[¬big(x) ∨ ¬(wet(x) ∧ ∃p[¬blue(p)])] ∧ ∃q[orange(q)]
2. Use De Morgan’s to move negation to atomic propositions
∀x[¬big(x) ∨ (¬wet(x) ∨ ¬∃p[¬blue(p)])] ∧ ∃q[orange(q)]
∀x[¬big(x) ∨ (¬wet(x) ∨ ∀p[blue(p)])] ∧ ∃q[orange(q)]
3. Skolemizing.
We were here:
∀x[¬big(x) ∨ (¬wet(x) ∨ ∀p[blue(p)])] ∧ ∃q[orange(q)]
In this step, we simply replace existentially quantified variables by
constants.
Generally, if we have
∃x[...something(x)...]
we can replace this with
something(A34)
I.e. we invent a specific individual for whom we can say that
`something’ is true, and give it a unique name.
In this case, choosing any name that comes into my head, we get:
∀x[¬big(x) ∨ (¬wet(x) ∨ ∀p[blue(p)])] ∧ orange(C3PO)
4. Eliminate universal quantifiers
Since there are no longer any existential quantifiers,
the order of the universal quantifiers doesn’t
matter. We may as well move them all the way to
the left, like this:
∀x∀p[¬big(x) ∨ (¬wet(x) ∨ blue(p))] ∧ orange(C3PO)
Meanwhile, it is guaranteed that every variable here
is universally quantified (we only apply this
process to propositions). So, why not just get rid
of them, and we will take them as implicit
(¬big(x) ∨ (¬wet(x) ∨ blue(p))) ∧ orange(C3PO)
Now we distribute AND over OR – i.e. we
mainly use this equivalence:
P ∨ (Q ∧ R) ≡ (P ∨ Q) ∧ (P ∨ R)
We had this:
(¬big(x) ∨ (¬wet(x) ∨ blue(p))) ∧ orange(C3PO)
In this case we’re fine already, since we don’t have an AND within
brackets.
It is already in Conjunctive Normal Form (CNF); we just have to tidy up:
(¬big(x) ∨ blue(p) ∨ ¬wet(x)) ∧ orange(C3PO)
Now we have removed useless brackets, predicates within clauses are
in alphabetical order, and the clauses themselves are in alphabetical
order
Skolemisation II
Actually Skolemisation is slightly more complicated. Consider this:
∀x[∃y[mother(y, x)]]
If we just replace the existential quantifier with a constant we get
∀x[mother(M302, x)]
Which says M302 is everyone’s mother.
What we should do when there are universally quantified variables in the
surrounding scope is make the existential variable a function (called a Skolem
function) of those universally quantified variables. So:
∀x[mother(f1(x), x)]
Here, f1(k) is the Skolem function, outputting the individual who is the mother of the
input variable k.
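The rule (a Skolem constant when no universal quantifier is in scope, otherwise a Skolem function of the in-scope universal variables) can be sketched as a small recursive procedure. This is our own toy encoding, with formulas as nested tuples, and it ignores subtleties such as variable capture that a real prover must handle:

```python
from itertools import count

def subst(f, v, term):
    """Replace every occurrence of variable v in f by term."""
    if f == v:
        return term
    if isinstance(f, tuple):
        return tuple(subst(x, v, term) for x in f)
    return f

def skolemize(f, universals=(), ids=None):
    """Replace each existentially quantified variable by a Skolem constant,
    or by a Skolem function of the universal variables in scope."""
    if ids is None:
        ids = count(1)
    tag = f[0]
    if tag == 'forall':
        _, v, body = f
        return ('forall', v, skolemize(body, universals + (v,), ids))
    if tag == 'exists':
        _, v, body = f
        n = next(ids)
        # a function of the surrounding universals, or a plain constant
        term = ('f%d' % n,) + universals if universals else 'C%d' % n
        return skolemize(subst(body, v, term), universals, ids)
    return f  # atomic predicate, e.g. ('mother', 'y', 'x')

# forall x exists y mother(y, x)  becomes  forall x mother(f1(x), x)
print(skolemize(('forall', 'x', ('exists', 'y', ('mother', 'y', 'x')))))
# ('forall', 'x', ('mother', ('f1', 'x'), 'x'))

# exists q orange(q)  becomes  orange(C1)
print(skolemize(('exists', 'q', ('orange', 'q'))))
# ('orange', 'C1')
```

The first call produces the Skolem function f1(x) because the existential sits inside ∀x; the second produces a plain constant because no universal surrounds it.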
Better do another one:
∃z[a(z) → ∀x[∀g[b(g, z) ∧ (c(z) ∨ ¬∃p[w(p) → ¬c(x)])]]]
Step 1: Remove implications
∃z[¬a(z) ∨ ∀x[∀g[b(g, z) ∧ (c(z) ∨ ¬∃p[¬w(p) ∨ ¬c(x)])]]]
Step 2: De Morgan’s
∃z[¬a(z) ∨ ∀x[∀g[b(g, z) ∧ (c(z) ∨ ∀p[¬(¬w(p) ∨ ¬c(x))])]]]
∃z[¬a(z) ∨ ∀x[∀g[b(g, z) ∧ (c(z) ∨ ∀p[(¬¬w(p) ∧ ¬¬c(x))])]]]
∃z[¬a(z) ∨ ∀x[∀g[b(g, z) ∧ (c(z) ∨ ∀p[(w(p) ∧ c(x))])]]]
We were here
∃z[¬a(z) ∨ ∀x[∀g[b(g, z) ∧ (c(z) ∨ ∀p[(w(p) ∧ c(x))])]]]
Next: Step 3, Skolemization:
¬a(R2) ∨ ∀x[∀g[b(g, R2) ∧ (c(R2) ∨ ∀p[(w(p) ∧ c(x))])]]
In this case there are no universally quantified
variables in the surrounding scope of the ∃z – i.e.
everything here is true of a single individual z,
not depending on x or g. So we don't need a Skolem
function, just the constant R2.
Next: Step 4, eliminate universal quantifiers:
¬a(R2) ∨ (b(g, R2) ∧ (c(R2) ∨ (w(p) ∧ c(x))))
Step 5: use the distributivity
equivalences to get closer to CNF
We were here:
¬a(R2) ∨ (b(g, R2) ∧ (c(R2) ∨ (w(p) ∧ c(x))))
Recall the distributivity equivalence on the previous slide. If we think
of ¬a(R2) as P, b(g, R2) as Q, and the rest, (c(R2) ∨ (w(p) ∧ c(x))), as R, then we
get:
(¬a(R2) ∨ b(g, R2)) ∧ (¬a(R2) ∨ (c(R2) ∨ (w(p) ∧ c(x))))
We were here:
(¬a(R2) ∨ b(g, R2)) ∧ (¬a(R2) ∨ (c(R2) ∨ (w(p) ∧ c(x))))
If we now think of (¬a(R2) ∨ c(R2)) as P, w(p) as Q, and c(x) as R, we get:
(¬a(R2) ∨ b(g, R2)) ∧
(¬a(R2) ∨ c(R2) ∨ w(p)) ∧ (¬a(R2) ∨ c(R2) ∨ c(x))
We are now in CNF !! Let’s tidy up and finish by placing it into
some form of reproducible order
(¬a(R2) ∨ b(g, R2)) ∧
(¬a(R2) ∨ c(R2) ∨ c(x)) ∧ (¬a(R2) ∨ c(R2) ∨ w(p))
… just a slight change to get the clauses, and the literals within them, into alphabetical order
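Step 5 is mechanical enough to code directly. Below is our own sketch (not from the slides): formulas are nested tuples, atoms are strings, and negation signs are folded into the atom names, so 'a' here abbreviates the whole literal ¬a(R2), 'cx' abbreviates c(x), and so on:

```python
def distribute(f):
    """Push OR inside AND using P v (Q ^ R) == (P v Q) ^ (P v R)."""
    if isinstance(f, str):
        return f                      # an atom is already in CNF
    op, l, r = f
    l, r = distribute(l), distribute(r)
    if op == 'and':
        return ('and', l, r)
    if isinstance(l, tuple) and l[0] == 'and':   # (Q ^ R) v P
        return ('and', distribute(('or', l[1], r)),
                       distribute(('or', l[2], r)))
    if isinstance(r, tuple) and r[0] == 'and':   # P v (Q ^ R)
        return ('and', distribute(('or', l, r[1])),
                       distribute(('or', l, r[2])))
    return ('or', l, r)

# the shape of the formula in this example: a v (b ^ (c v (w ^ cx)))
f = ('or', 'a', ('and', 'b', ('or', 'c', ('and', 'w', 'cx'))))
print(distribute(f))
# ('and', ('or', 'a', 'b'),
#         ('and', ('or', 'a', ('or', 'c', 'w')),
#                 ('or', 'a', ('or', 'c', 'cx'))))
```

The result is the same three clauses as the worked example: (a ∨ b), (a ∨ c ∨ w), and (a ∨ c ∨ cx).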
Resolution:
how we do proofs in CNF
Converting predicate logic statements to CNF is very
convenient, because it simplifies the process of trying to
prove things automatically.
With fewer connectives and no quantifiers to worry about,
there are fewer applicable equivalences or inference rules
that can be used. In fact, we can do a great deal of automated
proof work by using just one inference rule. This is called
Resolution, and it is usually used in conjunction with
Reasoning by Contradiction (see slides for week 6)
The Resolution inference rule is basically:
If, in CNF form, one clause contains X and another clause
contains ¬X, then we can infer the union of those clauses,
without the X and the ¬X
Resolution
To help understand, note that we will implicitly often use
AND-introduction and AND-elimination. That is, if A AND B
is true, then we know that A on its own is true, and that B on
its own is true, and vice versa.
Suppose we have these two clauses:
1) (raining ∨ sad(Dave) ∨ slow(Bus) ∨ ontv(Spongebob))
2) (happy(Kylie) ∨ ¬slow(Bus))
The resolution rule allows us to infer the following – by noting that
one of them contains something that is negated in the other. We join
the clauses, eliminating that pair.
3) (raining ∨ sad(Dave) ∨ happy(Kylie) ∨ ontv(Spongebob))
Is it valid?
Of course it is. But it is worth thinking about it to convince yourself
Think of the simple case where you have:
1) (¬raining ∨ sad(Dave))
2) raining
Clearly, resolution now gives us:
3) sad(Dave)
We could prove it in a longer way by:
{Implication rule, from (1)}
4) (raining → sad(Dave))
{modus ponens, from 2 and 4}
5) sad(Dave)
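The resolution rule itself is a few lines of code once a clause is a set of literal strings with `~` marking negation (our own representation, not from the slides):

```python
def resolve(c1, c2):
    """All resolvents of two clauses: for each literal in c1 whose
    complement appears in c2, join the clauses minus that pair."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith('~') else '~' + lit
        if comp in c2:
            resolvents.append((c1 - {lit}) | (c2 - {comp}))
    return resolvents

# the Dave/Kylie example: slow(Bus) and ~slow(Bus) cancel
c1 = {'raining', 'sad(Dave)', 'slow(Bus)', 'ontv(Spongebob)'}
c2 = {'happy(Kylie)', '~slow(Bus)'}
res = resolve(c1, c2)
# one resolvent: {'raining', 'sad(Dave)', 'happy(Kylie)', 'ontv(Spongebob)'}
```

The simple raining/sad(Dave) case works the same way: `resolve({'~raining', 'sad(Dave)'}, {'raining'})` yields the single clause `{'sad(Dave)'}`.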
Proof by Contradiction using
resolution
Suppose we have a complex predicate logic statement, and we want
to prove that a certain clause is true.
(e.g. the statement may set out a state of a computer program, such as
the constraints on variable values for a heart monitor application, and
the clause may refer to the state of the warning indicator.
Or the statement may set out various aspects of immigration law, and
the clause may express whether a certain client is legal or not.)
In automated proof, we would typically have a program that does the following:
1. convert the complex statement(s) to CNF
2. assume the NEGATION of the clause
3. Apply resolution several times,
4. If and when we find a contradiction, we have proved the clause.
E.g. Suppose our statement is:
(A ∨ B ∨ D) ∧ (B ∨ ¬D ∨ ¬G) ∧ (A ∨ G ∨ B)
And we want to see if (A ∨ B) is true.
We start with assuming the negation of what we want to prove.
1) ¬A ∧ ¬B
Let’s also write down what we already know to be true
2) (A ∨ B ∨ D) ∧ (B ∨ ¬D ∨ ¬G) ∧ (A ∨ G ∨ B)
Resolution twice on the third clause of 2, using each part of 1, gets:
3) G
Resolution using 3 and the second clause of 2 gives:
4) (B ∨ ¬D)
Resolution using 4 and the first clause of 2 gives:
5) (A ∨ B) -- this is the negation of 1), so we have a contradiction
(we could infer F from 1) and 5), and so we have proved (A ∨ B))
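The whole refutation above can be automated with a naive saturation loop: keep resolving until either the empty clause appears (a contradiction) or nothing new can be derived. This is our own sketch, not an efficient prover; the literal signs follow the worked example above:

```python
def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of literals, ~ = negation)."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith('~') else '~' + lit
        if comp in c2:
            out.append((c1 - {lit}) | (c2 - {comp}))
    return out

def refutes(clauses):
    """Keep resolving until nothing new appears; True iff the empty
    clause (a contradiction) is derived."""
    known = set(clauses)
    while True:
        new = set()
        for a in known:
            for b in known:
                for r in resolve(a, b):
                    if not r:            # empty clause: contradiction found
                        return True
                    new.add(frozenset(r))
        if new <= known:                 # saturated, no contradiction
            return False
        known |= new

# (A v B v D) ^ (B v ~D v ~G) ^ (A v G v B), plus the negated goal ~A, ~B
kb = [frozenset(c) for c in
      [{'A', 'B', 'D'}, {'B', '~D', '~G'}, {'A', 'G', 'B'}]]
goal_negated = [frozenset({'~A'}), frozenset({'~B'})]
print(refutes(kb + goal_negated))  # True: so (A v B) is proved
print(refutes(kb))                 # False: the statement alone does not refute
```

Blind saturation like this terminates on propositional clauses because only finitely many clauses can be built from the literals present, but as the note below says, picking which resolution to do next is the hard part in practice.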
NOTE
In reality,
Resolution is sound, but not complete – we won’t necessarily
always be able to prove what we want to prove.
At any particular point, there will typically be many, many ways
that the rule could be used with the clauses that are known or
derived so far.
There is no obvious way to decide which is the best move to make
-- that’s a research issue, beyond our scope for the moment.