Transcript: Unit 5

Propositional Logic
Russell and Norvig: Chapter 7
Slides adapted from:
robotics.stanford.edu/~latombe/cs121/2004/home.htm
Logical Agents
Humans can know “things” and “reason”
Representation: How are the things stored?
Reasoning: How is the knowledge used?
  To solve a problem…
  To generate more knowledge…
Knowledge and reasoning are important to artificial agents because they enable successful behaviors difficult to achieve otherwise
Useful in partially observable environments
Can benefit from knowledge in very general forms, combining and recombining information
Logical Agents
John saw the diamond through the window and coveted it
What does “it” refer to: “diamond” or “window”?
The answer is “diamond”
We reason, perhaps unconsciously, with our knowledge of relative value
John threw the brick through the window and broke it
What does “it” refer to: “brick” or “window”?
The answer is “window”
Knowledge-Based Agent
[Figure: the agent receives percepts from the environment through sensors, consults its knowledge base to decide what to do, and acts on the environment through actuators]
Knowledge-Based Agents
Part II of the text presented problem-solving
agents
We saw that problem-solving agents can be
designed to learn as they interact with the
environment
They can acquire additional knowledge of the
environment as they proceed
But the kind of knowledge that they can
represent is rather limited
Knowledge-Based Agents
The knowledge-based agents that we will
study in Part III represent knowledge in very
general forms
They use logical inference to deduce new
representations
This allows a knowledge-based agent to
perform well in partially observable
environments, where problem-solving agents
have greater difficulty
Knowledge-Based Agents
Problem-solving agents address partial observability by building contingency plans, which, as we saw, cause exponential growth of the search space
Knowledge-based agents are also better able
to adapt to changes in goals or other aspects
of the environment
In Part III we examine logical agents in which knowledge is certain
Knowledge-Based Agents
A proposition is either true or false
Humans can perform well with uncertain
knowledge
But logic cannot represent uncertainty well
Logic and probability can be combined to
handle these more complex situations
Knowledge-Based Agents
The key element of any knowledge-based
agent is its knowledge base (KB)
The structure of the knowledge base is a set
of sentences
Each sentence is an assertion about the
environment
The form of a sentence is independent of the
assertion and the environment
Knowledge-Based Agents
Each sentence is expressed in a formal
language – a knowledge representation
language
The syntax is rigorously defined, much like
that of a high-level programming language
There must be a way of adding new
sentences to the KB
And we need to be able to extract information
from the KB
Knowledge-Based Agents
In the AI literature, these operations are
commonly called Tell and Ask
Both operations may involve inference
Inference derives new sentences from the set
of sentences currently in the KB
For logical agents, any sentence derived
through inference must logically follow from
sentences in the KB
Knowledge-Based Agents
Frequently, before allowing the agent to
interact with the environment, its KB is
initialized with background knowledge
Fig. 7.1 is the basic algorithm for a
knowledge-based agent
For each percept, it performs three steps
First the agent transforms the current percept
into a sentence and Tells the KB
Knowledge-Based Agents
Central component of a Knowledge-Based Agent is its Knowledge Base (KB)
  A set of sentences in a formal language
  Sentences are expressed using a knowledge representation language
Two generic functions:
  TELL – add new sentences (facts) to the KB
    “Tell it what it needs to know”
  ASK – query what is known from the KB
    “Ask what to do next”
Knowledge-Based Agents
Then, the agent Asks the KB what action is appropriate for the current percept
The KB returns what it believes is the most appropriate action
Finally, the agent Tells the KB that it has in fact performed the suggested action
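As a concrete illustration, here is a minimal Python sketch of this Tell–Ask cycle. The KnowledgeBase class and the sentence-building helpers are simplified stand-ins of my own (the text only names the Make-Percept-Sentence and Make-Action-Sentence abstractions); a real agent would put an actual inference procedure behind ask.

    import itertools

    class KnowledgeBase:
        """Toy KB: a growing list of sentences. A real KB would run
        logical inference inside ask(); here ask() is a stub."""
        def __init__(self, background_knowledge=()):
            self.sentences = list(background_knowledge)  # initialized with background knowledge

        def tell(self, sentence):
            self.sentences.append(sentence)  # add a new assertion about the environment

        def ask(self, query):
            return "NoOp"  # placeholder: inference over self.sentences would go here

    # Hypothetical sentence constructors, named after the text's abstractions.
    def make_percept_sentence(percept, t):
        return f"Percept({percept}, t={t})"

    def make_action_query(t):
        return f"BestAction(t={t})?"

    def make_action_sentence(action, t):
        return f"Performed({action}, t={t})"

    class KBAgent:
        def __init__(self, kb):
            self.kb = kb
            self.clock = itertools.count()  # time step counter

        def step(self, percept):
            t = next(self.clock)
            self.kb.tell(make_percept_sentence(percept, t))  # 1. Tell the KB what was perceived
            action = self.kb.ask(make_action_query(t))       # 2. Ask the KB for an action
            self.kb.tell(make_action_sentence(action, t))    # 3. Tell the KB the action was taken
            return action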
Knowledge-Based Agents
The agent must be able to:
  Represent states and actions
  Incorporate new percepts
  Update internal representations of the world
  Deduce hidden properties of the world
  Deduce appropriate actions
[Figure: an inference engine (domain-independent algorithms) operates on a knowledge base (domain-specific content)]
Knowledge-Based Agents
Note that the details of translating percepts and actions into sentences are abstracted using the functions Make-Percept-Sentence and Make-Action-Sentence
Superficially, the structure of a knowledge-based agent is very similar to a problem-solving agent with internal state
The agent’s initial program is determined by
the background knowledge it is Told
Knowledge-Based Agents
This is called the declarative approach
In a problem-solving agent, the designer implements the desired behavior directly as program code
This is called the procedural approach
Knowledge representation, reasoning, and
learning in knowledge-based agents are
based on the theory of mathematical logic
Knowledge-Based Agents
A knowledge-based agent must be able to:
  Represent background information
    States, actions, etc.
  Represent new information
    Percepts, actions taken, etc.
  Update internal state, deduce hidden state
  Deduce appropriate actions
The Wumpus World
A simple environment, useful for discussing
knowledge-based agents
A set of connected rooms
Rooms may have pits and a wumpus (both
lethal) and gold (the goal)
The Wumpus World
The wumpus world is the problem used
throughout this chapter to illustrate the
principles
The environment is a cave consisting of
rooms connected by passageways
The wumpus is a beast that eats anyone who
enters its room
Its location is fixed, but initially unknown
The Wumpus World
One of the rooms contains a pot of gold
Agent has one arrow that it can use to kill the
wumpus
Some of the rooms contain bottomless pits
Wumpus is too large to fall into a pit, but an
agent that enters a room with a pit dies
The wumpus world is widely considered a
good problem on which to test intelligent
agents
PEAS Description
Performance measure – +1000 for picking up the gold, –1000 for falling into a pit or being eaten by the wumpus, –1 for each action taken, and –10 for using up the arrow
Environment – 4 × 4 grid of rooms; agent is
initially in room [1, 1] facing right; locations
of gold and wumpus are chosen randomly,
with a uniform distribution
PEAS Description
Environment – each square other than
[1, 1] has probability 0.2 of containing a pit
Actuators – agent can move forward, turn left
or right 90°, grab the gold, and shoot the
arrow in the forward direction
Game ends if either the agent dies or
executes grab in the room containing the gold
PEAS Description
If the agent moves into an (exterior) wall, the agent
remains in the same room
An arrow continues in a straight line until it kills the
wumpus or hits a wall
Sensors – agent has five sensors
 [Stench?, Breeze?, Glitter?, Bump?, Scream?]
Stench in the room containing the wumpus or
(vertically or horizontally) adjacent room
Breeze in a room adjacent to a room containing a pit
PEAS Description
Glitter in the room containing the gold
Bump if the last action caused the agent to
move into a wall
Scream if the last action was shoot and the
wumpus was killed
In discussing the problem, percepts are listed
as 5-tuples; e.g., [Stench, Breeze, None,
None, None]
The Wumpus World: Example
[Figure: a sequence of snapshots of the 4×4 grid, showing the agent exploring the cave and updating what it knows about breezes, stenches, pits, and the wumpus after each percept]
Model Checking
One type of logical inference is model checking:
  Generate all possible models
  The KB is false in all models that contradict sentences in the KB
  For ALL models in which the KB is true, is the ASKed sentence also true? If so, the KB entails the sentence
Example:
  “2 + x = z” is in the KB
  Models are all possible value pairs for x and z
  In all models in which “2 + x = z” does not hold, the KB is false
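To make this concrete, here is a brute-force model checker in Python, in the spirit of the TT-Entails procedure: enumerate every model, and check the query in each model where the KB holds. Representing sentences as Python functions from a model (a dict of truth values) to a boolean is my own encoding choice, not anything mandated by the slides.

    from itertools import product

    def tt_entails(kb_sentences, query, symbols):
        """Model checking: the KB entails the query iff the query holds in
        every model (truth assignment) that makes all KB sentences true."""
        for values in product([True, False], repeat=len(symbols)):
            model = dict(zip(symbols, values))       # one candidate model
            if all(s(model) for s in kb_sentences):  # a model where the KB is true?
                if not query(model):
                    return False                     # KB true here, but query false
        return True                                  # query true in all KB models

    # Wumpus-flavoured check: does {¬B11, B11 ⇔ (P12 ∨ P21)} entail
    # "there is no pit in [1,2]"?
    kb = [
        lambda m: not m["B11"],
        lambda m: m["B11"] == (m["P12"] or m["P21"]),
    ]
    print(tt_entails(kb, lambda m: not m["P12"], ["B11", "P12", "P21"]))  # True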
Model Checking Example
[Figure: a series of 4×4 grid diagrams enumerating the pit configurations (models) consistent with the agent’s percepts]
Logic in General
A logic is a formal language for
representing and using information
Information is represented as a set of
sentences in a knowledge base
Logic
Inference is the process of deriving a specific sentence from a KB (where the sentence must be entailed by the KB)
  KB ⊢i α means sentence α can be derived from KB by procedure i
“KBs are a haystack”
  Entailment = needle in haystack
  Inference = finding it
Entailment
α ⊨ β means that α entails β
  In every model in which α is true, β is also true in that model
Think of the entire KB as one big statement that is true or false
  Each sentence in the KB is ANDed together to form one big sentence
ASK the KB a question: does the KB entail the sentence “there is no pit in [1, 1]”?
  If so, add it to the KB. The new sentence is produced by logical inference
Logic
Soundness
  i is sound if…
  whenever KB ⊢i α holds, KB ⊨ α holds
Completeness
  i is complete if…
  whenever KB ⊨ α holds, KB ⊢i α holds
If the KB is true in the real world, then any sentence α derived from the KB by a sound inference procedure is also true in the real world
Syntax
Syntax specifies rules for the construction of well-formed sentences in a particular language
  E.g., in a math domain: “2 + x = z” is well-formed, “x2+ = =” is not
Semantics
Semantics define the truth value (true or false) of a sentence
  Interprets the KB’s sentences, given a state of the world
  E.g., “2 + x = z” is true when x = 0 and z = 2, and in many other cases
  A particular state of the world determines values in the KB’s sentences
  A model is a possible set of values, corresponding to some possible state of the world
  m is a model of sentence A if A is true in m
Propositional Logic
AKA Boolean Logic
False and True
Proposition symbols P1, P2, etc. are sentences
NOT: If S1 is a sentence, then ¬S1 is a sentence (negation)
AND: If S1, S2 are sentences, then S1 ∧ S2 is a sentence (conjunction)
OR: If S1, S2 are sentences, then S1 ∨ S2 is a sentence (disjunction)
IMPLIES: If S1, S2 are sentences, then S1 ⇒ S2 is a sentence (implication)
IFF: If S1, S2 are sentences, then S1 ⇔ S2 is a sentence (biconditional)
Propositional Logic
P      Q      ¬P     P∧Q    P∨Q    P⇒Q    P⇔Q
False  False  True   False  False  True   True
False  True   True   False  True   True   False
True   False  False  False  True   False  False
True   True   False  True   True   True   True
Wumpus World Sentences
Let Pi,j be True if there is a pit in [i,j]
Let Bi,j be True if there is a breeze in [i,j]
Percepts so far give us:
  ¬P1,1
  ¬B1,1
  B2,1
“Pits cause breezes in adjacent squares” – a square is breezy if and only if there is an adjacent pit:
  B1,1 ⇔ (P1,2 ∨ P2,1)
  B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
A Simple Knowledge Base
R1: ¬P1,1
R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
R4: ¬B1,1
R5: B2,1
The KB consists of sentences R1 through R5:
  R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5
A Simple Knowledge Base
Every known inference algorithm for propositional logic has a worst-case complexity that is exponential in the size of the input (the problem is co-NP-complete)
Equivalence, Validity, Satisfiability
A sentence is valid if it is true in all models
  e.g. True, A ∨ ¬A, A ⇒ A, (A ∧ (A ⇒ B)) ⇒ B
Validity is connected to inference via the Deduction Theorem
  KB ⊨ α iff (KB ⇒ α) is valid
A sentence is satisfiable if it is True in some model
  e.g. A ∨ B, C
A sentence is unsatisfiable if it is True in no models
  e.g. A ∧ ¬A
Satisfiability is connected to inference via the following:
  KB ⊨ α iff (KB ∧ ¬α) is unsatisfiable
  proof by contradiction
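These definitions are easy to test mechanically for small vocabularies. A short sketch using the same enumerate-all-models idea (encoding sentences as Python predicates over a model dict is my choice of representation):

    from itertools import product

    def models(symbols):
        """Enumerate every truth assignment over the given symbols."""
        for values in product([True, False], repeat=len(symbols)):
            yield dict(zip(symbols, values))

    def is_valid(sentence, symbols):
        return all(sentence(m) for m in models(symbols))   # true in ALL models

    def is_satisfiable(sentence, symbols):
        return any(sentence(m) for m in models(symbols))   # true in SOME model

    # A ∨ ¬A is valid; A ∧ ¬A is unsatisfiable; A ∧ B is merely satisfiable.
    print(is_valid(lambda m: m["A"] or not m["A"], ["A"]))          # True
    print(is_satisfiable(lambda m: m["A"] and not m["A"], ["A"]))   # False
    print(is_satisfiable(lambda m: m["A"] and m["B"], ["A", "B"]))  # True

    # Proof by contradiction, per the slide:
    # KB entails α iff (KB ∧ ¬α) is NOT satisfiable.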
Reasoning Patterns
Inference Rules
  Patterns of inference that can be applied to derive chains of conclusions that lead to the desired goal
Modus Ponens
  Given: S1 ⇒ S2 and S1, derive S2
And-Elimination
  Given: S1 ∧ S2, derive S1
  Given: S1 ∧ S2, derive S2
De Morgan’s Laws
  Given: ¬(A ∧ B), derive ¬A ∨ ¬B
  Given: ¬(A ∨ B), derive ¬A ∧ ¬B
Reasoning Patterns
And-Elimination
  (α ∧ β) / α
  From a conjunction, any of the conjuncts can be inferred
  From (WumpusAhead ∧ WumpusAlive), WumpusAlive can be inferred
Modus Ponens
  (α ⇒ β, α) / β
  Whenever sentences of the form α ⇒ β and α are given, then sentence β can be inferred
  From (WumpusAhead ∧ WumpusAlive) ⇒ Shoot and (WumpusAhead ∧ WumpusAlive), Shoot can be inferred
Example Proof By Deduction
Knowledge
  S1: B22 ⇔ (P21 ∨ P23 ∨ P12 ∨ P32)   [rule]
  S2: ¬B22   [observation]
Inferences
  S3: (B22 ⇒ (P21 ∨ P23 ∨ P12 ∨ P32)) ∧ ((P21 ∨ P23 ∨ P12 ∨ P32) ⇒ B22)   [S1, biconditional elim]
  S4: (P21 ∨ P23 ∨ P12 ∨ P32) ⇒ B22   [S3, and elim]
  S5: ¬B22 ⇒ ¬(P21 ∨ P23 ∨ P12 ∨ P32)   [S4, contrapositive]
  S6: ¬(P21 ∨ P23 ∨ P12 ∨ P32)   [S2, S5, Modus Ponens]
  S7: ¬P21 ∧ ¬P23 ∧ ¬P12 ∧ ¬P32   [S6, De Morgan]
Evaluation of Deductive Inference
Sound
  Yes, because the inference rules themselves are sound (this can be proven using a truth-table argument)
Complete
  If we allow all possible inference rules, we’re searching in an infinite space, hence not complete
  If we limit inference rules, we run the risk of leaving out the necessary one…
Monotonic
  If we have a proof, adding information to the KB will not invalidate the proof
Resolution
Resolution allows a complete inference mechanism (search-based) using only one rule of inference
Resolution rule:
  Given: P1 ∨ P2 ∨ P3 ∨ … ∨ Pn, and ¬P1 ∨ Q1 ∨ … ∨ Qm
  Conclude: P2 ∨ P3 ∨ … ∨ Pn ∨ Q1 ∨ … ∨ Qm
  Complementary literals P1 and ¬P1 “cancel out”
Why it works:
  Consider 2 cases: P1 is true, and P1 is false
Resolution in Wumpus World
There is a pit at [2,1] or [2,3] or [1,2] or [3,2]
  P21 ∨ P23 ∨ P12 ∨ P32
There is no pit at [2,1]
  ¬P21
Therefore (by resolution) the pit must be at [2,3] or [1,2] or [3,2]
  P23 ∨ P12 ∨ P32
Proof using Resolution
To prove a fact P, repeatedly apply resolution until either:
  No new clauses can be added (KB does not entail P)
  The empty clause is derived (KB does entail P)
This is proof by contradiction: if we prove that KB ∧ ¬P derives a contradiction (the empty clause) and we know KB is true, then ¬P must be false, so P must be true!
To apply resolution mechanically, facts need to be in Conjunctive Normal Form (CNF)
To carry out the proof, we need a search mechanism that will enumerate all possible resolutions
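Here is a compact sketch of that procedure. Clauses are represented as frozensets of string literals with "~" marking negation; that encoding, and the names below, are my own illustrative choices, not a standard library API.

    def negate(lit):
        """Complement a literal: 'P12' <-> '~P12'."""
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolvents(c1, c2):
        """All clauses obtainable by resolving c1 with c2
        (clauses are frozensets of literals)."""
        out = []
        for lit in c1:
            if negate(lit) in c2:
                out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
        return out

    def resolution_entails(kb_clauses, query_lit):
        """Proof by contradiction: add the negated query, then resolve
        to exhaustion. Entailed iff the empty clause appears."""
        clauses = set(kb_clauses) | {frozenset([negate(query_lit)])}
        while True:
            new = set()
            for a in clauses:
                for b in clauses:
                    if a != b:
                        for r in resolvents(a, b):
                            if not r:
                                return True   # empty clause derived
                            new.add(r)
            if new <= clauses:
                return False                  # nothing new: no proof exists
            clauses |= new

    # The wumpus example worked on the next slides: given B22 and no pit
    # at [2,1], [2,3], [3,2], does the KB entail a pit at [1,2]?
    kb = {
        frozenset(["~B22", "P21", "P23", "P12", "P32"]),
        frozenset(["B22"]),
        frozenset(["~P21"]),
        frozenset(["~P23"]),
        frozenset(["~P32"]),
    }
    print(resolution_entails(kb, "P12"))      # True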
CNF Example
1. B22 ⇔ (P21 ∨ P23 ∨ P12 ∨ P32)
2. Eliminate ⇔, replacing it with two implications:
   (B22 ⇒ (P21 ∨ P23 ∨ P12 ∨ P32)) ∧ ((P21 ∨ P23 ∨ P12 ∨ P32) ⇒ B22)
3. Replace each implication (A ⇒ B) by ¬A ∨ B:
   (¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32) ∧ (¬(P21 ∨ P23 ∨ P12 ∨ P32) ∨ B22)
4. Move ¬ “inwards” (unnecessary parentheses removed):
   (¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32) ∧ ((¬P21 ∧ ¬P23 ∧ ¬P12 ∧ ¬P32) ∨ B22)
5. Apply the distributive law:
   (¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32) ∧ (¬P21 ∨ B22) ∧ (¬P23 ∨ B22) ∧ (¬P12 ∨ B22) ∧ (¬P32 ∨ B22)
(Final result has 5 clauses)
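The same conversion can be mechanized. Below is a small recursive converter I wrote for illustration; formulas are nested tuples (my own encoding), and the functions mirror the steps above.

    # Formulas as nested tuples: ("not", f), ("and", f, g), ("or", f, g),
    # ("implies", f, g), ("iff", f, g), or a plain string atom.

    def to_cnf(f):
        return distribute(push_not(eliminate(f)))

    def eliminate(f):
        """Steps 1-3: rewrite iff and implies using and/or/not."""
        if isinstance(f, str):
            return f
        if f[0] == "not":
            return ("not", eliminate(f[1]))
        a, b = eliminate(f[1]), eliminate(f[2])
        if f[0] == "iff":
            return ("and", ("or", ("not", a), b), ("or", ("not", b), a))
        if f[0] == "implies":
            return ("or", ("not", a), b)
        return (f[0], a, b)

    def push_not(f):
        """Step 4: move negation inwards (De Morgan, double negation)."""
        if isinstance(f, str):
            return f
        if f[0] == "not":
            g = f[1]
            if isinstance(g, str):
                return f                      # literal: nothing to push
            if g[0] == "not":
                return push_not(g[1])         # ¬¬A = A
            flip = "or" if g[0] == "and" else "and"
            return (flip, push_not(("not", g[1])), push_not(("not", g[2])))
        return (f[0], push_not(f[1]), push_not(f[2]))

    def distribute(f):
        """Step 5: distribute or over and."""
        if isinstance(f, str) or f[0] == "not":
            return f
        a, b = distribute(f[1]), distribute(f[2])
        if f[0] == "or":
            if not isinstance(a, str) and a[0] == "and":
                return ("and", distribute(("or", a[1], b)), distribute(("or", a[2], b)))
            if not isinstance(b, str) and b[0] == "and":
                return ("and", distribute(("or", a, b[1])), distribute(("or", a, b[2])))
        return (f[0], a, b)

    f = ("iff", "B22", ("or", "P21", ("or", "P23", ("or", "P12", "P32"))))
    print(to_cnf(f))   # a conjunction of the five clauses shown above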
Resolution Example
Given B22 and ¬P21 and ¬P23 and ¬P32, prove P12
Add the negated goal ¬P12 and resolve:
(¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32) ; ¬P12
(¬B22 ∨ P21 ∨ P23 ∨ P32) ; ¬P21
(¬B22 ∨ P23 ∨ P32) ; ¬P23
(¬B22 ∨ P32) ; ¬P32
(¬B22) ; B22
[empty clause]
Evaluation of Resolution
Resolution is sound
  Because the resolution rule is true in all cases
Resolution is complete
  Provided a complete search method is used to find the proof, if a proof exists it will be found
  Note: you must know what you’re trying to prove in order to prove it!
Resolution is exponential
  The number of clauses that we must search grows exponentially…
Horn Clauses
A Horn Clause is a CNF clause with exactly one positive literal
  The positive literal is called the head
  The negative literals are called the body
  Prolog: head :- body1, body2, body3 …
  English: “To prove the head, prove body1, …”
  Implication: If (body1 ∧ body2 ∧ …) then head
Horn Clauses form the basis of forward and backward chaining
The Prolog language is based on Horn Clauses
Deciding entailment with Horn Clauses is linear in the size of the knowledge base
Reasoning with Horn Clauses
Forward Chaining
  For each new piece of data, generate all new facts, until the desired fact is generated
  Data-directed reasoning
Backward Chaining
  To prove the goal, find a clause that contains the goal as its head, and prove the body recursively
  (Backtrack when you chose the wrong clause)
  Goal-directed reasoning
Forward Chaining
Fire any rule whose premises are satisfied in the KB, and add its conclusion to the KB, until the query is found
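A sketch of this in code, roughly in the spirit of the PL-FC-Entails algorithm, which keeps a count of unmet premises per rule. Encoding Horn clauses as (premises, conclusion) pairs is my own choice; premises are assumed to be distinct symbols.

    from collections import deque

    def fc_entails(rules, facts, query):
        """Forward chaining over Horn clauses. Fires every rule whose
        premises are all known, adds the conclusion, and repeats until the
        query appears or nothing new can be inferred."""
        count = {i: len(prem) for i, (prem, _) in enumerate(rules)}  # unmet premises per rule
        inferred = set()
        agenda = deque(facts)
        while agenda:
            p = agenda.popleft()
            if p == query:
                return True
            if p in inferred:
                continue
            inferred.add(p)
            for i, (prem, concl) in enumerate(rules):
                if p in prem:
                    count[i] -= 1
                    if count[i] == 0:
                        agenda.append(concl)   # all premises satisfied: fire the rule
        return False

    rules = [(["beeping_alarm"], "smokey"), (["hot", "smokey"], "fire")]
    print(fc_entails(rules, ["beeping_alarm", "hot"], "fire"))   # True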
Forward Chaining
AND-OR Graph
  multiple links joined by an arc indicate conjunction – every link must be proved
  multiple links without an arc indicate disjunction – any link can be proved
Forward Chaining
[Figure: successive snapshots of the AND-OR graph as forward chaining fires rules and marks facts as proved]
Backward Chaining
Idea: work backwards from the query q:
  To prove q by BC:
    Check if q is known already, or
    Prove by BC all premises of some rule concluding q
Avoid loops
  Check if the new subgoal is already on the goal stack
Avoid repeated work: check if the new subgoal
  Has already been proved true, or
  Has already failed
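A recursive sketch of this, with the goal stack threaded through to avoid loops. Caching of already-proved and already-failed subgoals, which the slide also recommends, is omitted for brevity; the (premises, conclusion) rule encoding is the same one used in the forward-chaining sketch above.

    def bc_entails(rules, facts, goal, stack=frozenset()):
        """Backward chaining over Horn clauses: goal-directed, recursive."""
        if goal in facts:
            return True                   # goal is known already
        if goal in stack:
            return False                  # subgoal already on the goal stack: avoid loop
        for prem, concl in rules:
            if concl == goal and all(
                bc_entails(rules, facts, p, stack | {goal}) for p in prem
            ):
                return True               # every premise of this rule proved
        return False

    rules = [(["beeping_alarm"], "smokey"), (["hot", "smokey"], "fire")]
    print(bc_entails(rules, {"beeping_alarm", "hot"}, "fire"))   # True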
Backward Chaining
[Figure: successive snapshots of the AND-OR graph as backward chaining expands subgoals from the query down to known facts]
Forward Chaining vs. Backward Chaining
Forward Chaining is data driven
  Automatic, unconscious processing
  E.g. object recognition, routine decisions
  May do lots of work that is irrelevant to the goal
Backward Chaining is goal driven
  Appropriate for problem solving
  E.g. “Where are my keys?”, “How do I start the car?”
The complexity of BC can be much less than linear in the size of the KB
Forward and Backward Chaining in Expert Systems
More on Rule-Based Systems
Structure of Rule-Based Systems
[Figure: a user interface communicates with an inference engine, which runs a recognise-act cycle over a knowledge base of production rules and a working memory of facts]
Logic and Expert Systems
Production rules can be represented using logic
  For instance:
  outlook(sunny) ∧ temperature(high) ∧ going_outdoors(x) ⇒ take_water(x)
  Logical facts joined together by logical connectives form the condition part of the rule
  The implication of the formula forms the action part of the rule
Chaining
Introduction to Chaining
There are two main types of inference in expert systems:
1. Forward chaining – start with some facts in working memory, keep using the rules to draw new conclusions and take new actions
2. Backward chaining – start with a goal and look for rules that will help achieve that goal, chaining backwards until you reach initial conditions (i.e. conditions not inferred)
Forward Chaining
So far all the actions we’ve looked at have involved the system prompting some procedure or real-world action, such as:
  print “take some water with you”
  open_valve
An action could instead add new information into the working memory
For example:
Rules:
  1. beeping_alarm ⇒ smokey
  2. hot ∧ smokey ⇒ fire
  3. fire ⇒ switch_on_sprinklers
The actions of two of the rules in this system are to add new facts into working memory
Forward Chaining (cont.)
The previous example allows new information to be concluded
  Rule 1 concludes that it is smokey
It makes a change to working memory that allows a further conclusion to be made
  that there is a fire
Forward chaining will continue until the contents of the working memory match no rules
Rules:
  1. beeping_alarm ⇒ smokey
  2. hot ∧ smokey ⇒ fire
  3. fire ⇒ switch_on_sprinklers
Working memory:
  beeping_alarm
  hot
  smokey (added by Rule 1)
  fire (added by Rule 2)
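The recognise-act cycle for this example can be simulated in a few lines. Treating switch_on_sprinklers as just another fact added to working memory is a simplification in this sketch; a real system would execute it as an external action.

    def recognize_act(rules, working_memory):
        """Recognise-act cycle: fire any rule whose condition part is
        satisfied, add its conclusion to working memory, and stop when
        no rule can add anything new."""
        wm = set(working_memory)
        changed = True
        while changed:
            changed = False
            for premises, action in rules:
                if set(premises) <= wm and action not in wm:
                    wm.add(action)      # the rule's action asserts a new fact
                    changed = True
        return wm

    rules = [
        (["beeping_alarm"], "smokey"),
        (["hot", "smokey"], "fire"),
        (["fire"], "switch_on_sprinklers"),
    ]
    print(recognize_act(rules, ["beeping_alarm", "hot"]))
    # {'beeping_alarm', 'hot', 'smokey', 'fire', 'switch_on_sprinklers'}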
Reason Maintenance
Some applications of forward chaining need
the facility for removing facts from working
memory
This is called reason maintenance
In the previous example, unless fire and smokey are removed from working memory when the fire has been extinguished, the sprinklers would always be instructed to switch on
Backward Chaining
Given some situation, such as “the sprinklers are switched on”, work out what initial condition(s) led to that situation
Initial conditions are facts which cannot be inferred using any of the production rules (and, for later, cannot be gained by questioning a user)
  They are the “root cause” of an event
Sometimes you may also be interested in which rules were fired along the way
  For instance, you may want to know all the possible symptoms of an illness
  (similarly for forward chaining)
Backward Chaining Algorithm
Given goal G:
  If G is an initial condition, stop
  Else find the rule R whose action matches G
  Set G to be the condition(s) of rule R
  Repeat for each G
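A direct transcription of this algorithm, using the sprinkler rules from the example that follows (the (conditions, action) encoding is my own; goals that nothing concludes and that are not initial conditions are simply skipped in this sketch):

    def trace_to_initial(rules, goal, initial_conditions):
        """Chain backwards from a goal until only initial conditions
        (facts no rule concludes) remain."""
        if goal in initial_conditions:
            return [goal]                   # reached a root cause: stop
        found = []
        for premises, action in rules:
            if action == goal:              # rule R whose action matches G
                for p in premises:          # repeat the algorithm for each condition
                    found += trace_to_initial(rules, p, initial_conditions)
        return found

    rules = [
        (["beeping_alarm"], "smokey"),
        (["hot", "smokey"], "fire"),
        (["fire"], "sprinklers_on"),
    ]
    print(trace_to_initial(rules, "sprinklers_on", {"hot", "beeping_alarm"}))
    # ['hot', 'beeping_alarm']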
Backward Chaining Simple Example
Rules:
  1. beeping_alarm ⇒ smokey
  2. hot ∧ smokey ⇒ fire
  3. fire ⇒ sprinklers_on
Trace:
  G = “sprinklers_on”; R = 3, G3 = “fire”
  G3 = “fire” is not an initial condition; R = 2, G2a = “hot”, G2b = “smokey”
  G2a = “hot” is an initial condition
  G2b = “smokey” is not an initial condition; R = 1, G1 = “beeping_alarm”
  G1 = “beeping_alarm” is an initial condition
Goal tree: sprinklers_on ← fire ← (hot AND smokey), smokey ← beeping_alarm
Backward Chaining – a more complex example
In the previous example, each goal matched only one rule
But supposing it matched more than one?
In this case a tree can be built as before, and depth-first search can be used to find the chains of rules
Rules:
  1. fire ⇒ sprinklers_on
  2. flames ⇒ fire
  3. fireplace ⇒ flames
  4. beeping_alarm ⇒ smokey
  5. hot ∧ smokey ⇒ fire
Goal tree: sprinklers_on ← fire; fire ← flames ← fireplace, or fire ← (hot AND smokey), smokey ← beeping_alarm
Backward Chaining – Involving a User
Most real-world expert systems ask questions of a user to help the search
Whenever the goal cannot be found in the actions of the rules, the user is asked a question
For example MYCIN, an early medical diagnosis system, first narrowed down the list of possible diseases, then backward chained from each disease in the list
Involving a User, Example
Rule base:
  1. high_temperature ∧ rash ⇒ measles
  2. high_temperature ∧ headache ⇒ flu
  3. lots_of_spots ⇒ rash
User interface:
  Patient complains of high temperature
  (MYCIN has forward chained and cut the list of possible illnesses to two)
  Q: Does the patient have lots of spots?
  A: No
  Q: Does the patient have a headache?
  A: Yes
  Patient has flu
Goal trees: measles ← (high_temperature AND rash), rash ← lots_of_spots; flu ← (high_temperature AND headache)
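To tie the pieces together, here is a sketch of backward chaining that falls back to asking the user whenever a goal is not the action of any rule, mirroring the dialogue above. The answers dict stands in for the user interface; in an interactive system each lookup would be a question to the user.

    def prove(rules, goal, answers):
        """Backward chain; if no rule concludes the goal, ask the user."""
        bodies = [body for body, head in rules if head == goal]
        if not bodies:
            # goal is not the action of any rule: it must come from the user
            return answers.get(goal, False)
        return any(all(prove(rules, g, answers) for g in body) for body in bodies)

    rules = [
        (["high_temperature", "rash"], "measles"),
        (["high_temperature", "headache"], "flu"),
        (["lots_of_spots"], "rash"),
    ]
    answers = {"high_temperature": True, "lots_of_spots": False, "headache": True}
    print(prove(rules, "measles", answers))   # False – no spots, so no rash
    print(prove(rules, "flu", answers))       # True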