CSCE 580
Artificial Intelligence
Ch.5 [P]: Propositions and Inference
Sections 5.5-5.7: Complete Knowledge Assumption,
Abduction, and Causal Models
Fall 2009
Marco Valtorta
[email protected]
UNIVERSITY OF SOUTH CAROLINA
Department of Computer Science and Engineering
Acknowledgment
• The slides are based on [AIMA] and other sources, including
other fine textbooks
• David Poole, Alan Mackworth, and Randy Goebel.
Computational Intelligence: A Logical Approach. Oxford,
1998
– A second edition (by Poole and Mackworth) is under
development. Dr. Poole allowed us to use a draft of it in
this course
• Ivan Bratko. Prolog Programming for Artificial Intelligence,
Third Edition. Addison-Wesley, 2001
– The fourth edition is under development
• George F. Luger. Artificial Intelligence: Structures and
Strategies for Complex Problem Solving, Sixth Edition.
Addison-Wesley, 2009
Example of Clark’s Completion
Negation as Failure
Non-monotonic Reasoning
Example of Non-monotonic Reasoning
Bottom-up Negation as Failure Inference Procedure
Top-down Negation as Failure Inference Procedure
Top-down Negation as Failure Inference Procedure
(updated on 2009-10-29)
Abduction
Abduction is a form of reasoning where assumptions are
made to explain observations.
– For example, if an agent observes that some light is
not working, it can hypothesize what is happening in
the world to explain why the light is not working.
– An intelligent tutoring system could try to explain why a
student is giving some answer in terms of what the
student understands and does not understand.
– The term abduction was coined by Peirce (1839–1914)
to differentiate this type of reasoning from deduction,
which is determining what logically follows from a set of
axioms, and induction, which is inferring general
relationships from examples.
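As a sketch of this idea (the knowledge base below is invented for illustration, not taken from the slides), abduction over a small set of Horn clauses with assumables can be implemented by searching for the minimal sets of assumables that entail the observation:

```python
from itertools import chain, combinations

# Toy Horn knowledge base (hypothetical): each head atom maps to the
# alternative bodies that can derive it; assumables may be hypothesized.
rules = {"light_off": [["power_out"], ["bulb_broken"]]}
assumables = {"power_out", "bulb_broken"}

def entails(assumed, goal):
    """Forward-chain from the assumed atoms; report whether goal is derived."""
    known = set(assumed)
    changed = True
    while changed:
        changed = False
        for head, bodies in rules.items():
            if head not in known and any(all(b in known for b in body)
                                         for body in bodies):
                known.add(head)
                changed = True
    return goal in known

def explanations(goal):
    """Minimal subsets of assumables that entail the goal."""
    subsets = chain.from_iterable(
        combinations(sorted(assumables), r) for r in range(len(assumables) + 1))
    found = [set(s) for s in subsets if entails(s, goal)]
    # Keep only minimal explanations: drop any strict superset of another.
    return [e for e in found if not any(o < e for o in found)]

print(explanations("light_off"))
```

Here observing `light_off` yields two minimal explanations, `{power_out}` and `{bulb_broken}`; the brute-force subset enumeration is only for exposition, not efficiency.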
Abduction with Horn Clauses and Assumables
Abduction Example
Another Abduction Example: a Causal Model
A causal network
Consistency-based vs. Abductive Diagnosis
Determining what is going on inside a system based on observations of its
behavior is the problem of diagnosis or recognition.
• In abductive diagnosis, the agent hypothesizes diseases and malfunctions,
as well as that some parts are working normally, in order to explain the
observed symptoms.
• This differs from consistency-based diagnosis (page 187) in that the
designer models faulty behavior as well as normal behavior, and the
observations are explained rather than added to the knowledge base.
• Abductive diagnosis requires more detailed modeling and gives more
detailed diagnoses, as the knowledge base has to be able to actually
prove the observations.
• It also allows an agent to diagnose systems in which there is no normal
behavior. For example, by observing what a student does, an intelligent
tutoring system can hypothesize what the student understands and does not
understand, which can then guide the actions of the tutoring system.
Example of Abductive Diagnosis
In abductive diagnosis, we need to axiomatize what follows from faults as
well as from normality assumptions. For each atom that could be
observed, we axiomatize how it could be produced.
This could be seen in design terms as a way to make sure
the light is on: put both switches up or both switches down,
and ensure the switches all work. It could also be seen as
a way to determine what is going on if the agent
observes that l1 is lit: one of these two scenarios must hold.
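The two readings above can be sketched in code. The clauses below are illustrative, patterned on the slide's description (both switches up, or both down, with everything working) rather than the textbook's exact axiomatization:

```python
# Hypothetical axiomatization of how l1 could be lit; atom names are
# illustrative, not the textbook's exact clauses.
rules = {
    "lit_l1": [
        ["up_s1", "up_s2", "ok_s1", "ok_s2", "ok_l1"],     # both switches up
        ["down_s1", "down_s2", "ok_s1", "ok_s2", "ok_l1"],  # both switches down
    ],
}

def explains(assumed, goal):
    """Check whether the assumed atoms are sufficient to derive the goal."""
    known = set(assumed)
    changed = True
    while changed:
        changed = False
        for head, bodies in rules.items():
            if head not in known and any(all(b in known for b in body)
                                         for body in bodies):
                known.add(head)
                changed = True
    return goal in known

# Each of the two scenarios from the slide explains the observation lit_l1;
# mixing one switch up and one down does not.
print(explains({"up_s1", "up_s2", "ok_s1", "ok_s2", "ok_l1"}, "lit_l1"))
print(explains({"down_s1", "down_s2", "ok_s1", "ok_s2", "ok_l1"}, "lit_l1"))
print(explains({"up_s1", "down_s2", "ok_s1", "ok_s2", "ok_l1"}, "lit_l1"))
```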
Inference Procedures for Abduction
The bottom-up and top-down implementations for assumption-based
reasoning with Horn clauses (page 190) can both be used for abduction.
• The bottom-up implementation of Figure 5.9 (page 190)
computes, in C, the minimal explanations for each atom.
Instead of returning {A: <false, A> in C}, return the set of
assumptions for each atom. The pruning of supersets of
assumptions discussed in the text can also be used.
• The top-down implementation can be used to find the
explanations of any g by generating the conflicts, and,
using the same code and knowledge base, proving g
instead of false. The minimal explanations of g are the
minimal sets of assumables collected to prove g that are
not subsets of conflicts.
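The bottom-up variant can be sketched as follows (a sketch rather than the book's Figure 5.9; the rules and atom names are hypothetical): for each atom, collect the minimal sets of assumables under which it can be derived, pruning supersets as described above.

```python
from itertools import product

# Hypothetical rules: (head, body) pairs, plus a set of assumables.
rules = [("wet", ["rained"]), ("wet", ["sprinkler_on"]),
         ("slippery", ["wet"])]
assumables = {"rained", "sprinkler_on"}

def minimal_explanations():
    """Map each derivable atom to its minimal assumption sets."""
    # Each assumable is explained by assuming itself.
    expl = {a: [frozenset([a])] for a in assumables}
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if not all(b in expl for b in body):
                continue
            # Combine one explanation per body atom into a candidate set.
            for combo in product(*(expl[b] for b in body)):
                e = frozenset().union(*combo)
                current = expl.setdefault(head, [])
                if any(old <= e for old in current):
                    continue  # a subset already explains head: prune
                expl[head] = [old for old in current if not e <= old] + [e]
                changed = True
    return expl

print(minimal_explanations()["slippery"])
```

With these rules, `slippery` inherits the two minimal explanations of `wet`, namely `{rained}` and `{sprinkler_on}`.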
Inference Procedures for Abduction, ctd.
Bottom up
Top down
Causal Models
There are many decisions the designer of an agent needs to make when designing
a knowledge base for a domain. For example, consider two propositions a and b,
both of which are true. There are many choices of how to write this.
• A designer could have both a and b as atomic clauses, treating both as
primitive.
• A designer could have a as primitive and b as derived, stating a as an atomic
clause and giving the rule b<-a.
• Alternatively, the designer could specify the atomic clause b and the rule a<-b,
treating b as primitive and a as derived.
• These representations are logically equivalent; they cannot be distinguished
logically. However, they have different effects when the knowledge base is
changed. Suppose a was no longer true for some reason. In the first and third
representations, b would still be true, and in the second representation b would
no longer be true.
• A causal model is a representation of a domain that predicts the results of
interventions. An intervention is an action that forces a variable to have a
particular value.
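The three representations can be compared with a small forward chainer (a sketch; the atom names follow the slide). An intervention that makes a false is modeled by removing a from the facts and seeing what still follows:

```python
def consequences(facts, rules):
    """Forward-chain: all atoms derivable from facts via (head, body) rules."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known

rep1 = ({"a", "b"}, [])         # both a and b primitive
rep2 = ({"a"}, [("b", ["a"])])  # a primitive, b derived (b <- a)
rep3 = ({"b"}, [("a", ["b"])])  # b primitive, a derived (a <- b)

# Intervene to make a false: remove a from the facts in each representation.
for name, (facts, rules) in [("rep1", rep1), ("rep2", rep2), ("rep3", rep3)]:
    known = consequences(facts - {"a"}, rules)
    print(name, "b still true?", "b" in known)
```

As the slide states, b survives the intervention in the first and third representations but not in the second, even though all three are logically equivalent before the intervention.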
Causal vs. Evidential Models
In order to predict the effect of interventions, a causal model represents how a cause
implies its effect: when the cause is changed, the effect changes accordingly. An
evidential model represents the domain in the other direction, from effect to cause.
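A minimal sketch of the contrast, using a hypothetical switch-and-light pair: the causal rule runs from cause to effect, the evidential rule from effect to cause, and only the causal model predicts what an intervention on the switch does to the light.

```python
def consequences(facts, rules):
    """Forward-chain: all atoms derivable from facts via (head, body) rules."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known

causal = [("lit", ["up"])]      # the switch being up causes the light to be lit
evidential = [("up", ["lit"])]  # the light being lit is evidence the switch is up

# Intervene to push the switch up: only the causal model predicts the light.
print("lit" in consequences({"up"}, causal))
print("lit" in consequences({"up"}, evidential))
```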
Another Causal Model Example
Parts of a Causal Model
Using a Causal Model