
Semantics of Logic Programs and
Non-monotonic Reasoning
José Júlio Alferes
[email protected]
David Pearce
[email protected]
ESSLLI 2000
Birmingham, August 2000
Motivation
Three “points of view”:
• a language for knowledge representation
• a logic for nonmonotonic reasoning
• a new programming paradigm
Course topics
• Logic programming and the semantics of negation-as-failure
– stable model semantics
– well-founded semantics
• Intermediate logics and nonmonotonic extensions
– equilibrium logic
– strong negation
• Answer set programming
– smodels
– dlv
• Programming with XSB
Part I
Logic Programming Semantics
Nonmonotonic inference
• traditional logic satisfies the Tarski
conditions of consequence:
• 1. A ⊆ C(A) (inclusion)
• 2. C(A) = C(C(A)) (idempotence)
• 3. B ⊆ A => C(B) ⊆ C(A) (monotony)
Nonmonotonic inference
• And as inference conditions:
– A ⊢ α, for α ∈ A (reflexivity)
– if A ⊢ α and A U {α} ⊢ β, then A ⊢ β (cut)
– if A ⊢ α, then A U B ⊢ α (monotony)
Nonmonotonic inference
• Weaker conditions than monotony
• cautious monotony:
– A ⊆ B ⊆ C(A) => C(A) ⊆ C(B)
• rationality:
– A ⊢ α and A U {β} ⊬ α => A ⊢ ¬β
Types of nonmonotonic
reasoning
• Default rules
• Rules with exceptions
• Rules with negation-as-failure
• Example:
birds fly, penguins are birds
penguins don't fly
Systems of nonmonotonic
reasoning
• Logic programming
• default logic (Reiter)
• autoepistemic logic (Moore)
Deductive basis
• Let C be a consequence operation and L a
monotonic logic (Tarski type) with consequence
CL, such that:
• CL(X) ⊆ C(X)
• CL(C(X)) = C(X) (left absorption)
• CL(X) = CL(Y) => C(X) = C(Y) (right absorption)
• Then L is said to be a deductive basis for C.
• (L is well-behaved wrt C)
LP for
Knowledge Representation
• Due to its declarative nature, LP has
become a prime candidate for Knowledge
Representation and Reasoning
• This has been more noticeable since its
relations to other NMR formalisms were
established
• For this usage of LP, a precise declarative
semantics was in order
Stable models and answer sets
• A semantics for logic
programs with
negation developed by
Michael Gelfond and
Vladimir Lifschitz
(1987-91)
Stable models and answer sets
Studied for more than 10 years but
implemented systems only available
recently
Two important implementations:
• smodels (Helsinki)
• dlv (Vienna)
Answer-set programming (ASP)
• Various types of combinatorial problems
have a simple representation in the language
of ASP
• Solutions are produced in the form of
answer sets
Answer-set programming (ASP)
Example: blocks world
[Figure: an initial blocks-world configuration and a goal configuration, with blocks 1-5]
Write a program that describes the blocks world and the
permitted transitions. An answer set corresponds to a
history (set of movements).
Answer-set programming (ASP)
Example: Hamiltonian paths
[Figure: a directed graph with nodes a, b, c, d]
Write a program to compute the Hamiltonian paths in a
directed graph (a path that touches each node exactly once ).
Each answer-set corresponds to a path.
Language of ASP
• Rules in the style of Prolog
A ← Body
"A if Body"
the part A is called the "head", Body the "body"
• disjunction is allowed in the head:
studies_prolog(X) v studies_java(X) ← ...
represents incomplete knowledge
Language of ASP
• There is a concept of negation-as-failure or
negation-by-default
a(X) ← not b(X)
"a, if b is not provable"
this negation is nonmonotonic
Default Rules
• The representation of default rules, such as
“All birds fly”
can be done via the non-monotonic operator not
flies(A) ← bird(A), not abnormal(A).
bird(P) ← penguin(P).
abnormal(P) ← penguin(P).
bird(a).
penguin(p).
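For instance, under the stable model semantics this program has a single model, {bird(a), penguin(p), bird(p), abnormal(p), flies(a)}: the default applies to a, but is blocked for p by abnormal(p).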
Language of ASP
• One is allowed to use "integrity constraints":
← not b(X), c(X)
"the conjunction of c and not b is false"
We will see that this does not require new
kinds of rules, as it is equivalent to
new_pred ← not b(X), c(X), not new_pred
Language of ASP
• There is a concept of explicit or strong
negation:
a(X) ← not b(X)   versus   a(X) ← ~b(X)
cf. "cross, if no train is passing"
Language
• A Normal Logic Program P is a set of rules:
H ← A1, …, An, not B1, …, not Bm (n, m ≥ 0)
where H, the Ai and the Bj are atoms
• Literals not Bj are called default literals
• When no rule in P has default literals, P is called
definite
• The Herbrand base HP is the set of all instantiated
atoms from program P.
• We will consider programs as possibly infinite sets of
instantiated rules.
2-valued Interpretations
• A 2-valued interpretation I of P is a subset
of HP
– A is true in I (i.e. I(A) = 1) iff A ∈ I
– Otherwise, A is false in I (i.e. I(A) = 0)
Interpretations can be viewed as representing possible
states of knowledge.
If knowledge is incomplete, there might be in some states
atoms that are neither true nor false
3-valued Interpretations
• A 3-valued interpretation I of P is a set
I = T U not F
where T and F are disjoint subsets of HP
– A is true in I iff A ∈ T
– A is false in I iff A ∈ F
– Otherwise, A is undefined (I(A) = 1/2)
• 2-valued interpretations are a special case, where:
HP = T U F
Models
• Models can be defined via an evaluation function Î:
– For an atom A, Î(A) = I(A)
– For a formula F, Î(not F) = 1 - Î(F)
– For formulas F and G:
• Î((F,G)) = min(Î(F), Î(G))
• Î(F ← G) = 1 if Î(F) ≥ Î(G), and = 0 otherwise
I is a model of P iff, for every rule H ← B of P:
Î(H ← B) = 1
Minimal Models Semantics
• The idea of this semantics is to minimize positive
information. What is implied as true by the program is
true; everything else is false.
ableMathematician(X) ← physicist(X)
physicist(einstein)
president(sampaio)
{pr(s), pr(e), ph(s), ph(e), aM(s), aM(e)} is a model
The lack of information that sampaio is a physicist
should indicate that he isn't
The minimal model is: {pr(s), ph(e), aM(e)}
Minimal Models Semantics
D [Truth ordering] For interpretations I and J, I ≤ J iff for
all atoms A, I(A) ≤ J(A), i.e.
TI ⊆ TJ and FI ⊇ FJ
T Every definite logic program has a least (truth ordering)
model.
D [minimal models semantics] An atom A is true in
(definite) P iff A belongs to its least model. Otherwise, A is
false in P.
TP operator
• The minimal models of a definite P can be
computed (bottom-up) via operator TP
D[TP] Let I be an interpretation of definite P.
TP(I) = {H : (H ← Body) ∈ P and Body ⊆ I}
T If P is definite, TP is monotone and continuous. Its
least fixpoint can be built by:
I0 = {}
and
In = TP(In-1)
T The least model of definite P is TP↑ω({})
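For example, for the definite program {p ← ;  q ← p ;  r ← q, s}:
I0 = {}, I1 = TP(I0) = {p}, I2 = TP(I1) = {p, q}, I3 = TP(I2) = I2,
so the least model is {p, q} (r stays false, since s has no rule).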
On Minimal Models
SLD can be used as a proof procedure for the
minimal models semantics:
– If there is an SLD-derivation for A, then A is true
– Otherwise, A is false
The semantics does not apply to normal
programs:
– p ← not q has two minimal models:
{p} and {q}
There is no least model !
The idea of completion
• In LP one uses "if" but means "iff" [Clark78]
naturalN (0).
naturalN ( s( N ))  naturalN ( N ).
• This doesn’t imply that -1 is not a natural number!
• With this program we mean:
nN(X) ↔ X = 0 ∨ ∃Y: X = s(Y) ∧ nN(Y)
• This is the idea of Clark’s completion:
Syntactically transform if’s into iff’s
Use classical logic in the transformed theory to provide the
semantics of the program
Program completion
• The completion of P is the theory comp(P) obtained by:
– Replace p(t) ← φ by p(X) ← X = t, φ
– Replace p(X) ← φ by p(X) ← ∃Y φ, where Y are the
original variables of the rule
– Merge all rules with the same head into a single one
p(X) ← φ1 ∨ … ∨ φn
– For every q(X) without rules, add q(X) ← ⊥
– Replace p(X) ← φ by ∀X (p(X) ↔ φ)
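For example, for the propositional program {p ← q, not r ;  q ←} these steps give
comp(P) = { p ↔ q ∧ ¬r,  q ↔ true,  r ↔ false },
so comp(P) |= p, comp(P) |= q, and comp(P) |= not r.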
Completion Semantics
Let comp(P) be the completion of P where not is
interpreted as classical negation:
A is true in P iff comp(P) |= A
A is false in P iff comp(P) |= not A
• Though completion’s definition is not that simple,
the idea behind it is quite simple
• Also, it defines a non-classical semantics by
means of classical inference on a transformed
theory
SLDNF proof procedure
• By adopting completion, procedurally we have:
not is “negation as finite failure”
• In SLDNF proceed as in SLD. To prove not A:
– If there is a finite derivation for A, fail not A
– If, after any finite number of steps, all derivations
for A fail, remove not A from the resolvent (i.e.
succeed not A)
• SLDNF can be efficiently implemented (cf.
Prolog)
SLDNF example
p  p.
q  not p.
a  not b.
b  not c.
q
a
 not p  p
 not b  b
No success nor
finite failure
p
p
According to completion:
comp(P) |= {not a, b, not c}
comp(P) | p, comp(P) | not p
comp(P) | q, comp(P) | not q
X
 not c
c
X
Problems with completion
• Some consistent programs may become
inconsistent: p ← not p becomes p ↔ ¬p
• Does not correctly deal with deductive closures
edge(a,b).
edge(c,d).
edge(d,c).
reachable(a).
reachable(A) ← edge(A,B), reachable(B).
Completion doesn’t conclude not reachable(c), due to the
circularity caused by edge(c,d) and edge(d,c)
Circularity is a procedural concept, not a declarative
one
Completion Problems (cont)
• Difficulty in representing equivalences. Consider:
abnormal(B) ← irregular(B).        bird(tweety).
irregular(B) ← abnormal(B).        fly(B) ← bird(B), not abnormal(B).
• Completion doesn't conclude fly(tweety)!
Without the two equivalence rules, fly(tweety) is true
An explanation for this would be: "the equivalence rules cause
a loop".
Again, looping is a procedural concept, not a
declarative one
When defining declarative semantics, procedural
concepts should be rejected
Program stratification
• Minimal models don’t have “loop” problems
• But are only applicable to definite programs
• Generalize Minimal Models to Normal LPs:
– Divide the program into strata
– The 1st is a definite program. Compute its minimal model
– Eliminate all nots whose truth value was thus obtained
– The 2nd becomes definite. Compute its MM
– …
Stratification example
P1:  p ← p
     a ← b
     b ←
P2:  c ← not p
     d ← c, not a
P3:  e ← a, not d
     f ← not c
• Least(P1) = {a, b, not p}
• Processing this, P2 becomes:
c ← true
d ← c, false
• Its minimal model, together with P1 is:
{a, b, c, not d, not p}
• Processing this, P3 becomes:
e ← a, true
f ← false
The (desired) semantics for P is then:
{a, b ,c, not d, e, not f, not p}
Stratification
D Let S1,…,Sn be such that S1 U … U Sn = HP, all the Si are
disjoint, and for all rules of P:
A ← B1,…,Bm, not C1,…,not Ck
if A ∈ Si then:
{B1,…,Bm} ⊆ S1 U … U Si
{C1,…,Ck} ⊆ S1 U … U Si-1
Let Pi contain all rules of P whose head belongs to
Si. P1,…,Pn is a stratification of P
Stratification (cont)
• A program may have several stratifications:
P1 = {a ←}
P2 = {b ← a}
P3 = {c ← not a}
or
P1 = {a ←,  b ← a}
P2 = {c ← not a}
• Or may have no stratification:
b ← not a
a ← not b
D A Normal Logic Program is stratified iff it admits (at
least) one stratification.
Semantics of stratified LPs
D Let I|R be the restriction of interpretation I to the atoms
in R, and P1;…;Pn be a stratification of P.
Define the sequence:
• M1 = least(P1)
• Mi+1 is the minimal model of Pi+1 such that:
Mi+1|(S1 U … U Si) = Mi
Mn is the standard model of P
• A is true in P iff A  Mn
• Otherwise, A is false
Properties of Standard Model
Let MP be the standard model of stratified P
MP is unique (does not depend on the
stratification)
MP is a minimal model of P
MP is supported
D A model M of program P is supported iff:
A  M  (A  Body)  P : Body  M
(true atoms must have a rule in P with true body)
Perfect models
• The original definition of stratification (Apt et al.) was
made on predicate names rather than atoms.
• By abandoning the restriction of a finite number of strata,
the definitions of Local Stratification and Perfect Models
(Przymusinski) are obtained. This enlarges the scope of
application:
even(0)
even(s(X)) ← not even(X)
Local strata: P1 = {even(0)}, P2 = {even(1) ← not even(0)}, ...
The program isn’t stratified (even/1 depends negatively on itself)
but is locally stratified.
Its perfect model is: {even(0),not even(1),even(2),…}
Problems with stratification
• Perfect models are adequate for stratified LPs
– Newer semantics are generalization of it
• But there are (useful) non-stratified LPs
even(X) ← zero(X)
zero(0)
even(Y) ← suc(X,Y), not even(X)
suc(X,s(X))
Is not stratified because (even(0) ← suc(0,0), not even(0)) ∈ P
No stratification is possible if P has:
pacifist(X) ← not hawk(X)
hawk(X) ← not pacifist(X)
This is useful in KR: "X is a pacifist if it cannot be assumed that X is a
hawk, and vice-versa. If nothing else is said, it is undefined whether X is
a pacifist or a hawk"
SLS procedure
• In perfect models not includes infinite failure
• SLS is a (theoretical) procedure for perfect models,
based on possibly infinite failure
• No complete implementation is possible (how to
detect infinite failure?)
• Sound approximations exist:
– based on loop checking (with ancestors)
– based on tabulation techniques
(cf. XSB-Prolog implementation)
Stable Models Idea
• The construction of perfect models can be done
without stratifying the program. Simply guess the
model, process it into P and see if its least model
coincides with the guess.
• If the program is stratified, the results coincide:
– A correct guess must coincide on the 1st strata;
– and on the 2nd (given the 1st), and on the 3rd …
• But this can be applied to non-stratified programs…
Stable Models Idea (cont)
• “Guessing a model” corresponds to “assuming
default negations not”. This type of reasoning is
usual in NMR
– Assume some default literals
– Check in P the consequences of such assumptions
– If the consequences completely corroborate the
assumptions, they form a stable model
• The stable models semantics is defined as the
intersection of all the stable models (i.e. what
follows, no matter what stable assumptions)
SMs: preliminary example
a  not b
b  not a
ca
cb
p  not q
q  not r
r
Assume, e.g., not r and not p as true, and all other nots as false.
By processing this into P:
a  false
b  false
ca
cb
p  false
q  true
Its least model is {not a, not b, not c, not p, q, r}
So, it isn’t a stable model:
By assuming not r, r becomes true
not a is not assumed and a becomes false
r
SMs example (cont)
a  not b
ca
p  not q
b  not a
cb
q  not r
r
Now assume not b and not q as true, and all other nots as false.
By processing this into P:
a  true
ca
p  true
b  false
cb
q  false
r
Its least model is {a, not b, c, p, not q, r}
I is a stable model
The other one is {not a, b, c, p, not q, r}
According to Stable Model Semantics:
c, r and p are true and q is false.
a and b are undefined
Stable Models definition
Let I be a (2-valued) interpretation of P. The definite
program P/I is obtained from P by:
• deleting all rules whose body has a literal not A with A ∈ I
• deleting from the bodies all the remaining default literals
ΓP(I) = least(P/I)
M is a stable model of P iff M = ΓP(M).
• A is true in P iff A belongs to all SMs of P
• A is false in P iff A doesn't belong to any SM of P
(i.e. not A "belongs" to all SMs of P).
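For example, for P = {p ← not q ;  q ← not p} and M = {p}: P/M = {p ←} (the rule for q is deleted because p ∈ M, and not q is removed), so ΓP(M) = {p} = M and {p} is a stable model; by symmetry {q} is the other one. For P = {a ← not a} no interpretation is stable: ΓP({}) = {a} ≠ {} and ΓP({a}) = {} ≠ {a}.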
Properties of SMs
 Stable models are minimal models
 Stable models are supported
 If P is locally stratified then its single stable
model is the perfect model
 Stable models semantics assigns meaning to
(some) non-stratified programs
– E.g. the one in the example before
Importance of Stable Models
Stable Models were an important contribution:
– Introduced the notion of default negation (versus
negation as failure)
– Allowed important connections to NMR. Started the
area of LP&NMR
– Allowed for a better understanding of the use of LPs
in Knowledge Representation
It is considered as THE semantics of LPs by a
significant part of the community.
Default Logic
• To deal with incomplete information and
default rules, Reiter introduced Default Logic
• A default theory Δ is a pair <T,D> where:
– T is a set of 1st. order formulae (certain
knowledge)
– D is a set of default rules
Default Logic (Syntax)
• Default rules are of the form:
φ : ψ
γ
where φ, ψ and γ are formulae
– φ are the pre-requisites
– ψ are the justifications
– γ are the conclusions
• Default rules with free variables are viewed as
macros standing for their ground instantiations
Meaning of defaults
f:y
g
• if f is true, and it is consistent to assume y,
then conclude g
• Rules are to be maximally applied
• The semantics has also to make clear:
– “true” where?
– “consistent” with what?
– how to add the conclusion?
Examples of default rules
bird(X) : flies(X)
flies(X)
true : ¬connection(X,Y)
¬connection(X,Y)
friend(X,Y), friend(Y,Z) : friend(X,Z)
friend(X,Z)
holds(F,S) : holds(F,res(A,S))
holds(F,res(A,S))
adult(X) : employed(X), ¬dropout(X)
employed(X)
Default Logics (Semantics)
Let Δ = <T,D> be a default theory, and E a set of literals. GΔ(E) is
the smallest set such that:
• it contains all logical consequences of T
• it is closed under rules φ/γ where φ:ψ/γ ∈ D and
T U E ⊬ ¬ψ
E is a default extension of Δ iff
E = GΔ(E).
LP and Default Theories
Let DP be the default theory obtained by transforming:
H ← B1,…,Bn, not C1,…, not Cm
into:
B1,…,Bn : ¬C1,…, ¬Cm
H
There is a one-to-one correspondence between the SMs of
P and the default extensions of DP
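For instance, the rule flies(a) ← bird(a), not abnormal(a) is transformed into the default
bird(a) : ¬abnormal(a)
flies(a)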
LPs as defaults
• LPs can be viewed as sets of default rules
• Default literals are the justification:
– can be assumed if it is consistent to do so
– are withdrawn if inconsistent
• In this reading of LPs, ← is not viewed as
implication. Instead, LP rules are viewed as
inference rules.
Auto Epistemic Logic
• Adds a modal operator • to denote knowledge
– Allows expressing knowledge about the agent's
own knowledge
• E.g.
– •A ∧ B ∧ ¬•C → D means "if I know A is true, B is true,
and I don't know whether C is true, then D is true".
AEL and incomplete knowledge
• Allows (non-monotonic) completion of the
knowledge:
– concert → •concert
"if there was a concert, I would know". If I don't have
evidence for concert (i.e. I don't know concert), then
concert is false.
– bird(X) ∧ ¬flies(X) → •¬flies(X)
"I know all birds that do not fly". Thus, if there is some
bird for which I have no evidence of non-flying, I
conclude it doesn't fly.
AEL (Moore’s Semantics)
A consistent theory T* is an expansion of the AEL theory
T iff:
T* = T U {•F : T* |= F} U {¬•F : T* ⊭ F}
• I know everything I can conclude from my
theory.
• I don’t know everything that I cannot
conclude from my theory (i.e. that doesn’t
belong to all models of the theory)
LP and Auto-Epistemic Logic
Let TP be the AEL theory obtained by transforming:
H ← B1,…,Bn, not C1,…, not Cm
into:
B1 ∧ … ∧ Bn ∧ ¬•C1 ∧ … ∧ ¬•Cm → H
There is a one-to-one correspondence between the SMs of
P and the (Moore) expansions of TP
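For instance, flies(a) ← bird(a), not abnormal(a) is transformed into bird(a) ∧ ¬•abnormal(a) → flies(a).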
LPs as AEL theories
• LPs can be viewed as theories that refer to
their own knowledge
• Default negation not A is interpreted as “A
is not known”
• The LP rule symbol is here viewed as
material implication
Extended LPs
• In Normal LPs all the negative information is implicit. Though
that’s desired in some cases (e.g. the database with flight
connections), sometimes an explicit form of negation is needed
for Knowledge Representation
• "Penguins don't fly" could be: noFly(X) ← penguin(X)
• This does not relate fly(X) and noFly(X) in:
fly(X) ← bird(X)
noFly(X) ← penguin(X)
For establishing such relations, and representing negative
information a new form of negation is needed in LP:
Explicit negation - ~
Extended LP: motivation
• ~ is also needed in bodies:
"Someone is guilty if he is not innocent"
– cannot be represented by: guilty(X) ← not innocent(X)
– This would imply guilty in the absence of information
about innocent
– Instead, guilty(X) ← ~innocent(X) only implies guilty(X) if
X is proven not to be innocent
• The difference between not p and ~p is essential
whenever the information about p cannot be assumed
to be complete
ELP motivation (cont)
• ~ allows for greater expressivity:
“If you’re not sure that someone is not innocent, then further
investigation is needed”
– Can be represented by:
investigate(X) ← not ~innocent(X)
• ~ extends the relation of LP to other NMR formalisms.
E.g
– it can represent default rules with negative conclusions and
pre-requisites, and positive justifications
– it can represent normal default rules
ELP Language
• An Extended Logic Program P is a set of rules:
L0 ← L1, …, Lm, not Lm+1, …, not Ln (n ≥ m ≥ 0)
where the Li are objective literals
• An objective literal is an atom A or its explicit
negation ~A
• Literals not Lj are called default literals
• The Extended Herbrand base HP is the set of all
instantiated objective literals from program P
• We will consider programs as possibly infinite sets of
instantiated rules.
ELP Interpretations
• An interpretation I of P is a set
I = T U not F
where T and F are disjoint subsets of HP and
~L  T  L  F (Coherence Principle)
i.e. if L is explicitly false, it must be assumed false by
default
• I is total iff HP = T U F
• I is consistent iff ¬ L: {L, ~L}  T
– In total consistent interpretations the Coherence Principle is
trivially satisfied
Answer sets
• It was the 1st semantics for ELPs [Gelfond&Lifschitz90]
• Generalizes stable models to ELPs
Let M- be a stable model of the normal program P- obtained by replacing in
the ELP P every ~A by a new atom A-. An answer-set M of P is
obtained by replacing A- by ~A in M-.
A is true in an answer set M iff A ∈ M
A is false iff ~A ∈ M
Otherwise, A is unknown
Some programs have no consistent answer sets:
e.g. P = {a ←,  ~a ←}
Answer sets and Defaults
Let DP be the default theory obtained by transforming:
L0 ← L1,…,Lm, not Lm+1,…, not Ln
into:
L1,…,Lm : ¬Lm+1,…, ¬Ln
L0
where ¬ ~A is (always) replaced by A
There is a one-to-one correspondence between the
answer-sets of P and the default extensions of DP
Answer-sets and AEL
Let TP be the AEL theory obtained by transforming:
L0 ← L1,…,Lm, not Lm+1,…, not Ln
into:
L1 ∧ •L1 ∧ … ∧ Lm ∧ •Lm ∧ ¬•Lm+1 ∧ … ∧ ¬•Ln → L0 ∧ •L0
There is a one-to-one correspondence between the
answer-sets of P and the expansions of TP
WFS motivation
• Answer-sets (and stable models) are a good tool for
representing knowledge. However:
– its computation is NP-complete
– it doesn’t comply with various structural properties, desirable
for goal-driven implementations
– in various application domains, it is important to have efficient
implementations for answering queries (that need not compute
the whole model)
• The Well Founded Semantics is a weaker semantics
– sound wrt stable models
– with polynomial time complexity
– amenable to goal-driven implementations
Cumulativity
A semantics Sem is cumulative iff for every P:
if A ∈ Sem(P) and B ∈ Sem(P) then B ∈ Sem(P U {A})
(i.e. all derived atoms can be added as facts, without changing the
program’s meaning)
This property is important for implementation:
without cumulativity, tabling methods cannot be used
Relevance
A directly depends on B if B occurs in the body of some rule with head A. A
depends on B if A directly depends on B or there is a C such that A directly
depends on C and C depends on B.
A semantics Sem is relevant iff for every P:
A ∈ Sem(P) iff A ∈ Sem(RelA(P))
where RelA(P) contains all rules of P whose head is A or
some B on which A depends.
Only this property allows for the usual top-down execution
of logic programs.
Problems with SMs
• Don’t provide a meaning to every program:
P = {a ← not a} has no stable models
• It's non-cumulative and non-relevant:
a ← not b
b ← not a
c ← not a
c ← not c
The only SM is {b, c, not a}
However b is not true in P U {c} (non-cumulativity)
P U {c} has 2 SMs: {not a, b, c} and {a, not b, c}
b is not true in Relb(P) (non-relevance)
The rules in Relb(P) are the first two above
Relb(P) has 2 SMs: {not a, b} and {a, not b}
Problems with SMs (cont)
• Its computation is NP-Complete
• The intersection of SMs is non-supported:
a  not b
b  not a
ca
cb
c is true but neither a nor b are true.
• Note that the perfect model semantics:
• is cumulative
• is relevant
• is supported
• its computation is polynomial
Well Founded Semantics
• Defined in [GRS90], generalizes SMs to 3-valued models.
• Note that:
– there are programs with no fixpoints of Γ
– but all have fixpoints of Γ²
P = {a ← not a}
Γ({a}) = {} and Γ({}) = {a}
• There are no stable models
But: Γ²({}) = {} and Γ²({a}) = {a}
Partial Stable Models
A 3-valued interpretation (T U not F) is a PSM of P iff:
T = Γ²P(T)
T ⊆ ΓP(T)
F = HP − ΓP(T)
The 2nd condition guarantees that no atom is both true and false: T ∩ F = {}
P = {a ← not a} has a single PSM: {}
a ← not b
b ← not a
c ← not a
c ← not c
This program has 3 PSMs:
{}, {a, not b} and {c, b, not a}
The 3rd corresponds to the single SM
WFS definition
[WF Model] Every P has a knowledge ordering (i.e. wrt
⊆) least PSM, obtainable by the transfinite sequence:
T0 = {}
Ti+1 = Γ²P(Ti)
Tδ = U α<δ Tα, for limit ordinals δ
Let T be the least fixpoint obtained.
MP = T U not (HP − ΓP(T))
is the well founded model of P.
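For example, for P = {a ← not b ;  b ← not a ;  c ← not a ;  c ← not c}:
T0 = {}, ΓP(T0) = {a, b, c}, T1 = Γ²P(T0) = ΓP({a, b, c}) = {} = T0,
so T = {} and MP = {} U not (HP − {a, b, c}) = {}: a, b and c are all undefined in the well founded model.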
Well Founded Semantics
• Let M be the well founded model of P:
– A is true in P iff A ∈ M
– A is false in P iff not A ∈ M
– Otherwise (i.e. A ∉ M and not A ∉ M) A is
undefined in P
WFS Properties
• Every program is assigned a meaning
• Every PSM extends one SM
– If WFM is total it coincides with the single SM
• It is sound wrt to the SMs semantics
– If P has stable models and A is true (resp. false) in the
WFM, it is also true (resp. false) in the intersection of
SMs
• WFM coincides with the perfect model in locally
stratified programs (and with the least model in definite programs)
More WFS Properties
• The WFM is supported
• WFS is cumulative and relevant
• Its computation is polynomial (in the
number of instantiated rules of P)
• There are top-down proof-procedures, and
sound implementations
– these are mentioned in the sequel
WFS for extended programs
• Generalizing WFS for ELPs in the same way as
answer-sets yields unintuitive results:
pacifist(X) ← not hawk(X)
hawk(X) ← not pacifist(X)
~pacifist(a)
Using the same method the WFM is: {~pacifist(a)}
Though it is explicitly stated that a is non-pacifist, not
pacifist(a) is not assumed, and so hawk(a) cannot be
concluded.
• Coherence is not satisfied...
=> Coherence must be imposed
Imposing Coherence
• Coherence is: ~L ∈ T => L ∈ F, for objective literals L
• According to the WFS definition, everything that
doesn't belong to Γ(T) is false
• To impose coherence, when applying Γ(T) simply
delete all rules for the objective complement of literals
in T
“If L is explicitly true then when computing
undefined literals forget all rules with head ~L”
WFSX definition
The semi-normal version of P, Ps, is obtained by adding not ~L to
the body of every rule of P with head L
An interpretation (T U not F) is a PSM of ELP P iff:
T = ΓP(ΓPs(T))
T ⊆ ΓPs(T)
F = HP − ΓPs(T)
The WFSX semantics is determined by the knowledge
ordering least PSM (wrt ⊆)
WFSX example
Ps: pacifist(X) ← not hawk(X), not ~pacifist(X)
    hawk(X) ← not pacifist(X), not ~hawk(X)
    ~pacifist(a) ← not pacifist(a)
P:  pacifist(X) ← not hawk(X)
    hawk(X) ← not pacifist(X)
    ~pacifist(a)
T0 = {}
ΓPs(T0) = {~p(a), p(a), h(a), p(b), h(b)}
T1 = {~p(a)}
ΓPs(T1) = {~p(a), h(a), p(b), h(b)}
T2 = {~p(a), h(a)}
T3 = T2
The WFM is:
{~p(a), h(a), not p(a), not ~h(a), not ~p(b), not ~h(b)}
Properties of WFSX
• Complies with the coherence principle
• Coincides with WFS in normal programs
• If WFSX is total it coincides with the only
answer-set
• It is sound wrt answer-sets
• It is supported, cumulative, and relevant
• Its computation is polynomial
• It has sound implementations (cf. below)
Part II
Nonclassical Logics and
nonmonotonic extensions
Intuitionistic logic H
• H introduced by Arend Heyting in 1930
• a formalisation of the constructive reasoning of
Brouwer
• one explains the meaning of the logical connectives
using a notion of proof (in place of truth)
Intuitionistic logic (cont)
• A proof of α ∧ β consists in giving a proof of α
and a proof of β
• A proof of α ∨ β consists in giving a proof of α or
a proof of β
• A proof of α → β is a construction that converts a
proof of α into a proof of β
• A proof of ¬α is a construction that converts a
hypothetical proof of α into a proof of a
contradiction ⊥
Intuitionistic logic (Kripke semantics)
• A Kripke frame is a pair
<W, ≤>, where W is a set
of worlds (states, points...)
and ≤ is a partial order on
W.
• In each world w one
verifies a set of atoms
i(w), such that if w ≤ w'
then i(w) ⊆ i(w')
• i is an assignment that is
extended to all formulas
[Figure: a frame with worlds u ≤ x ≤ w and u ≤ v, and the
corresponding sets i(u) ⊆ i(x) ⊆ i(w) and i(u) ⊆ i(v)]
Intuitionistic logic (semantics)
• α ∧ β ∈ i(w) iff α ∈ i(w) and β ∈ i(w)
• α ∨ β ∈ i(w) iff α ∈ i(w) or β ∈ i(w)
• α → β ∈ i(w) iff ∀w' such that w ≤ w', if α ∈
i(w') then β ∈ i(w')
• ¬α ∈ i(w) iff ∀w' such that w ≤ w', α ∉ i(w')
Intuitionistic logic (semantics)
• A frame <W, ≤> extended with an assignment i
determines a Kripke model M = <W, ≤, i>.
• A formula φ is true in M if it is true in each world w in
W, i.e.
M |= φ iff φ ∈ i(w), ∀w ∈ W
• Completeness: φ is a theorem of H iff M |= φ, for
each model M
Classical logic contains intuitionistic
logic
• Theorems of H form a sub-collection of classical
theorems. Not valid in H are, for example:
• α ∨ ¬α
• ¬¬α → α
• ((α → β) → α) → α
Intermediate Logics
• Between H and classical logics there are infinitely
many logics
• Adding axioms to H one forms intermediate
logics
• Intermediate logics are complete wrt a generalised
concept of Kripke frame/model
The lattice of intermediate logics
[Figure: the lattice of intermediate logics, with classical logic at the top and here-and-there immediately below]
Correspondences
• Each intermediate logic I has modal companions S
under the Gödel (1933) translation t.
• In t(p) each subformula a of p is replaced by La.
Correspondences
⊢I p  ⇔  ⊢S t(p)
[Diagram: intermediate logics I are mapped to their modal companions S by the Gödel translation t; in particular H corresponds to S4]
The logic of here-and-there
• As a basis for studying the language of
answer sets and its extensions, we look at
the logic of two worlds, here and there:
a 2-dimensional logic.
There are two worlds....
Heaven and Earth....
Travel is in one direction only....
or the worlds are called here and there, and you can see
from here to there
At each world a set of atoms is verified:
H at the world h ("here") and T at the world t ("there"),
with H ⊆ T
At each world w a set of atoms i(w) is verified
i(h) = H
i(t) = T
The assignment i is extended to formulas
a ∧ b ∈ i(w)  ⇔  a ∈ i(w) & b ∈ i(w)
a ∨ b ∈ i(w)  ⇔  a ∈ i(w) or b ∈ i(w)
The there world looks classical in negation and implication, too...
¬a ∈ i(t)  ⇔  a ∉ i(t)
a → b ∈ i(t)  ⇔  a ∉ i(t) or b ∈ i(t)
But to know what is true here you have to look there...
¬a ∈ i(h)  ⇔  a ∉ i(t)
a → b ∈ i(h)  ⇔  a → b ∈ i(t) & (a ∉ i(h) or b ∈ i(h))
The two worlds together with i form a model <H,T>. A formula
is true in the model if it is true at each world (iff it is true at h).
<H,T> |= p  ⇔  p ∈ i(h)
For instance in the language with atoms a, b, c, in
this model M = <{a},{a,b}>:
M |= a
M |= ¬c
M |= c → a
M |= ¬¬b
M ⊭ a → b
M |= c → b
Or, given a theory T, in a language with atoms a,
b, c, we can look for (small) models of T.
[Slide example: a small theory and some of its here-and-there models <H,T>]
The logic of here-and-there
• The logic of here-and-there will be denoted
by J
• The theorems of J are the formulas true in
all such here-and-there models
• They are a subset of the classical theorems
The logic of here-and-there
• Some theorems of J:
¬a ∨ ¬¬a
¬(a ∧ b) → (¬a ∨ ¬b)
• Some non-theorems:
a ∨ ¬a
¬¬a → a
((a → b) → a) → a
Minimal models
• We define a partial
ordering on models by
<H,T> ⊴ <H',T'>  ⇔  T = T' & H ⊆ H'
• Given a theory T, a
model <H,T> of T is
said to be minimal if
it is minimal among
models of T under the
ordering ⊴
Equilibrium logic
• A model <H,T> of a theory T is said to be
an equilibrium model of T if it is a minimal
model of T and H=T.
• Equilibrium logic is determined by the class
of all equilibrium models of a theory, ie. a
formula a is an equil. consequence of T iff a
is true in all equil. models of T.
Some examples
• ¬¬a has no e. model: consider <{},{a}>
• ¬a → b has a single e. model, <{b},{b}>;
for <{a},{a}>, consider <{},{a}>
• ¬a → b, ¬b → a has 2 e. models:
<{a},{a}> and <{b},{b}>
Why negation-by-default?
• J-consequence: p is in CnJ(T) iff for all models M
and worlds w, if M,w |= T, then M,w |= p.
• J-completions: E is a J-completion of T iff
E = CnJ(T U {¬a : a ∉ E})
• Observation: equilibrium models correspond to J-completions
• Corollary: to reach equilibrium consistently add
negated sentences until complete
Examples
• ¬¬p cannot be consistently completed by
negation
• ¬¬p –> p can be completed by adding
either ¬p or ¬¬p. In this case one of the
corresponding e. models is “bigger” than
the other.
“Minimal” equilibrium logic
• taking J-completions by negated atoms
corresponds to selecting minimal
equilibrium models (ie minimal in their
verified atoms)
Intuitionistic and intermediate
logics
• Arend Heyting gave the
first formalisation H of
intuitionistic logic in
1930.
• Here the truth-tables for
“here-and-there” were also
first given.
• third value described as
“cannot be false, but not
provably true”.
• Kurt Gödel (1906-78)
used the logic J of
here-and-there in a
famous paper of 1932.
• He was Privatdozent at
the University of
Vienna.
Gödel was typically brief
• He showed that H
cannot be viewed as a
many-valued logic
• And there is an infinite
descending chain of
logics intermediate
between classical logic
and H
• J lies at the top
CCCNpqCCCqpqq
(¬p → q) → (((q → p) → q) → q)
• Jan Lukasiewicz
(1878-1956) first
axiomatised the logic
of here-and-there in
1938.
• He showed that
disjunction is a
definable connective.
The lattice of intermediate logics
[Figure: the lattice of intermediate logics, with classical logic at the top and here-and-there immediately below]
J as a 3-valued logic vs L
• Implication in J (rows: value of the antecedent; columns: value of the
consequent, over -1, 0, 1):
-1:  1  1  1
 0: -1  1  1
 1: -1  0  1
• Implication in Lukasiewicz logic L:
-1:  1  1  1
 0:  0  1  1
 1: -1  0  1
J as a 3-valued logic vs L
• Negation ¬ in J: ¬(-1) = 1, ¬(0) = -1, ¬(1) = -1
• Negation ¬ in Lukasiewicz logic L: ¬(-1) = 1, ¬(0) = 0, ¬(1) = -1
Here-and-there in Russia
• the logic of here-and-there
was later studied in Russia
• by Smetanich (1960)
• by Maksimova, 1970s
• shown to have several
“good” properties, eg
interpolation
Sobolev Math Inst, Siberian Acad Sci
“Minimal” equilibrium logic
• defining J-completions using only the
negation of atoms corresponds to choosing
equilibrium models that are minimal (ie
minimal in the set of verified atoms)
Some generalisations
adding strong negation
• David Nelson (1949) introduced strong negation
in a constructive setting. Main idea: atoms can
also be constructively falsified.
• Strong negation ~ can be used to replace
intuitionistic negation, or added as a new
connective.
• In either case the resulting logic N is a
conservative extension of H
On strong negation
• ~p –> ¬ p is a theorem, and intuitionistic (weak) negation is
definable, eg by ¬ p := p -> ~p
• system axiomatised by Vorob’ev (1952)
• ~~p <–> p is a theorem, but not (p v ~p).
• Semantics of N studied algebraically by Rasiowa 1950s, and
by Kripke models in 60s-70s, by Thomason and Gurevich
(standard completeness proof), and (algebraically again) by
Vakarelov.
• Kripke models now consist of verified and falsified atoms, or
equivalently sets of literals.
Intermediate logics with strong negation
• Vakarelov-style algebraic studies of N continued in 80s
and 90s, by Goranko, Sendlewski, Kracht.
• Strong negation with Vorob’ev axioms can be added to any
intermediate logic. The resulting logic is always a
conservative extension and complete for the same class of
frames (under the generalised valuations).
• Many properties carry over from an intermediate logic to
its ~-extension, eg fmp, tabularity, decidability,
interpolation.
Equilibrium logic with strong negation
• based on here-and-there with strong negation(studied by
Kracht, 1998). A 5-valued logic.
• equilibrium construction is the same, but at each world sets
of literals(atoms and strongly negated atoms) are verified.
• Again, minimise true literals wrt (weakly) false literals. e.
models are those whose literals are either true or weakly
false.
Examples
• Let T = {~b; c → b; ¬c → d; ¬d → c}.
The equilibrium model of T is <{~b,d},{~b,d}>
• Let T = {~a; a → c; b → a; ¬b → b}.
T is inconsistent (no model)
• Let T = {~b; ¬a → b}.
T has no e-model, by <{~b},{~b,a}>
Axioms for strong negation
• N1  ~(α → β) ↔ α ∧ ~β
• N2  ~(α ∧ β) ↔ ~α ∨ ~β
• N3  ~(α ∨ β) ↔ ~α ∧ ~β
• N4  ~~α ↔ α
• N5  ~¬α ↔ α
• N6  ~α → ¬α (for atoms α)
The logic N of Nelson
• intuitionistic logic H extended with strong
negation (N1–N6) is called constructive logic
(with strong negation): N.
• N is a conservative extension of H, i.e.
for a formula φ in the language of H,
|=H φ iff |=N φ
Semantics of N
• Kripke models <W, ≤, i> where now i assigns to
each world a set of literals, i.e. atoms a or atoms
prefixed by strong negation ~a. Rules for ~:
• ~(α ∧ β) ∈ i(w) iff ~α ∈ i(w) or ~β ∈ i(w)
• ~(α ∨ β) ∈ i(w) iff ~α ∈ i(w) & ~β ∈ i(w)
• ~(α → β) ∈ i(w) iff α ∈ i(w) & ~β ∈ i(w)
• ~¬α ∈ i(w) iff α ∈ i(w);  ~~α ∈ i(w) iff α ∈ i(w)
Principles that fail in N
• Some principles valid for intuitionistic
negation ¬ are invalid for ~, e.g.:
• modus tollens: from α → β and ~β infer ~α
• contraposition: from α → β infer ~β → ~α
Here-and-there with strong
negation
• J extended with ~ is denoted by N5
• linear models with two worlds
('here' and 'there'), h and t
• a model can be represented by a pair
<H,T>, where H, T are sets of literals, and H ⊆ T.
N5 as a 5 valued logic
• Implication in N5 (rows: value of the antecedent; columns: value of the
consequent, over -2, -1, 0, 1, 2):
-2:  2  2  2  2  2
-1:  2  2  2  2  2
 0:  2  2  2  2  2
 1: -1 -1  0  2  2
 2: -2 -1  0  1  2
• Negations in N5:
¬: ¬(-2) = 2, ¬(-1) = 2, ¬(0) = 2, ¬(1) = -1, ¬(2) = -2
~: ~(-2) = 2, ~(-1) = 1, ~(0) = 0, ~(1) = -1, ~(2) = -2
Equilibrium logic with strong negation
• Based on here-and-there with strong negation(studied by
Kracht, 1998). A 5-valued logic.
• Equilibrium construction is the same, but at each world
sets of literals(atoms and strongly negated atoms) are
verified.
• Again, minimise true literals wrt (weakly) false literals. e.
models are those whose literals are either true or weakly
false.
Syntactic Method : completion
logics
• The idea: consider an intermediate logic L and a
theory T in L. In place of the L-consequences of T,
form extensions E of T (called completions) that
are complete in the sense that for each formula α,
α ∈ E or ¬α ∈ E. The logic L* is determined by the
formulas true in all completions of T.
Syntactic Method : completion
logics
• If L is an intermediate logic and T a theory in L, E is
an L-completion of T iff
• E = CnL(T U {¬a : a ∉ E}).
• Define a logic L* (in general nonmonotonic) such
that CL*(T) is the set of formulas belonging to each
L-completion E of T.
Examples in J*
• ¬¬p cannot be completed consistently
adding negated sentences
• ¬¬p –> p can be completed adding either
¬p or ¬¬p. Each determines a completion.
Observations
• The logic J* coincides with equilibrium logic
• N5* coincides with N5-equilibrium
• Define J*min as J* with completions E such that
E = CnL(T U {¬a : a ∉ E}) for atoms a only.
• Then J*min is the logic of equilibrium models that
are minimal in the world 'there'.
Stable model and answer set semantics
• Stable models were first defined for normal logic
programs (1988), i.e. for sets of formulas
a1 & .. & an & ¬b1 & ... & ¬bm → c, where the a's, b's
and c are atoms.
• Answer sets generalise them (1990) to disjunctive and
extended programs, i.e. formulas
a1 & .. & an & ¬b1 & ... & ¬bm → c1 v .. v ck,
where the a's, b's and c's are literals.
Some observations
• On normal, disjunctive and extended logic
programs, stable models (resp. answer sets)
correspond (exactly) to equilibrium models.
• The logic J of here-and-there is a maximal
deductive basis for answer set inference, ie max
monotonic sublogic in which equivalent theories
have the same answer sets.
Further observations
• Stable models of a program P correspond to the J-completions of P. But this can be extended:
• (1) it suffices to complete by negated atoms;
• (2) J can be replaced by intuitionistic logic H.
• (3) for extended programs replace H by N
Correspondences
• Each intermediate logic I has modal companions S
under the Gödel (1933) translation t.
• In t(p) each subformula a of p is replaced by La.
Correspondences
⊢I p  ⇔  ⊢S t(p)
[Diagram: intermediate logics I are mapped to their modal companions S by the Gödel translation t; in particular H corresponds to S4]
Correspondences
• Do these embeddings lift to the nonmonotonic
case? Consider
E = CnJ(P U {¬a : a ∉ E})
vs
E = CnS(P U {¬La : a ∉ E})
Correspondences
• If P is a logic program, then Gödel embedding
extends, since completion is by negated atoms. So
eg. stable models correspond to S4 expansions.
Programs with nested expressions (Lifschitz, Tang, Turner, 1998)
• Motivation: in addition to usual program rules,
allow rules such as
p <– ¬ (q, ¬r) or p <– (q –> r; s) or even
p; ¬q <– ¬~r
• Provide an adequate semantics that extends
answer sets to such cases
Nested expressions
• Step 1: consider formulas of form F –> G, where
F,G are any boolean combinations of atoms and
strongly negated atoms (literals).
• Step 2: extend the concept of program reduct
inductively to all boolean combinations of literals,
and then to implications (F –> G).
• Step 3: define answer set S in usual way, ie as
minimal model of P reduced by S.
Some facts
• Any program is (answer set) equivalent to a
program with formulas of form
a1 & .. & an & ¬b1 & ... & ¬bm → c1 v .. v ck v ¬d1 v .. v ¬dl
The generalised answer sets correspond to
equilibrium models
Some (LP-relevant) behaviour
• Representing conditional antecedents (bodies): can one re-write
(F → G) → H by ¬(F ∧ ¬G) → H?
• Yes. If P is any program, then P U {(a → b) → c}
and P U {¬(a ∧ ¬b) → c} have the same
equilibrium models.
• But note that replacing (F –> G) by (¬F v G)
produces a different result.
Conservative extensions
• Introduce into a program P a new predicate by
definition. Eg consider the program P’ =
P U {F –> r} where F is a formula in language of
P, and r is a new predicate (atom).
• The extended program P’ is a conservative
extension of P. For any formula G in the language of
P, P |~ G iff P' |~ G (where |~ is equilibrium
consequence).
Stable vs partial stable inference
• Note that stable
inference is supraclassical,
• but classical logic is
not a basis for stable
inference...
• ¬ a –> b and ¬b –> a
are class. equivalent
• Likewise WF-inference (or inference
on P-stable models) is
supra-intuitionistic
• but H is not a basis for
WF-inference...
• in H, ¬a → a derives
¬a → b.
P- stable models
• P-stable models can be represented as J-models,
• but do not appear to be characterisable (as
minimal models) in the logic of here-and-there.
• Suggestion: change to (extensions of) minimal
logic.
further observations
• Stable models of a program P correspond to the J-completions of P. But this can be extended:
• (1) it suffices to complete by negated atoms;
• (2) J can be replaced by intuitionistic logic H.
• (3) for extended programs replace H by N
Stable Models (corollary 1)
• Let P be a logic program (normal or
disjunctive)
• M is a stable model of P iff
• Th(M) = CnH(P U {¬a : a ∉ Th(M)})
(for atoms a), or
• Th(M) = CnJ(P U {¬a : a ∉ Th(M)})
Stable Models (corollary 2)
|=I p  ⇔  |=S t(p)
where S is the modal companion of the intermediate logic I under the
Gödel translation t: in particular, J corresponds to SW5 and H to S4.
Modal Correspondences
• Let φ be a formula
a1 ∧ ... ∧ an ∧ ¬b1 ∧ ... ∧ ¬bm → c1 v .. v ck
of an LP; then tφ is the formula
La1 ∧ ... ∧ Lan ∧ L¬Lb1 ∧ ... ∧ L¬Lbm → Lc1 v .. v Lck
• Compare:
Th(M) = CnJ(P U {¬a : a ∉ Th(M)}) with
E = CnSW5(t(P) U {¬La : a ∉ E})
Modal Correspondences (2)
• Given that T ⊢J φ iff t(T) ⊢SW5 t(φ), we obtain the
result of Marek & Truszczynski:
• M is a stable model of P iff M is the set of atoms
true in an SW5-expansion of t(P):
• Th(M) = CnJ(P U {¬a : a ∉ Th(M)}) and
• E = CnSW5(t(P) U {¬La : a ∉ E})
Complexity
• The problem of non-satisfiability in classical logic is
polynomially reducible to the problem of verifying the
property of equilibrium for a model of a formula in N5.
• Therefore the problem of deciding if a model M of a set of
formulas is N5-equilibrium is coNP-complete.
• The problem of deciding if a theory has equilibrium models is
Σp2-hard
• The problem of decidability in N5* is Πp2-hard
Computation
• A tableaux system for N5, for total models and for
verifying the property of equilibrium.
• Pearce, Guzman, Valverde, A Tableau Calculus for
Equilibrium Entailment, Tableaux 2000 (R
Dyckhoff ed), Springer, LNAI.
Computation
• Proof theoretic foundations for equilibrium logic
using the method of signed formulas
• Pearce, Guzman, Valverde, Computing
Equilibrium Entailment using Signed Formulas
Proceedings CL2000, Springer, LNAI.
Equivalent programs
• Lifschitz, Pearce, Valverde, Strongly Equivalent
Logic Programs, draft, 2000.
Part III
Answer set programming
Basic Idea
• Give a declarative description of a problem in the
form of a logic program
• Solutions to the problem are represented by
answer sets (rather than by variable
substitutions).
• Input is given as a set of facts (rather than as
input arguments).
• Generation of possible solutions (answer-sets)
• Elimination of those that are not really solutions
Answer set solvers
• Dlv :
http://www.dbai.tuwien.ac.at/proj/dlv
• smodels:
http://www.tcs.hut.fi/Software/smodels
Subset example
• Write a program that, given facts for
element(_), has an answer-set for each
subset of the elements
in_sub(X) :- element(X), not out_sub(X).
out_sub(X) :- element(X), not in_sub(X).
element(1).
element(2).
element(3).
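For example, with the three facts above the program has eight answer sets, one per subset of {1,2,3}; besides the element/1 facts, each contains, for every X, either in_sub(X) or out_sub(X), e.g. in_sub(1), out_sub(2), in_sub(3) for the subset {1,3}.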
RUN
IN
AS programming vs Prolog
in_sub(X) :- element(X), not out_sub(X).
out_sub(X) :- element(X), not in_sub(X).
element(1).
element(2).
element(3).
versus
sub_set([],_).
sub_set(S,[_|C]) :- sub_set(S,C).
sub_set([E|S],[E|C]) :- sub_set(S,C).
?- sub_set(S,[1,2,3]).
Small subsets
• An answer-set for each subset of cardinality ≤ 1
• From the previous program, eliminate answer-sets with more than one member
• Eliminate an answer-set if
in_sub(X), in_sub(Y), X != Y
• Add:
foo :- in_sub(X), in_sub(Y), X != Y, not foo
RUN
Graph colouring: 3-colours
• Problem: find all colourings of a map of countries
using not more than 3 colours, such that
neighbouring countries are not given the same
colour.
• the predicate arc connects two countries.
• Use rules of dlv to generate colourings, and
integrity constraints to eliminate unwanted
solutions
• Is there a 3-colouring of France, Luxembourg,
Germany and Belgium?
Graph colouring
arc(minnesota, wisconsin).
arc(illinois, michigan).
arc(illinois, indiana).
arc(michigan, indiana).
arc(michigan, wisconsin).
arc(wisconsin, iowa).
arc(illinois, iowa).
arc(illinois, wisconsin).
arc(indiana, ohio).
arc(michigan, ohio).
arc(minnesota, iowa).
arc(minnesota, michigan).
col(Country,Colour) ??
[Figure: map of the states minnesota, wisconsin, michigan, illinois, indiana, iowa and ohio]
Graph colouring
RUN
%auxiliary
con(X,Y) :- arc(X,Y).
con(X,Y) :- arc(Y,X).
node(N) :- con(N,_).
%generate
col(C,red) :- node(C), not col(C,blue), not col(C,green).
col(C,blue) :- node(C), not col(C,red), not col(C,green).
col(C,green) :- node(C), not col(C,blue), not col(C,red).
%eliminate
false :- con(C1,C2), col(C1,C), col(C2,C), not false.
One colouring solution
[Figure: the map with one 3-colouring of the seven states]
Hamiltonian paths
• Given a graph, find all Hamiltonian paths
arc(a,b).
arc(a,d).
arc(b,a).
arc(b,c).
arc(d,b).
arc(d,c).
[Figure: the directed graph with nodes a, b, c, d]
Hamiltonian paths
% Subsets of arcs
in_arc(X,Y) :- arc(X,Y), not out_arc(X,Y).
out_arc(X,Y) :- arc(X,Y), not in_arc(X,Y).
% Nodes
node(N) :- arc(N,_).
node(N) :- arc(_,N).
% Notion of reachable
reachable(X) :- initial(X).
reachable(X) :- in_arc(Y,X), reachable(Y).
Hamiltonian paths
% initial is one (and only one) of the nodes
initial(N) :- node(N), not non_initial(N).
non_initial(N) :- node(N), not initial(N).
:- initial(N1), initial(N2), N1 != N2.
RUN
% In Hamiltonian paths all nodes are reachable
:- node(N), not reachable(N).
% Paths must be connected subsets of arcs
% I.e. an arc from X to Y can only belong to the path if X is reachable
:- arc(X,Y), in_arc(X,Y), not reachable(X).
% No node can be visited more than once
:- node(X), node(Y), node(Z), in_arc(X,Y), in_arc(X,Z), Y != Z.
Hamiltonian paths (solutions)
[Figure: the directed graph with nodes a, b, c, d]
{in_arc(b,a), in_arc(a,d), in_arc(d,c)}
{in_arc(a,d), in_arc(d,b), in_arc(b,c)}
Generation of answer-sets
• Multiple answer-sets can also be produced via rules with
disjunction
A; ~A
• a program with n rules of this form has 2^n answer-sets.
For example
p; ~p
q; ~q
has the answer-sets: {p,q}, {p,~q},{~p,q}, {~p,~q}
Generation of answer-sets
• The rule
A; ~A
can be replaced in any program by the two non-disjunctive
rules
A :- not ~A
~A :- not A
(Why?)
Generation of answer-sets
• The rule
A; not A
also produces several answer-sets. It has answer-sets
{A} and {}
for example
A ; not A
B ; not B
has as answer-sets the 4 subsets of {A,B}
Elimination of answer-sets
• Integrity constraints reduce the number of answer sets,
eliminating undesired ones.
• For example
:- L1, ..., Lm, not K1, ..., not Kn
eliminates an answer set S, iff
L1, ..., Lm ∈ S and K1, ..., Kn ∉ S.
• This rule can be replaced by:
new :- L1, ..., Lm, not K1, ..., not Kn, not new
where new is a new predicate, not appearing elsewhere.
Why?
smodels
• Permits the representation of the combination
A; not A in the head, by putting A in braces:
{A}.
A list of rules of the form
Ai; not Ai :- body
can be represented in smodels by
{A1,...,An} :- body
smodels
• No strong negation. If a program has a literal ~A:
• substitute ~A by a new atom A´
• add the constraint :- A, A´
Planning
• In many planners the problem of generating
a plan is reduced to the problem of
satisfaction of a set of propositional
formulas
• Subrahmanian & Zaniolo (1995) proposed
to reduce plan generation to the problem of
finding an answer-set of a logic program.
Planning in the blocks world
[Figure: an initial blocks-world configuration and a goal configuration, with blocks 1-5]
Write a program describing the blocks world and the
permitted moves. An answer-set corresponds to a
history (set of moves)
Planning in the blocks world
• Situation: there is a robot with various arms
that can move several blocks at the same
time.
• One may not place one cube on top of
another that is being moved at the same
time.
• A block can only move if there is no other
block on top of it
Planning in the blocks world
Three basic predicates :
• block
• time
• location
a constant lasttime fixes a maximum length of
a plan.
Planning in the blocks world
We fix time and location, e.g.:
time(0..lasttime).
location(B) :- block(B).
location(table).
Planning in the blocks world
% GENERATE
{move(B,L,T) : block(B) : location(L)} arms :- time(T), T < lasttime.
% potential solutions are sets of moves, not more in number
% than arms, up to a time not later than lasttime
Planning in the blocks world
% DEFINE
% effect of moving a block
on(B,L,T+1) :- ...
% inertia
on(B,L,T+1) :- ...
% single location
on´(B,L1,T) :- ...
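One possible way to complete these rules (a sketch only, not necessarily the intended encoding; not_on is used here in place of on´):

% effect of moving a block: after move(B,L,T) the block is on L at T+1
on(B,L,T+1) :- move(B,L,T), block(B), location(L), time(T), T < lasttime.
% inertia: a block stays where it is unless this is contradicted at T+1
on(B,L,T+1) :- on(B,L,T), not not_on(B,L,T+1),
               block(B), location(L), time(T), T < lasttime.
% single location: a block is not at any location other than the one it is on
not_on(B,L1,T) :- on(B,L,T), location(L1), L1 != L,
                  block(B), location(L), time(T).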
Planning in the blocks world
% TEST
% on´ is the negation of on
% two blocks cannot be directly on top of the same block
% one can only move a block if it is 'free'
% one cannot move a block onto another block being moved at the same time
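A possible reading of these constraints as rules (again only a sketch; not_on stands for on´ as above):

% two blocks cannot be directly on top of the same block
:- on(B1,B,T), on(B2,B,T), B1 != B2, block(B), block(B1), block(B2), time(T).
% one can only move a block if it is 'free' (no block on top of it)
:- move(B,L,T), on(B1,B,T), block(B), block(B1), location(L), time(T).
% one cannot move a block onto another block being moved at the same time
:- move(B,B1,T), move(B1,L,T), block(B), block(B1), location(L), time(T).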
Planning in the blocks world
example
const arms = 2
const lasttime = 3
block(1..6).
%define
on(1,2,0). on(2,table,0). on(3,4,0).
on(4,table,0). on(5,6,0). on(6,table,0).
Planning in the blocks world
[Figure: initial state - 1 on 2, 3 on 4, 5 on 6 (each lowest block on the table); goal - 3 on 2 on 1 and 6 on 5 on 4]
Planning in the blocks world
%TEST
(eliminate actions that do not lead to the goal)
:- not on(3,2,lasttime)
:- not on(2,1,lasttime)
:- not on(1,table,lasttime)
:- not on(6,5,lasttime)
:- not on(5,4,lasttime)
:- not on(4,table,lasttime)
Planning in the blocks world
• Execution of the plan is a set of “move“ literals
• move(1,table,0), move(3,table,0), move(2,1,1),
move(5,4,1), move(3,2,2), move(6,5,2).
Planning in the blocks world
• Comments
• the problem is more sophisticated than e.g. the
Yale-shooting problem (it has simultaneous
actions)
• the movement of a block has e.g. two effects: an on
predicate becomes true, another becomes false. One is
described explicitly by the DEFINE rule, the other is
indirect and implicit. This problem is not treated in
STRIPS.
Planning in the blocks world
• Comments
• interactions between actions are also handled implicitly
(e.g. that two blocks cannot be moved onto the same
block)
Diagnosis
A test beam is directed onto a scintillating crystal whose light
emission is measured by an avalanche photodiode (APD).
The measurement is then read with some readout
electronics.
Alternatively to the beam reading, the APD can receive a test
pulse signal, which allows one to check the correct functioning
of the APD independently of the crystal.
• Write a program to diagnose malfunctioning parts:
Diagnosis
[Figure: Test Beam → crystal → APD → Read Out, with the Test pulse injector also connected to the APD]
Diagnosis
Language:
Constants: a (apd), b (beam), c (crystal), t (testpulse_injector), r (readout)
Represent the system as a graph of its units.
Primitive predicate:
connected(X,Y)
Diagnosis
Defined predicate: good_path(X,Y)
good_path is a transitive relation that holds between two
items that are connected and not bad.
Defined readout constants:
• testpulse_readout_ok
• beam_readout_ok
readouts are OK if there is a good path between the
readouts and the source
Diagnosis
Assume that there is a fault in the system and
either the crystal or the apd is bad.
Write a dlv program to describe the system.
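A sketch of one possible dlv encoding along these lines (the connected/2 facts are one reading of the figure above, and the rule bodies are assumptions, not a fixed solution):

% the system as a graph of its units
connected(b,c).  connected(c,a).  connected(t,a).  connected(a,r).
% there is a fault: either the crystal or the apd is bad
bad(c) v bad(a).
% good_path: transitive closure of connections avoiding bad units
good_path(X,Y) :- connected(X,Y), not bad(X), not bad(Y).
good_path(X,Y) :- good_path(X,Z), good_path(Z,Y).
% a readout is OK if there is a good path from its source to the readout r
testpulse_readout_ok :- good_path(t,r).
beam_readout_ok :- good_path(b,r).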
Diagnosis
• Observations (formulate these as constraints)
• testpulse_readout is not OK
• beam_readout not OK
• beam and testpulse_readouts not OK
• testpulse_readout OK
• beam_readout OK
• beam and testpulse_readouts OK
Diagnosis
• Find the equilibrium models of the program.
• Find the equilibrium models corresponding to the 6
observation states.
Banana (planning)
A monkey is in a room with a chair and a banana
which is fixed to the ceiling. The monkey cannot
reach the banana unless it stands on the chair; it is
simply too high up. The chair is now at a position
different from the place where the banana is hung,
and the monkey itself initially is at again a
different place.
Banana (planning)
At each discrete point in time, the monkey performs
one of the following four actions: it walks, it moves
the chair (while doing this, it also moves through
the room), it climbs up the chair, or it does
nothing. #int is again a built-in predicate which is
true exactly if its argument is an integer value.
Banana (planning)
After climbing up the chair, it is on it. If it is
already on it, it cannot climb up any further.
While on the chair, it cannot walk around. If
it was on the chair earlier, it will be there in
the future.
Banana (planning)
• If the chair is not moved, it will stay at the
same place. If the monkey moves the chair,
it changes its position.
Banana (planning)
The monkey is somewhere in the room. (For
simplicity, only three positions are
possible.)
Banana (planning)
The monkey cannot change its position
without moving. The monkey cannot stay at
the same place if it moves. It cannot climb
up the chair if it is somewhere else. It
cannot move the chair if it is somewhere
else.
Banana (planning)
Initially, the monkey and the chair are at
different positions.
Banana (planning)
• The monkey can only reach the banana if it
stands on the chair and the chair is below
the banana. If it can reach the banana, it will
eat it, and this will make it happy. Our goal
is to make the monkey happy.
Banana (planning)
• The step rules collect all the things we can
derive from the situation and build a
consistent plan. (There is no step rule for
the action idle since we are not interested in
it.)
Banana (planning)
walk(Time) v move_chair(Time) v
ascend(Time) v idle(Time) :- #int(Time).
Banana (planning)
After climbing up the chair, it is on it. If it is
already on it, it cannot climb up any further.
While on the chair, it cannot walk
around. If it was on the chair earlier, it will be
there in the future.
Banana (planning)
monkey_motion(T) :- walk(T).
monkey_motion(T) :- move_chair(T).
stands_on_chair(T2) :- ascend(T), T2 = T + 1.
:- stands_on_chair(T), ascend(T).
:- stands_on_chair(T), monkey_motion(T).
stands_on_chair(T2) :- stands_on_chair(T), T2 = T +1.
Banana (planning)
• If the chair is not moved, it will stay at the
same place. If the monkey moves the chair,
it changes its position.
Banana (planning)
chair_at_place(X, T2) :- chair_at_place(X,
T1), T2 = T1 + 1, not move_chair(T1).
chair_at_place(Pos, T2) :- move_chair(T1),
T2 = T1 + 1, monkey_at_place(Pos, T2).
Banana (planning)
The monkey is somewhere in the room. (For
simplicity, only three positions are
possible.)
Banana (planning)
monkey_at_place(monkey_starting_point, T) v
monkey_at_place(chair_starting_point, T) v
monkey_at_place(below_banana, T) :- #int(T).
Banana (planning)
The monkey cannot change its position
without moving. The monkey cannot stay at
the same place if it moves. It cannot climb
up the chair if it is somewhere else. It
cannot move the chair if it is somewhere
else.
Banana (planning)
:- monkey_at_place(Pos1, T2), monkey_at_place(Pos2, T1),
T2 = T1 + 1, Pos1 != Pos2, not monkey_motion(T1).
:- monkey_at_place(Pos, T2), monkey_at_place(Pos, T1), T2 = T1 + 1,
monkey_motion(T1).
:- ascend(T), monkey_at_place(Pos1, T), chair_at_place(Pos2, T), Pos1
!= Pos2.
:- move_chair(T), monkey_at_place(Pos1, T), chair_at_place(Pos2, T),
Pos1 != Pos2.
Banana (planning)
Initially, the monkey and the chair are at
different positions.
Banana (planning)
monkey_at_place(monkey_starting_point, 0).
chair_at_place(chair_starting_point, 0).
Banana (planning)
• The monkey can only reach the banana if it
stands on the chair and the chair is below
the banana. If it can reach the banana, it will
eat it, and this will make it happy. Our goal
is to make the monkey happy.
Banana (planning)
can_reach_banana :- stands_on_chair(T),
chair_at_place(below_banana, T).
eats_banana :- can_reach_banana.
happy :- eats_banana.
:- not happy.
Banana (planning)
• The step rules collect all the things we can
derive from the situation and build a
consistent plan. (There is no step rule for
the action idle since we are not interested in
it.)
Banana (planning)
step(N, walk, Destination) :- walk(N),
monkey_at_place(Destination, N2), N2 = N + 1.
step(N, move_chair, Destination) :- move_chair(N),
monkey_at_place(Destination, N2), N2 = N + 1.
step(N, ascend, " ") :- ascend(N).
Set the maximum integer to 3 (or more) with the option -N=3 and create a plan.
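A possible invocation (the file name monkey.dl is only an assumption; -N sets dlv's maximum integer):
dlv -N=3 monkey.dl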
Banana (planning)
{chair_at_place(chair_starting_point,0),
monkey_at_place(monkey_starting_point,0),
monkey_at_place(chair_starting_point,1),
monkey_at_place(below_banana,2),
monkey_at_place(below_banana,3),
walk(0), move_chair(1), ascend(2), idle(3),
chair_at_place(chair_starting_point,1),
chair_at_place(below_banana,2),
chair_at_place(below_banana,3),
Banana (planning)
monkey_motion(0), monkey_motion(1),
step(0,walk,chair_starting_point),
step(1,move_chair,below_banana),
step(2,ascend," "),
stands_on_chair(3), can_reach_banana,
eats_banana, happy}
“Clique” : an example in smodels
Problem: describe a large clique: a subset V of the
vertices of a graph such that:
• every two vertices in V are connected by an edge
• V contains a minimum of j elements
“Clique” : an example in smodels
Predicates:
vertex(X)
joined(X,Y)
edge(X,Y)
in(X) (vertices in V)
“Clique” : an example in smodels
A cardinality restriction can be represented in
smodels by
j {in(X): vertex(X)} (a minimum of j elements)
where {in(X): vertex(X)} is the set of atoms in(X)
such that vertex(X)
“Clique” : an example in smodels
if the vertices are 0,…,5, then {in(X): vertex(X)} stands for
the set of atoms
{in(0), in(1), …, in(5)}.
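As a sketch, the generate part of the clique program can then be written with this construct (assuming the constant j and the predicates above):
%Generate
j {in(X) : vertex(X)}.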
%Define
joined(X,Y) :- edge(X,Y).
joined(X,Y) :- edge(Y,X).
“Clique” : an example in smodels
%TEST
:- in(X), in(Y), X != Y, not joined(X,Y),
vertex(X), vertex(Y).
Example: const j=3. vertex(0..5).
edge(0,1). edge(1,2). edge(2,0). edge(3,4).
edge(4,5). edge(5,3). edge(4,0). edge(2,5).
“Clique” : an example in smodels
In smodels one can hide atoms in the answer set that
are not relevant for the solution, e.g.
hide vertex(X)
hide edge(X,Y)
hide joined(X,Y)
So with the given example, smodels produces:
in(2) in(1) in(0)
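A possible invocation, assuming the program (with the hide directives) is stored in a file clique.lp; lparse grounds it and smodels computes the stable models:
lparse clique.lp | smodels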
Part IV
WFS Programming
AS vs. Prolog programming
• AS-programming is adequate for:
– NP-complete problems
– situations where the whole program is relevant
for the problem at hand
If the problem is polynomial, why use
such a complex system?
If only part of the program is relevant for
the desired query, why compute the
whole model?
AS vs. Prolog (cont.)
• For such problems top-down, goal-driven
mechanisms seem more adequate
• This type of mechanism is used by Prolog
– Solutions come in variable substitutions rather
than in complete models
– The system is activated by queries
– No global analysis is made: only the relevant
part of the program is visited
Problems with Prolog
• Its declarative semantics is the completion
– All the problems of completion are inherited by
Prolog
• Under SLDNF, termination is not
guaranteed, even for Datalog programs (i.e.
programs with a finite ground version); see the sketch below
• No explicit negation operator is available
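A minimal sketch of the termination problem, using a hypothetical one-rule Datalog program:
p :- p.
The query ?- p. makes SLDNF (and Prolog) loop, although the ground version is finite and p is simply false under WFS.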
WFS programming
• Prolog programming style, but with the WFS
semantics
• Requires:
A new proof procedure (different from SLDNF),
complying with WFS, and with explicit negation
The corresponding Prolog-like implementation:
XSB-Prolog
SLX:
Proof procedure for WFSX
• SLX (SL with eXplicit negation) is a top-down procedure
for WFSX
– Here we only present an AND-trees characterization
– The proof procedure details are in [AP96]
– It is similar to SLDNF
– Nodes are either successful or failed
– Resolution with program rules and resolution of default
literals by failure of their complements are as in SLDNF
• In SLX, failure doesn’t mean falsity. It simply means
non-verity (i.e. false or undefined)
Success and failure
• A finite tree is successful if its root is successful,
and failed if its root is failed
• The status of a node is determined by:
– A leaf labeled with an objective literal is failed
– A leaf with true is successful
– An intermediate node is successful if all its children are
successful, and failed otherwise (i.e. at least one of its
children is failed)
Negation as Failure?
• As in SLS, to solve infinite positive recursion,
infinite trees are (by definition) failed
• Can a NAF rule be used?
YES
True of not A succeeds if true-or-undefined of A fails
True-or-undefined of not A succeeds if true of A fails
This is the basis of SLX. It defines:
T-Trees for proving truth
TU-Trees for proving truth or undefinedness
T and TU-trees
• They differ in that literals involved in recursion
through negation, and so undefined in WFSXp, are
failed in T-Trees and successful in TU-Trees
a  not b
b  not a
T
a6
not b
6
TU
b
T
not a
a6
not b
6
TU
b
not a
…
Explicit negation in SLX
• ¬-literals are treated as atoms
• To impose coherence, the semi-normal program is
used in TU-trees
a  not b
b  not a
¬a
b
a6
¬a
not a
not6b not6¬a
true
b
a6
¬a
not a
not b not6¬a
true
…
Explicit negation in SLX (2)
• In TU-trees, L also fails if ¬L succeeds as true
• I.e. if not ¬L fails as true-or-undefined
c ← not c
b ← not c
¬b
a ← b
[Figure: trees for the query not a. In the TU-tree for a, the node b has children not c and not ¬b; ¬b succeeds (true), so not ¬b fails and b fails, even though the trees for not c and c alternate ad infinitum.]
T and TU-trees definition
T-Trees (resp TU-trees) are AND-trees labeled by literals,
constructed top-down from the root by expanding nodes with the
rules
Nodes labeled with objective literal A
If there are no rules for A, the node is a leaf
Otherwise, non-deterministically select a rule for A
A  L1,…,Lm, not Lm+1,…, not Ln
In a T-tree the children of A are L1,…,Lm, not Lm+1,…, not Ln
In a TU-tree A has, additionally, the child not ¬A
Nodes labeled with default literals are leaves
Success and Failure
All infinite trees are failed. A finite tree is successful if its root is
successful and failed if its root is failed. The status of nodes is
determined by:
A leaf node labeled with true is successful
A leaf node labeled with an objective literal is failed
A leaf node of a T-tree (resp. TU) labeled with not A is successful if all TU-trees
(resp. T) with root A (subsidiary tree) are failed; and failed otherwise
An intermediate node is successful if all its children are successful; and failed
otherwise
After applying these rules, some nodes may remain
undetermined (recursion through not). Undetermined nodes in T-trees
(resp. TU) are by definition failed (resp. successful)
Properties of SLX
• SLX is sound and (theoretically) complete wrt
WFSX.
• If there is no explicit negation, SLX is sound
and (theoretically) complete wrt WFS.
• See [AP96] for the definition of a refutation
procedure based on the AND-trees
characterization, and for all proofs and details
Infinite trees example
s  not p, not q, not r
p  not s, q, not r
q  r, not p
r  p, not q
s
p6
q
not s not r
r
not p
p not q
not p not q not r
q
r
6
not p
p not q
q
not s not r
WFM is
{s, not p, not q, not r}
r
6
p not q
q
not s not r
r
not p
Negative recursion example
s  true
q  not p(0), not s
p(N)  not p(s(N))
s
q
not q
6
not p(0) not s
6
WFM = {s, not q}
p(0)
p(1)
p(2)
not p(1)
not p(2)
not p(3)
6
6
not p(0)
p(0)
p(1)
p(2)
not p(1)
not p(2)
not p(3)
6
6
6
…
6
6
true
…
Guaranteeing termination
• The method is not effective, because of loops
• To guarantee termination in ground programs:
Local ancestors of node n are literals in the path from n to the
root, exclusive of n
Global ancestors are assigned to trees:
• the root tree has no global ancestors
• the global ancestors of T, a subsidiary tree of leaf n of T’, are the
global ancestors of T’ plus the local ancestors of n
• global ancestors are divided into those occurring in T-trees and
those occurring in TU-trees
Pruning rules
• For cyclic positive recursion:
Rule 1
If the label of a node belongs to its local ancestors, then the node
is marked failed, and its children are ignored
For recursion through negation:
Rule 2
If a literal L in a T-tree occurs in its global T-ancestors then it is
marked failed and its children are ignored
Pruning rules (2)
[Figure: schematic trees for Rule 1 and Rule 2: a branch is cut when the literal L reappears below itself — among its local ancestors (Rule 1) or among the global T-ancestors of a T-tree (Rule 2).]
Other sound rules
Rule 3
If a literal L in a T-tree occurs in its global TU-ancestors then it is
marked failed, and its children are ignored
Rule 4
If a literal L in a TU-tree occurs in its global T-ancestors then it is
marked successful, and its children are ignored
Rule 5
If a literal L in a TU-tree occurs in its global TU-ancestors then it
is marked successful, and its children are ignored
Pruning examples
a  not b
b  not a
¬a
b
a6
¬a
not a
not b not6¬a
true
b
6
c  not c
b  not c
¬b
ab
not a
c6
a6
Rule 2
¬a
6
b6 not ¬a
¬b
not c not 6¬b
true
not c
6
Rule 3
Non-ground case
• The characterization and pruning rules apply to
allowed non-ground programs, with ground queries
• It is well known that pruning rules do not generalize
to general programs with variables:
p(X) ← p(Y)
p(a)
[Tree for the query p(X): the rule produces the calls p(Y), p(Z), …]
What to do?
• If “fail”, the answers are incomplete
• If “proceed” then loop
Tabling
• To guarantee termination in non-ground programs,
instead of ancestors and pruning rules, tabulation
mechanisms are required
– when there is a possible loop, suspend the literal and try
alternative solutions
– when a solution is found, store it in a table
– resume suspended nodes with new solutions in the table
– apply an algorithm to determine completion of the
process, i.e. when no more solutions exist, and fail the
corresponding suspended nodes
Tabling example
p(X)  p(Y)
p(a)
p(X)
1) suspend p(Y)
2) resume
Y=a
Table for p(X)
X=a
X=_
X=a
• SLX is also implemented with
tabulation mechanisms
• It uses the XSB-Prolog tabling
implementation
• SLX with tabling is available
with XSB-Prolog Version 2.0
• Try it at:
http://www.cs.sunysb.edu/~sbprolog/xsb-page.html
Tabling (cont.)
• If a solution is already stored in a table, and
the predicate is called again, then:
– there is no need to compute the solution again
– simply pick it from the table!
• This increases efficiency, sometimes by an
order of magnitude.
Fibonacci example
:- table fib/2.
fib(1,1).
fib(2,1).
fib(X,F) :- X > 2, X1 is X - 1, X2 is X - 2,
            fib(X1,F1), fib(X2,F2), F is F1 + F2.
| ?- fib(6,X).
X = 8
Table for fib:
Q F
1 1
2 1
3 2
4 3
5 5
6 8
[Figure: the call tree for fib(6,X); each distinct call fib(N,_) is computed only once and all further occurrences are answered from the table.]
Linear rather than
exponential
XSB-Prolog
• Can be used to compute under WFS
• Prolog + tabling
– To use tabling on, e.g., predicate p with 3
arguments:
:- table p/3.
– To table all the needed predicates:
:- auto_table.
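A minimal sketch of the directive in use (ancestor/2 and parent/2 are hypothetical example predicates):
:- table ancestor/2.
ancestor(X,Y) :- parent(X,Y).
ancestor(X,Y) :- parent(X,Z), ancestor(Z,Y).
parent(a,b). parent(b,c).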
XSB Prolog (cont.)
• Tables are kept from call to call until:
abolish_all_tables, abolish_table_pred(P/A)
• WF negation can be used via tnot(Pred)
• Explicit negation via -Pred
• The answer to query Q is yes if Q is either true
or undefined in the WFM
• The answer is no if Q is false in the WFM of
the program
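A minimal sketch of this behaviour, with a hypothetical program whose single atom is undefined:
:- table p/0.
p :- tnot(p).
% p is undefined in the WFM, so, by the rule above,
% both the query p and the query tnot(p) are answered yes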
Distinguishing T from U
• After providing all answers, tables also store
the literals suspended due to recursion through
negation: the Residual Program
If the residual is empty then True
If it is not empty then Undefined
The residual can be inspected with:
get_residual(Pred,Residual)
Residual program example
:- table a/0.
:- table b/0.
:- table c/0.
:- table d/0.
a :- b, tnot(c).
c :- tnot(a).
b :- tnot(d).
d :- d.

| ?- a, b, c, d, fail.
no
| ?- get_residual(a,RA).
RA = [tnot(c)];
no
| ?- get_residual(b,RB).
RB = [];
no
| ?- get_residual(c,RC).
RC = [tnot(a)];
no
| ?- get_residual(d,RD).
no
| ?-
Transitive closure
:- auto_table.
edge(a,b).
edge(c,d).
edge(d,c).
reach(a).
reach(A) :- edge(A,B), reach(B).
|?- reach(X).
X = a;
no.
|?- reach(c).
no.
|?- tnot(reach(c)).
yes.
• Due to circularity
completion cannot
conclude not reach(c)
• SLDNF (and Prolog)
loops on that query
• XSB-Prolog works
fine
Transitive closure (cont)
:- auto_table.
edge(a,b).
edge(c,d).
edge(d,c).
reach(a).
reach(A) :- edge(A,B), reach(B).

Instead one could have written:

:- auto_table.
edge(a,b).
edge(c,d).
edge(d,c).
reach(a).
reach(A) :- reach(B), edge(A,B).

Its declarative semantics is closer to the operational one
Left recursion is handled properly
This second version is usually more efficient
Grammars
• Prolog provides “for free” a right-recursive
descent parser
• With tabling left-recursion can be handled
• It also eliminates redundancy (gaining
efficiency), and handles grammars that loop
under Prolog.
Grammars example
:- table expr/2, term/2.
expr --> expr, [+], term.
expr --> term.
term --> term, [*], prim.
term --> prim.
prim --> ['('], expr, [')'].
prim --> [Int], {integer(Int)}.
This grammar loops in Prolog
XSB handles it correctly, properly associating * and + to the left
Grammars example
:- table expr/3, term/3.
expr(V) --> expr(E), [+], term(T), {V is E + T}.
expr(V) --> term(V).
term(V) --> term(T), [*], prim(P), {V is T * P}.
term(V) --> prim(V).
prim(V) --> ['('], expr(V), [')'].
prim(Int) --> [Int], {integer(Int)}.
With XSB one gets “for free” a parser based on a variant of
Earley’s algorithm, or an active chart recognition algorithm
Its time complexity is better!
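A hypothetical query against this grammar, using the standard DCG translation of expr(V) into expr/3, should give:
| ?- expr(V, [3,+,4,*,2], []).
V = 11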
Finite State Machines
Tabling is well suited for Automata Theory implementations
:- table rec/2.
rec(St) :- initial(I), rec(St,I).
rec([],S) :- is_final(S).
rec([C|R],S) :- d(S,C,S2), rec(R,S2).
[Figure: automaton with states q0, q1, q2, q3; the transitions and final state are given by the facts below.]
initial(q0).
d(q0,a,q1).
d(q1,a,q2).
d(q2,b,q1).
d(q1,a,q3).
is_final(q3).
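Hypothetical queries against the automaton above:
| ?- rec([a,a]).
yes
| ?- rec([a,a,b,a]).
yes
| ?- rec([a]).
no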
Dynamic Programming
• Strategy for evaluating subproblems only
once.
– Problems amenable to DP might also be
amenable to tabling in XSB.
• The Knap-Sack Problem:
– Given n items, each with a weight Ki (1 ≤ i ≤ n), determine
whether there is a subset of the items that sums to K
The Knap-Sack Problem
Given n items, each with a weight Ki (1 ≤ i ≤ n), determine
whether there is a subset of the items that sums to K.
:- table ks/2.
ks(0,0).
ks(I,K) :- I > 0, I1 is I-1, ks(I1,K).
ks(I,K) :- I > 0,
item(I,Ki), K1 is K-Ki, I1 is I-1, ks(I1,K1).
There is an exponential number of subsets. Computing this with
Prolog is exponential.
There are only I² possible distinct calls. Computing this with
tabling is polynomial.
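A hypothetical instance (the item facts are an assumption):
item(1,2). item(2,3). item(3,5).
| ?- ks(3,8).
yes
| ?- ks(3,4).
no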
Knowledge Bases
The taxonomy:
• Mammals are animals
• Bats are mammals
• Birds are animals
• Penguins are birds
• Dead animals are animals
• Normally animals don't fly
• Normally bats fly
• Normally birds fly
• Normally penguins don't fly
• Normally dead animals don't fly
The elements:
Pluto is a mammal
Joe is a penguin
Tweety is a bird
Dracula is a dead bat
The preferences:
Dead bats don’t fly though bats do
Dead birds don’t fly though birds do
Dracula is an exception to the above
In general, more specific information is preferred
The taxonomy
[Figure: taxonomy graph over flies, animal, bird, mammal, dead animal, penguin, bat, joe, tweety, pluto and dracula, with three kinds of edges: definite rules, default rules, and negated default rules.]
Taxonomy representation
Taxonomy
animal(X) ← mammal(X)
mammal(X) ← bat(X)
animal(X) ← bird(X)
bird(X) ← penguin(X)
deadAn(X) ← dead(X)
Facts
mammal(pluto).
bird(tweety). deadAn(dracula).
penguin(joe). bat(dracula).
Default rules
¬flies(X) ← animal(X), adf(X), not flies(X)
adf(X) ← not ¬adf(X)
flies(X) ← bat(X), btf(X), not ¬flies(X)
btf(X) ← not ¬btf(X)
flies(X) ← bird(X), bf(X), not ¬flies(X)
bf(X) ← not ¬bf(X)
¬flies(X) ← penguin(X), pdf(X), not flies(X)
pdf(X) ← not ¬pdf(X)
¬flies(X) ← deadAn(X), ddf(X), not flies(X)
ddf(X) ← not ¬ddf(X)
Explicit preferences
¬btf(X) ← deadAn(X), bat(X), r1(X)
¬bf(X) ← deadAn(X), bird(X), r2(X)
r1(X) ← not ¬r1(X)
r2(X) ← not ¬r2(X)
¬r1(dracula)
¬r2(dracula)
Implicit preferences
¬adf(X) ← bat(X), btf(X)
¬adf(X) ← bird(X), bf(X)
¬bf(X) ← penguin(X), pdf(X)
Bibliography (1)
• The first part of this course is covered in detail in:
– [AP96] J.J. Alferes and L.M. Pereira. Reasoning with Logic
Programming. Springer LNAI 1111, 1996
• See references there for the original definitions
• For more on semantics of normal LPs see:
– [PP90] H. Przymusinska and T. Przymusinski. Semantic issues in
deductive databases and logic programs. In R. Banerji ed, Formal
Techniques in AI, a Sourcebook. North Holland, 1990
• For more on derivation procedures:
– [AB94] K. Apt and R. Bol. Logic Programming and Negation: A
survey. In Journal of L.P., 1994
Bibliography (2)
• For more on LP for KRR:
– [BG94] C. Baral and M. Gelfond. Logic Programming and Knowledge
Representation. In Journal of L.P., 1994
• For relation with NMR and structural properties:
– [BDK97] G. Brewka, J. Dix and K. Konolige. Nonmonotonic
Reasoning: an overview. CSLI Publications, 1997
• For more on tabling and XSB-Prolog see the following page,
and references therein:
– http://xsb.sourceforge.net
Bibliography (3)
• For more on answer-set programming:
– V. Lifschitz. Answer-set programming and plan generation (draft).
Available at: http://net.cs.utexas.edu/users/vl/papers.html
– Christoph Koch. The DLV tutorial. Available at:
http://chkoch.home.cern.ch/chkoch/dlv/dlv_tutorial.html