The Logic of Intelligence

Pei Wang
Department of Computer and Information Sciences
Temple University
Artificial General Intelligence
Mainstream AI treats “Intelligence” as a
collection of problem-specific and
domain-specific parts
Artificial General Intelligence (AGI) takes
“Intelligence” as a general-purpose
capability that should be treated as a
whole
AGI research still includes different research
objectives and strategies
Artificial Intelligence and Logic
“Intelligence” can be understood as
“rationality” and “validity”: “do the
right thing”
In general, “logic” is the study of valid
reasoning, or the regularity in thinking
Therefore, an AI system may be built
according to a logic, by converting
various thinking processes into
reasoning processes
Reasoning System
A reasoning system typically consists of the
following major components:
- a formal language
- a semantic theory
- a set of inference rules
- a memory structure
- a control mechanism
The first three are usually called a “logic”
Traditional Theories
- Language and inference rules: first-order predicate calculus
- Semantics: model theory
- Memory: relational or object-oriented data structures and databases
- Inference control: theory of computation (algorithm, computability, and computational complexity)
Problems of Traditional Theories
- Uncertainty: fuzzy concepts, changing meanings and truth values, plausible results, conflicting evidence, nondeterministic inference process, …
- Semantic justification of non-deductive inference: induction, abduction, analogy, …
- Counter-intuitive results: sorites paradox, implication paradox, confirmation paradox, Wason’s selection task, …
- Computability and complexity: termination problem, combinatorial explosion, …
Proposed Solutions
- non-monotonic logic
- mental logic
- paraconsistent logic
- mental model
- relevance logic
- case-based reasoning
- probabilistic logic
- Bayesian network
- fuzzy logic
- neural network
- inductive logic
- genetic algorithm
- temporal logic
- heuristic algorithm
- modal logic
- learning algorithm
- situation calculus
- anytime algorithm
- possible world theory
- …
Common Root of the Problems
The traditional theories were developed in
the study of the foundations of
mathematics, while the problems appear
outside mathematics
The logic of mathematics may be different
from the logic of cognition
In mathematical reasoning, the knowledge
and resources are assumed to be sufficient
(with respect to the tasks)
Different Types of Systems
- “Pure-axiomatic system”: the system’s knowledge and resources are assumed to be sufficient
- “Semi-axiomatic system”: certain aspects (but not all) of the knowledge and resources are assumed to be sufficient
- “Non-axiomatic system”: the knowledge and resources of the system are assumed to be generally insufficient
NARS (Non-Axiomatic Reasoning System)
NARS uses a formal logic (language,
semantics, inference rules) and is
implemented in a computer system
NARS is fully based on the assumption of
insufficient knowledge and resources,
in the sense of being a finite, real-time,
open, and adaptive system
NARS is different from traditional
theories in all major components
Inheritance Based Representation
S → P : there is an inheritance relation
from term S to term P
S is a specialization of P
P is a generalization of S
Example: bird → animal
Inheritance is reflexive and transitive
Extension and Intension
For a given term T,
its extension TE = {x | x → T}
its intension TI = {x | T → x}
[Diagram: term T with its extension TE and intension TI]
Theorem:
(S → P) ⟺ (SE ⊆ PE) ⟺ (PI ⊆ SI)
Therefore, “Inheritance” means
“inheritance of extension/intension”
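For example (a hypothetical two-statement experience, not from the slides): if the system has only experienced "robin → bird" and "bird → animal", then by reflexivity and transitivity birdE = {robin, bird} and birdI = {bird, animal}, so the meaning of "bird" consists of exactly these inheritance relations.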
Evidence
Positive evidence of S → P :
{x | x ∈ (SE ∩ PE) ∪ (PI ∩ SI)}
Negative evidence of S → P :
{x | x ∈ (SE – PE) ∪ (PI – SI)}
Amount of evidence:
positive: w+ = |SE ∩ PE| + |PI ∩ SI|
negative: w– = |SE – PE| + |PI – SI|
total: w = w+ + w– = |SE| + |PI|
[Diagram: the overlapping extensions and intensions of S and P]
Truth Value
In NARS, the truth value of a statement is a
pair of numbers that measures the
evidential support for the statement:
S → P [f, c]
f: frequency, f = w+ / w
c: confidence, c = w / (w + 1)
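For example, with hypothetical evidence counts w+ = 3 and w– = 1, the total evidence is w = 4, so f = 3/4 = 0.75 and c = 4/(4 + 1) = 0.80.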
Experience-Grounded Semantics
The truth value of a statement is defined
according to certain “idealized
experience”, consisting of a set of
binary inheritance statements
The meaning of a term is defined by its
extension and intension, according to
certain “idealized experience”
So meanings and truth values change
according to the system’s experience
Syllogistic Inference Rules
A typical syllogistic inference rule takes a
pair of premises with a common term,
and produces a conclusion
The truth value of the conclusion is
calculated by a truth-value function
Different combinations of premises
trigger different rules (with different
truth-value functions)
To Design a Truth-value Function
1. Treat all involved variables as Boolean (binary)
variables
2. For each value combination in premises, decide
the values in conclusion
3. Build Boolean functions among the variables
4. Extend the functions to real numbers, as sketched below:
not(x) = 1 – x
and(x, y) = x * y
or(x, y) = 1 – (1 – x) * (1 – y)
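A minimal sketch of step 4 in Java (the class and method names are mine, not from the NARS code base): the Boolean operators become real-valued functions on [0, 1].

    // Illustrative only: the Boolean operators of step 4, extended to real values in [0, 1].
    public class ExtendedBoolean {
        static double not(double x)           { return 1.0 - x; }
        static double and(double x, double y) { return x * y; }
        static double or(double x, double y)  { return 1.0 - (1.0 - x) * (1.0 - y); }

        public static void main(String[] args) {
            System.out.println(and(1.0, 0.0));  // 0.0: reduces to ordinary Boolean logic on 0/1 inputs
            System.out.println(or(0.9, 0.5));   // about 0.95: interpolates for intermediate values
        }
    }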
Deduction
M → P [f1, c1]
S → M [f2, c2]
--------------------
S → P [f, c]

f = f1 * f2
c = c1 * c2 * f1 * f2

Example:
bird → animal [1.00, 0.90]
robin → bird [1.00, 0.90]
--------------------
robin → animal [1.00, 0.81]
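To double-check the example, a minimal sketch (assumed names, not the actual NARS implementation) of the deduction truth-value function:

    // Deduction truth-value function from the slide: f = f1*f2, c = c1*c2*f1*f2
    public class Deduction {
        static double[] deduce(double f1, double c1, double f2, double c2) {
            return new double[] { f1 * f2, c1 * c2 * f1 * f2 };
        }

        public static void main(String[] args) {
            // bird -> animal [1.00, 0.90] and robin -> bird [1.00, 0.90]
            double[] t = deduce(1.00, 0.90, 1.00, 0.90);
            // prints [1.00, 0.81], matching "robin -> animal [1.00, 0.81]"
            System.out.printf("robin -> animal [%.2f, %.2f]%n", t[0], t[1]);
        }
    }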
Induction
M → P [f1, c1]
M → S [f2, c2]
--------------------
S → P [f, c]

f = f1
c = f2 * c1 * c2 / (f2 * c1 * c2 + 1)

Example:
swan → bird [1.00, 0.90]
swan → swimmer [1.00, 0.90]
--------------------
bird → swimmer [1.00, 0.45]
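The same kind of sketch (assumed names) for the induction truth-value function, reproducing the confidence drop from 0.90 to about 0.45:

    // Induction truth-value function from the slide: f = f1, c = w/(w+1) with w = f2*c1*c2,
    // which follows the same pattern as the confidence definition c = w/(w+1)
    public class Induction {
        static double[] induce(double f1, double c1, double f2, double c2) {
            double w = f2 * c1 * c2;
            return new double[] { f1, w / (w + 1.0) };
        }

        public static void main(String[] args) {
            // swan -> swimmer [1.00, 0.90] as M -> P, swan -> bird [1.00, 0.90] as M -> S
            double[] t = induce(1.00, 0.90, 1.00, 0.90);
            // prints about [1.00, 0.45], matching "bird -> swimmer [1.00, 0.45]"
            System.out.printf("bird -> swimmer [%.2f, %.2f]%n", t[0], t[1]);
        }
    }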
Abduction
P → M [f1, c1]
S → M [f2, c2]
--------------------
S → P [f, c]

f = f2
c = f1 * c1 * c2 / (f1 * c1 * c2 + 1)

Example:
seabird → swimmer [1.00, 0.90]
gull → swimmer [1.00, 0.90]
--------------------
gull → seabird [1.00, 0.45]
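A corresponding sketch (assumed names) for the abduction truth-value function, which mirrors induction with the roles of the premises exchanged:

    // Abduction truth-value function from the slide: f = f2, c = f1*c1*c2 / (f1*c1*c2 + 1)
    public class Abduction {
        static double[] abduce(double f1, double c1, double f2, double c2) {
            double w = f1 * c1 * c2;
            return new double[] { f2, w / (w + 1.0) };
        }

        public static void main(String[] args) {
            // seabird -> swimmer [1.00, 0.90] and gull -> swimmer [1.00, 0.90]
            double[] t = abduce(1.00, 0.90, 1.00, 0.90);
            // prints about [1.00, 0.45], matching "gull -> seabird [1.00, 0.45]"
            System.out.printf("gull -> seabird [%.2f, %.2f]%n", t[0], t[1]);
        }
    }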
Revision
S → P [f1, c1]
S → P [f2, c2]
--------------------
S → P [f, c]

f = (f1 * c1 * (1 - c2) + f2 * c2 * (1 - c1)) / (c1 * (1 - c2) + c2 * (1 - c1))
c = (c1 * (1 - c2) + c2 * (1 - c1)) / (c1 * (1 - c2) + c2 * (1 - c1) + (1 - c1) * (1 - c2))

Example:
bird → swimmer [1.00, 0.62]
bird → swimmer [0.00, 0.45]
--------------------
bird → swimmer [0.67, 0.71]
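A sketch of the revision truth-value function (assumed names), written with the weights w1 = c1*(1 - c2) and w2 = c2*(1 - c1) so the pooling of the two truth values is easier to see:

    // Revision truth-value function from the slide: the two truth values are merged,
    // each weighted by how much evidence (confidence) it carries relative to the other
    public class Revision {
        static double[] revise(double f1, double c1, double f2, double c2) {
            double w1 = c1 * (1.0 - c2);
            double w2 = c2 * (1.0 - c1);
            double f = (f1 * w1 + f2 * w2) / (w1 + w2);
            double c = (w1 + w2) / (w1 + w2 + (1.0 - c1) * (1.0 - c2));
            return new double[] { f, c };
        }

        public static void main(String[] args) {
            // bird -> swimmer [1.00, 0.62] and bird -> swimmer [0.00, 0.45]
            double[] t = revise(1.00, 0.62, 0.00, 0.45);
            // prints about [0.67, 0.71], matching the slide
            System.out.printf("bird -> swimmer [%.2f, %.2f]%n", t[0], t[1]);
        }
    }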
Other Inference Rules
analogy:
M → P [f1, c1]
S ↔ M [f2, c2]
--------------------
S → P [f, c]

union:
P → M [f1, c1]
S → M [f2, c2]
--------------------
(S ∪ P) → M [f, c]

implication:
B ⇒ C [f1, c1]
A ⇒ B [f2, c2]
--------------------
A ⇒ C [f, c]
Other Relations and Inheritance
An arbitrary statement R(a, b, c) can be rewritten
as inheritance relations with compound terms:
- (*, a, b, c) → R
  “The relation among a, b, c is a kind of R.”
- a → (/, R, _, b, c)
  “a is such an x that satisfies R(x, b, c).”
- b → (/, R, a, _, c)
  “b is such an x that satisfies R(a, x, c).”
- c → (/, R, a, b, _)
  “c is such an x that satisfies R(a, b, x).”
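As a hypothetical illustration (the relation and terms are not from the slides), a three-place statement give(John, book, Mary) would be rewritten as:
(*, John, book, Mary) → give
John → (/, give, _, book, Mary)
book → (/, give, John, _, Mary)
Mary → (/, give, John, book, _)
so that all knowledge about the relation can be handled by the same inheritance-based rules.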
Memory as a Belief Network
The knowledge of the system is a network of beliefs
among terms. A term with all of its beliefs is a concept
[Diagram: the concept Cbird, i.e. the term “bird” together with its beliefs linking it to gull, robin, crow, swan, swimmer, and feathered_creature, each with a truth value such as [1.00, 0.90]]
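A rough sketch of this organization (illustrative only; the class and field names are assumptions, not the actual NARS data structures): memory as a table of concepts, each named by a term and collecting the beliefs in which that term occurs.

    import java.util.*;

    // Illustrative sketch of memory as a belief network.
    public class MemorySketch {
        record Belief(String subject, String predicate, double f, double c) {}

        static Map<String, List<Belief>> memory = new HashMap<>();

        static void addBelief(Belief b) {
            // the belief is stored in the concepts of both of its terms
            memory.computeIfAbsent(b.subject(), k -> new ArrayList<>()).add(b);
            memory.computeIfAbsent(b.predicate(), k -> new ArrayList<>()).add(b);
        }

        public static void main(String[] args) {
            addBelief(new Belief("robin", "bird", 1.00, 0.90));
            addBelief(new Belief("bird", "swimmer", 1.00, 0.45));
            // the concept "bird" now contains every belief that mentions the term "bird"
            System.out.println(memory.get("bird"));
        }
    }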
Inference Tasks
NARS accepts several types of inference tasks:
- Knowledge to be absorbed
- Questions to be answered
- Goals to be achieved
A task is stored in the corresponding concepts
To process each task means letting it interact
with the available beliefs in the concept
This process usually generates new tasks,
beliefs, and concepts, recursively
Inference Process
NARS runs by repeating the following cycle:
1. Choose a concept within the memory
2. Choose a task within the concept
3. Choose a belief within the concept
4. Use inference rules to produce new tasks
5. Return the used items to memory
6. Add the new tasks into the memory and
provide an answer if available
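A highly simplified skeleton of this cycle (the types, names, and the placeholder "derive" step are assumptions for illustration, not the actual NARS control mechanism), with the numbered steps marked in comments:

    import java.util.*;

    // Toy skeleton of the working cycle described above.
    public class CycleSketch {
        record Item(String content, double priority) {}

        // memory: each concept name maps to its tasks and its beliefs
        static Map<String, List<Item>> tasks = new HashMap<>();
        static Map<String, List<Item>> beliefs = new HashMap<>();

        static Item pickHighest(List<Item> items) {
            // placeholder choice: the highest-priority item
            return items.stream().max(Comparator.comparingDouble(Item::priority)).orElse(null);
        }

        static void workingCycle() {
            // 1. choose a concept within the memory (placeholder: any concept that has a task)
            String concept = tasks.keySet().stream().findAny().orElse(null);
            if (concept == null) return;
            Item task = pickHighest(tasks.get(concept));                         // 2. choose a task
            Item belief = pickHighest(beliefs.getOrDefault(concept, List.of())); // 3. choose a belief
            if (task == null || belief == null) return;
            // 4. use inference rules to produce new tasks (placeholder "rule")
            Item derived = new Item("derived(" + task.content() + ", " + belief.content() + ")",
                    task.priority() * belief.priority());
            // 5. the used items simply stay in memory in this sketch
            // 6. add the new task into the memory and report it as a possible answer
            tasks.get(concept).add(derived);
            System.out.println(derived);
        }

        public static void main(String[] args) {
            tasks.put("bird", new ArrayList<>(List.of(new Item("bird -> swimmer?", 0.8))));
            beliefs.put("bird", new ArrayList<>(List.of(new Item("swan -> bird [1.00, 0.90]", 0.9))));
            workingCycle();
        }
    }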
Control Strategy
NARS maintains priority distributions among
data items, uses them to make choices, and
adjusts them after each step
Factors influencing priority:
- quality of the item
- usefulness of the item in history
- relevance of the item to the current context
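One simple way to realize such priority-driven choices, offered only as an illustration and not as the actual NARS mechanism: select items with probability proportional to their priority, and decay an item's priority after it is used so that no item monopolizes the system's resources.

    import java.util.*;

    // Illustrative priority-based selection with a decay adjustment after each choice.
    public class PrioritySketch {
        static final Random rng = new Random();

        static int pick(double[] priorities) {
            // roulette-wheel choice: probability proportional to priority
            double r = rng.nextDouble() * Arrays.stream(priorities).sum();
            for (int i = 0; i < priorities.length; i++) {
                r -= priorities[i];
                if (r <= 0) return i;
            }
            return priorities.length - 1;
        }

        public static void main(String[] args) {
            String[] items = { "belief A", "belief B", "belief C" };
            double[] priorities = { 0.9, 0.5, 0.1 };
            for (int step = 0; step < 5; step++) {
                int i = pick(priorities);
                System.out.println("chose " + items[i]);
                priorities[i] *= 0.95;   // adjust priority after use (the decay factor is arbitrary)
            }
        }
    }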
Architecture and Working Cycle
Design and Implementation
The conceptual design of NARS has been
described in a series of publications
Most parts of the design have been
implemented in several prototypes, and
the current version is open source in Java
Working examples exist as proof of concept,
but only cover single-step inference or
short inference processes
The project is ongoing, though it has
produced novel and interesting results
Unified Solutions
- The truth value uniformly represents various kinds of uncertainty
- The truth value depends on both positive and negative evidence
- The non-deductive inference rules are justified according to the semantics
- The meaning of a term is determined by its experienced relations with other terms
- With syllogistic rules, the premises and conclusions must be semantically related
- The inference processes in NARS do not follow predetermined algorithms
Conclusions
It is possible to build a reasoning system
that adapts to its environment, and
works with insufficient knowledge and
resources
Such a system provides a unified solution
to many problems in A(G)I
There is a logic of intelligence, though it
is fundamentally different from the
logic of mathematics