CS 4100 Artificial Intelligence
Prof. C. Hafner
Class Notes Jan 17, 2012
Outline
• Finish Python examples
• Finish reflex agent discussion
• Knowledge-based agent
– Requirements
• KB agent as a computer program
• Example of KB agent: the wumpus world
• Entailment and Inference
• Propositional logic (review)
• Propositional KB for the wumpus world
Vacuum-cleaner world (review)
• Percepts: location and contents, e.g., [A,Dirty]
• Actions: Left, Right, Suck, NoOp
• Production rules:
[A, Dirty] → Suck
[B, Dirty] → Suck
[A, Clean] → Right
[B, Clean] → Left
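A minimal Python sketch of this rule table (the dictionary and function name are illustrative, not from the notes):

```python
# Production rules as a lookup table: percept -> action.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def reflex_vacuum_agent(percept):
    """A reflex agent: map the current percept directly to an action."""
    return RULES[percept]

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
```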
Reflex Agent’s Abilities
• Consider the “reflex” vacuum agent
– Lacks explicit knowledge of the environment
– Lacks memory of its own past behavior
– Lacks knowledge of the effects of its actions
• The environment may behave reasonably, but the
agent does not have any machinery for
understanding that concept & making predictions
• Note: what do we mean by an environment that
behaves reasonably???
– According to OUR COMMON SENSE !!!!
“Reasonable” behavior of the environment ?
1. If dirty and action ≠ Suck, that square will be dirty
the next time it is perceived
2. If action is Suck or NoOp then the next percept is the
same location
3. If action is Left or Right then the next percept is the
expected location
(note: what if we are at B and go right??)
• What kind of agent could include this kind of
common sense knowledge and take advantage of it?
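One way to make rules 1–3 concrete is a small transition model in Python. This is a sketch under the assumption (answering the note above) that moving right from B bumps into the wall and leaves the agent where it is:

```python
LEFT_OF  = {"A": "A", "B": "A"}   # moving left from A bumps: stay at A
RIGHT_OF = {"A": "B", "B": "B"}   # moving right from B bumps: stay at B

def step(world, loc, action):
    """world maps squares to 'Dirty'/'Clean'; returns (new world, next percept)."""
    world = dict(world)
    if action == "Suck":
        world[loc] = "Clean"              # rule 2: next percept is the same location
    elif action in ("Left", "Right"):
        moves = LEFT_OF if action == "Left" else RIGHT_OF
        loc = moves[loc]                  # rule 3: next percept is the expected location
    # rule 1: any square not sucked keeps its status
    return world, (loc, world[loc])

print(step({"A": "Dirty", "B": "Clean"}, "A", "Suck"))
# ({'A': 'Clean', 'B': 'Clean'}, ('A', 'Clean'))
```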
Consider extensions of the vacuum agent
• Consider an environment with three squares: A, B, C,
and the same percepts and actions – specify the
production rules
• What about a grid-shaped environment (say, 2 by 2)?
– Would need additional actions: (Up, Down, Left, Right).
• Define a declarative representation of:
– The world
– The agent’s history
• Consider how that knowledge would be used
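For the three-square case asked about above, the rule table might look like this sketch (assuming A, B, C in a row); note that the memoryless agent must commit to a single direction at B:

```python
RULES3 = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("C", "Dirty"): "Suck",
    ("A", "Clean"): "Right",  # at the left end, head right
    ("C", "Clean"): "Left",   # at the right end, head left
    ("B", "Clean"): "Right",  # must pick ONE direction: no memory to alternate
}
```

With ("B", "Clean") mapped to "Right", the agent shuttles between B and C and never revisits A; mapping it to "Left" strands C instead. No fixed percept-to-action table can patrol all three squares, which is exactly the limitation a declarative representation of the world and the agent's history is meant to overcome.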
Evaluating An Agent’s Performance
• Consider a vacuum agent trying to be rational under a
performance measure that assigns a cost to actions
and a reward for keeping the room clean
– How should this knowledge be represented? How would it be used?
– Would it need knowledge of its own prior actions (?)
Consider a vacuum agent with parameters:
Ci = cost of each action i
P = penalty for each dirty square at each time step
Strategy to minimize total cost + total penalties
NOTE: what else would we want to know ?????
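As a sketch, the performance measure could be computed like this (the numeric costs are made up for illustration):

```python
ACTION_COST = {"Suck": 2, "Left": 1, "Right": 1, "NoOp": 0}  # the Ci, illustrative
P = 5  # penalty per dirty square per time step, illustrative

def performance(actions, dirty_counts):
    """Total action cost plus penalties; dirty_counts[t] = dirty squares at step t."""
    return sum(ACTION_COST[a] for a in actions) + P * sum(dirty_counts)

print(performance(["Suck", "Right", "Suck"], [2, 1, 0]))  # (2+1+2) + 5*(2+1+0) = 20
```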
Elements of a Knowledge-based agent
(the knowledge maintenance part)
[Diagram: World Knowledge Base --(Percept)--> Updated World Knowledge Base]
We need 3 things:
• A formal language for expressing knowledge declaratively
– “knowledge representation”
• A knowledge base design to express what is known
– “knowledge engineering” or “ontology”
• Algorithms to use and update the KB
– “automated inference”
Ontology Design (“Knowledge Engineering”)
[Diagram: two boxes]
Upper box – general world knowledge, e.g., how someone becomes POTUS
Lower box – facts that happen to be true in the current world state (dynamic),
e.g., Obama is POTUS
New percepts are added to the lower box
Example
World Knowledge Base
General knowledge includes family relations such as:
a female with the same parents is a sister
a male with the same parents is a brother
parents of your parents are your grandparents
a male child of your brother or sister is your nephew
Facts include: Sam and Mary have a male child named Max
New Percept: Sam and Mary have a female child named Sarah
Updated Knowledge
Sam and Mary have a female child named Sarah
Max has a sister named Sarah
Sarah has a brother named Max
Percept: Mary has a father named Tom -- ??
Percept: Max has a son named Simon -- ??
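A sketch of this update in Python, with facts stored as tuples (the encoding and function name are invented for illustration):

```python
# ("child", name, parents, sex) records a child; ("sister", x, y) means
# y has a sister named x, and likewise for "brother".
facts = {("child", "Max", ("Sam", "Mary"), "male")}

def add_child_percept(name, parents, sex):
    """Add a new child fact and infer sibling relations from shared parents."""
    for fact in list(facts):
        if fact[0] == "child" and fact[2] == parents and fact[1] != name:
            other, other_sex = fact[1], fact[3]
            # a female with the same parents is a sister; a male, a brother
            facts.add(("sister" if sex == "female" else "brother", name, other))
            facts.add(("sister" if other_sex == "female" else "brother", other, name))
    facts.add(("child", name, parents, sex))

add_child_percept("Sarah", ("Sam", "Mary"), "female")
print(("sister", "Sarah", "Max") in facts)   # True: Max has a sister named Sarah
print(("brother", "Max", "Sarah") in facts)  # True: Sarah has a brother named Max
```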
KB agent as a computer program
• The declarative approach: beliefs are represented as a
set of sentences in a formal logic
• A set of sentences is stored in a database (called a
knowledge base).
• The sentences are believed to be true by the agent.
Other beliefs of the agent can be inferred from these.
• Trade-off in choosing a representation language:
– Expressiveness of language v. Tractability of inference
– e.g., Horn clause sentences trade some expressiveness for tractable inference
Formal logic basis of automated reasoning
• If beliefs include both p and p → q, then the agent
can infer q. (This logical inference rule is called
modus ponens.) A generalized version of this is the
Resolution Rule.
• Example:
– T1. raining → ground is wet
– T2. ground is wet → ground is slippery
– P1. raining
Infer: ground is wet
Infer: ground is slippery
This is called “chaining” and by chaining we can produce
complex reasoning sequences
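A minimal sketch of chaining in Python, applying modus ponens to the example until nothing new can be inferred:

```python
rules = [("raining", "ground is wet"),             # T1
         ("ground is wet", "ground is slippery")]  # T2
beliefs = {"raining"}                              # P1

changed = True
while changed:
    changed = False
    for p, q in rules:
        if p in beliefs and q not in beliefs:  # modus ponens: from p and p → q, infer q
            beliefs.add(q)
            changed = True

print(beliefs)  # {'raining', 'ground is wet', 'ground is slippery'}
```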
Wumpus World PEAS description
• Performance measure
– gold +1000, death -1000
– -1 per step, -10 for using the arrow
• Environment
– Squares adjacent to wumpus are smelly
– Squares adjacent to pit are breezy
– Glitter iff gold is in the same square
– Shooting kills wumpus if you are facing it
– Shooting uses up the only arrow
– Grabbing picks up gold if in same square
– Releasing drops the gold in same square
• Actions: Left turn, Right turn, Forward, Grab, Release, Shoot
• Sensors: Stench, Breeze, Glitter, Bump, Scream
[Slides: Exploring a wumpus world, steps 1, 2, 2a, 3, 4, 4a, 5, 5a, 6 (figures only)]
What did we just do?
• Analyze and describe graphically what knowledge the
agent needs to represent and how it evolves.
• The first step in building a formal framework for this
domain.
Logic in general
• Logics are formal languages for representing
information such that conclusions can be drawn
• Syntax defines the acceptable sentences in the
language (called wffs or well formed formulas)
• Semantics define the "meaning" of sentences
– i.e., how to decide truth of a sentence in a world model
• E.g., the language of arithmetic
– x+2 > y is a sentence; x2+y > is not a sentence (syntax)
– x+2 > y is true iff the number x+2 is greater than the
number y
– x+2 > y is true in a world where x = 7, y = 1
– x+2 > y is false in a world where x = 1, y = 7
Entailment and Inference
• Entailment means that one thing follows from another:
KB ╞ α
• Knowledge base KB entails sentence α if and only if α is
true in all world models where KB is true
– E.g., the KB containing “the Giants won” and “the Reds
won” entails “The Giants won and the Reds won”
– E.g., x+y = 4 entails 4 = x+y
– Entailment is a relationship between sentences that is
based on their meaning (semantics)
Remember: KB is a database of what the agent knows
Inference
• KB ├i α = sentence α can be derived from KB by an
algorithmic procedure i. It is a relationship between
sentences based on their syntax.
• Soundness: i is sound if whenever KB ├i α, it is also true that KB╞ α
• Completeness: i is complete if whenever KB╞ α, it is also true that
KB ├i α
• Preview: later we will define a logic (first-order logic) which is
expressive enough to say many things of interest, and for which
there exists a sound and complete inference procedure.
• That is, the procedure will answer any question whose answer
follows from what is known by the KB.
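Entailment itself can be checked mechanically by enumerating models, as in this sketch (sentences are represented as Python functions from a model to True/False; all names are illustrative):

```python
from itertools import product

def entails(symbols, kb, alpha):
    """KB |= alpha iff alpha is true in every model where all KB sentences are true."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(s(model) for s in kb) and not alpha(model):
            return False
    return True

kb = [lambda m: m["GiantsWon"], lambda m: m["RedsWon"]]
alpha = lambda m: m["GiantsWon"] and m["RedsWon"]
print(entails(["GiantsWon", "RedsWon"], kb, alpha))  # True
```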
Proof Theory and Model Theory for Logic
[Diagram: symbolic expressions (sentences) are related to other sentences
by inference; facts in the world (model) are related to other facts by
entailment; sentences denote facts.]
Propositional logic: Syntax
Propositional logic is the simplest logic – illustrates
automated reasoning algorithms
– The proposition symbols P1, P2, . . . etc. are (atomic) sentences
– Compound sentences:
– If S is a sentence, ¬S is a sentence (negation)
– If S1 and S2 are sentences, S1 ∧ S2 is a sentence (conjunction)
– If S1 and S2 are sentences, S1 ∨ S2 is a sentence (disjunction)
– If S1 and S2 are sentences, S1 → S2 is a sentence (implication)
– If S1 and S2 are sentences, S1 ↔ S2 is a sentence (biconditional)
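One possible Python encoding of these wffs (atoms as strings, compound sentences as tagged tuples; this representation is an assumption for illustration, not from the notes):

```python
P, Q = "P", "Q"
negation      = ("not", P)         # ¬P
conjunction   = ("and", P, Q)      # P ∧ Q
disjunction   = ("or", P, Q)       # P ∨ Q
implication   = ("implies", P, Q)  # P → Q
biconditional = ("iff", P, Q)      # P ↔ Q
```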
Examples
• Propositions: Red, Yellow, Round, Oblong, Apple,
Banana
• Red ∧ Round
• Yellow ∧ ¬Round
• Oblong ∨ Apple
Propositional logic: Semantics
A model assigns true/false for each proposition symbol
E.g., the model m: P1,2 = false, P2,2 = true, P3,1 = false
With 3 symbols, 8 possible models can be enumerated
Rules for evaluating truth of compound sentences in model m:
¬S is true iff S is false
S1 ∧ S2 is true iff S1 is true and S2 is true
S1 ∨ S2 is true iff S1 is true or S2 is true
S1 → S2 is true iff S1 is false or S2 is true
(i.e., is false iff S1 is true and S2 is false)
S1 ↔ S2 is true iff S1 → S2 is true and S2 → S1 is true
Simple recursive process evaluates an arbitrary sentence, e.g.,
¬P1,2 ∧ (P2,2 ∨ P3,1) = true ∧ (true ∨ false) = true ∧ true = true
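The recursive process can be sketched directly over the tuple encoding from the syntax section:

```python
def truth(sentence, model):
    """Evaluate a sentence in a model, following the truth rules above."""
    if isinstance(sentence, str):      # atomic sentence: look it up
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not truth(args[0], model)
    if op == "and":
        return truth(args[0], model) and truth(args[1], model)
    if op == "or":
        return truth(args[0], model) or truth(args[1], model)
    if op == "implies":
        return (not truth(args[0], model)) or truth(args[1], model)
    if op == "iff":
        return truth(args[0], model) == truth(args[1], model)
    raise ValueError(f"unknown connective: {op}")

model = {"P12": False, "P22": True, "P31": False}
print(truth(("and", ("not", "P12"), ("or", "P22", "P31")), model))  # True
```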
Semantic Reasoning: Truth Tables
Discuss “material implication” [the → connective]
For a given set of proposition symbols P1 . . . PN,
the maximum number of models is 2^N. The number of
models of a given set of sentences is at most
2^(number of proposition symbols). (However, it may be less –
why???)
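The truth table for material implication can be generated in a couple of lines; note the one row where P → Q is false:

```python
from itertools import product

for p, q in product([True, False], repeat=2):
    print(p, q, (not p) or q)  # P → Q is false only when P is true and Q is false
```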
Logical equivalence
(can be proved using truth tables)
• Two sentences are logically equivalent iff true in the same
models: α ≡ β iff α╞ β and β╞ α
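For example, one standard equivalence (De Morgan's law, used here purely as an illustration): ¬(P ∧ Q) ≡ ¬P ∨ ¬Q can be verified by comparing truth values over all four models:

```python
from itertools import product

equivalent = all(
    (not (p and q)) == ((not p) or (not q))  # same truth value in every model
    for p, q in product([True, False], repeat=2)
)
print(equivalent)  # True: the two sentences are logically equivalent
```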