Review
ECE457 Applied Artificial Intelligence
Spring 2008
Lecture #14
What Is On The Final


Everything that has “Important!” written next to it on the slides
Everything that I said was important
What Might Be On The Final

Anything in the slides
  Except “What Is Not On The Final”
Anything in the required readings in the textbook
What Is Not On The Final

Examples of real applications
  Dune II
  Traveling-wave tube
  IBM Deep Blue
  Pathfinder network
  Weighted Naïve Bayes Classifier
  Helicopter flight control
  Fuzzy robot navigation
  Neural network pixel classifier
  WordNet
Additional material on website
Writing/debugging code
Practice Material


Examples and exercises in slides
Problems at the end of each chapter in the textbook
Material Allowed at the Exam


Pen or pencil, eraser, calculator
Not allowed:
  Books, notes
  Phones, blackberries, laptops, PDAs, iPods, iPhones, iAnything, computers built into glasses like in Mission Impossible, or anything else electronic
  Talking to other students, writing notes, sign language, smoke signals, semaphores
  Cheating in general
Summary of Course

Lecture 1: Introduction to AI
  Types of agents
  Properties of the environment
    Fully observable vs. partially observable
    Deterministic vs. stochastic vs. strategic
    Episodic vs. sequential
    Static vs. dynamic vs. semi-dynamic
    Discrete vs. continuous
    Single agent vs. cooperative vs. competitive
Lecture 1: Introduction to AI


Define the properties of the environment for these problems:
  Robot soccer
    Partially observable, stochastic, sequential, dynamic, continuous, multi-agent
  Internet shopping (without eBay-style bidding)
    Partially observable, deterministic, episodic, static, discrete, single-agent
  Autonomous Mars rover
    Partially observable, stochastic, sequential, dynamic, continuous, single-agent
  Theorem-solving assistant to a mathematician
    Fully observable, deterministic, sequential, semi-dynamic, discrete, cooperative multi-agent
Summary of Course

Lecture 2: Uninformed Search
  Well-defined problem
  Properties of search algorithms
  Uninformed search
    Breadth-first search
    Uniform-cost search
    Depth-first search
    Depth-limited search
    Iterative deepening search
  Repeated states
Lecture 2: Uninformed Search


You have a search tree with a branching factor of b and a maximum depth of m. The depth of the shallowest goal node is d. You are considering searching the tree using either a depth-first search agent or a breadth-first search agent.
Which one will have the best space complexity? Explain.
  Depth-first search has the best space complexity.
  BFS = O(b^(d+1)), DFS = O(bm)
  Depth-first search explores an entire branch and then deletes it from memory, while breadth-first search has to keep all expanded nodes in memory.
Lecture 2: Uninformed Search


You have a search tree with a branching factor of b and a maximum depth of m. The depth of the shallowest goal node is d. You are considering searching the tree using either a depth-first search agent or a breadth-first search agent.
Which one will have the best time complexity? Explain.
  Breadth-first search has the best time complexity.
  BFS = O(b^(d+1)), DFS = O(b^m)
  Breadth-first search checks level by level, and is guaranteed not to search beyond depth d. Depth-first search explores branch by branch down to the maximum depth m.
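To make the two complexity answers concrete, here is a small Python sketch (not from the slides) that searches a complete synthetic tree with both strategies and reports the peak number of nodes held in memory; the branching factor b, tree depth m and goal depth d below are arbitrary illustration values.

```python
from collections import deque

def children(node, b, max_depth):
    """Children of a node in a complete tree with branching factor b."""
    depth, index = node
    if depth >= max_depth:
        return []
    return [(depth + 1, index * b + i) for i in range(b)]

def peak_frontier(b, m, goal, depth_first):
    """Search the tree for goal; return the largest frontier seen."""
    frontier = deque([(0, 0)])          # root node
    peak = 1
    while frontier:
        node = frontier.pop() if depth_first else frontier.popleft()
        if node == goal:
            return peak
        frontier.extend(children(node, b, m))
        peak = max(peak, len(frontier))
    return peak

b, m, goal = 3, 8, (5, 0)               # goal at depth d = 5
print("BFS peak frontier:", peak_frontier(b, m, goal, depth_first=False))  # grows like b^d
print("DFS peak frontier:", peak_frontier(b, m, goal, depth_first=True))   # grows like (b-1)*m
```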
Lecture 2: Uninformed Search


A 3-foot-tall monkey is in a room where some bananas are suspended from the 8-foot-high ceiling. He would like to get the bananas as quickly as possible. The room contains two stackable, movable, climbable 3-foot-high crates.
Write this situation as a well-defined problem.
  Initial state
  Action
  Goal test
  Cost
Lecture 2: Uninformed Search

Initial state
  Monkey has no bananas, monkey is on the floor, crates are on the floor, bananas are hanging from the ceiling
Action
  Stack crates, un-stack crates, move crate, climb on crate, climb off crate, walk, grab bananas (if high enough)
Goal test
  Does the monkey have the bananas?
Cost
  Each action costs 1
  Path cost is the sum of actions
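The same formulation can be written down as a search-problem data structure. The sketch below is illustration only (it is not the slide's notation) and uses a hypothetical, simplified state encoding that omits the walking and crate-moving actions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    monkey_height: int      # 0 (floor), 3 (one crate) or 6 (stacked crates), in feet
    crates_stacked: bool    # are the two crates stacked under the bananas?
    has_bananas: bool

initial_state = State(monkey_height=0, crates_stacked=False, has_bananas=False)

def result(state, action):
    """Transition model for a few representative actions (simplified)."""
    if action == "stack crates" and state.monkey_height == 0:
        return State(0, True, state.has_bananas)
    if action == "climb on crates" and state.crates_stacked:
        return State(6, True, state.has_bananas)
    if action == "grab bananas" and state.monkey_height + 3 >= 8:
        # a 3-ft monkey standing at 6 ft reaches the 8-ft ceiling
        return State(state.monkey_height, state.crates_stacked, True)
    return state                         # action not applicable here

def goal_test(state):
    return state.has_bananas             # does the monkey have the bananas?

def step_cost(state, action):
    return 1                             # each action costs 1; path cost = sum of actions

# Example: the plan stack -> climb -> grab reaches the goal with cost 3.
s = initial_state
for a in ["stack crates", "climb on crates", "grab bananas"]:
    s = result(s, a)
print(goal_test(s))                      # True
```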
Summary of Course

Lecture 3: Informed Search
  Informed search
    Greedy best-first search
    A* search
    Heuristic functions
  Iterative improvement
    Hill Climbing
    Simulated Annealing
Lecture 3: Informed Search

Given the following tree, find the optimal path to the goal G using A* search. The value of the heuristic h is specified for each node. The costs of the edges are specified on the tree. Assume that children of a node are placed into the list in a left-to-right order, and that nodes of equal priority are extracted (for expansion) from the list in FIFO order.
  Write a number inside the node indicating the order in which the nodes are expanded from the start node S, i.e. 1, 2, …
  For each node generated, write the total cost f in the appropriate location on the graph.
Lecture 3: Informed Search

Find the optimal path to the goal G using A* search, specifying the order in which nodes are expanded and the f-value of all generated nodes.
  [The worked answer is a search-tree figure that does not survive in this transcript: each node carries its heuristic value h, each edge its cost, the expanded nodes are numbered 1 to 6 starting from S, and every generated node is labelled with its f-value; the goal G (h = 0) is the sixth node expanded, with f = 5.]
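Since the slide's tree figure is not reproduced above, here is a minimal A* sketch in Python on a small made-up graph; the edge costs and heuristic values below are hypothetical and are not the ones from the exercise.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Minimal A* sketch: f(n) = g(n) + h(n); returns (path, cost)."""
    frontier = [(h(start), 0, start, [start])]     # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, step in neighbors(node):
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical example graph (NOT the tree from the slide): edge costs and
# heuristic values are made up for illustration.
edges = {"S": [("A", 1), ("B", 2)], "A": [("G", 4)], "B": [("G", 2)], "G": []}
h_values = {"S": 3, "A": 4, "B": 2, "G": 0}

path, cost = a_star("S", "G",
                    neighbors=lambda n: edges[n],
                    h=lambda n: h_values[n])
print(path, cost)   # ['S', 'B', 'G'] with cost 4
```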
Summary of Course

Lecture 4: Constraint Satisfaction Problems
  Constraints
  Defining a CSP
  CSP search
    Backtracking search
    Conflict-directed backjumping
    Heuristics
    Forward checking
    AC-3 algorithm
Lecture 4: CSP

Using the most-constrained-variable CSP heuristic, colour the adjacent map using the colours Blue, Red and Green. Show your reasoning at each step of the algorithm. Proceed in the following manner:
  After assigning a colour to a region, and before choosing the next region to colour, apply the forward checking algorithm and show its results. Then choose the next region to colour using the most-constrained-variable heuristic, etc.
  At each step, show the domains of each region and justify the choice of the next region to colour.
  [Map figure with regions A, B, C, D, E and F omitted.]
Lecture 4: CSP



[The worked answer shows the domain of each region after each assignment step, with forward checking applied; variables marked * have been assigned. The original column layout of the snapshots does not survive in this transcript.]
Final colouring: A* = {Green}, B* = {Red}, C* = {Red}, E* = {Blue}, F* = {Green}, and D* = {Red} or D* = {Blue}
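A minimal sketch of forward checking combined with the most-constrained-variable heuristic, in Python. The adjacency used below is hypothetical (the slide's map is not reproduced), so the resulting colouring is illustration only.

```python
COLOURS = ["Red", "Green", "Blue"]
NEIGHBOURS = {                      # assumed adjacency, not the slide's map
    "A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B", "E"],
    "D": ["B", "F"], "E": ["C", "F"], "F": ["D", "E"],
}

def solve(assignment, domains):
    if len(assignment) == len(NEIGHBOURS):
        return assignment
    # Most-constrained variable: unassigned region with the smallest domain.
    var = min((v for v in NEIGHBOURS if v not in assignment),
              key=lambda v: len(domains[v]))
    for colour in domains[var]:
        # Forward checking: remove this colour from unassigned neighbours.
        pruned = {}
        for v, dom in domains.items():
            if v in NEIGHBOURS[var] and v not in assignment:
                pruned[v] = [c for c in dom if c != colour]
            else:
                pruned[v] = list(dom)
        pruned[var] = [colour]
        if all(pruned[v] for v in NEIGHBOURS):        # no domain wiped out
            result = solve({**assignment, var: colour}, pruned)
            if result:
                return result
    return None                                       # backtrack

print(solve({}, {v: list(COLOURS) for v in NEIGHBOURS}))
```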
Summary of Course

Lecture 5: Game Playing
  Payoff functions
  Minimax algorithm
  Alpha-Beta pruning
  Non-quiescent positions & horizon effect
  Expectiminimax
Lecture 5: Game Playing

Consider the following game tree. The payoff value of each leaf is written under that node.
  Apply the Minimax algorithm to obtain the value of each non-leaf node.
  Apply Alpha-Beta Pruning to the game tree. Find which nodes will be pruned. For each one, identify and explain the value of alpha and beta to show why it is pruned.
Lecture 5: Game Playing
[The worked answer is a game-tree figure that does not survive in this transcript. Reconstructed from its annotations: MAX root A has MIN children B and I; B's MAX children are C (leaves D = 4, E = 8) and F (leaves G = 9, H = -1); I's MAX children are J (leaves K = -2, L = 2) and M (leaves N = -8, O = 5).]
Minimax values: C = 8, F = 9, B = 8, J = 2, M = 5, I = 2, A = 8
H is pruned. At node F, beta is 8 (from B) and after checking G alpha is 9.
M is pruned. At node I, alpha is 8 (from A) and after checking J beta is 2.
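A minimal Python sketch of minimax with alpha-beta pruning, run on the tree as reconstructed above; it reports the same root value (8) and the same pruned nodes (H and M).

```python
TREE = {                       # node -> children; leaves map to payoffs below
    "A": ["B", "I"], "B": ["C", "F"], "I": ["J", "M"],
    "C": ["D", "E"], "F": ["G", "H"], "J": ["K", "L"], "M": ["N", "O"],
}
PAYOFF = {"D": 4, "E": 8, "G": 9, "H": -1, "K": -2, "L": 2, "N": -8, "O": 5}
pruned = []

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if node in PAYOFF:
        return PAYOFF[node]
    children = iter(TREE[node])
    value = float("-inf") if maximizing else float("inf")
    for child in children:
        v = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            value, alpha = max(value, v), max(alpha, v)
        else:
            value, beta = min(value, v), min(beta, v)
        if alpha >= beta:                 # remaining siblings are pruned
            pruned.extend(children)
            break
    return value

print(alphabeta("A", maximizing=True))    # 8
print(pruned)                             # ['H', 'M']
```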
Summary of Course

Lecture 6: Logical Agents
  Language, syntax, semantics
  Propositional logic
    Propositional symbols and logical connectives
  Inference with truth tables
  Inference with Resolution
    Conversion to CNF
  Inference with Modus Ponens
    Horn clauses
    Forward chaining
    Backward chaining
Summary of Course

Lecture 7: First-Order Logic
  First-Order Logic
    Constants, predicates, functions
    Universal and existential quantifiers
    Converting English sentences
  Inference with propositionalization
  Inference with Generalized Modus Ponens
    Unification algorithm
  Inference with Resolution
    Conversion to CNF
Lecture 7: First-Order Logic


Represent the following sentences in FOL using: Take(s,c,t), Pass(s,c,t), Score(s,c,t), Student(s), French, Greek, Spring2001
  Some students took French in spring 2001
    ∃x Student(x) ∧ Take(x,French,Spring2001)
  Every student who takes French passes it
    ∀x,y Student(x) ∧ Take(x,French,y) ⇒ Pass(x,French,y)
  Only one student took Greek in Spring 2001
    ∃x Student(x) ∧ Take(x,Greek,Spring2001) ∧ ∀y [Student(y) ∧ y≠x ⇒ ¬Take(y,Greek,Spring2001)]
  The best score in Greek is always higher than the best score in French
    ∀t ∃x ∀y Score(x,Greek,t) > Score(y,French,t)
Lecture 7: First-Order Logic







Convert this FOL sentence to Conjunctive Normal Form. Show all steps of the conversion.
  ∀x [∀y F(y) ∧ G(x,y)] ⇒ ∃y G(y,x)
  ∀x ¬[∀y F(y) ∧ G(x,y)] ∨ ∃y G(y,x)
  ∀x ∃y ¬[F(y) ∧ G(x,y)] ∨ ∃y G(y,x)
  ∀x ∃y ¬F(y) ∨ ¬G(x,y) ∨ ∃y G(y,x)
  ∀x ∃y ¬F(y) ∨ ¬G(x,y) ∨ ∃z G(z,x)
  ∀x ¬F(C1) ∨ ¬G(x,C1) ∨ G(C2,x)
  ¬F(C1) ∨ ¬G(x,C1) ∨ G(C2,x)
Lecture 7: First-Order Logic


Find the most general unifier, if it exists.
  p = F(A,B,B)
  q = F(x,y,z)
    θ = {x/A, y/B, z/B}
  p = F(y,G(A,B))
  q = F(G(x,x),y)
    Fail: x cannot be both A and B
  p = F(G(y),y)
  q = F(G(x),A)
    θ = {x/A, y/A}
  p = F(G(y),y)
  q = F(x,x)
    Fail: occurs check error
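A minimal sketch of the unification algorithm with an occurs check, for comparison with the exercise above. The term representation (tuples for compound terms, lower-case strings for variables) is an illustration choice, not the course's notation.

```python
def is_var(t):
    """Lower-case strings are variables; upper-case strings are constants."""
    return isinstance(t, str) and t[0].islower()

def occurs(v, t, s):
    """Does variable v occur in term t under substitution s?"""
    if t == v:
        return True
    if is_var(t) and t in s:
        return occurs(v, s[t], s)
    if isinstance(t, tuple):
        return any(occurs(v, arg, s) for arg in t[1:])
    return False

def unify(p, q, s=None):
    """Return the most general unifier of p and q, or None on failure."""
    s = {} if s is None else s
    if p == q:
        return s
    if is_var(p):
        if p in s:
            return unify(s[p], q, s)
        if occurs(p, q, s):
            return None                       # occurs check failure
        return {**s, p: q}
    if is_var(q):
        return unify(q, p, s)
    if isinstance(p, tuple) and isinstance(q, tuple) and len(p) == len(q) and p[0] == q[0]:
        for a, b in zip(p[1:], q[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None                               # constant/functor mismatch

print(unify(("F", "A", "B", "B"), ("F", "x", "y", "z")))   # {'x': 'A', 'y': 'B', 'z': 'B'}
print(unify(("F", ("G", "y"), "y"), ("F", "x", "x")))      # None: occurs check
```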
Lecture 7: First-Order Logic

Given the following KB:
  Faster(x,y) ∧ Faster(y,z) ⇒ Faster(x,z)
  Pig(x) ∧ Slug(y) ⇒ Faster(x,y)
  Buffalo(x) ∧ Pig(y) ⇒ Faster(x,y)
  Slug(Slimm)
  Pig(Pat)
  Buffalo(Bill)
Is Bill faster than Slimm, using forward chaining?
  Slug(Slimm), Pig(Pat), Pig(x) ∧ Slug(y) ⇒ Faster(x,y), θ = {x/Pat, y/Slimm}
    ⇒ Faster(Pat,Slimm)
  Pig(Pat), Buffalo(Bill), Buffalo(x) ∧ Pig(y) ⇒ Faster(x,y), θ = {x/Bill, y/Pat}
    ⇒ Faster(Bill,Pat)
  Faster(Pat,Slimm), Faster(Bill,Pat), Faster(x,y) ∧ Faster(y,z) ⇒ Faster(x,z), θ = {x/Bill, y/Pat, z/Slimm}
    ⇒ Faster(Bill,Slimm)
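For illustration only, a minimal forward-chaining loop over the same KB; the tuple encoding of facts and rules below is a hypothetical simplification, not the course's notation.

```python
FACTS = {("Slug", "Slimm"), ("Pig", "Pat"), ("Buffalo", "Bill")}
RULES = [  # (premises, conclusion); lower-case strings are variables
    ([("Pig", "x"), ("Slug", "y")], ("Faster", "x", "y")),
    ([("Buffalo", "x"), ("Pig", "y")], ("Faster", "x", "y")),
    ([("Faster", "x", "y"), ("Faster", "y", "z")], ("Faster", "x", "z")),
]

def match(pattern, fact, theta):
    """Extend substitution theta so that pattern matches fact, else None."""
    if pattern[0] != fact[0] or len(pattern) != len(fact):
        return None
    theta = dict(theta)
    for p, f in zip(pattern[1:], fact[1:]):
        if p.islower():                       # variable
            if theta.get(p, f) != f:
                return None
            theta[p] = f
        elif p != f:
            return None
    return theta

def all_matches(premises, facts, theta):
    """Yield every substitution that satisfies all premises against facts."""
    if not premises:
        yield theta
        return
    for fact in list(facts):
        t2 = match(premises[0], fact, theta)
        if t2 is not None:
            yield from all_matches(premises[1:], facts, t2)

def forward_chain(facts, rules):
    """Apply rules until no new facts are produced."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for theta in all_matches(premises, facts, {}):
                new = tuple(theta.get(t, t) for t in conclusion)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

print(("Faster", "Bill", "Slimm") in forward_chain(set(FACTS), RULES))   # True
```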
Lecture 7: First-Order Logic

Given the following KB:
  Slimy(x) ∧ Creepy(x) ⇒ Slug(x)
  Pig(x) ∧ Slug(y) ⇒ Faster(x,y)
  Slimy(Slimm)
  Creepy(Slimm)
  Pig(Pat)
Is Pat faster than Slimm, using backward chaining?
  Faster(Pat,Slimm)
  Pig(x) ∧ Slug(y) ⇒ Faster(x,y), θ = {x/Pat, y/Slimm}
    Pig(Pat) in KB, Slug(Slimm)
  Slimy(x) ∧ Creepy(x) ⇒ Slug(x), θ = {x/Slimm}
    Slimy(Slimm) in KB, Creepy(Slimm) in KB

Lecture 7: First-Order Logic
Given the following KB:
  Ruler(Caesar)
  Person(Marcus)
  Assasinate(Marcus, Caesar)
  Pompeian(Marcus)
  ¬Pompeian(x1) ∨ Roman(x1)
  ¬Roman(x2) ∨ Loyal(x2,Caesar) ∨ Hate(x2,Caesar)
  ¬Person(x3) ∨ ¬Ruler(x4) ∨ ¬Assasinate(x3,x4) ∨ ¬Loyal(x3,x4)
Does Marcus hate Caesar, using resolution?
  ¬Hate(Marcus,Caesar), ¬Roman(x2) ∨ Loyal(x2,Caesar) ∨ Hate(x2,Caesar), θ = {x2/Marcus}
    ⇒ ¬Roman(Marcus) ∨ Loyal(Marcus,Caesar)
  ¬Roman(Marcus) ∨ Loyal(Marcus,Caesar), ¬Pompeian(x1) ∨ Roman(x1), θ = {x1/Marcus}
    ⇒ Loyal(Marcus,Caesar) ∨ ¬Pompeian(Marcus)
  Loyal(Marcus,Caesar) ∨ ¬Pompeian(Marcus), Pompeian(Marcus), no substitution
    ⇒ Loyal(Marcus,Caesar)
  Loyal(Marcus,Caesar), ¬Person(x3) ∨ ¬Ruler(x4) ∨ ¬Assasinate(x3,x4) ∨ ¬Loyal(x3,x4), θ = {x3/Marcus, x4/Caesar}
    ⇒ ¬Person(Marcus) ∨ ¬Ruler(Caesar) ∨ ¬Assasinate(Marcus,Caesar)
  ¬Person(Marcus) ∨ ¬Ruler(Caesar) ∨ ¬Assasinate(Marcus,Caesar), Person(Marcus), no substitution
    ⇒ ¬Ruler(Caesar) ∨ ¬Assasinate(Marcus,Caesar)
  ¬Ruler(Caesar) ∨ ¬Assasinate(Marcus,Caesar), Assasinate(Marcus,Caesar), no substitution
    ⇒ ¬Ruler(Caesar)
  ¬Ruler(Caesar), Ruler(Caesar), no substitution
    ⇒ Contradiction!
Summary of Course

Lecture 8: Uncertainty
  Marginalization
  Bayes’ Theorem
  Chain rule
  Independence and conditional independence
  Naïve Bayes Classifier
Lecture 8: Uncertainty

You tested positive for a disease. The test’s results are accurate 99% of the time. However, the disease only strikes 1 out of 10000 people. What’s the probability that you have the disease?
  P(+|disease) = 0.99
  P(-|¬disease) = 0.99
  P(disease) = 0.0001
  P(disease|+) = P(+|disease)P(disease)/P(+)
  P(+) = P(+|disease)P(disease) + P(+|¬disease)P(¬disease)
  P(+) = 0.99*0.0001 + 0.01*0.9999 = 0.010098
  P(disease|+) = 0.99 * 0.0001 / 0.010098 = 0.009804
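A quick numeric check of the computation above (illustration only):

```python
p_pos_given_d, p_pos_given_not_d, p_d = 0.99, 0.01, 0.0001
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)
print(p_pos)                            # ≈ 0.010098
print(p_pos_given_d * p_d / p_pos)      # ≈ 0.0098, i.e. less than 1%
```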
Lecture 8: Uncertainty

P(S|C,T,O) ∝ P(S)*P(C|S)*P(T|S)*P(O|S)
  P(S) = 0.5
  P(C=red|S) = 0.6
  P(C=yellow|S) = 0.4
  P(T=sport|S) = 0.8
  P(T=SUV|S) = 0.2
  P(O=domestic|S) = 0.4
  P(O=imported|S) = 0.6
  P(S|red,SUV,domestic) ∝ P(S)*P(red|S)*P(SUV|S)*P(domestic|S) = 0.5*0.6*0.2*0.4 = 0.024
Training data (C = colour, T = type, O = origin, S = stolen?):

  C       T       O         S
  Red     Sports  Domestic  Stolen
  Red     Sports  Domestic  Not
  Red     Sports  Domestic  Stolen
  Yellow  Sports  Domestic  Not
  Yellow  Sports  Imported  Stolen
  Yellow  SUV     Imported  Not
  Yellow  SUV     Imported  Stolen
  Yellow  SUV     Domestic  Not
  Red     SUV     Imported  Not
  Red     Sports  Imported  Stolen
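A small sketch that recomputes the Naïve Bayes quantities above from the training table by counting; illustration only, with "Stolen" as the positive class S.

```python
DATA = [  # (colour, type, origin, stolen?)
    ("Red", "Sports", "Domestic", "Stolen"), ("Red", "Sports", "Domestic", "Not"),
    ("Red", "Sports", "Domestic", "Stolen"), ("Yellow", "Sports", "Domestic", "Not"),
    ("Yellow", "Sports", "Imported", "Stolen"), ("Yellow", "SUV", "Imported", "Not"),
    ("Yellow", "SUV", "Imported", "Stolen"), ("Yellow", "SUV", "Domestic", "Not"),
    ("Red", "SUV", "Imported", "Not"), ("Red", "Sports", "Imported", "Stolen"),
]

def p_class(s):
    """Prior P(S = s), estimated by counting."""
    return sum(1 for row in DATA if row[3] == s) / len(DATA)

def p_feature(i, value, s):
    """Conditional P(feature_i = value | S = s), estimated by counting."""
    rows = [row for row in DATA if row[3] == s]
    return sum(1 for row in rows if row[i] == value) / len(rows)

def score(colour, car_type, origin, s):
    """Unnormalized Naive Bayes score P(s)*P(C|s)*P(T|s)*P(O|s)."""
    return (p_class(s) * p_feature(0, colour, s)
            * p_feature(1, car_type, s) * p_feature(2, origin, s))

print(score("Red", "SUV", "Domestic", "Stolen"))   # ≈ 0.024
print(score("Red", "SUV", "Domestic", "Not"))      # compare the two scores to pick the class
```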
Summary of Course

Lecture 9: Probabilistic Reasoning
  Bayesian Network
  Connections and D-Separation
  Inference
Lecture 9: Probabilistic Reasoning
[Bayesian network figure omitted: nodes A, B, C, D, E, F with arcs A → C, B → C, B → D, C → E, D → F.]
Consider this Bayesian network. Write the factored expression for the joint probability distribution P(A, B, C, D, E, F) which is represented by this network.
  P(A, B, C, D, E, F) = P(E|C)P(F|D)P(D|B)P(C|A,B)P(A)P(B)
Which variables are independent (d-separated) of C if:
  B is known: D and F
  A is known: None
  D and E are both known: F
Lecture 9: Probabilistic Reasoning
[Same Bayesian network as above.]
Given the following values, what is the posterior probability of F given that B is true?
  P(D|B) = 0.8
  P(D|¬B) = 0.4
  P(F|D) = 0.75
  P(F|¬D) = 0.6
  P(F|B) = Σd P(F,d|B)
  P(F|B) = Σd P(F|d)P(d|B)
  P(F|B) = P(F|D)P(D|B) + P(F|¬D)P(¬D|B)
  P(F|B) = 0.75*0.8 + 0.6*0.2 = 0.72
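A quick numeric check of the marginalization (illustration only):

```python
p_d_given_b = {True: 0.8, False: 0.2}          # P(D|B), P(¬D|B)
p_f_given_d = {True: 0.75, False: 0.6}         # P(F|D), P(F|¬D)
p_f_given_b = sum(p_f_given_d[d] * p_d_given_b[d] for d in (True, False))
print(p_f_given_b)                             # ≈ 0.72
```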
Summary of Course

Lecture 10: Decision Making
  Maximum Expected Utility
    Utility
    Expected utility
  Decision network
  Optimal policy
    Computing the optimal policy
    Value of information
Lecture 10: Decision Making
[Decision network figure omitted: decision node A is a parent of chance node B, decision node C is a parent of chance node D, and the utility node U depends on B, C and D.]

  A | P(B)        C | P(D)
  F | 0.7         F | 0.5
  T | 0.4         T | 0.8

  B C D | U
  F F F | 0.6
  F F T | 0.2
  F T F | 1
  F T T | 0.4
  T F F | 0.8
  T F T | 0.2
  T T F | 0.7
  T T T | 0.1
Lecture 10: Decision Making

First decision: C
  Decision A has already been made, and random event B is already known
  Random event D is not known
Case #1: random event B happened (i.e. B is true)
  EU( c | B ) = Σd P(d|c)U(B,c,d)
  EU( C | B ) = P(D|C)U(B,C,D) + P(¬D|C)U(B,C,¬D)
  EU( C | B ) = 0.8 * 0.1 + 0.2 * 0.7
  EU( C | B ) = 0.22
  EU( ¬C | B ) = P(D|¬C)U(B,¬C,D) + P(¬D|¬C)U(B,¬C,¬D)
  EU( ¬C | B ) = 0.5 * 0.2 + 0.5 * 0.8
  EU( ¬C | B ) = 0.5
Rational decision, if B is true, is to not do action C
Lecture 10: Decision Making





Case #2: random event B is false
  EU( c | ¬B ) = Σd P(d|c)U(¬B,c,d)
  EU( C | ¬B ) = P(D|C)U(¬B,C,D) + P(¬D|C)U(¬B,C,¬D)
  EU( C | ¬B ) = 0.8 * 0.4 + 0.2 * 1
  EU( C | ¬B ) = 0.52
  EU( ¬C | ¬B ) = P(D|¬C)U(¬B,¬C,D) + P(¬D|¬C)U(¬B,¬C,¬D)
  EU( ¬C | ¬B ) = 0.5 * 0.2 + 0.5 * 0.6
  EU( ¬C | ¬B ) = 0.4
Rational decision, if B is false, is to do action C
Lecture 10: Decision Making

Second decision: A
  The decision of A will affect the probability of random event B being true or false
  The decision to do C given whether or not B is true has already been made
Case #1: We do action A
  EU( A ) = Σb,d P(b|A)P(d|c)U(b,c,d)
  EU( A ) = P(B|A)P(D|¬C)U(B,¬C,D) + P(B|A)P(¬D|¬C)U(B,¬C,¬D) +
            P(¬B|A)P(D|C)U(¬B,C,D) + P(¬B|A)P(¬D|C)U(¬B,C,¬D)
  EU( A ) = 0.4 * 0.5 * 0.2 + 0.4 * 0.5 * 0.8 + 0.6 * 0.8 * 0.4 + 0.6 * 0.2 * 1
  EU( A ) = 0.512
Lecture 10: Decision Making





Case #2: We don’t do action A
  EU( ¬A ) = Σb,d P(b|¬A)P(d|c)U(b,c,d)
  EU( ¬A ) = P(B|¬A)P(D|¬C)U(B,¬C,D) + P(B|¬A)P(¬D|¬C)U(B,¬C,¬D) +
             P(¬B|¬A)P(D|C)U(¬B,C,D) + P(¬B|¬A)P(¬D|C)U(¬B,C,¬D)
  EU( ¬A ) = 0.7 * 0.5 * 0.2 + 0.7 * 0.5 * 0.8 + 0.3 * 0.8 * 0.4 + 0.3 * 0.2 * 1
  EU( ¬A ) = 0.506
Rational decision is to do action A
Optimal policy
  π*A = T
  π*C(B) = F
  π*C(¬B) = T
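For illustration, the sketch below recomputes the optimal policy directly from the tables above; True/False stand for T/F, and the variable names are assumptions of this sketch rather than course notation.

```python
P_B_GIVEN_A = {True: 0.4, False: 0.7}      # P(B = T | decision A)
P_D_GIVEN_C = {True: 0.8, False: 0.5}      # P(D = T | decision C)
U = {                                      # U(B, C, D)
    (False, False, False): 0.6, (False, False, True): 0.2,
    (False, True, False): 1.0,  (False, True, True): 0.4,
    (True, False, False): 0.8,  (True, False, True): 0.2,
    (True, True, False): 0.7,   (True, True, True): 0.1,
}

def eu_c(c, b):
    """Expected utility of choosing C = c once B = b is known."""
    p_d = P_D_GIVEN_C[c]
    return p_d * U[(b, c, True)] + (1 - p_d) * U[(b, c, False)]

def eu_a(a):
    """Expected utility of choosing A = a, with C then chosen optimally."""
    p_b = P_B_GIVEN_A[a]
    return (p_b * max(eu_c(c, True) for c in (True, False))
            + (1 - p_b) * max(eu_c(c, False) for c in (True, False)))

best_c = {b: max((True, False), key=lambda c: eu_c(c, b)) for b in (True, False)}
best_a = max((True, False), key=eu_a)
print(best_a, best_c)              # A = True; C = False when B is true, C = True otherwise
print(eu_a(True), eu_a(False))     # ≈ 0.512 and ≈ 0.506
```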
Summary of Course

Lecture 11: Introduction to Learning
  For all learning algorithms
    Training data
    Objective of learning
    Evaluation
    General algorithm
  Precision and recall
  Overfitting and n-fold cross-validation
  K-Means
  Q-Learning
  Exploration function
Summary of Course

Lecture 12: Introduction to Soft Computing
  Fuzzy logic
    Fuzzy sets, fuzzy membership functions, membership degree
    Fuzzy rules
  Artificial neural networks
    Artificial neuron
    Perceptron network
  Genetic algorithms
    Individuals
    Operators: crossover, mutation, selection
    Search algorithm
Summary of Course

Lecture 13: Introduction to Ontologies
  Objects, Categories, Relations, Attributes
  Inheritance
  Problems