Introduction to AI

Artificial Intelligence and Searching
CPSC 315 – Programming Studio
Spring 2009
Project 2, Lecture 1
Adapted from slides of Yoonsuck Choe

Artificial Intelligence

Long-standing computational goal
  Turing test
  “Strong” AI – trying to simulate thought itself
  “Weak” AI – trying to make things that behave intelligently
Field of AI very diverse
  Several different approaches used, topics studied
  Sometimes grouped with other fields
    Robotics
    Computer Vision

Topics in Artificial Intelligence

Problem Solving
Reasoning
Theorem Proving
Planning
Learning
Knowledge Representation
Perception
Agent Behavior
etc.

Common theme: reason about domain knowledge

Domain Knowledge

To perform a task, systems need a representation of the domain

Symbolic
  Explicit representation of domain objects, concepts, and attributes
  E.g. rules, frames, schemas
Sub-symbolic
  Distributed representation of objects, concepts, and attributes in the world
  E.g. neural nets
There are also representations that blend these two, depending on their use

Frames & Rules

Frames
  Represent declarative and behavioral information
    Like objects in an E-R diagram or OO code
  Reasoning through inheritance of attributes and behaviors (sketched in code below)
    Single and multiple inheritance
    Class-instance and prototype inheritance
Rules
  Often of the form “If A then B” (A → B)
  Reasoning through chaining (transitivity): A → B and B → C gives A → C
  Often combined with other representations of objects
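
As a rough illustration (not from the slides), class-instance inheritance in frames can be sketched in a few lines of Python; the Bird/Penguin/Tweety frames and their slots are made up for the example:

```python
# Minimal sketch of frames with attribute inheritance.
# Frame names and slots are illustrative, not from the lecture.
frames = {
    "Bird":    {"isa": None,   "can_fly": True, "has_feathers": True},
    "Penguin": {"isa": "Bird", "can_fly": False},   # overrides the parent
    "Tweety":  {"isa": "Penguin"},                  # instance frame
}

def lookup(frame, attr):
    """Resolve an attribute by walking up the 'isa' inheritance chain."""
    while frame is not None:
        slots = frames[frame]
        if attr in slots:
            return slots[attr]
        frame = slots.get("isa")
    return None

print(lookup("Tweety", "can_fly"))       # False (inherited from Penguin)
print(lookup("Tweety", "has_feathers"))  # True  (inherited from Bird)
```
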
Planning

Actions often represented as preconditions and post-conditions

CookFood
  pre:  haveRawFood AND haveCookingDevice
  post: haveCookedFood AND NOT haveRawFood

BuyRawFood
  pre:  atGroceryStore AND money >= 5
  post: haveRawFood AND money = money – 5

IncreaseRetirement(n)
  pre:  money >= n
  post: retirement = retirement + n AND money = money – n

Assume features not mentioned in the post-condition are not modified by the action
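
A minimal sketch (my own, not the lecture's) of how such actions might be coded: each action checks its preconditions and, if they hold, returns a copy of the state with only the post-condition features changed, so unmentioned features carry over unchanged. Function and feature names loosely follow the slide.

```python
# Hedged sketch: actions as precondition checks plus post-condition updates.
def buy_raw_food(s):
    if s["atGroceryStore"] and s["money"] >= 5:
        return {**s, "haveRawFood": True, "money": s["money"] - 5}
    return None  # preconditions not satisfied

def cook_food(s):
    if s["haveRawFood"] and s["haveCookingDevice"]:
        return {**s, "haveCookedFood": True, "haveRawFood": False}
    return None

state = {"atGroceryStore": True, "money": 20, "haveCookingDevice": True,
         "haveRawFood": False, "haveCookedFood": False}
state = cook_food(buy_raw_food(state))
print(state["haveCookedFood"], state["money"])  # True 15
```
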
Planning

Forward chaining
  From current state to decision (data driven)
  Often used in open-ended domains (e.g. design) and domains where new data becomes available over time
  Identifies potential action sequences (a rough code sketch follows this list)
    A utility (“goodness”) function is used to select among possible paths (could be lowest cost in design)
Backward chaining
  From goal to actions (goal driven)
  Used in domains with a fixed number of outcomes (e.g. diagnosis)
    Hypothesis/test method identifies possible diagnoses
    Tests to discriminate between diagnoses are then identified
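
A rough sketch of forward chaining under simplifying assumptions I've made (boolean features only, actions given as invented name/precondition/post-condition triples): start from the current state, apply whatever actions are enabled, and search forward until the goal features hold.

```python
from collections import deque

# Illustrative actions; features not listed in the post-conditions are copied
# through unchanged when an action is applied.
ACTIONS = [
    ("BuyRawFood", {"atGroceryStore": True},
                   {"haveRawFood": True}),
    ("CookFood",   {"haveRawFood": True, "haveCookingDevice": True},
                   {"haveCookedFood": True, "haveRawFood": False}),
]

def forward_chain(start, goal):
    """Data-driven search: expand action sequences from the current state."""
    frontier, seen = deque([(start, [])]), set()
    while frontier:
        state, plan = frontier.popleft()
        if all(state.get(f) == v for f, v in goal.items()):
            return plan
        key = frozenset(state.items())
        if key in seen:
            continue
        seen.add(key)
        for name, pre, post in ACTIONS:
            if all(state.get(f) == v for f, v in pre.items()):
                frontier.append(({**state, **post}, plan + [name]))
    return None

start = {"atGroceryStore": True, "haveCookingDevice": True,
         "haveRawFood": False, "haveCookedFood": False}
print(forward_chain(start, {"haveCookedFood": True}))
# ['BuyRawFood', 'CookFood']
```

When several action sequences reach the goal, a utility function would choose among them; backward chaining would instead start from the goal's requirements and work back toward facts that already hold.
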
Planning

How to select among actions when more than one is available

Priority Order
  Often implemented implicitly through the order of actions in the list
  Could have priority ranks, but then again have to choose when more than one in the same rank is available
Precision of Context
  Number of preconditions often used to infer the more specialized action
  More specialized is assumed better
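
A small hedged sketch of "precision of context": among the actions whose preconditions all hold, pick the one with the most preconditions. The two greeting actions are invented for illustration.

```python
# Illustrative actions: (name, preconditions, post-conditions).
CANDIDATES = [
    ("Greet",       {"personNearby": True},                   {}),
    ("GreetByName", {"personNearby": True, "knowName": True}, {}),
]

def select_action(state, actions):
    """Prefer the applicable action with the most preconditions (most
    specialized); ties would still need a priority order, e.g. list position."""
    applicable = [a for a in actions
                  if all(state.get(f) == v for f, v in a[1].items())]
    return max(applicable, key=lambda a: len(a[1]), default=None)

print(select_action({"personNearby": True, "knowName": True}, CANDIDATES)[0])
# GreetByName: the more specialized action wins
```
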
Schemas & Case-based Reasoning

Schemas
  Represent normal sequences of actions/events
Case-based reasoning: reuse the solution from a prior case for the current context (a rough sketch follows below)
  Identifying the appropriate schema requires similarity assessment
  Revise/adapt the case to match the current context
  Perhaps save the new case as a schema for future action
Our legal system includes case-based reasoning because rule-based reasoning is fragile
  Many unanticipated exceptions
  Too many potential exceptions to be encoded
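
The retrieve/adapt/store loop might look roughly like this sketch; `similarity` and `adapt` are assumed helper functions (similarity in the spirit of the "Determining Similarity" slide below), not anything defined in the lecture.

```python
# Hedged sketch of a case-based reasoning cycle over a library of schemas.
def solve_by_cbr(problem, case_library, similarity, adapt, keep_new=True):
    # Retrieve: the most similar prior case/schema for this context.
    best = max(case_library, key=lambda c: similarity(problem, c["context"]))
    # Revise/adapt the stored solution to the current context.
    solution = adapt(best["solution"], problem)
    # Optionally retain the new case as a schema for future use.
    if keep_new:
        case_library.append({"context": problem, "solution": solution})
    return solution
```
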
Schemas

Consider when entering a new restaurant

Restaurant schema 1
  Enter restaurant
  Get in line
  Order at counter
  Pay for food
  Wait for food
  Take food to table
  Eat food
  Take trash to trashcan
  Leave restaurant

Restaurant schema 2
  Enter restaurant
  Ask for table
  Wait to be seated
  Order food from waiter
  Wait for food
  Eat food
  Get bill
  Pay bill & leave tip
  Leave restaurant

Determining Similarity

How to identify similar contexts

Similar situation
  Number of attributes in common
  Can be weighted to indicate relative importance of attributes (see the sketch after this list)
  Does it look like McDonald’s or Christopher’s?
Similar process
  Number of common actions preceding current state
  Can be weighted as a function of time to emphasize recent actions
  Did you just give your car to the valet?
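
One hedged way to code the "similar situation" measure: a weighted count of attributes with matching values. The restaurant attributes and weights below are invented for illustration.

```python
# Weighted attribute overlap; higher weight = more important attribute.
WEIGHTS = {"order_at_counter": 3.0, "has_wait_staff": 3.0,
           "has_menu_board": 1.0, "has_tablecloths": 1.0}

def situation_similarity(a, b, weights=WEIGHTS):
    """Sum the weights of attributes that have the same value in both."""
    return sum(weights.get(f, 1.0)
               for f in a if f in b and a[f] == b[f])

fast_food = {"order_at_counter": True,  "has_wait_staff": False,
             "has_menu_board": True,    "has_tablecloths": False}
sit_down  = {"order_at_counter": False, "has_wait_staff": True,
             "has_tablecloths": True}
new_place = {"order_at_counter": True,  "has_wait_staff": False,
             "has_menu_board": True}

print(situation_similarity(new_place, fast_food))  # 7.0 -> use schema 1
print(situation_similarity(new_place, sit_down))   # 0.0
```

Process similarity could be scored the same way over the sequence of recent actions, with weights that decay with time.
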
Game Playing and Search

Game playing: a long-studied topic in AI
  Seen as a proxy for how more complex reasoning can be developed
Search
  Understanding the set of possible states, and finding the “best” state or the best path to a goal state, or some path to the goal state, etc.
  “State” is the condition of the environment
    e.g. in theorem proving, can be the state of things known
      By applying known theorems, can expand the state until reaching the goal theorem
    Should be stored concisely

Really Basic State Search Example

Given a=b, b=c, c=d, prove a=d.

  Start state: {a=b, b=c, c=d}
    Apply transitivity to a=b, b=c: {a=b, b=c, c=d, a=c}
    Apply transitivity to b=c, c=d: {a=b, b=c, c=d, b=d}
      Apply transitivity to a=b, b=d: {a=b, b=c, c=d, b=d, a=d}  (goal a=d reached)
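
A small sketch of this search in code (my own illustration, not the lecture's): states are sets of known equalities, the only operator adds x=z whenever x=y and y=z are already known, and the search stops when a=d appears.

```python
from collections import deque

def prove(facts, goal):
    """Breadth-first search over states of known equalities."""
    start = frozenset(facts)
    frontier, seen = deque([start]), {start}
    while frontier:
        state = frontier.popleft()
        if goal in state:
            return state                      # goal theorem reached
        for x, y in state:                    # transitivity operator
            for y2, z in state:
                if y == y2 and (x, z) not in state:
                    nxt = state | {(x, z)}
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
    return None

facts = {("a", "b"), ("b", "c"), ("c", "d")}
print(("a", "d") in prove(facts, ("a", "d")))  # True
```
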
Operators

Transition from one state to another
  Fly from one city to another
  Apply a theorem
  Move a piece in a game
  Add person to a meeting schedule
Operators and states are both usually limited by various rules
  Can only fly certain routes
  Only valid moves in game
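
A tiny sketch of an operator whose applicability is limited by rules: you can only fly routes that exist. The airport codes and route table are made up.

```python
# Routes restrict which "fly" transitions are legal from each state (city).
ROUTES = {"CLL": ["DFW", "IAH"], "DFW": ["CLL", "ORD"], "IAH": ["DFW"]}

def successors(city):
    """States reachable from `city` by one application of the fly operator."""
    return [("fly", city, dest) for dest in ROUTES.get(city, [])]

print(successors("CLL"))  # [('fly', 'CLL', 'DFW'), ('fly', 'CLL', 'IAH')]
```
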
Search

Examine possible states and transitions to find a goal state
Interesting problems are those too large to explore exhaustively
Uninformed search
  Systematic strategy to explore options
Informed search
  Use domain knowledge to limit search
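
To contrast the two, here is a hedged sketch of an informed (greedy best-first) search: it differs from an uninformed strategy only in that the frontier is ordered by a domain heuristic rather than explored in a fixed (e.g. FIFO) order. The graph and heuristic values are invented.

```python
import heapq

def best_first(start, goal, successors, heuristic):
    """Expand the frontier state the heuristic says is closest to the goal."""
    frontier, seen = [(heuristic(start), start, [start])], set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in seen:
            continue
        seen.add(state)
        for nxt in successors(state):
            heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
    return None

GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
H = {"A": 3, "B": 1, "C": 2, "D": 0}   # made-up estimates of distance to D
print(best_first("A", "D", GRAPH.get, H.get))  # ['A', 'B', 'D']
```
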
Game Playing

Abstract AI problem
Nice and challenging properties
  Usually state can be clearly, concisely represented
  Limited number of operations (but can still be large)
  Unknown factor – account for opponent
  Search space can be huge
    Limit response based on time – forces making good “decisions”
    e.g. Chess averages about 35 possible moves per turn, about 50 moves per player per game, or 35^100 possible games. But, “only” 10^40 possible board states.

Types of games

Deterministic vs. random factor
Known state vs. hidden information

Examples:
                    Deterministic                   Chance
  Perfect Info      Chess, Checkers, Othello, Go    Monopoly, Backgammon
  Imperfect Info    Stratego, Bridge?               Poker, Scrabble, Bridge?

Game Playing

In upcoming lectures, we will discuss some of the basic methods for performing search
Project will focus on a deterministic game with perfect information