Wednesday in Week 11: my own slides


Introduction to AI
&
AI Principles (Semester 1)
WEEK 11
John Barnden
Professor of Artificial Intelligence
School of Computer Science
University of Birmingham, UK
Review of the term
Search
Evaluation forms
REVIEW
of the term
Review: Topics I Have Covered
Nature of AI: aims, applications, branches, issues.
“Intelligence” and its connection to “stupidity”.
Expert AI versus Everyday (“Common-Sense”) AI.
Why everyday AI is difficult.

Language processing, vision, planning, common-sense reasoning
 Search (and more on this today).
Review: My Topics, contd
Why planning, common-sense reasoning, language
processing, etc. may need representation.
Why natural language is problematic for this … while also
having many strengths.
What we need to represent: entities (incl. situations,
feelings, …), properties, relationships, groups, propositional
structure, generalization/quantification, …
Review: My Topics, contd
 Taster of logic.

Captures entities, properties, relations, extreme forms of
quantification, basic forms of propositional structure. Can also
handle groups of entities.

Aims of logic: clarity and simplicity compared to NL; systematic,
sound reasoning; general applicability; common format for
comparison.
Intro to semantic networks (and frames).
Production systems.
Review: Guest Lectures
Chess, Computer games
Learning, Neural networks
Evolutionary computing
Vision
Robotics, Agents
Philosophy
Review: General Themes
Uncertainty, vagueness, conflict, missing info, diversity of
info.
Hence: satisficing, graceful degradation, heuristic
processing (i.e., using rules of thumb).
Context-sensitivity; relativity to agents’ purposes.
Task variability, learning, adaptation, repair (e.g., of plans).
Representation.
Reasoning.
Search (more in a minute).
SEARCH
You should get the details from readings:
Callan book: Chapter 3
John Bullinaria’s slides (week 8)
REVIEW: Towards “Search”
In planning, you can mentally “search” through possible
states of the world you could get to, or that would be useful
to get to, by imagining doing actions.
(FORWARDS SEARCH) If I do this, then that would happen,
and then if I do this, that would come about, or if instead I did
this then that would happen, … … … … … … …
OR
(BACKWARDS SEARCH) To get such and such a (sub-)goal
state, I could perhaps do this action from such and such another
state, and to get to that state I could perhaps do so-and-so, or
alternatively I could have done such and such … … … …
REVIEW: Towards Search, contd.
What order to investigate the actions possible in or towards
any given state? Investigate all or just some? All in a bunch,
or at different points in the search?
Follow a line of investigation as far as you can, and then
hop back to a choice point if not getting anywhere?
Any limit on the number of states investigated, or on how
far you follow any given line?
How can you measure how promising a state is?
How to take care of unexpected world conditions or
changes, or unexpected effects of your own actions?
New on Search
We have encountered search in at least the following forms,
apart from in planning/navigation:

Move choice in chess

Evolutionary computing

Intersection search in semantic networks

The overall operation of production systems

Common-sense reasoning tasks such as choosing a good set of
presents to give to people.
Search, contd.
Search is a matter of investigating a set of possibilities, of
any sort, by going from one to another by means of certain
specified operations.
Potentially important across all AI areas.
The “possibilities” are usually called “states”. They
typically describe states of some world (real or artificial)
outside the search, or some objects, or designs for
something, etc. etc.
Search, contd.
 A search problem consists of the following pre-defined things:

A set of states (possibly infinitely many).

A set of possible operations (finitely many, usually) that can transform states
into other states, and a specification of their effects on states.

When operations represent things the agent could do in some environment
represented by the states: a way of determining the cost of actually applying any
particular sequence of operations starting from a particular state.
Cost usually found by adding up the costs of individual operations along the
sequence. The cost is often just the sequence length.

A particular initial state, and either a particular goal state (or listed set of
alternative goal states), or a goal condition: a specification of what a goal
state looks like.

In some cases, very many states, perhaps even all, count as goals: the question
then is how to get to a best possible goal, according to some goal evaluation
function. (This is the situation assumed by the “Hill Climbing” search strategy.)
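As an illustration, the components listed above can be sketched in Python. The domain (integers, with “add 1” and “double” as the operations, each with unit cost) and all the names are invented for this sketch, not taken from the lecture.

```python
# A minimal sketch of a search problem, with illustrative names.
# Assumed toy domain: states are integers; the operations are
# "add 1" and "double", each with a step cost of 1.

initial_state = 1          # the particular initial state

def successors(state):
    """The possible operations and their effects on a state:
    returns (operation name, resulting state, step cost) triples."""
    return [("add1", state + 1, 1), ("double", state * 2, 1)]

def is_goal(state):
    """A goal condition: a specification of what a goal state looks like."""
    return state == 13

def path_cost(steps):
    """Cost of a sequence of operations: sum of the individual step costs."""
    return sum(cost for _op, _state, cost in steps)
```

Here the cost of any solution path is just its length, since every step costs 1, matching the common case noted above.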
Search, contd.
Possible aims in a search problem:

When only a goal condition is given: discover one or more specific
goals (and perhaps a best possible goal). E.g.: a particular design
for a building.

When either a specific goal state or a goal condition is given:
discover one or more solution paths: each being a sequence of
operations going from the initial state to a goal state.
This is the more typical case. Applies to planning, for example.

In addition: usually want reasonably good solution paths in terms
of length or cost; and may even want the optimal solution path(s).
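For the “solution path” case, a breadth-first search (one of the strategies in Bullinaria’s slides) can be sketched as follows. The toy domain of integers with “add 1” and “double” operations is an invented example, not from the lecture.

```python
from collections import deque

def successors(state):
    """Assumed toy domain: states are integers; operations are
    'add 1' and 'double', each costing 1."""
    return [("add1", state + 1, 1), ("double", state * 2, 1)]

def breadth_first_search(initial, is_goal):
    """Return a shortest solution path: a sequence of operation names
    going from the initial state to a goal state (None if no goal is
    reachable). Uninformed: uses only the successor function, the
    initial state and the goal test."""
    frontier = deque([(initial, [])])   # (state, path so far)
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for op, nxt, _cost in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [op]))
    return None
```

In this toy domain, `breadth_first_search(1, lambda s: s == 13)` finds a shortest solution path of five operations (e.g. 1 → 2 → 3 → 6 → 12 → 13), illustrating the shortest-path guarantee of breadth-first search under unit step costs.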
Summary of Bullinaria’s Slides
 Mainly about different overall search strategies, concerned with what
order to look at states in, giving different “shapes” to the search
process. The slides concentrate on forwards search.
 Uninformed search strategies: the strategy does not use any
knowledge other than the successor function (the algorithm for
working out what effect an operation has on a state), the initial state,
the goals and the cost function.
 Informed search strategies: also use some heuristic algorithm that
estimates how promising an operation is at a given state, or how
promising a given state is from the point of view of reaching a goal, or
how good as a goal a state is. The algorithm typically uses additional
knowledge about the domain.
The algorithm is often in the form of a heuristic function: this
estimates the remaining least possible cost from the state to a goal.
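As a concrete example of such a heuristic function (invented here, not from the slides): in a grid world where each move costs 1, the Manhattan distance to the goal cell never overestimates the remaining cost, so it is a safe (“admissible”) estimate of the remaining least possible cost.

```python
GOAL = (4, 3)  # assumed goal cell for this sketch

def heuristic(state):
    """Estimate of the remaining least possible cost from `state`
    (an (x, y) grid cell) to the goal: the Manhattan distance.
    With unit-cost moves this never overestimates the true
    remaining cost, so it is admissible."""
    x, y = state
    gx, gy = GOAL
    return abs(x - gx) + abs(y - gy)
```

An admissible heuristic like this is what lets A* search (next slide) guarantee optimal solution paths.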
Bullinaria’s Slides, contd.
Uninformed strategies: centred on

Depth-first search

Breadth-first search (guarantees shortest solution paths)

Iterative deepening: a compromise between depth-first and breadth-first.
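To make that compromise concrete, here is a sketch of iterative deepening over an invented toy domain (integers, with “add 1” and “double” as operations): it keeps depth-first search’s low memory use while recovering breadth-first’s shortest-path guarantee when every step costs the same.

```python
def successors(state):
    """Assumed toy domain: integers; 'add 1' and 'double'."""
    return [("add1", state + 1), ("double", state * 2)]

def depth_limited(state, is_goal, limit, path):
    """Depth-first search that gives up beyond depth `limit`."""
    if is_goal(state):
        return path
    if limit == 0:
        return None
    for op, nxt in successors(state):
        found = depth_limited(nxt, is_goal, limit - 1, path + [op])
        if found is not None:
            return found
    return None

def iterative_deepening(initial, is_goal, max_depth=25):
    """Repeat depth-limited search with limits 0, 1, 2, ...; the
    first solution found is a shortest one (for unit step costs),
    since all shallower depths were exhausted at earlier limits."""
    for limit in range(max_depth + 1):
        found = depth_limited(initial, is_goal, limit, [])
        if found is not None:
            return found
    return None
```

The repeated re-exploration of shallow states looks wasteful, but in trees whose size grows exponentially with depth the final iteration dominates the total cost.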
Informed strategies:

Best-first search (uses heuristic function), & special case:
A* search (can guarantee optimal [=lowest cost] solution paths)

Hill Climbing.
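Hill climbing fits the case noted earlier where very many states count as goals and are ranked by a goal evaluation function. A minimal sketch follows; the integer domain and scoring function are invented for illustration.

```python
def hill_climb(state, neighbours, score):
    """Repeatedly move to the best-scoring neighbouring state; stop
    when no neighbour improves on the current state. Finds a local
    maximum of `score`, which need not be the global best."""
    while True:
        candidates = neighbours(state)
        best = max(candidates, key=score) if candidates else None
        if best is None or score(best) <= score(state):
            return state
        state = best

# Illustrative use: integer states, scored by closeness to 7.
result = hill_climb(0,
                    neighbours=lambda s: [s - 1, s + 1],
                    score=lambda s: -(s - 7) ** 2)
```

Because it keeps only the current state, hill climbing uses almost no memory, but on scoring functions with several peaks it can stop at a local maximum; strategies such as random restarts are the usual remedy.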