chapter 1 - Blog Bina Darma


CHAPTER 1
Introduction to
Artificial Intelligence
What is “Artificial Intelligence”?
- Problems that are easy for
humans but hard for
computers?
- A set of techniques?
(Logic, probability, utility, etc.)
- Is it science or engineering?
- Machines that think like
humans?
- Machines that act like
humans?
- Machines that act rationally?
Thinking Like Humans?
• The Cognitive Science approach:
1960s “cognitive revolution”: information-processing model
replaced prevailing orthodoxy of behaviorism
• Scientific theories of internal activities of the brain
What level of abstraction? “Knowledge” or “circuits”?
Cognitive science: Predicting and testing
behavior of human subjects (top-down)
Cognitive neuroscience: Direct identification
from neurological data (bottom-up)
Both approaches now distinct from AI
• Problems:
Very hard to evaluate model accuracy
Doesn’t necessarily lead to high-performing
systems
Acting Like Humans?
• Turing (1950), “Computing Machinery and Intelligence”
- “Can machines think?” → “Can machines behave intelligently?”
- An operational test for intelligent behavior: the Imitation Game
- Predicted that by 2000, a machine would have a 30% chance of
fooling a lay person for 5 minutes
- Suggested most of the major components of AI:
knowledge, reasoning, language understanding, learning
• Problems:
- What’s the point? Are humans really the best
standard?
- Turing test is not reproducible or amenable to
mathematical analysis
Acting Rationally?
• Rational action: doing the “right thing”
- The right thing: maximize goal achievement, given
available info
- Doesn't necessarily involve thinking
- Thinking is in the service of rational action
- Entirely dependent on goals!
- Irrational ≠ insane, irrationality is sub-optimal action
- Rational ≠ successful
• Our focus here: rational agents
- Systems which make the best possible decisions
given goals, evidence, and constraints
- In the real world, usually lots of uncertainty, lots of
complexity
- Usually, we’re just approximating rationality
• “Computational rationality” a better title for this course
Designing Rational Agents
• An agent is an entity that perceives and acts
• This course is about designing rational agents
• Abstractly, an agent is a function from percept
histories to actions: f : P* → A
• For any given class of environments and tasks,
we seek the agent (or class of agents) with the
best performance
• Computational limitations make perfect
rationality unachievable
• So we want the best program for given machine
resources
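The percept-histories-to-actions abstraction can be made concrete in a few lines. A minimal sketch, assuming a two-location vacuum world and a hand-written reflex rule (both invented here for illustration; only the f : P* → A abstraction is from the slides):

```python
# Sketch: an agent as a mapping from percept histories to actions.
# The environment, percepts, and the reflex rule are all assumptions
# for illustration; only the f: P* -> A abstraction is from the slides.

def reflex_vacuum_agent(percept_history):
    """Choose an action from the full percept history.

    Each percept is a (location, status) pair, e.g. ('A', 'Dirty').
    """
    location, status = percept_history[-1]  # this agent only uses the latest percept
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

history = [('A', 'Dirty')]
print(reflex_vacuum_agent(history))  # -> Suck
history.append(('A', 'Clean'))
print(reflex_vacuum_agent(history))  # -> Right
```

This reflex agent ignores all but the last percept; a more rational agent could exploit the whole history, which is exactly why the abstraction is defined over histories rather than single percepts.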
Adjacent Fields
• Philosophy:
Logic, methods of reasoning
Foundations of learning, language, rationality
• Mathematics
Formal representation and proof
Algorithms, computation, (un)decidability,
(in)tractability
Probability and statistics
• Psychology
Adaptation
Phenomena of perception and motor control
Experimental techniques (psychophysics, etc.)
• Economics: formal theory of rational decisions
• Linguistics: knowledge representation, grammar
• Neuroscience: physical substrate for mental activity
• Control theory: homeostatic systems, stability, simple
agent designs
A Short History
• 1940–1950: Early days
- 1943: McCulloch & Pitts: Boolean circuit model of the brain
- 1950: Turing's “Computing Machinery and Intelligence”
• 1950–70: Excitement: Look, Ma, no hands!
- 1950s: Early AI programs, including Samuel's checkers
program, Newell & Simon's Logic Theorist, Gelernter's
Geometry Engine
- 1956: Dartmouth meeting: “Artificial Intelligence” adopted
- 1965: Robinson's complete algorithm for logical reasoning
• 1970–88: Knowledge-based approaches
- 1969–79: Early development of knowledge-based systems
- 1980–88: Expert systems industry booms
- 1988–93: Expert systems industry busts: “AI Winter”
• 1988–: Statistical approaches: “AI Spring”?
- Resurgence of probability, focus on uncertainty
- General increase in technical depth
• 2000–: Where are we now?
Unintentionally Funny Stories
• One day Joe Bear was hungry. He asked his friend Irving Bird
where some honey was. Irving told him there was a beehive in
the oak tree. Joe walked to the oak tree. He ate the beehive. The
End.
• Henry Squirrel was thirsty. He walked over to the river bank
where his good friend Bill Bird was sitting. Henry slipped and
fell in the river. Gravity drowned. The End.
• Once upon a time there was a dishonest fox and a vain crow.
One day the crow was sitting in his tree, holding a piece of
cheese in his mouth. He noticed that he was holding the piece
of cheese. He became hungry, and swallowed the cheese. The
fox walked over to the crow. The End.
[Schank, TALE-SPIN System, 1984]
Natural Language
• Speech technologies
Automatic speech recognition (ASR)
Text-to-speech synthesis (TTS)
Dialog systems
• Language processing technologies
Machine translation:
“Aux dires de son président, la commission serait en
mesure de le faire.” → “According to the president, the
commission would be able to do so.”
“Il faut du sang dans les veines et du cran.” → “We must
blood in the veines and the courage.” / “There is no
backbone, and no teeth.”
Information extraction
Information retrieval, question answering
Text classification, spam filtering, etc…
Robotics
• Robotics
Part mech. eng.
Part AI
Reality much harder than
simulations!
• Technologies
Vehicles
Rescue
Soccer!
Lots of automation…
• In this class:
We ignore mechanical
aspects
Methods for planning
Methods for control
Logic
• Logical systems
- Theorem provers
- NASA fault diagnosis
- Question answering
• Methods:
- Deduction systems
- Constraint satisfaction
- Satisfiability solvers
Game Playing
• May, '97: Deep Blue vs. Kasparov
- First match won against a reigning world champion
- “Intelligent creative” play
- 200 million board positions per second!
- Humans understood 99.9% of Deep Blue's moves
- Can do the same now with a big PC cluster
• Open question:
- How does human cognition deal with the
search space explosion of chess?
- Or: how can humans compete with computers at all??
• 1996: Kasparov Beats Deep Blue
- “I could feel - I could smell - a new kind of
intelligence across the table.”
• 1997: Deep Blue Beats Kasparov
- “Deep Blue hasn't proven anything.”
Decision Making
• Many applications of AI: decision making
- Scheduling, e.g. airline routing, military
- Route planning, e.g. mapquest
- Medical diagnosis, e.g. Pathfinder
system
- Automated help desks
- Fraud detection
Defining “Rational” Action
I. ra·tion·al (răsh'ә-nәl) adj.
1. Having or exercising the ability to
reason.
2. Of sound mind; sane.
3. Consistent with or based on reason;
logical: rational
behavior. See synonyms at logical
American Heritage Dictionary
II. For each possible percept sequence, a rational agent should
select an action that is expected to maximize its performance
measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has.
Russell & Norvig, p.36
III. Given its goals and prior knowledge, a rational agent should:
1. Use the information available in new observations to update its
knowledge, and
2. Use its knowledge to act in a way that is expected to achieve its goals in
the world
Defining tasks: PEAS
• To specify a task environment, describe its PEAS:
- Performance measure
- Environment
- Actuators
- Sensors
Formalizing “Rational” Action
• Given its goals and prior knowledge, a rational agent should:
1. Use the information available in new
observations to update its knowledge, and
2. Use its knowledge to act in a way that is
expected to achieve its goals in the world
• How do we represent knowledge about the world?
- Logical knowledge bases
- Probabilistic models
- New: Hybrid models!
• How can we formally represent performance measures, or
equivalently, agent goals and preferences?
- Utility theory, loss functions
• How do we update our knowledge from our percepts?
- Logical inference, probabilistic reasoning
• How do we compute the expected performance of alternative actions?
- Probabilistic reasoning and decision theory
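The last question, computing expected performance, can be sketched with utility theory directly. The actions, outcome probabilities, and utilities below are invented for illustration:

```python
# Sketch: choosing the action with the highest expected utility.
# The actions, outcome distributions, and utilities are assumptions
# made up for this example, not from any real domain.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

actions = {
    'take_umbrella':  [(0.3, 60), (0.7, 80)],     # rain vs. no rain
    'leave_umbrella': [(0.3, -100), (0.7, 100)],  # soaked vs. unburdened
}

# EU(take) = 0.3*60 + 0.7*80 = 74; EU(leave) = 0.3*(-100) + 0.7*100 = 40
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # -> take_umbrella
```

The rational choice here is not the one with the best possible outcome, but the one with the best outcome on average, which is exactly the sense of “rational” the slides define.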
What’s Next?
• To design general rational agents, we’ll need theories of
logic, probability, and utility
- Difficult material - wait a few weeks
• Let’s start with search techniques. Why?
– An important subproblem of many AI problems:
• Searching for sequences of actions that maximize expected future
performance (planning, policy search)
• Searching in our knowledge base for the possible future
consequences of actions (logical/probabilistic inference)
• Searching for models of the world that fit our observations (machine
learning)
- Doesn’t require much background
- Search techniques were one of the successes of early
AI research
- With search, you can build a pretty good Pac-Man
agent!
Search Problem: Route Planning
Search Problem Definition
• A search problem is defined by four items:
Initial state: Arad
Successor function: S(x) = set of action–state pairs:
S(Arad) = {<Arad → Zerind, Zerind>, … }
Goal test: can be
- explicit: x = Bucharest
- implicit: Checkmate(x)
Path cost: (additive)
- e.g., sum of distances, number of actions
executed, etc.
- c(x, a, y) is the step cost, assumed to be ≥ 0
• A solution is a sequence of actions leading from the
initial state to a goal state
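The four items can be sketched as a small class. The map fragment and distances below are the familiar Romania example from the slides; the class and method names are our own convention, not any particular library's:

```python
# Sketch of the four-part search problem definition:
# initial state, successor function, goal test, and path cost.

class RouteProblem:
    def __init__(self, initial, goal, road_map):
        self.initial = initial      # initial state, e.g. 'Arad'
        self.goal = goal            # explicit goal test: x == goal
        self.road_map = road_map    # state -> {neighbor: distance}

    def successors(self, state):
        """S(x): set of (action, next-state) pairs."""
        return [(f'Go({nbr})', nbr) for nbr in self.road_map[state]]

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, next_state):
        return self.road_map[state][next_state]   # assumed >= 0

road_map = {
    'Arad':      {'Zerind': 75, 'Sibiu': 140},
    'Zerind':    {'Arad': 75},
    'Sibiu':     {'Arad': 140, 'Fagaras': 99},
    'Fagaras':   {'Sibiu': 99, 'Bucharest': 211},
    'Bucharest': {'Fagaras': 211},
}
problem = RouteProblem('Arad', 'Bucharest', road_map)
print(problem.successors('Arad'))  # [('Go(Zerind)', 'Zerind'), ('Go(Sibiu)', 'Sibiu')]
print(problem.goal_test('Bucharest'))  # True
```

A solution is then any action sequence whose states lead from `problem.initial` to a state passing `goal_test`, with total path cost equal to the sum of the step costs along the way.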
Example: Vacuum World
• Can represent problem as a state graph
- Nodes are states
- Arcs are actions
- Arc weights are costs
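As a sketch, the vacuum-world state graph can be generated from a successor function rather than stored explicitly. Here a state is a (location, dirt-in-A, dirt-in-B) triple, and the unit arc costs are an assumption:

```python
# Sketch: generating the vacuum-world state graph.
# A state is (agent_location, dirt_in_A, dirt_in_B); the nodes are
# states, the dict keys are the action arcs, and the unit costs are
# an assumption (any non-negative weights would do).

def vacuum_successors(state):
    loc, dirt_a, dirt_b = state
    succ = {}
    succ['Left']  = (('A', dirt_a, dirt_b), 1)   # move to square A
    succ['Right'] = (('B', dirt_a, dirt_b), 1)   # move to square B
    if loc == 'A':
        succ['Suck'] = (('A', False, dirt_b), 1)  # clean current square
    else:
        succ['Suck'] = (('B', dirt_a, False), 1)
    return succ

state = ('A', True, True)          # agent in A, both squares dirty
print(vacuum_successors(state)['Suck'])  # -> (('A', False, True), 1)
```

With 2 locations and 2 dirt bits there are only 8 states, so the full graph could also be written out by hand; the successor function just makes the arcs explicit.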
Example: 8-Puzzle
What are the states?
What are the actions?
What states can I reach from the start state?
What should the costs be?
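One possible answer to these questions, sketched in code: a state is a tuple of nine tiles with 0 for the blank, an action slides the blank, and a unit step cost is assumed:

```python
# Sketch answering the 8-puzzle questions: states are 9-tuples of tiles
# (0 = the blank), actions slide the blank, and each move costs 1
# (an assumption; any positive step cost would do).

MOVES = {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}

def puzzle_successors(state):
    blank = state.index(0)
    row, col = divmod(blank, 3)
    succ = []
    for action, delta in MOVES.items():
        # skip moves that would push the blank off the board
        if action == 'Up' and row == 0: continue
        if action == 'Down' and row == 2: continue
        if action == 'Left' and col == 0: continue
        if action == 'Right' and col == 2: continue
        tiles = list(state)
        swap = blank + delta
        tiles[blank], tiles[swap] = tiles[swap], tiles[blank]
        succ.append((action, tuple(tiles)))
    return succ

start = (1, 2, 3,
         4, 0, 5,
         6, 7, 8)
print(len(puzzle_successors(start)))  # -> 4 (the blank is in the center)
```

The reachable states are exactly those connected to the start by such swaps; since actions only permute tiles, half of the 9! arrangements are never reachable.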
Example: N-Queens
What are the states?
What is the start?
What is the goal?
What are the actions?
What should the costs be?
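A sketch of one common answer (the incremental formulation): a state is a tuple giving the queen's column in each filled row, the start is the empty tuple, an action safely places a queen in the next row, and costs hardly matter since any goal state is equally acceptable:

```python
# Sketch: incremental N-Queens formulation. A state is a tuple of queen
# columns for the rows filled so far; the start is (); an action places
# a queen in the next row without attacking any earlier queen.

N = 4  # board size, chosen small for illustration

def safe(state, col):
    row = len(state)
    return all(c != col and abs(c - col) != row - r
               for r, c in enumerate(state))

def successors(state):
    return [state + (col,) for col in range(N) if safe(state, col)]

def goal_test(state):
    return len(state) == N

def solve(state=()):
    """Depth-first enumeration of all solutions."""
    if goal_test(state):
        return [state]
    return [sol for nxt in successors(state) for sol in solve(nxt)]

print(solve())  # -> [(1, 3, 0, 2), (2, 0, 3, 1)]
```

Because only non-attacking placements are ever generated, the goal test reduces to checking that all N rows are filled, and the search space is far smaller than placing queens arbitrarily.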
Example: Assembly
What are the states?
What is the goal?
What are the actions?
What should the costs be?
A Search Tree
Search trees:
Represent the branching paths through a state graph.
Usually much larger than the state graph.
Can a finite state graph give an infinite search tree?
States vs. Nodes
Problem graphs have problem states
- Have successors, step costs
Search trees have search nodes
- Have parents, children, depth, path cost, etc.
- Expand uses successor function to create
new search tree nodes
- The same problem state may be in multiple
search tree nodes
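The distinction can be sketched as a small node class that wraps a problem state; the field names follow common textbook convention rather than any particular library:

```python
# Sketch: a search-tree node wraps a problem state and adds parent,
# action, depth, and path cost. Many nodes may share the same state.

class Node:
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.depth = 0 if parent is None else parent.depth + 1
        self.path_cost = (step_cost if parent is None
                          else parent.path_cost + step_cost)

    def path(self):
        """Actions leading from the root to this node."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

root = Node('Arad')
child = Node('Sibiu', parent=root, action='Go(Sibiu)', step_cost=140)
print(child.depth, child.path_cost, child.path())  # -> 1 140 ['Go(Sibiu)']
```

Expanding a node means calling the problem's successor function on `node.state` and building one new `Node` per successor; the same problem state can thus appear in many tree nodes along different paths.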
Tree Search
A Search Graph
How do we get from S to G? And what’s the smallest possible number of
transitions?
Depth First Search (DFS)
Search Algorithm Properties
• Complete? Guaranteed to find a solution if one exists?
• Optimal? Guaranteed to find the least-cost path?
• Time complexity?
• Space complexity?
Variables: b = branching factor, d = depth of the shallowest solution, m = maximum depth of the search tree
Breadth First Search (BFS)
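DFS and BFS can be sketched as one algorithm that differs only in its frontier discipline: a stack (LIFO) for DFS, a queue (FIFO) for BFS. The tiny graph below is invented for illustration; it is not the S-to-G graph from the slides:

```python
# Sketch: tree search where DFS and BFS differ only in whether the
# frontier is treated as a stack (pop the newest path) or a queue
# (pop the oldest path). The example graph is an assumption.

from collections import deque

graph = {
    'S': ['A', 'B'],
    'A': ['C'],
    'B': ['C', 'G'],
    'C': ['G'],
    'G': [],
}

def search(graph, start, goal, use_queue):
    frontier = deque([[start]])      # frontier holds whole paths
    visited = set()
    while frontier:
        path = frontier.popleft() if use_queue else frontier.pop()
        state = path[-1]
        if state == goal:
            return path
        if state in visited:         # skip states already expanded
            continue
        visited.add(state)
        for nbr in graph[state]:
            frontier.append(path + [nbr])
    return None

print(search(graph, 'S', 'G', use_queue=True))   # BFS -> ['S', 'B', 'G']
print(search(graph, 'S', 'G', use_queue=False))  # DFS -> some path to G
```

On this graph BFS returns the path with the fewest transitions, since it expands paths in order of length; DFS may return a longer path first, depending on expansion order, which is why DFS is not optimal in general.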