Chapter 1 Introduction
Introduction
• What is AI?
• The foundations of AI
• A brief history of AI
• The state of the art
What is AI? - definition
• Intelligence: “ability to learn, understand and
think” (Oxford dictionary)
• AI is the study of how to make computers do things which, at the moment, people do better.
• Examples: speech recognition, smell, face and object recognition, intuition, inference, learning new skills, decision making, abstract thinking
AI Definitions
What is AI ?
• A broad field which means different things to different
people
• Concerned with getting computers to do tasks that require
human intelligence
• There are many tasks which we might reasonably think require intelligence that computers do without even "thinking" (e.g., complex arithmetic).
• However, there are many tasks that people do without thinking which are extremely difficult to automate (e.g., recognizing a face).
What is AI?
Definitions fall into four categories:
  2. Thinking humanly    3. Thinking rationally
  1. Acting humanly      4. Acting rationally
• The top row is concerned with thought processes and reasoning.
• The bottom row addresses behavior.
Is human behavior rational?
• A system is rational if it does the right thing, given what it knows.
• We can distinguish human behavior from rational behavior: humans are not always rational, i.e., they can be irrational (emotionally driven).
• In other words, we are not perfect.
• Example: not everyone gets the same grade (an A) on the exam.
AI Definitions
What is AI ?
Definitions organized into four categories:
• Thinking humanly: "The exciting new effort to make computers think ... machines with minds, in the full and literal sense." [Haugeland, 1985]
• Thinking rationally: "The study of the computations that make it possible to perceive, reason, and act." [Winston, 1992]
• Acting humanly: "The study of how to make computers do things at which, at the moment, people are better." [Rich & Knight, 1991]
• Acting rationally: "The branch of computer science that is concerned with the automation of intelligent behavior." [Luger and Stubblefield, 1993]
1. Systems that act like humans
• The overall behavior of the system should be human-like.
• Whether this is achieved can be judged by observation.
1. Acting Humanly: The Turing Test
• Alan Turing (1912-1954)
• “Computing Machinery and Intelligence”
(1950)
• The imitation game: a human interrogator communicates with a hidden human and a hidden AI system.
Turing Test
• You enter a room which has a computer terminal.
• You have a fixed period of time to type what you want
into the terminal, and study the replies.
• At the other end of the line is either a human being or a
computer system.
• If it is a computer system, and at the end of the period
you cannot reliably determine whether it is a system or
a human, then the system is deemed to be intelligent.
Turing Test
• Setup: the tester communicates with both a person and a computer, without knowing which is which.
Recognizing AI
• The Turing test is one criterion, but it’s controversial
– It’s an imitation game, in which a human interrogator is
isolated in a room, with teletype connections to an
unseen human and an unseen computer
– The interrogator asks the same questions of the human
and the machine and tries to determine which is which
– If the interrogator can’t tell them apart, the computer is
intelligent
• Positive features of the Turing test
– It’s objective and unbiased
– It’s independent of how the machine operates
– There are no agreed upon alternative tests
Recognizing AI
• Negative features of the Turing test
– It overlooks aspects of intelligence such as perception
and mobility
– It overlooks human intelligence
– It can be gimmicky and detract from serious AI research
efforts
• How much knowledge would a machine need in order to pass the Turing test?
What sort of Functionality Is Needed?
• To act humanly, a computer would need:
  – Natural language processing
  – Knowledge representation
  – Automated reasoning
  – Machine learning
  – Computer vision
  – Robotics
Act Like Human
A computer would need:
1. Natural language processing, to communicate.
2. Knowledge representation, to store information before and during the interrogation.
3. Automated reasoning, to answer questions and draw new conclusions.
4. Machine learning, to adapt to new circumstances.
5. Computer vision, to perceive objects.
6. Robotics, to manipulate objects and move about.
Turing test
• The Turing test deliberately avoided physical interaction between the interrogator and the computer.
• The Turing test involves the first four capabilities (1-4).
• The total Turing test includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity to pass physical objects to it.
• The total Turing test therefore also involves the last two capabilities (5-6).
2. Think Like Human
The Cognitive Modeling approach
• To develop a program that thinks like a human, we must first know how humans think.
• Once we have a sufficiently precise theory of mind, we can express that theory as a computer program.
• Example: GPS (General Problem Solver) [Newell & Simon, 1961].
• Newell and Simon were less concerned with whether GPS solved problems correctly than with comparing the trace of its reasoning steps to traces of human subjects solving the same problems.
2. Thinking Humanly: Cognitive Modelling
• Not content simply to have a program solve a problem correctly.
• More concerned with comparing its reasoning steps to traces of humans solving the same problem.
Cognitive science:
• Combines computer models from AI with experimental techniques from psychology to construct precise, testable theories of the workings of the human mind.
• Requires experimental investigation of actual humans or animals.
• Note: we will not pursue theories of the human mind here, as we have only a computer for experimentation.
2. Systems that think like humans
• Most of the time the mind is a black box: we are not clear about our own thought processes.
• One has to know the functioning of the brain and its mechanisms for processing information.
• This is an area of cognitive science:
  – Stimuli are converted into mental representations.
  – Cognitive processes manipulate these representations to build new representations that are used to generate actions.
• A neural network is a computing model that processes information in a way similar to the brain; a minimal sketch of a single artificial neuron follows.
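As a minimal sketch of that computing model (my own illustration, not from the slides; the weights and threshold are made-up values), a single artificial neuron computes a weighted sum of its inputs and applies a threshold:

```python
# Illustrative sketch: a single artificial neuron, the basic unit of the
# neural-network computing model mentioned above.

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs passed through a simple threshold activation."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Example: a neuron wired (by hand) to behave like a logical AND of two inputs.
weights, bias = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, neuron([a, b], weights, bias))
```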
3. Thinking Rationally: Laws of Thought
• Aristotle was one of the first to attempt to
codify “right thinking”, i.e., irrefutable
reasoning processes.
3. Think Rationally
The Laws of Thought approach
• Aristotle and his syllogisms ("right thinking"): patterns of argument that always give correct conclusions given correct premises.
• Example:
  – Socrates is a man.              % fact
  – All men are mortal.             % rule: if X is a man, then X is mortal
  – Therefore, Socrates is mortal.  % inference
• These laws of thought initiated the field of LOGIC.
• Formal logic provides a precise notation and rules for representing and reasoning about all kinds of things in the world; the same syllogism is written formally below.
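As a small illustration (the formal notation is my addition, not printed on the slide), the syllogism above can be written in first-order logic, where the conclusion follows by universal instantiation and modus ponens:

```latex
\begin{align*}
\text{Fact:}      &\quad \mathit{Man}(\mathit{Socrates}) \\
\text{Rule:}      &\quad \forall x\,\bigl(\mathit{Man}(x) \rightarrow \mathit{Mortal}(x)\bigr) \\
\text{Inference:} &\quad \mathit{Man}(\mathit{Socrates}),\ \forall x\,\bigl(\mathit{Man}(x) \rightarrow \mathit{Mortal}(x)\bigr)
                   \ \vdash\ \mathit{Mortal}(\mathit{Socrates})
\end{align*}
```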
3. Think Rationally
• Two main obstacles:
  1. It is not easy to translate informal knowledge into formal logical notation, especially when the knowledge is less than 100% certain.
  2. There is a difference between solving a problem "in principle" and solving it "in practice": even problems with a few hundred facts can exhaust the computational power of any computer, so some guidance is needed as to which reasoning steps to try first.
• Note: both obstacles apply when building any computational reasoning system.
  – Thus the need for heuristics.
4. Acting Rationally
The Rational Agent approach
• Agent: an agent is simply something that acts.
• Computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals.
• Rational agent: one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome. A minimal sketch of this perceive-decide-act loop follows.
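To make the rational-agent abstraction concrete, here is a minimal, illustrative Python sketch of a perceive-decide-act loop (all names, the toy utility model, and the goal value are my own assumptions, not a system from the course):

```python
# Illustrative sketch of a rational agent: it perceives the environment,
# chooses the action with the best expected outcome, and acts.

import random

def perceive(environment):
    """The percept is a (noisy) reading of the current environment state."""
    return environment["state"] + random.choice([-1, 0, 1])

def expected_utility(action, percept, goal=7):
    """Toy utility: an action is predicted to shift the state by its value;
    the best expected outcome lands closest to the goal."""
    predicted_state = percept + action
    return -abs(goal - predicted_state)

def choose_action(percept, actions):
    """A rational agent selects the action with the best expected outcome."""
    return max(actions, key=lambda a: expected_utility(a, percept))

def act(environment, action):
    """Applying the action changes the environment."""
    environment["state"] += action

def run_agent(steps=5):
    environment = {"state": 0}
    actions = [-2, -1, 0, 1, 2]
    for t in range(steps):
        percept = perceive(environment)
        action = choose_action(percept, actions)
        act(environment, action)
        print(f"step={t} percept={percept} action={action} state={environment['state']}")

if __name__ == "__main__":
    run_agent()
```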
4. Acting Rationally
Laws of thought approach vs. acting rationally:
• The laws of thought approach places the emphasis on correct inference.
• Making correct inferences is part of being a rational agent.
• To act rationally = to reason logically to the conclusion that a given action will achieve one's goals, and then to act on that conclusion.
4. Acting Rationally
• Correct inference is not all of rationality.
• Example: in some situations there is no provably correct thing to do, yet something must still be done.
• Hence there are also ways of acting rationally that cannot be said to involve inference.
• Example: reflex actions (acting rationally without inference), where fast action is needed rather than careful deliberation.
4. Acting Rationally
Two main advantages of the rational-agent approach:
1. It is more general than the "laws of thought" approach: correct inference is only one of several possible mechanisms for achieving rationality.
2. It is more amenable to scientific development than approaches based on human behavior or human thought.
Rationality vs. human behavior:
• Rationality is mathematically well defined and general; many agent designs have been generated to achieve it.
• Human behavior is adapted to one specific environment and is defined by the sum total of all the things that humans do.
4. Acting Rationally
• We focus on general principles of rational agents and on components for constructing them.
• Perfect rationality: always doing the right thing. This is not feasible in complicated environments because of the high computational demands, but it is a good starting point for analysis because it simplifies the problem.
• Limited rationality: acting appropriately when there is not enough time to do all the computations. (Not in the syllabus.)
The foundations of AI
• Philosophy: knowledge representation, logic, foundations of AI (is AI possible?)
• Mathematics: search, analysis of search algorithms, logic
• Economics: expert systems, decision theory, principles of rational behavior
• Psychology: behavioristic insights into AI programs
• Neuroscience (brain science): learning, neural nets
• Control theory and cybernetics: information theory and AI, entropy, robotics
• Computer science and engineering: systems for AI
The Foundations of AI
1. Philosophy (423 BC - present):
• Can formal rules be used to draw valid conclusions?
• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?
Contributions: logic and methods of reasoning; the mind as a physical system; foundations of learning, language, and rationality.
The Foundations of AI
2. Mathematics (c.800 - present):
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?
• Three areas developed: logic, computation, and probability.
The Foundations of AI
2. Mathematics (c. 800 - present):
• A) Formal representation and proof: the development of formal logic
  – 1) Propositional (Boolean) logic
  – 2) First-order logic, developed by extending Boolean logic to include objects and relations
• B) Algorithms, computation, decidability, tractability
  – The first algorithm developed: Euclid's algorithm for computing the GCD (see the sketch following this list)
• C) Probability: Bayes' rule is the underlying approach for uncertain reasoning in AI systems (stated after the sketch).
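The slide credits Euclid's algorithm for the greatest common divisor as the first algorithm; here is a minimal Python sketch of it (my illustration, not part of the slides):

```python
# Euclid's algorithm: repeatedly replace the pair (a, b) with (b, a mod b)
# until the remainder is zero; the surviving value is the GCD.

def gcd(a: int, b: int) -> int:
    """Return the greatest common divisor of a and b."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # -> 12
```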
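For reference (the formula itself is standard background, not printed on the slide), Bayes' rule relates the probability of a hypothesis H given evidence E to the prior P(H) and the likelihood P(E | H):

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```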
The Foundations of AI
3. Economics :
• How should we make decisions so as to
maximize payoff?
• How should we do this when others may not
go along?
• How should we do this when the payoff may be far in the future?
The Foundations of AI
3. Economics contributions:
• Decision theory
• Game theory
• Operations research
• Markov decision processes
The Foundations of AI
4. Neuroscience:
• The study of the nervous system, particularly the brain.
• The exact way in which the brain enables thought is unknown.
• However, there is evidence that it does enable thought: a strong blow to the head can lead to mental incapacitation.
• Neurons: the brain consists of nerve cells, or neurons.
• There is no theory of how an individual memory is stored.
• Yet a collection of simple cells can lead to thought, action, and consciousness: "brains cause minds."
The Foundations of AI
4. Neuroscience:
• The parts of a nerve cell or neuron.
• Each neuron consists of a cell body or soma, that contains a
cell nucleus.
• Branching out from the cell body are a number of fibers called
dendrites and a single long fiber called the axon.
• A neuron makes connections with 10 to 100,000 other
neurons at junctions called synapses.
• Signals are propagated from neuron to neuron by a
complicated electrochemical reaction.
• The signals control brain activity in the short term and also
enable long-term changes in the connectivity of neurons.
• These mechanisms are thought to form the basis for learning in the brain.
The Foundations of AI
4. Neuroscience:
• Figure: the parts of a nerve cell, or neuron.
The Foundations of AI
4. Neuroscience:
• A comparison of the resources available to the IBM Blue Gene supercomputer, a PC of 2008, and the human brain:
  – Note: the brain's numbers are essentially fixed, whereas the supercomputer's numbers increase by a factor of 10 every 5 years or so.
  – The PC lags behind on all metrics except cycle time.
The Foundations of AI
4. Neuroscience:
• Brains and digital computers have somewhat different
properties.
• Computers have a cycle time that is a million times faster than the brain's.
• The brain makes up for that with far more storage and
interconnection than even a high-end personal computer
• The largest supercomputers have a capacity that is similar to
the brain's.
• Even with a computer of virtually unlimited capacity, we still
would not know how to achieve the brain's level of
intelligence
The Foundations of AI
4. Neuroscience:
• How does the brain work?
  – Early studies (1824) relied on patients with brain injuries and abnormalities to understand which parts of the brain do what.
• More recent studies use accurate sensors to correlate brain
activity to human thought
– The measurement of intact brain activity began in 1929 with the
invention by Hans Berger of the electroencephalograph (EEG).
– The recent development of functional magnetic resonance imaging
(fMRI) is giving neuroscientists detailed images of brain activity
  – Individual neurons can be stimulated electrically, chemically, or even optically, allowing neuronal input-output relationships to be mapped.
• Despite these advances, we are still a long way from understanding how cognitive processes actually work.
The Foundations of AI
5. Psychology (1879 - present):
• How do humans and animals think and act?
- Adaptation.
- Phenomena of perception and motor control.
- Experimental techniques.
The Foundations of AI
6. Computer Engineering:
• How can we build an efficient computer?
• For artificial intelligence to succeed, we need two things: intelligence and an artifact.
• The computer has been the artifact of choice.
• Each generation of computer hardware has brought an increase in speed and capacity and a decrease in price.
The Foundations of AI
6. Computer Engineering:
• In 2005, power-dissipation problems led manufacturers to multiply the number of CPU cores rather than increase the clock speed.
• Current expectations are that future increases in power will come from massive parallelism
  – a convergence with the properties of the brain.
• The software side of computer science has supplied AI with:
  – the operating systems, programming languages, and tools needed to write modern programs.
The Foundations of AI
7. Control theory and cybernetics:
• How can artifacts operate under their own control?
  – Previous assumption: only living things could modify their behavior in response to changes in the environment.
  – In fact, machines can modify their behavior in response to the environment (the sense/act loop); a minimal sketch appears after this list.
• Examples: the water-flow regulator, the steam-engine governor, the thermostat.
• The theory of stable feedback systems (1894).
• Goal: build systems that transition from an initial state to a goal state with minimum energy.
• In 1950, control theory could describe only linear systems; AI largely rose as a response to this shortcoming.
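To make the sense/act loop concrete, here is a minimal, illustrative Python sketch of the thermostat example (the temperature dynamics and thresholds are made-up assumptions, not values from the slides):

```python
# A thermostat as a feedback controller: it senses the environment and
# modifies its behavior (heater on/off) in response.

def thermostat_step(temperature: float, setpoint: float, heater_on: bool) -> bool:
    """Decide whether the heater should be on, with a small hysteresis band."""
    if temperature < setpoint - 0.5:
        return True          # too cold: turn the heater on
    if temperature > setpoint + 0.5:
        return False         # too warm: turn the heater off
    return heater_on         # inside the band: keep the current state

def simulate(steps: int = 20, setpoint: float = 21.0) -> None:
    temperature, heater_on = 18.0, False
    for t in range(steps):
        heater_on = thermostat_step(temperature, setpoint, heater_on)  # sense + decide
        temperature += 0.4 if heater_on else -0.2                      # act on the environment
        print(f"t={t:2d}  temp={temperature:5.2f}  heater={'on' if heater_on else 'off'}")

if __name__ == "__main__":
    simulate()
```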
The Foundations of AI
8. Linguistics (1957 - present):
- How does language relate to thought?
• Knowledge representation: the study of how to put
knowledge into a form that a computer can reason with
• Grammar
• Speech demonstrates human intelligence
– Analysis of human language reveals thought taking place in
ways not understood in other settings
• Language and thought are believed to be tightly
intertwined
The Foundations of AI
8. Linguistics (1957 - present):
• Modern linguistics and AI intersect in a hybrid
field called computational linguistics or
natural language processing
Summary - The Foundations of AI
• Philosophy: logic, methods of reasoning, mind as a physical system, foundations of learning, language, rationality
• Mathematics: formal representation and proof, algorithms, computation, (un)decidability, (in)tractability, probability
• Economics: utility, decision theory
• Neuroscience: the physical substrate for mental activity
• Psychology: phenomena of perception and motor control, experimental techniques
• Computer engineering: building fast computers
• Control theory: designing systems that maximize an objective function over time
• Linguistics: knowledge representation, grammar
A Brief History of AI
• The gestation of AI (1943 - 1956):
- 1943: McCulloch & Pitts: Boolean circuit model of brain.
- 1950: Turing’s “Computing Machinery and Intelligence”.
- 1956: McCarthy’s name “Artificial Intelligence” adopted.
• Why was it necessary for AI to become a separate field ?
• Why couldn't all the work done in AI have taken place under the name of
control theory or operations research or decision theory, which have
objectives similar to those of AI?
• Why isn't AI a branch of mathematics?
• The first answer is that AI from the start embraced the idea of duplicating
human faculties such as creativity, self-improvement, and language use.
• None of the other fields were addressing these issues.
• The second answer is methodology: AI is the only one of these fields that is clearly a branch of computer science (using computer simulations).
• AI is the only field to attempt to build machines that will function autonomously in complex, changing environments.
A Brief History of AI
• Early enthusiasm, great expectations (1952 - 1969):
• Early successful AI programs:
– Newell & Simon’s Logic Theorist
– Newell & Simon’s General Problem Solver (GPS) to imitate human
problem solving
• It solved puzzles by considering subgoals and possible actions in a way similar to how humans approached the same problems.
• Thus, GPS was probably the first program to embody the "thinking humanly" approach.
– Newell & Simon’s Physical symbol system hypothesis:
• which states that "a physical symbol system has the necessary and sufficient means for
general intelligent action."
• Any system (human or machine) exhibiting intelligence must operate by manipulating
data structures composed of symbols.
A Brief History of AI
• Early enthusiasm, great expectations (1952 - 1969):
• Early successful AI programs:
  – Samuel's checkers program
  – Gelernter's Geometry Theorem Prover
  – Robinson's complete algorithm for logical reasoning
  – In 1958, at the MIT AI Lab, McCarthy defined the high-level language Lisp, which became the dominant AI programming language for the next 30 years.
A Brief History of AI
• A dose of reality (1966 - 1973):
• The first kind of difficulty arose because most early programs knew
nothing of their subject matter
• They succeeded by means of simple syntactic manipulations
• Early machine translation efforts aimed to speed up the translation of Russian scientific papers in the wake of the Sputnik launch in 1957.
• It was thought initially that simple syntactic transformations based on the
grammars of Russian and English, and word replacement from an
electronic dictionary, would suffice to preserve the exact meanings of
sentences.
• The fact is that accurate translation requires background knowledge in
order to resolve ambiguity and establish the content of the sentence.
A Brief History of AI
• A dose of reality (1966 - 1973):
• The second kind of difficulty was the intractability of
many of the problems
• Most of the early AI programs solved problems by trying
out different combinations of steps until the solution was
found.
• This strategy worked initially because the problems involved few objects, and hence very few possible actions and very short solution sequences.
• Before the theory of computational complexity was
developed, it was widely thought that "scaling up" to
larger problems was simply a matter of faster hardware
and larger memories.
A Brief History of AI
• Knowledge-based systems (1969 - 1979):
• Uses more powerful, domain-specific knowledge
• Allows larger reasoning steps and
• Can more easily handle typical cases in narrow areas of expertise.
• Example 1 (1969): DENDRAL, by Buchanan et al.
  – It generated all possible structures consistent with the formula of the molecule.
  – The significance of DENDRAL was that it was the first successful knowledge-intensive system:
  – its expertise derived from large numbers of special-purpose rules.
A Brief History of AI
• Knowledge-based systems (1969 - 1979):
• Example 2 (1976): MYCIN, by Shortliffe
  – An expert system for medical diagnosis.
  – With about 450 rules, MYCIN performed better than junior doctors.
•
Two major differences from DENDRAL.
– First, unlike the DENDRAL rules, no general theoretical model existed
from which the MYCIN rules could be deduced.
– They had to be acquired from extensive interviewing of experts, who
in turn acquired them from textbooks, other experts, and direct
experience of cases.
– Second, the rules had to reflect the uncertainty associated with
medical knowledge.
– MYCIN incorporated a calculus of uncertainty called certainty factors
which seemed (at the time) to fit well with how doctors assessed the
impact of evidence on the diagnosis.
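As a small illustration of that calculus (the standard certainty-factor combination rule; the slide itself does not state a formula), two rules that both support the same hypothesis with positive certainty factors CF1 and CF2 are combined as:

```latex
% Standard certainty-factor combination for two positive factors supporting
% the same hypothesis (illustrative; not printed on the slide):
CF_{\text{combined}} = CF_1 + CF_2\,(1 - CF_1)
```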
A Brief History of AI
• AI becomes an industry (1980 - present):
- Expert systems industry booms.
- 1981: Japan’s 10-year Fifth Generation project to build intelligent
computers.
A Brief History of AI
• The return of NNs (1986 - present):
- Mid 80’s: Back-propagation learning algorithm reinvented.
- 1988: Resurgence of probability.
A Brief History of AI
• AI adopts the scientific method (1987 - present):
• In terms of methodology, AI has finally come firmly
under the scientific method.
• To be accepted, hypotheses must be subjected to
rigorous empirical experiments
– and the results must be analyzed statistically for their
importance
– It is now possible to replicate experiments by using
shared repositories of test data and code.
A Brief History of AI
• The emergence of intelligent agents (1995- present)
• One of the most important environments for
intelligent agents is the Internet.
• AI systems have become so common in Web-based
applications that the "-bot'' suffix has entered
everyday language.
• Internet tools, such as search engines, recommender
systems, and Web site aggregators use AI
technologies
A Brief History of AI
• The availability of very large data sets (2001 - present)
• In the 60-year history of computer science, the emphasis has
been on the algorithm as the main subject of study.
• But recent work in AI suggests that for many problems, it makes
more sense to worry about the data
• This is true because of the increasing availability of very large data
sources
• Examples:
• trillions of words of English, billions of images from the Web, or
billions of base pairs of genomic sequences
A Brief History of AI
• The availability of very large data sets (2001 - present)
• Example 2:
• Hays and Efros (2007) discuss the problem of filling in holes in a
photograph.
• Suppose you use Photoshop to mask out an ex-friend from a group
photo, but now you need to fill in the masked area with something
that matches the background.
• Hays and Efros defined an algorithm that searches through a
collection of photos to find something that will match.
• They found the performance of their algorithm was poor when they
used a collection of only ten thousand photos
• but excellent performance when they grew the collection to two
million photos.
A Brief History of AI
• The availability of very large data sets (2001 - present)
• Work like this suggests that the "knowledge bottleneck" in AI (the problem of how to express all the knowledge that a system needs) may be solved in many applications by learning methods rather than by hand-coded knowledge engineering.
Summary - Abridged history of AI
• 1943: McCulloch & Pitts' Boolean circuit model of the brain
• 1950: Turing's "Computing Machinery and Intelligence"
• 1956: Dartmouth meeting; the name "Artificial Intelligence" adopted
• 1952-69: "Look, Ma, no hands!" - a long list of Xs and the belief that "a machine can never do X"
• 1950s: Early AI programs, including Samuel's checkers program, Newell & Simon's Logic Theorist, and Gelernter's Geometry Engine
• 1965: Robinson's complete algorithm for logical reasoning
• 1966-73: AI discovers computational complexity; neural network research almost disappears
• 1969-79: Early development of knowledge-based systems
• 1980- : AI becomes an industry
• 1986- : Neural networks return to popularity
• 1987- : AI becomes a science
• 1995- : The emergence of intelligent agents
State of the art
• What can AI do today?
• A few examples of artificial intelligence systems that exist
today
• 1. Robotic vehicles:
• A driverless robotic car named STANLEY sped through the rough terrain of the Mojave Desert at 22 mph, finishing the 132-mile course first to win the 2005 DARPA Grand Challenge.
• STANLEY is fitted with cameras, radar, and laser range finders to sense the environment, and with on-board software to command the steering, braking, and acceleration (Thrun, 2006).
• The following year CMU's Boss won the Urban Challenge,
safely driving in traffic through the streets of a closed Air
Force base, obeying traffic rules and avoiding pedestrians and
other vehicles
State of the art
• What can AI do today?
• A few examples of artificial intelligence systems that exist
today
• 2. Speech recognition:
• A traveler calling United Airlines to book a flight
• can have the entire conversation guided by an automated
speech recognition and dialog management system.
State of the art
• What can AI do today?
• A few examples of artificial intelligence systems that exist
today
• 3. Autonomous planning and scheduling:
• A hundred million miles from Earth, NASA's Remote Agent
program became the first on-board autonomous planning
program
• to control the scheduling of operations for a spacecraft
State of the art
• What can AI do today?
• A few examples of artificial intelligence systems that exist
today
• 4. Game playing:
• IBM’s Deep Blue defeated the reigning world chess champion
Garry Kasparov in 1997
• Human champions studied Kasparov's loss and were able to draw a few matches in subsequent years, but the most recent human-computer matches have been won convincingly by the computer.
State of the art
• What can AI do today?
• 5. Spam fighting:
• Each day, learning algorithms classify over a billion messages
as spam
• saving recipients from having to waste time deleting spam, which for many users could comprise 80% or 90% of all messages if not classified away by algorithms.
• Because spammers continually update their tactics, it is difficult for a static programmed approach to keep up, and learning algorithms work best; a toy sketch of such a learned filter follows.
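As an illustrative sketch of such a learning approach (a toy naive Bayes filter over word counts, with made-up training messages; not the actual systems referred to above):

```python
# Toy learned spam filter: estimate per-label word probabilities from a few
# labeled messages, then classify new text by comparing log-probabilities.

from collections import Counter
import math

train = [
    ("win money now", "spam"),
    ("cheap meds win big", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow?", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}
totals = {"spam": 0, "ham": 0}
for text, label in train:
    for word in text.lower().split():
        counts[label][word] += 1
        totals[label] += 1

def score(text, label, alpha=1.0):
    """Log-probability of the text under the label's word distribution (with smoothing)."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    logp = math.log(0.5)  # uniform prior over the two labels
    for word in text.lower().split():
        p = (counts[label][word] + alpha) / (totals[label] + alpha * len(vocab))
        logp += math.log(p)
    return logp

def classify(text):
    return max(("spam", "ham"), key=lambda label: score(text, label))

print(classify("win cheap money"))   # expected: spam
print(classify("agenda for lunch"))  # expected: ham
```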
State of the art
• What can AI do today?
• 6. Logistics planning:
• During the 1991 Gulf War, US forces deployed an AI logistics
planning and scheduling program
• that involved up to 50,000 vehicles, cargo, and people
• The planning had to account for starting points, destinations, routes, and conflict resolution among all parameters.
• 7. Robotics:
• The iRobot Corporation has sold over two million Roomba
robotic vacuum cleaners for home use.
State of the art
• What can AI do today?
• 8. Machine Translation:
• A computer program automatically translates from Arabic to
English
• The program uses a statistical model built from examples of Arabic-to-English translations and from examples of English text totaling two trillion words (Brants et al., 2007).
• None of the computer scientists on the team spoke Arabic.
State of the art
• 9. No hands across America (driving autonomously 98% of the
time from Pittsburgh to San Diego)
• 10. Proverb solves crossword puzzles better than most
humans
State of the art
• The science, engineering, and mathematics of AI are therefore required.