INTRODUCTION TO ARTIFICIAL INTELLIGENCE
Massimo Poesio
LECTURE 1: Intro to the course, History
of AI
ARTIFICIAL INTELLIGENCE:
A DEFINITION
• The branch of Computer Science whose aim is
to develop machines able to display intelligent
behavior
• Strong AI: developing machines all but
indistinguishable from human beings
• Weak AI: developing SIMULATIONS of human
intelligence
AI BY EXAMPLE: GAMES
AI BY EXAMPLE: ROBOCUP
AI BY EXAMPLE: LANGUAGE
A BRIEF HISTORY OF AI
• Forerunners, I: logic and ontologies
• Forerunners, II: mechanical machines / robots
• The beginning of AI: Turing, Dartmouth,
Games, Search
• The role of Knowledge in Human Intelligence
• The role of Learning
• Modern AI
ARISTOTLE
• Aristotle developed the first theory of
knowledge and reasoning – his ideas
eventually evolved into modern
– LOGIC
– ONTOLOGIES
ARISTOTLE: SYLLOGISM
(Prior Analytics)
The first attempt to develop a precise method for
reasoning about knowledge: identify VALID REASONING
PATTERNS, or SYLLOGISMS
BARBARA:
A: All animals are mortal
A: All men are animals.
A: Therefore, all men are mortal.
DARII:
A: All students in Fil., Logica & Informatica take Intro to AI.
I: Some students in Filosofia take Fil., Logica & Informatica.
I: Therefore, some students in Filosofia take Intro to AI.
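What makes a pattern like BARBARA valid is that the conclusion holds for any choice of sets satisfying the premises. A minimal Python sketch (not from the slides; the sets are made up for illustration):

# A minimal sketch, assuming made-up sets: BARBARA as set inclusion.
mortals = {"socrates", "fido", "tweety", "lassie"}
animals = {"socrates", "fido", "tweety"}
men = {"socrates"}

assert animals <= mortals   # All animals are mortal
assert men <= animals       # All men are animals
assert men <= mortals       # Therefore, all men are mortal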
FIRST ONTOLOGIES
LOGIC: BEYOND ARISTOTLE
• Ramon Lull (13th Century): first mechanical
devices for automatic reasoning (Lull’s disks)
• Leibniz (17th Century): Encoding for syllogisms
• Boole (19th Century): Boolean Algebra
• Frege (1879): Predicate calculus
BOOLEAN ALGEBRA
FORERUNNERS, 2
TURING
• Alan Turing is the father of AI
• He was the first to imagine machines capable of
intelligent behavior
• And devised an intelligence test, the TURING
TEST: replace the question “Can a machine be
endowed with intelligence?” with the question:
– Can a machine display behavior human-like enough
to convince a human observer that it is a human
being?
DARTMOUTH
• In 1956 a group of researchers including J.
McCarthy, M. Minsky, C. Shannon, and N.
Rochester organized a workshop at Dartmouth
to study the possibility of developing machine
intelligence
THE BEGINNINGS OF AI (1956-1966)
• Early AI researchers identified intelligence
with the kind of behavior that would be
considered intelligent when displayed by a
human, and tried to develop programs that
reproduced that behavior
• Examples:
– Chess
– Theorem proving
HEURISTIC SEARCH
• This early research focused on the development of
SEARCH ALGORITHMS (A*) that would allow computers
to explore a huge number of alternatives very
efficiently
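As a rough illustration (not from the course materials), a minimal A* sketch in Python; the toy graph and the heuristic are assumptions made up for the example.

import heapq

def a_star(start, goal, neighbours, h):
    # neighbours(n) yields (successor, step_cost); h(n) estimates the cost from n to goal
    frontier = [(h(start), 0, start, [start])]        # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in neighbours(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None, float("inf")

# Made-up graph; with h = 0, A* reduces to uniform-cost search.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)], "D": []}
print(a_star("A", "D", lambda n: graph[n], lambda n: 0))   # (['A', 'B', 'C', 'D'], 3)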
THE SUCCESS OF EARLY AI
In 1997, the chess-playing program DEEP BLUE, developed by IBM
researchers led by Feng-hsiung Hsu, beat the world chess
champion Garry Kasparov in a six-game match
EARLY AI RUNS INTO TROUBLE (1966-1973)
• Soon, however, researchers realized that these
methods could not be applied to all problems
requiring intelligence, and that there were a
number of ‘simple’ problems that could not be
handled with these methods at all
– Example of the first: machine translation (the
ALPAC report)
– Example of the second: natural language, vision
COMMONSENSE KNOWLEDGE IN
LANGUAGE UNDERSTANDING
• Winograd (1974):
– The city council refused the women a
permit because they feared violence.
– The city council refused the women a
permit because they advocated violence.
AI KEY DISCOVERIES, 1
• Performing even apparently simple tasks like
understanding natural language requires lots
of knowledge and reasoning
THE ‘KNOWLEDGE YEARS’ (1969-1985)
• Development of knowledge representation
techniques
• Development of EXPERT SYSTEMS
• Development of knowledge-based techniques
for
– Natural Language Understanding
– Vision
KNOWLEDGE REPRESENTATION
METHODS
• Logic is the oldest formalization of reasoning
• It was natural to think of logic as providing the
tools to develop theories of knowledge and its
use in natural language comprehension and
other tasks
• Great success in developing theorem provers
• But AI researchers quickly realized that the
kind of reasoning required was not just valid deduction
FROM LOGIC TO AUTOMATED REASONING
• Starting in the 1950s, AI researchers
developed techniques for automatic theorem
proving
• These techniques are still being developed
and have been used to prove non-trivial
theorems
RESOLUTION THEOREM PROVING
All Cretans are islanders.
All islanders are liars.
Therefore all Cretans are liars.
∀X C(X) → I(X)
∀X I(X) → L(X)
Therefore, ∀X C(X) → L(X)
In clause form:
¬C(X) ∨ I(X)
¬I(Y) ∨ L(Y)
Resolving on I (unifying X with Y): ¬C(X) ∨ L(X)
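As a toy illustration (not from the course materials), a single resolution step in Python; the clauses are instantiated for one made-up individual so that no unification machinery is needed.

# A minimal sketch: a clause is a frozenset of (sign, atom) literals.
def resolve(c1, c2):
    # Produce a resolvent for every complementary pair of literals.
    out = []
    for sign, atom in c1:
        if (not sign, atom) in c2:
            out.append((c1 - {(sign, atom)}) | (c2 - {(not sign, atom)}))
    return out

c1 = frozenset({(False, "Cretan(e)"), (True, "Islander(e)")})   # ¬C(e) ∨ I(e)
c2 = frozenset({(False, "Islander(e)"), (True, "Liar(e)")})     # ¬I(e) ∨ L(e)
print(resolve(c1, c2))   # one resolvent: {¬Cretan(e), Liar(e)}, i.e. ¬C(e) ∨ L(e)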
HIGH PERFORMANCE THEOREM
PROVING
• There are now a number of very efficient
theorem provers that can be used to prove
sophisticated mathematical theorems
– Otter
– Donner & Blitzen
THE FOUR-COLOR PROBLEM
• Conjecture: any map of regions in the plane can be
colored using no more than 4 colors in such a way
that two adjacent regions always get different colors
• The conjecture was proved with essential computer
assistance: first by Appel and Haken in 1976, and
again in 1997 with a simpler, computer-checked proof
AI KEY DISCOVERIES, 2
• Neither commonsense nor ‘expert’ reasoning
involves only valid inferences from certain
premises:
– Commonsense reasoning often involves jumping
to plausible conclusions
– Expert reasoning involves making decisions under
uncertainty
COMMONSENSE KNOWLEDGE IN
LANGUAGE UNDERSTANDING
• Winograd (1974):
– The city council refused the women a
permit because they feared violence.
– The city council refused the women a
permit because they advocated violence.
DIAGNOSTIC REASONING IN EXPERT
SYSTEMS
IF patient has loss of weight AND
anorexia AND
spleen is palpable AND
fever
THEN
patient MAY have Hodgkin’s disease
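As a rough sketch (not MYCIN or any real system), the rule above can be represented as data and applied by naive forward chaining; the rule set and findings are assumptions made up for illustration.

# A minimal sketch: each rule is (set of conditions, conclusion).
RULES = [
    ({"loss of weight", "anorexia", "palpable spleen", "fever"},
     "patient MAY have Hodgkin's disease"),
]

def diagnose(findings):
    # Fire every rule whose conditions are all among the observed findings.
    return [conclusion for conditions, conclusion in RULES if conditions <= findings]

print(diagnose({"loss of weight", "anorexia", "palpable spleen", "fever"}))

A real expert system such as MYCIN also attaches certainty factors to rules and conclusions, which this sketch omits.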
BEYOND VALID INFERENCE
• As a result of discovery number 2, AI
researchers quickly turned to
developing better theories of commonsense
reasoning and uncertain reasoning
– Drawing inspiration from psychology (work on
semantic networks, scripts)
– Developing new inference formalisms
SOME RESULTS FROM COGNITIVE PSYCHOLOGY:
SEMANTIC NETWORKS
• Collins & Quillian, 1969: knowledge appears to be
organized around objects and in a taxonomic way
– A canary is yellow
– A canary has feathers
– A canary eats food
• Haviland & Clark 1974, Sanford & Garrod 1979:
‘associated’ knowledge becomes available when concepts
are mentioned
– I looked around the house.
– The lounge was very spacious.
AN EXAMPLE OF SEMANTIC NETWORK
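The slide presumably shows a diagram; as a stand-in, here is a minimal Python sketch (not the network on the slide, details made up) of a Collins & Quillian style taxonomy in which properties are inherited along ISA links.

# A minimal sketch: a small taxonomy with properties attached to nodes.
ISA = {"canary": "bird", "bird": "animal"}
PROPERTIES = {
    "canary": {"is yellow", "can sing"},
    "bird":   {"has feathers", "can fly"},
    "animal": {"eats food", "breathes"},
}

def all_properties(concept):
    # Walk up the ISA chain, collecting inherited properties along the way.
    props = set()
    while concept is not None:
        props |= PROPERTIES.get(concept, set())
        concept = ISA.get(concept)
    return props

print(all_properties("canary"))   # includes 'has feathers' and 'eats food' by inheritance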
COMMONSENSE REASONING: SCRIPTS
(SCHANK & ABELSON, 1977)
John entered the restaurant.
The menu had changed, so he ordered one
of the new dishes.
The bill was very steep.
AN EXAMPLE OF SCRIPT
‘KNOWLEDGE-BASED’ SYSTEMS FOR
LANGUAGE UNDERSTANDING: SHRDLU
Winograd, 1974
A DIALOGUE WITH SHRDLU
Person: PICK UP A BIG RED BLOCK.
Computer: OK.
Person: GRASP THE PYRAMID.
Computer: I DON'T UNDERSTAND WHICH PYRAMID YOU MEAN.
Person (changing their mind): FIND A BLOCK WHICH IS TALLER THAN THE ONE YOU
ARE HOLDING AND PUT IT INTO THE BOX.
Computer: BY "IT", I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE
ONE I AM HOLDING.
Computer: OK.
Person: WHAT DOES THE BOX CONTAIN?
Computer: THE BLUE PYRAMID AND THE BLUE BLOCK.
Person: WHAT IS THE PYRAMID SUPPORTED BY?
Computer: THE BOX.
Person: HOW MANY BLOCKS ARE NOT IN THE BOX?
Computer: FOUR OF THEM.
Person: IS AT LEAST ONE OF THEM NARROWER THAN THE ONE WHICH I TOLD YOU
TO PICK UP?
Computer: YES, THE RED CUBE.
EXPERT SYSTEMS
• Beginning in the 1970s, AI researchers
developed systems that could be used to
diagnose
– Illnesses (MYCIN, DXPLAIN)
– Mechanical problems
• DELTA-CATS1, General Electric, Bonissone et al 1984
• IDEA, Centro Ricerche Fiat, ~1993
– etc
EXPERT SYSTEMS
A COMMONSENSE ENCYCLOPEDIA:
CYC
• A project initiated in 1984 by Doug Lenat. The goal: to
encode all of commonsense knowledge
• Changed the underlying formalism several times.
– These days: a logic-based representation
• Two versions available:
– OpenCyc (http://www.opencyc.org/)
• 50 000 concepts, 300 000 facts
• Can be downloaded / on the Web
– ResearchCyc (http://research.cyc.com/)
• 300 000 concepts, 3 million facts
KNOWLEDGE IN CYC
"Bill Clinton belongs to the class of US Presidents“
(#$isa #$BillClinton #$UnitedStatesPresident)
“All trees are plants”
(#$genls #$Tree-ThePlant #$Plant)
"Paris is the capital of France"
(#$capitalCity #$France #$Paris)
COMMONSENSE REASONING
• Modelling commonsense inference required
the development of entirely new paradigms
for inference beyond classical logic
• Non-monotonic reasoning
• Probabilistic models
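As a toy illustration (not from the course materials) of what "non-monotonic" means: a default conclusion is drawn and later withdrawn when new information arrives, something classical deduction never does. The birds/penguin example below is an assumption made up for illustration.

# A minimal sketch: "birds normally fly", retracted when an exception is learned.
def flies(name, facts, exceptions):
    return "bird" in facts.get(name, set()) and name not in exceptions

facts = {"tweety": {"bird"}}
print(flies("tweety", facts, exceptions=set()))        # True: the plausible default conclusion
print(flies("tweety", facts, exceptions={"tweety"}))   # False: withdrawn once we learn tweety is a penguin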
AI RUNS INTO TROUBLE, AGAIN
• The CYC project started in 1984, and by common
consensus is nowhere near finished
– Hand-coding all commonsense FACTS is infeasible
– (We will get back to this point later when talking
about socially constructed knowledge)
• Work on lower-level tasks such as speech
perception revealed the impossibility of hand-coding
commonsense RULES and assigning them priorities
SPEECH
KEY AI DISCOVERIES, 3
• A theory of intelligence requires a theory of
how commonsense knowledge and cognitive
abilities are LEARNED
THE MACHINE LEARNING YEARS
(1985-PRESENT)
• The development of methods for learning
from evidence started even before Dartmouth
• But machine learning has now taken center
stage in AI
CYBERNETICS
• McCulloch, Pitts (1943): first model of artificial
neurons (based on studies of real neurons)
KNOWLEDGE REPRESENTATION IN THE BRAIN
MODELS OF LEARNING BASED ON THE
BRAIN: THE PERCEPTRON
LEARNING TO CLASSIFY OBJECTS
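These slides presumably carry figures; as a stand-in, here is a minimal Python sketch (not from the course materials) of the classic perceptron learning rule on a made-up toy task, learning the logical AND of two inputs.

# A minimal sketch: the perceptron learning rule on a linearly separable toy problem.
def train_perceptron(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                # 0 when correct, +1/-1 when wrong
            w[0] += lr * err * x1             # nudge the weights toward the correct answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # the AND function
print(train_perceptron(data))   # weights and bias that separate the two classes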
ARTIFICIAL INTELLIGENCE TODAY
• Artificial intelligence as a science:
– Artificial intelligence vs. Cognitive Science
• Artificial Intelligence as a technology
AI INDUSTRY: GOOGLE
AI INDUSTRY: ROBOTICS
CONTENTS OF THE COURSE
• Knowledge Representation
– A reminder about logic
– Ontologies
– Semantic networks
• Machine Learning
– A reminder about statistics
– Supervised learning
– Unsupervised learning
• Putting it all together: Natural Language
– A task that requires using both knowledge
representation and machine learning
PRACTICAL INFORMATION
• 60 hours / 12 credits
• Timetable:
– Mondays, 10-12 and 16-18 (lectures)
– Tuesdays, 12-14 (labs / tutorials)
• Prerequisites
– The ideal student would have the background provided by
the three-year course in Filosofia and Informatica (some
background in linguistics, logic, and statistics; some
experience with programming)
• Evaluation
– A project to be presented at the exam
• Web site: http://clic.cimec.unitn.it/massimo/Teach/AI
READING MATERIAL
• Required:
– The course slides, available from the Web Site
– Other material downloadable from the Website
• Recommended readings:
– Russell and Norvig, Artificial Intelligence: A Modern Approach (2nd ed), Prentice-Hall
– Bianchini, Gliozzo, Matteuzzi, Instrumentum vocale: intelligenza artificiale e linguaggio,
Bononia
• Supplementary on specific sub-areas of AI:
– KR:
• John F. Sowa, Knowledge Representation, Brooks / Cole
• Blackburn, Bos, Representation and Inference for Natural Language, CSLI
– ML:
• Mitchell, Machine Learning, McGraw-Hill
READINGS
• This lecture:
– http://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence
– John F. Sowa, Knowledge Representation, Brooks / Cole, ch. 1