Philosophy and History of AI
CMSC 671
Fall 2010
Class #14– Wednesday, October 20
Today’s class
• History of AI
– Key people
– Significant events
• Future of AI
– Where are we going?
• Philosophy of AI
– Can we build intelligent machines?
• If we do, how will we know they’re intelligent?
– Should we build intelligent machines?
• If we do, how should we treat them…
• …and how will they treat us?
• Class debate
– “Robot Be Good”; Asimov’s Three Laws; Google cars
History of AI
Chronology of AI; Russell & Norvig Ch. 26
Key people (AI prehistory)
• George Boole formalized propositional logic as an algebra (1847)
• Karel Capek coined the term “robot” (1921)
• Isaac Asimov wrote many science-fiction books and essays (I, Robot (1950) introduced the Laws of Robotics – if you haven’t read it, you should!)
• John von Neumann: minimax (1928), computer architecture (1945)
• Alan Turing: universal machine (1937), Turing test (1950)
• Norbert Wiener founded the field of cybernetics (1940s)
• Marvin Minsky: neural nets (1951), AI founder, blocks world, Society
of Mind
• John McCarthy invented Lisp (1958) and coined the term “artificial intelligence” (1955)
• Allen Newell, Herbert Simon: GPS (1957), AI founders
• Noam Chomsky: analytical approach to language (1950s)
Key people (early AI history)
• Hubert and Stuart Dreyfus: prominent critics of AI
• Ed Feigenbaum: DENDRAL (first expert system, 1960s)
• Terry Winograd: SHRDLU (blocks world, 1960s)
• Roger Schank: conceptual dependency graphs, scripts (1970s)
• Shakey: mobile robot (SRI, 1969)
• Doug Lenat: AM, EURISKO (math discovery, 1970s)
• Ed Shortliffe, Bruce Buchanan: MYCIN (certainty factors, 1970s)
Key events: Genesis of AI
• Turing test, proposed in 1950 and debated ever since
• Neural networks, 1940s and 1950s, among the earliest
theories of how we might reproduce intelligence
• Logic Theorist and GPS, 1950s, early symbolic AI
• Dartmouth College summer conference, 1956,
established AI as a discipline
• Early years: focus on search, learning, knowledge
representation
• Development of Lisp, late 1950s
Key events: Adolescence of AI
• The movie 2001: A Space Odyssey (1968) brought AI to the public’s
attention
• Early expert systems: DENDRAL, Meta-DENDRAL, MYCIN
• Arthur Samuel’s checkers player, Doug Lenat’s AM and EURISKO
systems, and Werbos’s and Rumelhart’s backpropagation algorithm
held out hope for the ability of AI systems to learn
• Hype surrounding expert systems led to an inevitable decline in interest
in the mid to late 1980s, when it was realized they couldn’t do
everything
• Hype surrounding neural networks in the late 1980s led to similar
disappointment in the 1990s
• Roger Schank’s conceptual dependency theory and Doug Lenat’s Cyc
started to address problems of common-sense reasoning and
representation
• Hans Berliner’s heuristic search player defeated the world
backgammon champion in 1979
Key events: AI adulthood (barely)
• Many commercial expert systems introduced, especially in
the 1970s and 1980s
• Fuzzy logic and neural networks used in controllers,
especially in Japan and Europe
• Recent developments and areas of great interest include:
– Bayesian reasoning and Bayes nets
– Ontologies, knowledge reuse, and knowledge acquisition
– Mixed-initiative systems that combine the best of human and
computer reasoning
– Multi-agent systems, Internet economies, intelligent agents
– Autonomous systems for space exploration, search and rescue,
hazardous environments
What do AI researchers do?
• Subject headings from AAAI-10 conference proceedings:
– Constraints, Satisfiability, and Search: 35 papers
– Knowledge-Based Information Systems: 3 papers
– Knowledge Representation and Reasoning: 23 papers
– Machine Learning: 49 papers
– Multiagent Systems: 42 papers
– Multidisciplinary Topics: 8 papers
– Natural Language Processing: 7 papers
– Reasoning about Plans, Processes, and Actions: 17 papers
– Reasoning under Uncertainty: 10 papers
– Robotics: 4 papers
– Short Papers (miscellaneous): 3 papers
– AI and Bioinformatics Special Track: 4 papers
– AI and the Web Special Track: 31 papers
– Challenges in AI Special Track (position papers): 4 papers
– Integrated Intelligence Special Track: 10 papers
– Physically Grounded AI Special Track: 11 papers
– New Scientific and Technical Advances in Research: 12 papers
– Senior Member Papers: 3 papers
– Student Abstracts: 23 papers
– Doctoral Consortium: 15 papers
Are we there yet?
• Great strides have been made in knowledge representation
and decision making
• Many successful applications have been deployed to (help)
solve specific problems
• Key open areas remain:
– Incorporating uncertain reasoning
– Real-time deliberation and action
– Perception (including language) and action (including speech)
– Lifelong learning / knowledge acquisition
– Common-sense knowledge
– Methodologies for evaluating intelligent systems
Philosophy of AI
Alan M. Turing, “Computing Machinery and
Intelligence”
John R. Searle, “Minds, Brains, and Programs”
Philosophical debates
• What is AI, really?
– What does an intelligent system look like?
– Does an AI need—and can it have—emotions, consciousness,
empathy, love?
• Can we ever achieve AI, even in principle?
• How will we know if we’ve done it?
• If we can do it, should we?
Turing test
• Basic test:
– Interrogator in one room, human in another, system in a third
– Interrogator asks questions; human and system answer
– Interrogator tries to guess which is which
– If the system wins, it’s passed the Turing Test
• The system doesn’t have to tell the truth (obviously…); a minimal sketch of the question-and-answer loop follows
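Since the bullets above describe a concrete protocol, a small sketch may help make its structure explicit. This is only an illustration, not anything Turing specified: the names Responder, run_turing_test, ask_question, and judge_guess are hypothetical, and the reply, questioning, and judging functions are assumed to be supplied by the caller.

```python
# Hypothetical sketch of the imitation-game loop described in the bullets above.
import random

class Responder:
    """A contestant: either the human or the candidate system (it need not tell the truth)."""
    def __init__(self, kind, reply_fn):
        self.kind = kind        # "human" or "machine"; hidden from the interrogator
        self.reply = reply_fn   # maps a question (str) to an answer (str)

def run_turing_test(human, machine, ask_question, judge_guess, n_rounds=10):
    """One run of the test; returns True if the interrogator misidentifies the machine."""
    # Hide the contestants behind anonymous labels A and B, assigned at random.
    labels = {"A": human, "B": machine} if random.random() < 0.5 else {"A": machine, "B": human}

    transcript = []
    for _ in range(n_rounds):
        question = ask_question(transcript)                        # interrogator chooses a question
        answers = {name: r.reply(question) for name, r in labels.items()}
        transcript.append((question, answers))                     # interrogator sees only the A/B labels

    guess = judge_guess(transcript)                                # "A" or "B": which one is the machine?
    machine_label = "A" if labels["A"].kind == "machine" else "B"
    return guess != machine_label                                  # the system "wins" if the guess is wrong
```

Nothing in this loop inspects how the answers were produced; the interrogator can rely only on the transcript, which is exactly what the objections on the next slide take issue with.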
Turing test objections
• Objections are basically of two forms:
– “No computer will ever be able to pass this test”
– “Even if a computer passed this test, it wouldn’t be intelligent”
• Chinese Room argument (Searle, 1980), responses, and
counterresponses
– Robot reply
– Systems reply
“Machines can’t think”
• Theological objections
• “It’s simply not possible, that’s all”
• Arguments from incompleteness theorems
– But people aren’t complete, are they?
• Machines can’t be conscious or feel emotions
– Reductionism doesn’t really answer the question: why can’t
machines be conscious or feel emotions??
• Machines don’t have Human Quality X
• Machines just do what we tell them to do
– Maybe people just do what their neurons tell them to do…
• Machines are digital; people are analog
“The Turing test isn’t meaningful”
• Maybe so, but…
If we don’t use the Turing test, what
measure should we use?
• Very much an open question…
Ethical concerns: Robot behavior
• How do we want our intelligent systems to behave?
• How can we ensure they do so?
• Asimov’s Three Laws of Robotics (a rule-priority sketch follows the laws):
1. A robot may not injure a human being or, through inaction, allow a
human being to come to harm.
2. A robot must obey orders given it by human beings except where
such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law.
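Read as a decision procedure, the Three Laws impose a strict priority ordering on an agent's candidate actions. The sketch below illustrates only that ordering; the predicates harms_human, disobeys, and endangers_self are hypothetical placeholders supplied by the caller, and deciding them reliably is precisely the hard part the following slide questions.

```python
# Hypothetical sketch: the Three Laws as a priority-ordered filter over candidate actions.
def three_laws_filter(candidates, orders, harms_human, disobeys, endangers_self):
    """Keep only the candidate actions the Three Laws allow, applying them in priority order."""
    # First Law: never choose an action that injures a human (or, through inaction, allows harm).
    safe = [a for a in candidates if not harms_human(a)]

    # Second Law: obey human orders, but only among First-Law-safe actions.
    obedient = [a for a in safe if not any(disobeys(a, o) for o in orders)]
    pool = obedient or safe        # if no safe action obeys every order, the First Law still wins

    # Third Law: prefer self-preserving actions, unless the higher laws leave no such option.
    preserving = [a for a in pool if not endangers_self(a)]
    return preserving or pool


# Hypothetical example: a rescue scenario with three candidate actions.
actions = ["pull human from fire", "stand by", "flee"]
allowed = three_laws_filter(
    actions,
    orders=["rescue the human"],
    harms_human=lambda a: a == "stand by",                  # inaction would allow harm
    disobeys=lambda a, o: a != "pull human from fire",      # only one action satisfies the order
    endangers_self=lambda a: a == "pull human from fire",   # the rescue is risky for the robot
)
print(allowed)   # ['pull human from fire']: the First and Second Laws outrank the Third
```

The fallback pattern (obedient or safe, preserving or pool) is what encodes "except where such orders would conflict with the First Law": a lower law may only narrow the choices the higher laws leave open.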
Ethical concerns: Human behavior
• Is it morally justified to create intelligent systems with these
constraints?
– As a secondary question, would it be possible to do so?
• Should intelligent systems have free will? Can we prevent
them from having free will??
• Will intelligent systems have consciousness? (Strong AI)
– If they do, will it drive them insane to be constrained by artificial
ethics placed on them by humans?
• If intelligent systems develop their own ethics and morality,
will we like what they come up with?