AI - Philosophy and Ethics


Exam 2
• This Friday (Dec 2nd) – in class
• You may have an 8.5x11in hand-written cheat sheet of notes (write your name on it and submit it with your exam)
• We’ll review the practice exam on Wed.
Topics since last exam
• Genetic Algorithms
• Decision Trees
• Probabilistic Inference
• Bayesian Inference
• Markov Models (Hidden + Regular)
• Clustering
• Perceptron Neural Networks
• Multi-Layer Neural Networks
• Propositional Logic/First Order Logic
• Planning
Wrap-Up Planning with FOL
• Recall Blocks World:
– 3 blocks on a table
– At most 1 block can fit on top of another
– Robot can pick up one block and move it to the table or on top of another block
Planning a Solution
1. Use FOL to represent knowledge:
   – Constants: Table, A, B, C
   – Predicates: Block(b), CanHold(x), On(b, x)
   – Actions: Move(b, x, y), …
2. Encode in a planning programming language (PDDL):
   (:objects Table A B C)
   (:predicates (Block ?b)
                (CanHold ?x)
                (On ?b ?x))
   (:action move
     :parameters (?b ?x ?y))
   …
3. Search for a solution (plan):
   START → Move(B, C, Table) → Move(A, B, C) → … → GOAL
   (each action updates the STATE until the GOAL is reached)
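To make the PDDL fragments above concrete, here is a minimal sketch of what a complete Blocks World domain and problem could look like. Only the names Block, CanHold, On, move, Table, A, B, and C come from the slides; the Clear helper predicate, the preconditions and effects, the separate move-to-table action, and the initial/goal states are illustrative assumptions.

(define (domain blocksworld)
  (:requirements :strips)
  (:constants Table)
  (:predicates (Block ?b)    ; ?b is a block
               (CanHold ?x)  ; a block can be placed on ?x
               (On ?b ?x)    ; block ?b sits on ?x
               (Clear ?x))   ; nothing sits on top of ?x (assumed helper)
  ; Move block ?b from ?x onto another block ?y.
  (:action move
    :parameters (?b ?x ?y)
    :precondition (and (Block ?b) (Block ?y) (CanHold ?y)
                       (On ?b ?x) (Clear ?b) (Clear ?y))
    :effect (and (On ?b ?y) (not (On ?b ?x))
                 (Clear ?x) (not (Clear ?y))))
  ; Move block ?b from ?x onto the Table; the Table never becomes "full".
  (:action move-to-table
    :parameters (?b ?x)
    :precondition (and (Block ?b) (On ?b ?x) (Clear ?b))
    :effect (and (On ?b Table) (not (On ?b ?x)) (Clear ?x))))

(define (problem stack-abc)
  (:domain blocksworld)
  (:objects A B C)
  (:init (Block A) (Block B) (Block C)
         (CanHold Table) (CanHold A) (CanHold B) (CanHold C)
         (On C B) (On B Table) (On A Table)
         (Clear C) (Clear A))
  (:goal (and (On A B) (On B C) (On C Table))))

Under these assumptions a planner would return a plan such as (move-to-table C B), (move B Table C), (move A Table B).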
Artificial Intelligence: Ethics
Ethics in Scientific Research/Innovation
In which areas of scientific research do ethics play a role?
– Stem cell research
– Cloning/genetically modified food
– Nuclear technology
– Medical research (e.g. animal welfare)
– Bio-warfare
– …
New technologies may have unintended negative side effects.
Ethical Arguments against AI [WEF]
1. Unemployment
2. Wealth inequality
3. Mistakes/unintended consequences
4. Bias in AI
5. Privacy issues in AI
6. AI will dominate humanity
Unemployment
• AI causes unemployment?
• “50% of jobs will be replaced by AI”
  -- Moshe Vardi (Rice University)
(Chart: jobs at risk from AI [Oxford ’13])
Unemployment
• AI causes unemployment?
• AI does work that people can’t do or don’t want to do because of time/cost (spam filtering; fraud detection in credit card transactions)
• AI may create more jobs than it has eliminated
  – We may not know yet which jobs
• AI has created higher-paying jobs
Wealth Inequality
AI causes wealth inequality?
• In 2014, revenue from the three largest companies in Detroit was roughly equal to revenue from the three largest companies in Silicon Valley
  – But Silicon Valley had 10 times fewer employees
Wealth Inequality
AI causes wealth inequality?
• AI may lead to cheaper healthcare, education,
food, etc.
Mistakes/Unintended Consequences in AI
(Image: Tesla self-driving car)
Who is to blame?
• Driver?
• Tesla?
• AI?
• Scientists who designed the AI?
How can we prevent mistakes in AI?
Bias in AI
• Google’s online ads showed high-paying jobs to men more often than to women [CMU ’15]
Bias in AI
• Google search for “CEO”
Bias in AI
• Arrest records more likely to show up for
distinctively “black-sounding” names than for
“white-sounding” names [Harvard ‘13]
Bias in AI
• Uber offers better service (lower wait times) in
higher income areas [UMD ’16]
Bias in AI
• Is there bias in AI?
– If so, who is to blame?
– If not, who is to blame (for previous examples)? :)
• How can we prevent bias in AI?
– Determine when it may occur
– Determine why it occurs
– Correct for it
Security in AI
What is security? The state of being safe from danger or threat.
AI contributes to security:
– MIT’s PatternEx is able to identify 85% of cyber attacks on businesses (using neural nets)
What about security for our data?
Privacy in AI
• iPhone’s secret tracker - Hidden encrypted file used to
track user’s movements
• Similar to Google’s “Latitude”
Privacy in AI
• How much privacy should there be in AI?
• Who should be responsible for this privacy?
Designers or users?
The success of AI will mean the end of the
human race
We must design robots with
laws of ethics.
Laws of Robotics
• Law Zero: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
• Law One: A robot may not injure a human being, or through inaction allow a human being to come to harm, unless this would violate a higher-order law.
• Law Two: A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order law.
• Law Three: A robot must protect its own existence as long as such protection does not conflict with a higher-order law.
Will these laws prevent the Singularity?
Singularity in Robotics
The hypothesis that AI will trigger runaway technological growth: agents will rapidly self-improve and eventually surpass human intelligence.
Possible?
Preventable?