Artificial intelligence/Robotics
ICT Ethics Bigger Task 2
Aki Heikkinen
What is artificial intelligence?
Artificial intelligence (AI) is the art of duplicating
human intelligence in non-living devices [2].
Modern-day artificial intelligence is a collection
of computational operations that make a
machine function toward a specific goal [1].
AI consists of perceiving an environment/object,
reasoning, and decision making [1].
The result of these three factors is an action
toward the specific goal.
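To make the perceive, reason/decide, and act cycle above concrete, here is a minimal agent-loop sketch. The ThermostatAgent, its goal, and the environment dictionary are invented for illustration and are not taken from the cited sources.

```python
# Minimal sketch of the perceive -> reason/decide -> act cycle described above.
# The agent and environment are hypothetical illustrations, not from the sources.

class ThermostatAgent:
    """A toy agent that pursues one specific goal: a target temperature."""

    def __init__(self, target_temp: float):
        self.target_temp = target_temp

    def perceive(self, environment: dict) -> float:
        # Perception: read the relevant feature of the environment.
        return environment["temperature"]

    def decide(self, temperature: float) -> str:
        # Reasoning and decision making: compare the percept with the goal.
        if temperature < self.target_temp - 0.5:
            return "heat"
        if temperature > self.target_temp + 0.5:
            return "cool"
        return "idle"

    def act(self, action: str, environment: dict) -> None:
        # Action toward the specific goal: change the environment.
        if action == "heat":
            environment["temperature"] += 1.0
        elif action == "cool":
            environment["temperature"] -= 1.0


environment = {"temperature": 17.0}
agent = ThermostatAgent(target_temp=21.0)

for _ in range(6):  # a few perceive-decide-act cycles
    percept = agent.perceive(environment)
    action = agent.decide(percept)
    agent.act(action, environment)
    print(percept, action)
```

Each pass through the loop is one perceive-decide-act cycle; the agent keeps acting until its percept matches the goal.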
History of AI
The earliest examples of conceptual AI can be
found in Greek myths [2].
”We shall be like gods. We shall duplicate
God’s greatest miracle – the creation of
man” – Paracelsus (1493 - 1541) [2].
AI has always been a popular subject in
literature, for example Mary Shelley’s
Frankenstein (1818).
History of AI
Isaac Asimov (1950): Three Laws of
Robotics:
1) A robot may not injure a human being or,
through inaction, allow a human being to come to
harm.
2) A robot must obey the orders given to it by human
beings, except where such orders would conflict
with the First Law.
3) A robot must protect its own existence as long
as such protection does not conflict with the First
or Second Law.
History of AI
The boom of smart computing started in the
late 1970s [2].
AI research has become genuinely
international over the past 25 years [2].
In the early 1980s, intelligent robotic
systems were being installed in factories,
and other ”expert systems” were
introduced for real-life situations (for
example chess playing) [2].
History of AI
In the 1990s and early 21st century, AI
achieved great success, for example in
logistics, data mining, and medical diagnosis
[3].
During this decade, AI has been successfully
used in robots and other intelligent machines
(called autonomous or intelligent agents) [4].
Even so, the following question remains open
to this day: ”Can a machine think?” [2]
AI Limitations
Because human intelligence is hard to
predict, most AI researchers have limited their
studies to a small number of areas of
human thinking ability [4].
AI can only solve problems that can be
represented symbolically and reasoned about
with logic [4] (see the sketch below).
Even some real-world problems that are easy
for humans to solve are still intractable for
current machines [4].
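As a concrete illustration of the "symbolic representation plus logical reasoning" point above, the sketch below performs simple forward chaining over hand-written facts and rules. The facts, rules, and names are invented for this example; problems that cannot be encoded as such symbols and rules fall outside this style of AI.

```python
# Minimal sketch of symbolic, logic-based problem solving (forward chaining).
# The facts and rules are invented for illustration only.

facts = {"has_wheels", "has_engine"}

# Each rule: if all premises hold, the conclusion can be derived.
rules = [
    ({"has_wheels", "has_engine"}, "is_vehicle"),
    ({"is_vehicle", "carries_passengers"}, "is_car"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # 'is_vehicle' is derived; 'is_car' is not, since a premise is missing
```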
AI Ethics
”We want the machines to do those things
we are not able to do because we are not
good at them, yet we do not want them to
get too good.” [4]
We cannot be assured that machines share
our own values, truths, and virtues [4].
In a possible future, machines will become
more intelligent and gain more of the
responsibilities and autonomy that only
humans have been known to have [4].
AI Ethical questions
Questions regarding intelligent agents [4]:
How will humans perceive intelligent agents?
How will autonomous agents feel about humans?
Can humans let intelligent agents keep getting
more intelligent, even if they would surpass human
intelligence?
How will intelligent agents do what they are supposed
to do?
Will intelligent agents do only what they are supposed
to do?
AI Ethical questions
Questions regarding intelligent agents [4]:
Who will be responsible for the actions of intelligent
agents?
Will intelligent agents outperform their owners?
Will they eventually eliminate the need for human
skills?
How much power and autonomy should humans give
to intelligent agents? Will these intelligent agents
eventually take away human autonomy and
consequently take control of human destiny?
AI Ethical questions
Legal and moral issues [5]:
How may AI be used to benefit humanity?
Should human rights be applied to
intelligent agents if they can be created to
be sentient creatures?
Can an intelligent agent be sentenced like
a human being?
Is it right to play creator by creating
intelligence?
Case 1: The Skynet effect
A government defense department wants to develop a new computer
system for military intelligence that is mainly maintained by an
advanced artificial intelligence. The request is made because it is
hard and slow for humans to handle all of these tasks. The
system would also be more cost-effective than the full-time
salaries of 1000 human employees.
Some design points:
Humans can intervene in the system at any time when necessary
(see the sketch after this list).
The system is designed to handle all everyday tasks dealing with
general military intelligence and to inform humans of possible
threats.
To make the system as effective as possible, it is designed to have
access to all military and classified intelligence information.
In addition, the system is also designed to be capable of quick
decision making in critical situations, such as launching
nuclear weapons.
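As a hedged sketch of the first design point (human override for critical actions), the snippet below gates a hypothetical critical action behind explicit human approval. The action names, the CRITICAL_ACTIONS set, and the approval flag are assumptions made for illustration, not part of the case description.

```python
# Minimal sketch of the human-override design point above: routine tasks run
# automatically, while critical actions (e.g. weapon launch) stay blocked until
# a human operator explicitly approves them. All names and actions here are
# hypothetical illustrations, not part of the case text.

CRITICAL_ACTIONS = {"launch_nuclear_weapon", "delete_intelligence_record"}

def execute(action: str, human_approved: bool = False) -> str:
    """Run an action; critical actions require explicit human approval."""
    if action in CRITICAL_ACTIONS and not human_approved:
        return f"{action}: blocked, waiting for human approval"
    return f"{action}: executed"

# Routine work proceeds automatically; the critical action stays blocked
# until an operator passes human_approved=True.
print(execute("summarize_daily_reports"))
print(execute("launch_nuclear_weapon"))
print(execute("launch_nuclear_weapon", human_approved=True))
```

The design choice this illustrates is simply that autonomy for everyday tasks does not have to imply autonomy for irreversible decisions.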
Case 1: The Skynet effect
Questions regarding the case:
How much responsibility can humans give to a
computer system?
How much power can we give to a computer system?
Can we be assured that the system won’t remove,
alter, or share classified information?
Is it ethically right to replace 1000 human employees
with an intelligent computer system?
If the system does something unnecessary but
harmful, who is to blame? The engineers?
Case 2: Total autonomy
A super-intelligent humanoid robot (Rusty) is created that can
learn new things using advanced semantic neural networking
technology.
Rusty has only three rules (Isaac Asimov’s Three Laws of
Robotics), which filter its decision and learning activities
(see the sketch after this description).
When Rusty is first activated, it does not know anything
(’tabula rasa’). It simply has lots of sensors, algorithms,
and basic initial movement functionality, which it can use to learn
new things.
The first thing Rusty learns is how to maintain balance and walk
on its legs. It then starts to observe its environment and learn
new things.
After a year, Rusty has learned a great deal about our world.
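A minimal sketch of how Rusty’s three rules might filter candidate actions is shown below, assuming each candidate action is tagged with simple boolean fields. Those fields (harms_human, disobeys_order, endangers_self) and the example actions are hypothetical; the subcases that follow show exactly why such labels are hard to get right.

```python
# Minimal sketch of how three ordered rules could filter a robot's candidate
# actions, as in the case above. The action fields are hypothetical placeholders:
# in reality, knowing whether an action harms a human is the hard part
# (compare Subcase 3 below). This is a gross simplification of the Three Laws.

def allowed(action: dict) -> bool:
    """Check the laws in priority order: First, then Second, then Third."""
    if action.get("harms_human", False):       # First Law
        return False
    if action.get("disobeys_order", False):    # Second Law
        return False
    if action.get("endangers_self", False):    # Third Law
        return False
    return True

candidate_actions = [
    {"name": "seal_employees_in_airtight_safe", "harms_human": True},
    {"name": "call_the_police", "harms_human": False},
]
print([a["name"] for a in candidate_actions if allowed(a)])
# -> ['call_the_police'], but only because harms_human was labelled correctly.
```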
Case 2: Total autonomy
Subcase 1: Rusty has run out of memory capacity. It
has learned what computer memory is and how it
can be installed. Rusty has also learned that such
memory can be bought from the local hardware store.
Can Rusty upgrade himself?
Subcase 2: Rusty has learned that adopting a child
is beneficial for the majority. Can Rusty adopt a human
child?
Subcase 3: One day, heroic Rusty protects bank
employees from a robbery by sealing them in a safe.
However, Rusty has not learned that there is no air in
that particular safe, and the people die inside. Can Rusty
be sentenced because of his act? After all, he has the
capability to learn that what he did was wrong.
References
[1] National Research Council Staff (1997). Computer Science and
Artificial Intelligence. Washington, DC, USA: National
Academies Press.
[2] McCorduck, P. (2004). Machines Who Think: A Personal
Inquiry into the History and Prospects of Artificial Intelligence.
A K Peters, Limited.
[3] Wikipedia: Artificial Intelligence. WWW page,
http://en.wikipedia.org/wiki/Artificial_intelligence (10.12.2009)
[4] Kizza, J. M. (2002). Ethical and Social Issues in the Information
Age. Springer-Verlag New York, Incorporated.
[5] Wikipedia: Ethics of Artificial Intelligence. WWW page,
http://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence
(10.12.2009)