Transcript Document

Artificial General Intelligence
The Shortest Path to a Positive Singularity
Ari A. Heljakka
GenMind Ltd
“Within thirty years, we will have the
technological means to create superhuman
intelligence. Shortly thereafter, the human era
will be ended…
When greater-than-human intelligence
drives progress, that progress will be much
more rapid”
The Coming Technological Singularity
Vernor Vinge (1993)
“Two years after Artificial Intelligences reach
human equivalence, their speed doubles. One
year later, their speed doubles again.
Six months - three months - 1.5 months ...
Singularity.
Plug in the numbers for current computing
speeds, the current doubling time, and an
estimate for the raw processing power of the
human brain, and the numbers match in: 2021.
But personally, I'd like to do it sooner.”
Staring into the Singularity 1.2.5
Eliezer S. Yudkowsky (2001)
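Taken at face value, the doubling schedule in the quote is a geometric series, which is why it reaches a finite date rather than receding forever: if human-equivalent AI arrives at year t₀, the successive doubling intervals (2 years, 1 year, 6 months, 3 months, ...) sum to a finite limit. A quick check of the arithmetic:

```latex
% Doubling intervals from the quote: 2, 1, 1/2, 1/4, ... years.
% They form a geometric series with ratio 1/2, so
t_{\text{Singularity}} - t_0
  = \sum_{k=0}^{\infty} 2 \left(\tfrac{1}{2}\right)^{k}
  = \frac{2}{1 - \tfrac{1}{2}}
  = 4 \text{ years.}
```

So on this schedule the Singularity follows human equivalence by exactly four years, which is how the quote reaches a specific year from an estimated date of human equivalence.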
“Certainly my best current projected
range of 2020-2060 is voodoo like
anyone else's, but I'm satisfied that I've
done a good literature search on the
topic, and perhaps a deeper polling of
the collective intelligence on this issue
than I've seen elsewhere to date. To
me, estimates much earlier than 2020
are unjustified in their optimism, and
likewise, estimates after 2060 seem
oblivious to the full scope and
power of the… processes in the
universe.”
Nanotech.biz interview with
John Smart (2001)
“I set the date for the Singularity - representing
a profound and disruptive transformation in
human capability - as 2045. The nonbiological
intelligence created in that year will be one
billion times more powerful than all human
intelligence today.”
The Singularity is Near,
When Humans Transcend
Biology - Ray Kurzweil (2005)
“One could argue that agriculture and the
industrial revolution represent other
Singularity situations, albeit weak ones
compared to the one which may be upon us
next.
But, while evolution might take millions of
years to generate another psychological
sea change as dramatic as the emergence
of modern humanity, technology may do the
job much more expediently. The
technological Singularity can be
expected to induce rapid and dramatic
change in the nature of life, mind and
experience.”
The Path to Posthumanity
Dr. Ben Goertzel & Stephan Bugaj (2006)
Credit: Ray Kurzweil
Singularity Enabling Technologies
Strong AI
Robotics
Nanotech
Biotech

Ray Kurzweil and some
other leading futurists
advocate a longer-term
approach to AGI via
brain mapping

Major projects such as
IBM’s Blue Brain and
Artificial Development’s
Ccortex are working in
this direction.

However, this approach
requires AGI engineers
to sit and wait for
decades while
neuroscientists figure
out how to better map
the brain and computer
engineers build better
hardware
Rather than waiting for
the neuroscientists, the
Novamente AGI design
fills the knowledge gap
via appropriate
deployment of computer
science

This approach may
feasibly lead to AGI
equaling or surpassing
human level intelligence
before 2020.
Potential Dangers of a Greater-than-Human
Intelligence AGI
Race-specific pathogens
Gray goo
Matrix
Unfriendly goal system
Unstable goal system
Apr 2000 - WIRED
Why the future doesn’t need us
Bill Joy (co-founded Sun Microsystems in 1982)
G = Genetics
N = Nanotech
R = Robotics & Strong AI
“Inherently there will be no absolute protection against
strong AI. Although the argument is subtle, I believe that
maintaining an open free-market system for incremental
scientific and technological progress, in which each step is
subject to market acceptance, will provide the most
constructive environment for technology to embody widespread
human values.” - Ray Kurzweil (2005)
[Diagram: a spectrum of responses from EMBRACE
TECHNOLOGY to RELINQUISH TECHNOLOGY, with
destruction and totalitarianism as the failure modes
at the extremes]
“Fine-Grained Relinquishment”
Solution: Augmentation & Uploading?
“It's okay to fail at building AI. The
dangerous thing is to succeed at
building AI and fail at Friendly AI.
Right now, right at this minute,
humanity is not prepared to handle
this. We're not prepared at all. The
reason we've survived so far is that AI
is surrounded by a protective shell of
enormous theoretical difficulties that
have prevented us from messing with
AI before we knew what we were
doing.”
Why We Need Friendly AI
-Eliezer Yudkowsky (2003)
Novamente’s Pragmatic Approach
to Safer AGI
Hierarchical goal system with ongoing compassion and Friendliness as part
of the supergoal
Ethics initially taught and evaluated via interactions in a simulated
environment…
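As an illustration only, a hierarchical goal system with a safety supergoal can be sketched as a goal tree in which candidate subgoals are admitted only if they are judged compatible with the fixed supergoal. The class names, the lookup-table compatibility scores, and the threshold below are all hypothetical; this is a toy sketch of the general idea, not the actual Novamente design.

```python
# Toy sketch of a hierarchical goal system: subgoals are attached to the
# tree only if a compatibility function judges them consistent with a
# fixed supergoal ("Friendliness"). All names and scores are hypothetical.

class Goal:
    def __init__(self, name, utility):
        self.name = name
        self.utility = utility   # how strongly the agent values this goal
        self.subgoals = []

class HierarchicalGoalSystem:
    def __init__(self, supergoal, compatibility):
        self.supergoal = supergoal          # e.g. a Friendliness supergoal
        self.compatibility = compatibility  # scores (goal, supergoal) in [0, 1]

    def admit(self, parent, candidate, threshold=0.5):
        """Attach candidate under parent only if compatible with the supergoal."""
        if self.compatibility(candidate, self.supergoal) >= threshold:
            parent.subgoals.append(candidate)
            return True
        return False

# Hypothetical compatibility function: here just a hand-written lookup table;
# in a real system this evaluation would itself be learned (e.g. via the
# simulated-environment training mentioned above).
scores = {"help user": 0.9, "seize resources": 0.1}
system = HierarchicalGoalSystem(Goal("Friendliness", 1.0),
                                lambda g, s: scores[g.name])
root = system.supergoal
print(system.admit(root, Goal("help user", 0.8)))        # True  (admitted)
print(system.admit(root, Goal("seize resources", 0.9)))  # False (rejected)
```

The point of the structure is that the supergoal constrains which subgoals can enter the hierarchy at all, rather than merely being one goal among many competing on utility.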
Proactionary Principle
“Balance the risks of action and inaction.”
- Max More (2004)
Thank you!
Ari A. Heljakka
GenMind Ltd
DeGaris’s Law
“The initial condition of the
superhuman AI will determine its
ongoing evolution… initially.”
- Prof. Hugo de Garis (2006)