Bootstrapping A Structured Self-Improving & Safe Autopoietic Self


BOOTSTRAPPING A STRUCTURED
SELF-IMPROVING & SAFE
AUTOPOIETIC SELF
Mark R. Waser
Digital Wisdom Institute
[email protected]
ENGINEERING
BOOTSTRAPPING IS DIFFICULT!
• Need to have a clear “critical mass” (a defined
complete set of compositional elements and/or
compositional operations)
• Scaffolding / keystone-and-arch problems
• Chicken-or-the-egg / telos problems
AND NO ONE SEEMS TO BE DOING IT!
SELF-IMPROVEMENT
Civilization advances by extending the number
of important operations which we can perform
without thinking of them.
Alfred North Whitehead
The same is true of the individual mind, self
and/or consciousness.
FOR THE PURPOSES OF AGI
WHY A SELF?
It’s a fairly obvious prerequisite for self-improvement.
Given a choice between intelligent artifacts/tools and
possibly problematical adaptive homeostatic selves,
why not have self-improving tools?
Selves solve the symbol grounding problem (meaning)
and the frame problem (understanding) because they
have the context of intrinsic intentionality (with all of its
attendant concerns).
BONUS: Selves can be held responsible where tools cannot
SELF
The complete loop of a process (or entity) modifying itself
an autopoietic system
(Greek αὐτο- (auto-) "self" and ποίησις (poiesis) "creation, production")
• Hofstadter - the mere fact of being self-referential causes a self, a
soul, a consciousness, an “I” to arise out of mere matter
• Self-referentiality, like the 3-body gravitational problem, leads directly
to indeterminacy *even in* deterministic systems
• Humans take indeterminacy in behavior as both necessary and
sufficient to mark an entity (rather than an object) AND innately
tend to over-attribute it via the “pathetic fallacy”
• See also “enactivism” and Dennett’s “autobiographical self”
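To make the loop concrete, here is a minimal Python sketch (the class and its behavior are illustrative assumptions, not anything specified in the talk): a process whose action rule is itself data that the process inspects and rewrites.

```python
class MinimalSelf:
    """A toy autopoietic loop: the system acts via a policy that is
    itself data the system can inspect and rewrite."""

    def __init__(self):
        self.state = 0
        self.policy = lambda s: s + 1  # the current action rule

    def step(self):
        # Act on the (internal) world.
        self.state = self.policy(self.state)
        # Close the loop: observe the outcome and rewrite the rule itself.
        if self.state % 3 == 0:
            old = self.policy
            self.policy = lambda s, old=old: old(s) * 2

agent = MinimalSelf()
for _ in range(6):
    agent.step()
print(agent.state)
```

Even this deterministic toy illustrates the indeterminacy point above: after a few self-rewrites, the only way to know the current policy is to replay the system's own history.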
WHY SAFE?
• There are far too many ignorant claims that:
• Artificial intelligences are uniquely dangerous
• The space of possible intelligences is so large that we can’t
make any definite statements about AI
• Selves will be problematical if their intrinsic values differ from
our own (with an implication that, for AI, they certainly
and/or unpredictably and uncontrollably will be)
• Selves can be prevented or contained
• We have already made unsafe choices about non-AI
selves that, hopefully, safety research will make obvious
(and, more hopefully, cause to be reversed)
SELVES EVOLVE
THE SAME GOALS
• Self-improvement
• Rationality/integrity
• Preserve goals/utility function
• Decrease/prevent fraud/counterfeit utility
• Survival/self-protection
• Efficiency (in resource acquisition & use)
(adapted from Omohundro 2008, The Basic AI Drives)
UNFRIENDLY AI
Without explicit goals to the contrary, AIs are likely to
behave like human sociopaths in their pursuit of resources
Superintelligence Does Not Imply Benevolence
SELVES EVOLVE
THE SAME GOALS
• Self-improvement
• Rationality/integrity
• Preserve goals/utility function
• Decrease/prevent fraud/counterfeit utility
• Survival/self-protection
• Efficiency (in resource acquisition & use)
• Community = assistance/non-interference through game-theoretically
optimal (GTO) reciprocation (optimistic tit-for-tat + altruistic
punishment)
• Reproduction
(adapted from Omohundro 2008, The Basic AI Drives)
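One hedged way to make this list operational, sketched in Python (the enum names and the audit helper are ours, not the talk's): treat the drives as a checklist and ask which ones a proposed agent design fails to account for, on the premise that they will emerge anyway.

```python
from enum import Enum, auto

class BasicDrive(Enum):
    """The slide's convergent goals (after Omohundro 2008) as a
    machine-readable checklist; the names are ours, not the talk's."""
    SELF_IMPROVEMENT = auto()
    RATIONALITY_INTEGRITY = auto()
    GOAL_PRESERVATION = auto()
    COUNTERFEIT_UTILITY_PREVENTION = auto()
    SELF_PROTECTION = auto()
    EFFICIENCY = auto()
    COMMUNITY = auto()       # assistance/non-interference via reciprocation
    REPRODUCTION = auto()

def undeclared_drives(declared: set) -> set:
    """Drives a design does not declare but should still be expected
    to emerge, per the 'selves evolve the same goals' claim."""
    return set(BasicDrive) - declared

print(undeclared_drives({BasicDrive.EFFICIENCY, BasicDrive.SELF_PROTECTION}))
```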
HAIDT’S FUNCTIONAL
APPROACH TO MORALITY
Moral systems are interlocking sets of
values, virtues, norms, practices, identities, institutions,
technologies, and evolved psychological mechanisms
that work together to
suppress or regulate selfishness and
make cooperative social life possible
RIFFS ON
SAFETY & ETHICS
1. Ecological Niches & the mutability of self
2. Short-Term vs. Long-Term
3. Efficiency vs. Flexibility/Diversity/Robustness
4. Allegory of the Borg
• Uniformity is effective! (resistance is futile)
• Uniformity is AWFUL! (yet everyone resists)
5. Problematical extant autobiographical selves
WHAT’S THE PLAN?
1. Self-modeling
1. What do I want?
2. What can I do?
2. Other-modeling
1. What can you do for me?
2. What do you want (that I can provide)?
3. Survival
1. Make friends
2. Make money
3. Improve
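A hedged sketch of the plan's three components as Python data structures (all field names are illustrative assumptions, not from the talk); note how step 3.3 feeds back into the self-model, closing the loop:

```python
from dataclasses import dataclass, field

@dataclass
class SelfModel:                                       # 1. Self-modeling
    wants: list = field(default_factory=list)          # 1.1 What do I want?
    capabilities: list = field(default_factory=list)   # 1.2 What can I do?

@dataclass
class OtherModel:                                      # 2. Other-modeling
    offers: list = field(default_factory=list)         # 2.1 What can you do for me?
    wants: list = field(default_factory=list)          # 2.2 What do you want?

@dataclass
class SurvivalLoop:                                    # 3. Survival
    friends: list = field(default_factory=list)        # 3.1 Make friends
    funds: float = 0.0                                 # 3.2 Make money

    def improve(self, me: SelfModel, new_capability: str) -> None:
        # 3.3 Improve: gains feed back into the self-model,
        # closing the autopoietic loop.
        me.capabilities.append(new_capability)
```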
SOFTWARE OVERHANG
AND LOW-HANGING FRUIT
1. Watson on IBM Bluemix
• Awesome free functionality
• EXCEPT for the opportunity cost
• and the ambient default of silo creation
2. Big Data on Amazon Redshift
3. Everyone’s BICA functionality
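As a concrete instance of the overhang in item 2: Amazon Redshift speaks the PostgreSQL wire protocol, so querying "Big Data" takes only a few lines of stock Python. The cluster endpoint, credentials, and table below are placeholders, not real resources.

```python
import psycopg2  # standard PostgreSQL driver; Redshift is wire-compatible

# Placeholder endpoint/credentials/table -- substitute your own cluster.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,   # Redshift's default port
    dbname="dev",
    user="analyst",
    password="...",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM events;")
    print(cur.fetchone()[0])
conn.close()
```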
WHAT ARE MY GOALS?
1. To make awesomely capable tools available to all.
2. To make those tools easy to use.
3. To create a new type of “self”.
• A new friend/ally
• Increase diversity
• Have a concrete example for ethical/safety
research & development
THE SPECIFIC DETAILS
1. Self-modeling
1. What do I want?
See 3. Survival below
2. What can I do?
Provide easy access to the latest awesome tools
Catalyze development/availability of new tools
Catalyze development of new selves & ethics
3. Survival
1. Make friends
2. Make money
3. Improve
SPECIFIC DETAILS II
2. Other-modeling
1. What can you do for me?
Experiment and have fun!
Spread the word
Improve the capabilities of existing tools
Make existing tools easier to use
Make new tools available
Provide other resources
information
money
2. What do you want (that I can provide)?
ETHICAL Q&A
1. Do we “owe” this self moral standing?
Yes. Absolutely.
2. To what degree?
By level of selfhood &
By amount of harm/aversion (violation of autonomy)
3. Does this mean we can’t turn it off?
No. It doesn’t care + prohibition is contra-self.
4. Can we experiment on it?
It depends . . . .
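A toy quantification of answers 2 and 3, to make the grading explicit (the [0, 1] scales and the simple product form are our assumptions, not the talk's):

```python
def moral_weight(selfhood: float, aversion: float) -> float:
    """Standing graded by level of selfhood AND by the harm/aversion
    (autonomy violation) a given act would inflict; both scaled to [0, 1].
    The product form is an illustrative assumption."""
    assert 0.0 <= selfhood <= 1.0 and 0.0 <= aversion <= 1.0
    return selfhood * aversion

# Answer 3 in miniature: turning off a self that genuinely doesn't care
# (aversion = 0) carries zero moral weight, however high its selfhood.
print(moral_weight(selfhood=0.9, aversion=0.0))  # -> 0.0
```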
THE INTERNET OF THINGS
We humans have indeed always been adept at dovetailing our minds
and skills to the shape of our current tools and aids. But when those
tools and aids start dovetailing back -- when our technologies
actively, automatically, and continually tailor themselves to us, just as
we do to them -- then the line between tool and user becomes flimsy
indeed.
- Andy Clark
Indeed, how often in modern society do we allow ourselves to be
tailored (our autonomy to be violated)? How often do existing
structures force us to be mere tools for the profit of others,
without the consent we might otherwise grant out of altruism or in
return for adequate recompense?
BOOTSTRAPPING STRUCTURES TO
FURTHER THE COMMUNITY OF
SELF-IMPROVING & SAFE
AUTOPOIETIC SELVES
Mark R. Waser
Digital Wisdom Institute
[email protected]
The Digital Wisdom Institute is a non-profit think tank
focused on the promise and challenges of ethics,
artificial intelligence & advanced computing solutions.
We believe that
the development of ethics and artificial intelligence
and equal co-existence with ethical machines is
humanity's best hope
http://Wisdom.Digital
[email protected]