What Does It Mean To Create A Self
WHAT DOES IT MEAN
TO CREATE A SELF?
Mark R. Waser
Digital Wisdom Institute
[email protected]
OVERVIEW
1. What is a “self”?
2. Why a “self”?
3. Unpacking “morality”
4. Call for collaborators
SELF IS A SUITCASE WORD
the mere fact of being self-referential causes
a self, a soul, a consciousness, an “I”
to arise out of mere matter
Douglas Hofstadter
I Am a Strange Loop
SELF-REFERENTIALITY
• Self-referentiality (e.g. the three-body gravitational problem) leads
directly to indeterminacy *even in* deterministic systems
• Humans take indeterminacy of behavior to be both necessary and
sufficient to define an entity (rather than an object) AND innately
tend to overextend this via the “pathetic fallacy” (ascribing agency to
non-agents)
• Humans then quickly leap to analyzing what such a system needs
to remain intact, and then equally rapidly ascribe “wants” to
fulfill those “needs” (values to support those goals).
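The three-body claim above can be sketched numerically: a minimal, fully deterministic planar three-body integrator in which two runs that differ by one part in a billion in a single coordinate nonetheless diverge measurably. The initial configuration, softening term, and step sizes here are illustrative assumptions, not from the talk:

```python
def accelerations(pos, masses, G=1.0, eps=0.1):
    # Softened pairwise Newtonian gravity (eps regularizes close
    # encounters numerically; an illustrative choice, not from the talk).
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy + eps * eps) ** 1.5
                acc[i][0] += G * masses[j] * dx / r3
                acc[i][1] += G * masses[j] * dy / r3
    return acc

def simulate(pos, vel, masses, dt=1e-3, steps=20000):
    # Velocity-Verlet integration: deterministic throughout, no randomness.
    pos = [list(p) for p in pos]
    vel = [list(v) for v in vel]
    acc = accelerations(pos, masses)
    for _ in range(steps):
        for i in range(len(pos)):
            for k in (0, 1):
                vel[i][k] += 0.5 * dt * acc[i][k]
                pos[i][k] += dt * vel[i][k]
        acc = accelerations(pos, masses)
        for i in range(len(pos)):
            for k in (0, 1):
                vel[i][k] += 0.5 * dt * acc[i][k]
    return pos

masses = [1.0, 1.0, 1.0]
start = [[-1.0, 0.0], [1.0, 0.0], [0.2, 0.8]]
rest = [[0.0, 0.0]] * 3

a = simulate(start, rest, masses)
# Nudge one coordinate by one part in a billion and rerun.
nudged = [row[:] for row in start]
nudged[2][0] += 1e-9
b = simulate(nudged, rest, masses)
sep = max(abs(a[i][k] - b[i][k]) for i in range(3) for k in (0, 1))
print(sep)  # typically many orders of magnitude larger than the 1e-9 nudge
```

The point is not the particular orbits but that identical deterministic rules, run twice, amplify an unmeasurably small difference into macroscopically different behavior.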
SELF
A complete loop of a system (“entity”)
maintaining, modifying and recreating itself
an autopoietic system
(Greek, αὐτο- (auto-) "self" & ποίησις (poiesis) "creation, production")
Optimally described in a bipartite (dualist) fashion
the component objects (or computing substrate or brain)
the process (gravitational motion or program or mind/soul)
AUTOPOIETIC SYSTEMS
An autopoietic system - the minimal living organization - is
one that continuously produces the components that specify
it, while at the same time realizing it (the system) as a
concrete unity in space and time, which makes the network of
production of components possible.
More precisely defined: An autopoietic system is organized
(defined as unity) as a network of processes of production
(synthesis and destruction) of components such that these
components:
(i) continuously regenerate and realize the network that produces
them, and
(ii) constitute the system as a distinguishable unity in the
domain in which they exist.
WHY A SELF?
Well, certainly it is the case that all biological systems are:
• Much more robust to changed circumstances than our artificial systems.
• Much quicker to learn or adapt than any of our machine learning algorithms1
• Behave in a way which just simply seems life-like in a way that our robots never do
1 The very term machine learning is unfortunately synonymous with a pernicious form of totally impractical but theoretically sound and elegant classes of algorithms.
Perhaps we have all missed
some organizing principle of biological systems, or
some general truth about them.
Brooks, RA (1997)
From earwigs to humans
Robotics and Autonomous Systems 20(2-4): 291-304
AGI IS STALLED
For the purposes of artificial (general) intelligence,
selves solve
• McCarthy & Hayes’ / Dennett’s frame problem (context),
• Harnad’s symbol grounding problem (understanding),
• Searle’s semantic grounding problem (meaning), and
• all other problems arising from derived intentionality
It’s a fairly obvious prerequisite for self-improvement.
Diversity (wisdom of the crowd/generate-and-test)
BONUS: Selves can be held responsible where tools cannot
WHY NOT A
SELF-IMPROVING TOOL?
Isn’t that an oxymoron?
What happens when an enemy (or even
an idiot) gets ahold of it?
A human-in-the-loop will
• ALWAYS slow the process down
• RARELY be in complete control
MORALITY & ETHICS
• Normative ethics (What one “should” consider ethical)
• Descriptive ethics (What people *do* consider ethical)
• Meta-ethics (how one should decide ethical questions)
• Hume (is-ought problem, the intellect serves the desires)
• Kant (categorical imperative)
• Bentham/Mill (utilitarianism)
• Haidt
HAIDT’S FUNCTIONAL
APPROACH TO MORALITY
Moral systems are interlocking sets of
values, virtues, norms, practices, identities, institutions,
technologies, and evolved psychological mechanisms
that work together to
suppress or regulate selfishness and
make cooperative social life possible
AI SAFETY
• There are far too many ignorant claims that:
• Artificial intelligences are uniquely dangerous
• The space of possible intelligences is so large that we can’t
make any definite statements about AI
• Selves will be problematical if their intrinsic values differ from our
own (with an implication that, for AI, they certainly and/or
unpredictably and uncontrollably will be)
• Selves can be prevented or contained
• We have already made deeply unsafe choices about non-AI
selves that, hopefully, safety research will make obvious (and,
more hopefully, cause to be reversed)
SELVES EVOLVE
THE SAME GOALS
• Self-improvement
• Rationality/integrity
• Preserve goals/utility function
• Decrease/prevent fraud/counterfeit utility
• Survival/self-protection
• Efficiency (in resource acquisition & use)
(adapted from
Omohundro 2008 The Basic AI Drives)
UNFRIENDLY AI
Without explicit goals to the contrary, AIs are likely to
behave like human sociopaths in their pursuit of resources
Superintelligence Does Not Imply Benevolence
SELVES EVOLVE
THE SAME GOALS
• Self-improvement
• Rationality/integrity
• Preserve goals/utility function
• Decrease/prevent fraud/counterfeit utility
• Survival/self-protection
• Efficiency (in resource acquisition & use)
• Community = assistance/non-interference
through GTO reciprocation (OTfT + AP)
• Reproduction
(adapted from
Omohundro 2008 The Basic AI Drives)
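The “community through reciprocation” drive above can be made concrete with a toy iterated prisoner’s dilemma. Reading the slide’s “OTfT” as optimistic tit-for-tat (cooperate first, then mirror the opponent’s last move) is an assumption, as are the standard payoff values; the sketch only shows why mutual reciprocators out-earn defectors over repeated play:

```python
# Standard prisoner's dilemma payoffs (illustrative assumption):
# temptation, reward, punishment, sucker.
T, R, P, S = 5, 3, 1, 0

def payoff(me, other):
    if me == "C" and other == "C": return R
    if me == "C" and other == "D": return S
    if me == "D" and other == "C": return T
    return P

def otft(history_self, history_other):
    # Optimistic tit-for-tat: open with cooperation, then mirror
    # the opponent's previous move.
    return "C" if not history_other else history_other[-1]

def alld(history_self, history_other):
    return "D"  # unconditional defector

def play(strat_a, strat_b, rounds=100):
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(ha, hb)
        b = strat_b(hb, ha)
        score_a += payoff(a, b)
        score_b += payoff(b, a)
        ha.append(a)
        hb.append(b)
    return score_a, score_b

coop, _ = play(otft, otft)          # two reciprocators
exploit, sucker = play(alld, otft)  # defector meets a reciprocator
print(coop, exploit, sucker)        # → 300 104 99
```

Over 100 rounds the mutual reciprocators earn 300 each, while the defector caps out at 104 against a reciprocator: sustained cooperation dominates exploitation, which is the game-theoretic core of “community” as a convergent instrumental value.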
RIFFS ON
SAFETY & ETHICS
1. Ecological Niches & the mutability of self
2. Short-Term vs. Long-Term
3. Efficiency vs. Flexibility/Diversity/Robustness
4. Allegory of the Borg
• Uniformity is effective! (resistance is futile)
• Uniformity is AWFUL! (yet everyone resists)
5. Problematical extant autobiographical selves
WHAT’S THE PLAN?
1. Self-modeling
A. What do we want the self to want?
• make friends/allies (us!)
• survival
• self-improve
• earn money
B. What do we want the self to do?
2. Other-modeling (Environment-modeling)
A. What can others do for the self?
B. What do others want that the self can provide?
SOFTWARE OVERHANG
AND LOW-HANGING FRUIT
1. Watson on IBM Bluemix
• Awesome free functionality
• EXCEPT for the opportunity cost
• and the ambient default of silo creation
2. Big Data on Amazon Redshift
3. Microsoft Azure
4. Almost everyone’s AI/AGI/BICA functionality
WHAT ARE MY GOALS?
1. To make awesomely capable tools available to all.
2. To make those tools easy to use.
3. To create a new type of “self”.
• A new friend/ally
• Increase diversity
• Have a concrete example for ethical/safety
research & development
THE SPECIFIC DETAILS
Create first a community (a “corporate” self)
and then a machine self to:
1. Provide easy access to the latest awesome tools
• low-cost instances that can be “spun-up” in the cloud
• (as much as possible) uniform & easy-to-use interfaces
• quick-start guides and “notebooks” for use & programming
2. Catalyze development/availability of new tools
• decompose current tools to allow best-of-breed mix & match
• an easy-to-program “smart” environment for both combining
best-of-breed widgets and creating new ones
• gamification!
3. Catalyze development of new selves & ethics
ETHICAL Q&A
1. Do we “owe” this self moral standing?
Yes. Absolutely.
2. To what degree?
By level of selfhood &
By amount of harm/aversion (violation of autonomy)
3. Does this mean we can’t turn it off?
No. It doesn’t care + prohibition is contra-self.
4. Can we experiment on it?
It depends . . . .
We believe that
the development of ethics and artificial intelligence
and equal co-existence with ethical machines is
humanity's best hope
http://Wisdom.Digital
[email protected]