Poster - Dr. Tom Froese
Autonomy: a review and a reappraisal
Tom Froese, Nathaniel Virgo & Eduardo Izquierdo
Centre for Computational Neuroscience & Robotics, University of Sussex, Brighton, UK
E-Mail: {t.froese, n.d.virgo, e.j.izquierdo}@sussex.ac.uk

Motivation
In the field of artificial life there is no agreement on what defines 'autonomy'. This makes it difficult to measure progress made towards both understanding and engineering autonomous systems. Here, we review the diversity of approaches and categorize them by introducing a conceptual distinction between behavioral and constitutive autonomy (Froese, Virgo & Izquierdo 2007).

Autonomy: a review
On the one hand, researchers working within an engineering context mostly focus on the behavioral capacity of a system.
Behavioral autonomy: the concept 'autonomy' is used to refer to the robustness and flexibility of a system's behavior.
These researchers tend to marginalize the differences between the autonomy of artificial agents and that of living organisms.
On the other hand, researchers coming from a biological background often use the word to denote (metabolic) self-constitution.
Constitutive autonomy: the concept is used to refer to the self-constitution of the system through its own operations.
These researchers tend to treat the differences in autonomy between living and artificial agents as absolute. We argue that making a clear distinction between the two approaches can resolve much of this apparent opposition.

Constitutive autonomy
The constitutive approach can provide a precise definition of autonomy
in operational terms (e.g. Varela, Maturana & Uribe 1974). This has the
consequence that its applicability is mainly restricted to actual
organisms, since its aim is to distinguish life from non-life. Most current
artificial life agents do not possess constitutive autonomy.

Behavioral autonomy
A behavioral definition of autonomy can generally accommodate both artificial and biological agents. At the same time, however, it has difficulties in specifying exactly what makes such systems autonomous. Consequently, its requirements are often met only trivially (e.g. Franklin 1995, p. 233). As an ambiguous and very inclusive approach, it threatens to make the concept of autonomy meaningless.

[Figure: example systems arranged along the two dimensions of autonomy, behavioral and constitutive: Mars rover, GOFAI robotics, new robotics, simulated agent (Beer 1995), simulated metabolism (Bourgine & Stewart 2004), seagull, life (artificial?)]

Are today's artificial agents more autonomous?
By distinguishing between behavioral and constitutive autonomy, we can see that this question actually demands two distinct responses. Despite the lack of a commonly accepted definition, it seems reasonable to say that today's systems are indeed more behaviorally autonomous (than at the start of ECAL, for example). However, the vast majority of this kind of research does not address constitutive autonomy. The question of how viability constraints emerge from the internal operations of a system while it is coupled to its environment is particularly relevant here, although more work is starting to be done in this area (e.g. Ikegami & Suzuki 2007, Di Paolo 2003).
Autonomy: a reappraisal
These considerations make it evident that there is a pressing need to find a principled way of integrating these two approaches into one coherent framework of autonomous systems research. What kind of research methodology is up to this task?
On the side of behavioral autonomy, one of the most popular approaches is evolutionary robotics (Harvey et al. 2005). However, it models behavior only, and the evolved agents are constitutively autonomous by definition only. More thought needs to be given to how natural cognition is constrained by the constitutive processes which give rise to living systems. Is it the case that adding further biological mechanisms to the behavioral approach brings it closer to being autonomous in the constitutive sense (e.g. Di Paolo 2003)?
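To make concrete what "modeling only behavior" amounts to in practice, the sketch below implements a minimal continuous-time recurrent neural network (CTRNN) update of the kind used in Beer-style dynamical agents. The network size, parameter values, and Euler step are arbitrary choices for illustration, not taken from any of the cited models; in particular, the agent's body and viability conditions are fixed by the designer rather than produced by the system itself.

    # Minimal CTRNN update of the kind used in Beer-style dynamical agents.
    # All sizes and parameter values are illustrative placeholders; the weights
    # would normally be set by an evolutionary search.
    import numpy as np

    def ctrnn_step(y, external_input, weights, tau, bias, dt=0.01):
        """One Euler step of: tau * dy/dt = -y + W @ sigmoid(y + bias) + I."""
        firing = 1.0 / (1.0 + np.exp(-(y + bias)))       # neuron firing rates
        dydt = (-y + weights @ firing + external_input) / tau
        return y + dt * dydt

    rng = np.random.default_rng(0)
    y = np.zeros(3)                         # neuron states
    weights = rng.normal(0.0, 1.0, (3, 3))  # synaptic weights (would be evolved)
    tau = np.ones(3)                        # neuron time constants
    bias = np.zeros(3)
    sensory_input = np.array([0.5, 0.0, 0.0])

    for _ in range(1000):
        y = ctrnn_step(y, sensory_input, weights, tau, bias)
    print("neuron states after 10 simulated seconds:", y)

However the weights are chosen, the only thing being modeled here is the agent's behavioral dynamics; its continued existence is simply assumed.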
On the side of constitutive autonomy we find cellular automata (e.g. Varela, Maturana & Uribe 1974) as well as simulated (e.g. Mavelli & Ruiz-Mirazo 2007) and actual chemistry (e.g. Bitbol & Luisi 2004). The problem here is how to get from systems that self-constitute to systems that self-constitute and do something interesting at the same time.
The future of artificial life?
While advances have been made in designing and understanding systems which are purely self-constituting (e.g. Bourgine & Stewart
2004) or purely behavioral (e.g. Beer 1995), little effort has been made to tackle these two complementary aspects of life in an
integrated fashion. The major challenge for future artificial life research will be to address this shortcoming.
Only when we are able to investigate both constitutive and behavioral autonomy via synthetic means can the field of artificial life
claim to provide one coherent framework of autonomous systems research.

Concluding remarks
Taking both approaches to autonomy into account will be important for future research in artificial life. Finally, it is important to note that the widespread disregard of the dimension of constitutive autonomy is a serious shortcoming not only for scientific research, but also for our own understanding of what it means to be human. As Boden (1996) points out: "what science tells us about human autonomy is practically important, because it affects the way in which ordinary people see themselves – which includes the way in which they believe it is possible to behave".
The field of artificial life is therefore also faced with an ethical imperative to invest more effort into improving our understanding of constitutive autonomy. Only then can we ground our understanding of human freedom not only in terms of the behavior involved in mere external constraint satisfaction, but also in terms of the creativity involved in dynamic and open-ended self-realization.
References
Beer, R.D. (1995), “A dynamical systems perspective on agent-environment interaction”, Artificial Intelligence, 72(1-2), pp. 173-215
Bitbol, M. & Luisi, P.L. (2004), “Autopoiesis with or without cognition: defining life at its edge”, J. R. Soc. Interface, 1(1), pp. 99-107
Boden, M.A. (1996), “Autonomy and Artificiality”, in: M.A. Boden (ed.), The Philosophy of Artificial Life, Oxford Uni. Press, pp. 95-108
Bourgine, P., & Stewart, J. (2004), “Autopoiesis and Cognition”, Artificial Life, 10(3), pp. 327-345
Brooks, R.A. (1991), “Intelligence without representation”, Artificial Intelligence, 47(1-3), pp. 139-160
Di Paolo, E.A. (2003), “Organismically-inspired robotics: homeostatic adaptation and teleology beyond the closed sensorimotor loop”, in: K.
Murase & T. Asakura (eds.), Dynamical Systems Approach to Embodiment and Sociality, Adelaide, Australia: Advanced Knowledge Int., pp. 19-42
Franklin, S. (1995), Artificial Minds, Cambridge, MA: The MIT Press
Froese, T., Virgo, N. & Izquierdo, E. (2007), “Autonomy: a review and a reappraisal”, in: F. Almeida e Costa et al. (eds.), Proc. of the 9th Euro.
Conf. on Artificial Life, Berlin, Germany: Springer-Verlag
Harvey, I., Di Paolo, E.A., Wood, R., Quinn, M. & Tuci, E. (2005), "Evolutionary Robotics: A new scientific tool for studying cognition", Artificial Life, 11(1-2), pp. 79-98
Ikegami, T. & Suzuki, K. (2007), “From Homeostatic to Homeodynamic Self”, BioSystems, in press
Mavelli, F. & Ruiz-Mirazo, K. (2007), “Stochastic simulations of minimal self-reproducing cellular systems”, Phil. Trans. R. Soc. B, in press
Varela, F.J., Maturana, H.R. & Uribe, R. (1974), “Autopoiesis: The organization of living systems, its characterization and a model”, BioSystems, 5,
pp. 187-196
http://lifeandmind.wordpress.com