NeuroEvolution of Augmenting Topologies (NEAT)

Kenneth O. Stanley, et al.
Many Papers
• Efficient Evolution of Neural Networks through Complexification (Stanley, PhD. Dissertation)
• Automatic Feature Selection in Neuroevolution (Whiteson, Stone, Stanley, Miikkulainen and Kohl)
• Competitive Coevolution through Evolutionary Complexification (Stanley, Miikkulainen)
• Evolving a Roving Eye for Go (Stanley, Miikkulainen)
• Continual Coevolution through Complexification (Stanley, Miikkulainen)
• Efficient Reinforcement Learning through Evolving Neural Network Topologies (Stanley, Miikkulainen)
• Evolving Neural Networks through Augmenting Topologies (Stanley, Miikkulainen)
• Efficient Evolution of Neural Network Topologies (Stanley, Miikkulainen)
• A Taxonomy for Artificial Embryogeny (Stanley, Miikkulainen)
• Etc.
What is NEAT?
• “In a process called complexification, NEAT begins by searching in a space of simple networks, and gradually makes them more complex as the search progresses. By starting minimally, NEAT is more likely to find efficient and robust solutions than neuroevolution methods that begin with large fixed or randomized topologies; by elaborating on existing solutions, it can gradually construct even highly complex solutions. In this dissertation, NEAT is first shown faster than traditional approaches on a challenging reinforcement learning benchmark task. Second, by building on existing structure, it is shown to maintain an “arms race” even in open-ended coevolution. Third, NEAT is used to successfully discover complex behavior in three challenging domains: the game of Go, an automobile warning system, and a real-time interactive video game. Experimental results in these domains demonstrate that NEAT makes entirely new applications of machine learning possible.” (Stanley, PhD. Dissertation)
• “Unlike other systems that evolve network topologies and weights (Angeline et al. 1993; Gruau et al. 1996; Yao 1999; Zhang and Muhlenbein 1993), all the networks in the first generation in NEAT have the same small topology: All the inputs are directly connected to every output, and there are no hidden nodes. These first generation networks differ only in their initial random weights. Speciation protects new innovations, allowing diverse topologies to gradually accumulate over evolution. Thus, because NEAT protects innovation using speciation, it can start in this manner, minimally, and grow new structure over generations.” (Stanley, PhD. Dissertation)
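A minimal sketch of what such a first-generation genome could look like in code. The ConnectionGene fields, node-numbering scheme, and weight range are illustrative assumptions, not the dissertation's exact data structures:

```python
import random

class ConnectionGene:
    """One link in a NEAT genome: an input node, an output node, a weight,
    an enabled flag, and a historical innovation number."""
    def __init__(self, in_node, out_node, weight, innovation, enabled=True):
        self.in_node = in_node
        self.out_node = out_node
        self.weight = weight
        self.innovation = innovation
        self.enabled = enabled

def minimal_genome(num_inputs, num_outputs):
    """First-generation NEAT genome as described above: every input wired
    directly to every output, no hidden nodes, random initial weights."""
    genes, innovation = [], 0
    for i in range(num_inputs):                                # input ids: 0..num_inputs-1
        for o in range(num_inputs, num_inputs + num_outputs):  # output ids follow inputs
            genes.append(ConnectionGene(i, o, random.uniform(-1.0, 1.0), innovation))
            innovation += 1
    return genes
```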
Philosophy of NEAT
Mutation
Mutation in NEAT can change both connection weights and network structures.
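As a hedged sketch of those two kinds of mutation, reusing the ConnectionGene class from the earlier sketch. The mutation rate and step size are illustrative assumptions; the split-a-connection scheme for adding a node follows the NEAT paper:

```python
import random

def mutate_weights(genes, rate=0.8, step=0.5):
    """Weight mutation: perturb each connection weight with some probability.
    The rate and step size here are illustrative, not NEAT's published settings."""
    for g in genes:
        if random.random() < rate:
            g.weight += random.uniform(-step, step)

def mutate_add_node(genes, new_node_id, next_innovation):
    """Structural mutation, following the NEAT scheme: split an existing
    connection. The old link is disabled and replaced by two new links through
    a fresh hidden node; the incoming link gets weight 1.0 and the outgoing
    link inherits the old weight, so behavior is initially unchanged."""
    old = random.choice([g for g in genes if g.enabled])
    old.enabled = False
    genes.append(ConnectionGene(old.in_node, new_node_id, 1.0, next_innovation))
    genes.append(ConnectionGene(new_node_id, old.out_node, old.weight, next_innovation + 1))
    return next_innovation + 2
```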
Crossover and Speciation
Historical markings make it possible for the system to divide the population into species based on topological similarity. The number of excess and disjoint genes between a pair of genomes is a natural measure of their compatibility. The more disjoint two genomes are, the less evolutionary history they share, and thus the less compatible they are.
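The NEAT paper turns this intuition into a compatibility distance δ = c1·E/N + c2·D/N + c3·W̄, where E and D count excess and disjoint genes, W̄ is the average weight difference of matching genes, and N is the size of the larger genome. A sketch, reusing ConnectionGene from above; the coefficient values are commonly cited defaults, not the only valid choice:

```python
def compatibility(genome_a, genome_b, c1=1.0, c2=1.0, c3=0.4):
    """Compatibility distance delta = c1*E/N + c2*D/N + c3*Wbar.
    E: excess genes (innovations beyond the other genome's range),
    D: disjoint genes (non-matching innovations inside that range),
    Wbar: mean weight difference of matching genes,
    N: size of the larger genome."""
    a = {g.innovation: g for g in genome_a}
    b = {g.innovation: g for g in genome_b}
    cutoff = min(max(a), max(b))              # end of the shared innovation range
    non_matching = a.keys() ^ b.keys()
    excess = sum(1 for i in non_matching if i > cutoff)
    disjoint = len(non_matching) - excess
    matching = a.keys() & b.keys()
    w_bar = (sum(abs(a[i].weight - b[i].weight) for i in matching) / len(matching)
             if matching else 0.0)
    n = max(len(a), len(b))
    return c1 * excess / n + c2 * disjoint / n + c3 * w_bar
```

Genomes whose distance falls below a chosen threshold are grouped into the same species, which is what protects new structural innovations long enough for their weights to be optimized.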
Automatic Feature Selection in Neuroevolution (Whiteson, Stone, Stanley, Miikkulainen and Kohl)
• “This paper presents a novel method called FS-NEAT which extends the NEAT neuroevolution method to automatically determine an appropriate set of inputs for the networks it evolves. By learning the network’s inputs, topology, and weights simultaneously, FS-NEAT addresses the feature selection problem without relying on meta-learning or labeled data. Initial experiments in an autonomous car racing simulation demonstrate that FS-NEAT can learn better and faster than regular NEAT. In addition, the networks it evolves are smaller and require fewer inputs. Furthermore, FS-NEAT’s performance remains robust even as the feature selection task it faces is made increasingly difficult.”
• “NEAT’s initial networks are small but not as small as possible. The structure of the initial networks, in which each input is connected directly to each output, reflects an assumption that all the available inputs are useful and should be connected to the rest of the network. In domains where the input set has been selected by a human expert, this assumption is reasonable. However, in many domains no such expert is available and the input set may contain many redundant or irrelevant features. In such cases, the initial connections used in regular NEAT can significantly harm performance by unnecessarily increasing the size of the search space.”
• “FS-NEAT is an extension to NEAT that attempts to solve this problem by starting even more minimally: with networks having almost no links at all. As in regular NEAT, hidden nodes and links are added through mutation and only those additions that aid performance are likely to survive. Hence, FS-NEAT begins in even lower dimensional spaces than regular NEAT and feature selection occurs implicitly: only those links emerging from useful inputs will tend to survive.”
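One plausible reading of “almost no links at all” is a single randomly chosen input-to-output link per initial network; the sketch below assumes exactly that, reusing ConnectionGene from above, and should be read as an illustration rather than the paper’s precise initialization:

```python
import random

def fs_neat_initial_genome(num_inputs, num_outputs):
    """FS-NEAT-style minimal start: one link from a randomly chosen input
    to a randomly chosen output, instead of full input-output wiring.
    (The exact initial wiring is an assumption; the paper says only that
    initial networks have 'almost no links at all'.) Feature selection then
    happens implicitly: mutation proposes links from other inputs, and
    only links from useful inputs tend to survive selection."""
    i = random.randrange(num_inputs)
    o = num_inputs + random.randrange(num_outputs)
    return [ConnectionGene(i, o, random.uniform(-1.0, 1.0), innovation=0)]
```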
Lessons Learned
• “The empirical results presented in this paper demonstrate that when some of the available inputs are redundant or irrelevant, FS-NEAT can learn better networks and learn them faster than regular NEAT. In addition, the networks it learns are smaller and use fewer inputs. These results are consistent across feature sets of different sizes.”
Significance