Comparative Reproduction Schemes for
Evolving Gathering Collectives
A.E. Eiben, G.S. Nitschke, M.C. Schut
Computational Intelligence Group
Department of Computer Science, Faculty of Sciences
De Boelelaan 1081a, 1081 HV Amsterdam
The Netherlands
[email protected], [email protected], [email protected]
Introduction
• Research theme: Emergent Collective Intelligence
• Investigating artificial evolution for adaptability at the local
level, and desired emergent behaviors at the global level
• Comparing agent reproduction schemes under two
types of evolvable controllers: heuristic and neural-net
• Results from a collective gathering simulation
Background
• Emergent behavior in collective
gathering:
• Nolfi et al. (2003): Cooperative
transport by s-bots
• Drogoul et al. (1995): Emergent
functionality, division of labor in a
simulated ant colony gathering task
• Perez-Uribe et al. (2003): Emergent
cooperative behavior for gathering by
artificial ants, based on colony type
(heterogeneous vs. homogeneous)
Experimental Setup
• JAWAS simulator: simulates
collective gathering with potentially
thousands of agents (a swarm scape)
• Initially 1000 agents, 3
resource types (different
values and cooperation
needed to gather)
• Agent goal: gather the highest
possible value of resources
during its lifetime
• Cooperation is needed for
‘good’ solutions, i.e. gathering
the highest value of resources
Task domain: Minesweeping

Mine type   Capacity Threshold   Extraction Cost   Transport Cost   Fitness Reward
A           300                  8                 0.04             20
B           150                  4                 0.02             10
C           75                   2                 0.01             5
• Fitness rewards given to agents for successful gathering
(delivery to the home area).
• Collective evaluation: total value gathered, measured at the end of
the simulation and averaged over 100 runs
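The cost/reward trade-off per mine type can be sketched as below. This is a minimal illustration of the table only: the net-value formula (reward minus extraction cost minus distance-scaled transport cost) and the function name are assumptions, not the paper's exact fitness function.

```python
# Hypothetical net-value calculation per mine type; the slides list the
# costs and rewards but not how they combine, so this formula is assumed.
MINE_TYPES = {
    # type: (capacity threshold, extraction cost, transport cost, fitness reward)
    "A": (300, 8, 0.04, 20),
    "B": (150, 4, 0.02, 10),
    "C": (75, 2, 0.01, 5),
}

def net_value(mine_type: str, distance_home: float) -> float:
    """Assumed net payoff: reward - extraction cost - transport cost * distance."""
    _, extraction, transport, reward = MINE_TYPES[mine_type]
    return reward - extraction - transport * distance_home

# Under this assumed formula, type A mines pay off most despite
# higher costs: net_value("A", 50) = 20 - 8 - 2.0 = 10.0
```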
Comparative Agent Reproduction Schemes
Temporal Dimension
• SREL : Single Reproduction at End of Lifetime
• MRDL: Multiple Reproduction During an Agent's
Lifetime
Spatial Dimension
• Locally restricted: Reproduction only with agents in
adjacent cells
• Panmictic: Reproduction with agents anywhere in the
environment
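The spatial dimension of the schemes can be sketched as follows. The grid model, function name, and data layout are assumptions for illustration; the slides only state that locally restricted reproduction uses adjacent cells, while panmictic reproduction draws mates from anywhere in the environment.

```python
# Sketch of mate selection under the two spatial schemes (assumed grid model).
def candidate_mates(agent_pos, agents_by_pos, panmictic=False):
    """agents_by_pos: dict mapping (x, y) cell -> agent id (hypothetical layout).

    Panmictic: every other agent in the environment is a candidate.
    Locally restricted: only agents in the 8 adjacent cells are candidates.
    """
    if panmictic:
        return [a for pos, a in agents_by_pos.items() if pos != agent_pos]
    x, y = agent_pos
    neighbours = [(x + dx, y + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]
    return [agents_by_pos[p] for p in neighbours if p in agents_by_pos]
```

The temporal dimension is orthogonal: SREL triggers this selection once, at the end of an agent's lifetime, while MRDL may trigger it repeatedly during the lifetime.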
Agent Controllers: Heuristic Controller
Agent Controllers: Neural Network Controller
Evolutionary setup
• For both heuristic and NN:
– Gathering and transport
parameter values evolved
• Heuristic controller static
throughout evolutionary
process
• Neural network controller
dynamic over evolutionary
process – i.e. NN weights
evolved
• NN controllers evolved under a
neuro-evolution process
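Two standard neuro-evolution operators consistent with this setup can be sketched as below. The operator choices (one-point crossover, per-weight Gaussian mutation) and all parameter values are illustrative assumptions; the slides say only that NN weights are evolved.

```python
import random

def recombine(parent_a, parent_b, rng):
    """One-point crossover of two flat weight vectors (illustrative operator)."""
    cut = rng.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

def mutate(weights, sigma, rate, rng):
    """Perturb each weight with probability `rate` by Gaussian noise N(0, sigma)."""
    return [w + rng.gauss(0.0, sigma) if rng.random() < rate else w
            for w in weights]

# A single reproduction step under this sketch: recombine two parents'
# weight vectors, then mutate the child before it enters the population.
rng = random.Random(42)
child = mutate(recombine([0.1] * 16, [0.2] * 16, rng), sigma=0.1, rate=0.05, rng=rng)
```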
Experimental Results
Experimental Results: Evolution Control
Conclusions
• SREL/Local was the most effective scheme under the given
behavior evaluation, validated under both controller approaches
• Fitness is saved by agents until the end of their lifetime, so
successful agents pass on fitness; this is especially effective under
neuro-evolution, where recombined, mutated weights are passed on
• This is not the case under MRDL, where agents may not be
given sufficient time to adapt to their task before
reproducing