Artificial Life/Agents

Creatures: Artificial Life Autonomous Software Agents for Home Entertainment
Stephen Grand, 1997
Learning Human-like Opponent Behaviour for Interactive Computer Games
Christian Bauckhage, Christian Thurau, and Gerhard Sagerer, 2003
Evolving Neural Network Agents in the NERO Video Game
Kenneth O. Stanley, Bobby D. Bryant, and Risto Miikkulainen, 2005
Creatures
• State-of-the-art artificial-life game from 1997
• Creatures with neural-network brains and an evolving genome
• Learn by punishment/reward reinforcement
• Can learn a rudimentary ‘verb-object’ language
• Senses of sight, sound, and touch
• Complex biochemistry (metabolism, immune system, genetically encoded morphology)
Creatures’ Brains
• Hebbian learning (see the sketch below)
• ~1000 neurons, ~5000 synapses
• Organised into ‘lobes’
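
Hebb's rule is simple to state: a synapse strengthens when its pre- and post-synaptic neurons are active together. A minimal Python sketch; the lobe sizes, activation function, and learning rate are illustrative assumptions, not Creatures' actual parameters.

import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 8, 4              # hypothetical lobe sizes
W = rng.normal(0, 0.1, (n_post, n_pre))
eta = 0.05                        # assumed learning rate

def hebbian_step(W, x):
    """One Hebbian update: dW = eta * outer(y, x)."""
    y = np.tanh(W @ x)            # post-synaptic activity
    W += eta * np.outer(y, x)     # neurons that fire together wire together
    return W, y

x = rng.random(n_pre)             # pre-synaptic activity, e.g. a sense lobe
W, y = hebbian_step(W, x)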
Characteristics
• Designed for efficiency (runs on 1997 commodity hardware)
• Limited number of neurons
• The brain model is also limited, which restricts the functions it can support
Learning Human-like Opponent Behaviour
• Neural-network control system for a Quake II bot
• Offline, supervised learning
• Feed-forward multilayer perceptron trained with back-propagation (see the sketch after this list)
• One network for moving, one for aiming
• Trained incrementally: first one path, then multiple paths, then moving and aiming together
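
The training setup is standard supervised learning: record (state, action) pairs from human play, then fit a feed-forward network to them by back-propagation. A minimal sketch; the feature and action dimensions, learning rate, and dummy data are illustrative assumptions, not the paper's exact configuration.

import numpy as np

rng = np.random.default_rng(1)

n_in, n_hidden, n_out = 6, 12, 2     # e.g. position features -> (turn, move)
W1 = rng.normal(0, 0.5, (n_hidden, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hidden))
eta = 0.01

def forward(x):
    h = np.tanh(W1 @ x)              # hidden layer
    return h, W2 @ h                 # linear output layer

# Dummy data standing in for recorded human play
X = rng.random((500, n_in))
Y = rng.random((500, n_out))

for epoch in range(100):
    for x, t in zip(X, Y):
        h, y = forward(x)
        err = y - t                        # output error (squared loss)
        dW2 = np.outer(err, h)
        dh = (W2.T @ err) * (1 - h**2)     # back-propagate through tanh
        dW1 = np.outer(dh, x)
        W2 -= eta * dW2
        W1 -= eta * dW1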
Advantages
• Potentially cheaper and faster than hand-scripting bots
• Generalises to novel situations
• More efficient than online-learning bots
• Good introduction to learning agents
Problems
• The paper is horribly structured and hard to read
• Assumption: only the agent's current state and environmental influences matter, so the bot has no memory of past events
• The experiments didn't work very well
• Bots are still static once trained and can't learn opponent tactics
• Training data may be difficult to obtain
Evolving Neural Network Agents in NERO
• Online reinforcement learning
• Agent fitness is increased by both learning and evolution
• Player can train teams of bots to compete against each other in increasingly complex training scenarios
• Won the Best Paper Award at the IEEE Symposium on Computational Intelligence and Games
The Network
Learning Method
• rtNEAT (real-time NeuroEvolution of Augmenting Topologies): a technique for evolving increasingly complex neural networks
• Benefits over traditional RL:
– Diversity is increased and maintained through speciation
– Can keep a memory of past events
• Player provides a customised fitness function
• NERO removes the worst agents and breeds the best ones (sketched at the end of this section)
• NERO is currently quite simple
• The paper presents no quantitative results, but the results seem promising
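
The replacement loop is the core mechanic: while the game runs, the worst-performing agent is periodically removed and replaced by an offspring of two high-fitness parents, scored by the player's fitness function. A minimal sketch; real rtNEAT also evolves network topology and protects diversity through speciation, whereas here each genome is just a fixed-length weight vector, an illustrative simplification.

import random

random.seed(0)
GENOME_LEN = 20
POP_SIZE = 30

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def player_fitness(genome):
    # Stands in for the player's customised reward sliders,
    # e.g. 'approach enemy' minus 'take damage'.
    return sum(genome)                     # placeholder objective

def crossover_mutate(a, b, rate=0.1, scale=0.2):
    child = [random.choice(pair) for pair in zip(a, b)]
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in child]

population = [random_genome() for _ in range(POP_SIZE)]

for tick in range(1000):                   # runs continuously during play
    population.sort(key=player_fitness)    # ascending: worst first
    parents = random.sample(population[-10:], 2)   # pick among the best
    population[0] = crossover_mutate(*parents)     # replace the worst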