CS140-FSMinGames


Artificial Intelligence and Computer Games
Dr. John Laird, University of Michigan
Modifications by Roger Webster, Ph.D.
Different Practices of AI
• The study of “rational” behavior and processing.
• The study of human behavior and cognitive processing.
• The study of other approaches: neural, evolutionary.
• Computational models of component processes:
knowledge bases, inference engines, search techniques,
machine learning techniques.
• Understanding the connection between domains &
techniques.
• Computational constraints vs. desired behavior.
• Application of techniques to real world problems.
Roles of AI in Games
• Opponents
• Teammates
• Strategic Opponents
• Support Characters
• Autonomous Characters
• Commentators
• Camera Control
• Plot and Story Guides/Directors
Goals of AI action game opponent
• Provide a challenging opponent
• Not always as challenging as a human -- Quake monsters.
• Not too challenging.
• Should not be superhuman in accuracy, precision, sensing, ...
• Should not be too predictable.
• Through randomness.
• Through multiple, fine-grained responses.
• Through adaptation and learning.
AI Agent in a Game
• Define an API for agents: sensing and acting (a minimal sketch follows the diagram below).
• Each time through the control loop, update each agent.
• Encapsulate all agent data structures, so agents can’t trash each other or the game.
• Share global data structures on maps, etc.
[Diagram: the Game exchanging sensing and acting calls with Agent 1, Agent 2, and the Player]
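A minimal sketch of what such an agent API could look like in C++. The Percept/Action types, method names, and the Game loop below are illustrative assumptions, not the slides' actual interface:

#include <memory>
#include <vector>

// Illustrative sensing and acting types; a real game would carry much more.
struct Percept { bool enemySeen = false; bool soundHeard = false; };
struct Action  { enum Kind { Wander, Chase, Attack } kind = Wander; };

// Each agent sees the world only through Sense() and acts only through the
// Action it returns, so agents cannot trash each other or the game state.
class Agent {
public:
    virtual ~Agent() = default;
    virtual void Sense(const Percept& p) = 0;   // game pushes sensed features in
    virtual Action Think() = 0;                 // agent decides using only its own data
};

class Game {
public:
    void Update() {
        for (auto& a : agents) {                // each time through the control loop...
            a->Sense(BuildPerceptFor(*a));      // ...update each agent's senses
            Apply(a->Think());                  // ...and apply its chosen action
        }
    }
private:
    Percept BuildPerceptFor(const Agent&) { return {}; }  // stub: read shared map data
    void    Apply(const Action&) {}                       // stub: change the world
    std::vector<std::unique_ptr<Agent>> agents;
};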
Structure of an Intelligent Agent
• Sensing: perceive features of the environment.
• Thinking: decide what action to take to achieve its
goals, given the current situation and its knowledge.
• Acting: doing things in the world.
Thinking has to make up for limitations in sensing and
acting.
The more accurate the models of sensing and acting,
the more realistic the behavior.
Sensing Limitations & Complexities
• Limited sensor distance
• Limited field of view:
• Must point sensor at location and keep it on.
• Obstacles
• Complex room structures
• Detecting and computing paths to doors
• Noise in sensors
• Different sensors give different information and have
different limitations.
• Sound: omni-directional, gives direction, distances, speech, ...
• Vision: limited field of view, 2 1/2D, color, texture, motion, ...
• Smell: omni-directional, chemical makeup.
Simple Behavior
• Random motion
• Just roll the dice to pick when and which direction to move
• Simple pattern
• Follow invisible tracks: Galaxians
• Tracking (sketched in code below)
• Pure Pursuit: Move toward the agent’s current position
• Heat-seeking missile
• Lead Pursuit: Move to a position in front of the agent
• Collision: Move toward where the agent will be
• Weave: Every N seconds move X degrees off the opponent’s bearing
• Spiral: Head 90-M degrees off the opponent’s bearing
• Evasive – opposite of any tracking
[Diagrams: example motion traces for Random, Simple Patterns, Pure Pursuit, Lead Pursuit, and Collision]
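A hedged sketch of the pursuit variants listed above; the Vec2 type and helper functions are assumptions added for illustration:

#include <cmath>

struct Vec2 { float x, y; };

Vec2 sub(Vec2 a, Vec2 b)    { return { a.x - b.x, a.y - b.y }; }
Vec2 add(Vec2 a, Vec2 b)    { return { a.x + b.x, a.y + b.y }; }
Vec2 scale(Vec2 a, float s) { return { a.x * s, a.y * s }; }
Vec2 normalize(Vec2 a) {
    float len = std::sqrt(a.x * a.x + a.y * a.y);
    return len > 0.0f ? scale(a, 1.0f / len) : a;
}

// Pure pursuit: head straight at the target's current position.
Vec2 purePursuit(Vec2 self, Vec2 target) {
    return normalize(sub(target, self));
}

// Lead pursuit: head toward a point a fixed distance ahead of the target.
Vec2 leadPursuit(Vec2 self, Vec2 target, Vec2 targetVel, float lead) {
    Vec2 ahead = add(target, scale(normalize(targetVel), lead));
    return normalize(sub(ahead, self));
}

// Collision course: head toward where the target will be after time t.
Vec2 collisionCourse(Vec2 self, Vec2 target, Vec2 targetVel, float t) {
    Vec2 future = add(target, scale(targetVel, t));
    return normalize(sub(future, self));
}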
Moving in the World: Path Following
• Just try moving toward the goal.
[Diagrams: a straight Source-to-Goal path; the problem when an obstacle blocks the direct line; avoidance regions created around obstacles; an indoor node map]
Path Planning
• Find a path from one point to another using an internal
model
• Satisficing: Try to find a good way to achieve a goal
• Optimizing: Try to find the best way to achieve a goal
Path Finding
[Figure: example search graph of rooms with numeric edge costs between them]
Analysis
• Find the shortest path through a maze of rooms.
• Approach is A*:
• At each step, calculate the cost of each expanded path.
• Also calculate an estimate of the remaining cost of the path.
• Extend the path with the lowest cost + estimate.
• Cost can be more than just distance:
• Climbing and swimming are harder (2x)
• Monster-filled rooms are really bad (5x)
• Can add a cost to turning – creates smoother paths
• But cost must be a numeric calculation.
• Must guarantee that the estimate is not an overestimate (it must be admissible).
• A* will then always find the shortest path (a sketch follows below).
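A compact A* sketch in C++ over an abstract node graph. The graph representation and heuristic array are assumptions; edge costs can fold in the 2x/5x terrain penalties mentioned above:

#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Edge { int to; float cost; };

// Returns the cost of the cheapest path from start to goal, or -1 if none exists.
// heuristic[n] must never overestimate the true remaining cost (admissible),
// which is what guarantees A* returns a shortest path.
float aStar(const std::vector<std::vector<Edge>>& graph,
            const std::vector<float>& heuristic, int start, int goal) {
    const float INF = std::numeric_limits<float>::infinity();
    std::vector<float> g(graph.size(), INF);       // best known cost to each node
    using Entry = std::pair<float, int>;           // (cost so far + estimate, node)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;

    g[start] = 0.0f;
    open.push({ heuristic[start], start });
    while (!open.empty()) {
        auto [f, n] = open.top();
        open.pop();
        if (n == goal) return g[n];                // cheapest path to goal found
        if (f > g[n] + heuristic[n]) continue;     // stale queue entry, skip it
        for (const Edge& e : graph[n]) {           // extend the lowest cost + estimate path
            float newCost = g[n] + e.cost;         // cost can be more than distance
            if (newCost < g[e.to]) {
                g[e.to] = newCost;
                open.push({ newCost + heuristic[e.to], e.to });
            }
        }
    }
    return -1.0f;                                  // goal unreachable
}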
Goals
• Exposure to AI on tactical decision making
• Not state-of-the-art AI, but relevant to Computer Games
• Concepts not code
• Analysis of strengths and weaknesses
• Pointers to more detailed references
• Enough to be “dangerous”
• What’s missing?
• Sensing models
• Path planning and spatial reasoning
• Scripting languages
• Teamwork, flocking (see boids by Craig Reynolds, www.red3d.com)
• Personality, emotion, …
• How all of these pieces fit together
Planning
• Variety of AI decision-making techniques:
1. Finite-state Machines
2. Decision Trees
3. Neural Networks
4. Genetic Algorithms
5. Rule-based Systems
6. Fuzzy Logic
7. Planning Systems
• Context of a simple game scenario
• Implementation issues
• Evaluate their strengths and weaknesses
Types of Behavior to Capture
• Wander randomly if no enemy is seen or heard.
• When an enemy is seen, attack.
• When an enemy is heard, chase it.
• When killed, respawn.
• When health is low and an enemy is seen, retreat.
Execution Flow of an AI Engine
Sense → Think → Act, repeated each time through the control loop
• Sensing can be extremely expensive and can be modulated by “think”.
• Thinking can use finite-state machines, decision trees, neural nets, fuzzy logic, rule-based systems, or planning systems.
Finite State Machines
AI Tactical Decision Making Techniques
John Laird and Michael van Lent, University of Michigan
Modifications by Roger Webster, Ph.D.
Example FSM
Events: E = Enemy Seen, S = Sound Heard, D = Die
[FSM diagram: states Spawn (D), Wander (-E,-S,-D), Chase (S,-E,-D), and Attack (E,-D); each arc is labeled with the event, or negated event, that triggers the transition]
An action (callback) is performed when a transition occurs, invoking the associated code.
Example FSM
Events: E = Enemy Seen, S = Sound Heard, D = Die
[Same FSM diagram as above]
Problem: there is no transition from Attack to Chase.
Example FSM - Better
Events: E = Enemy Seen, S = Sound Heard, D = Die
[FSM diagram: the Attack state is split into Attack (E,-D,-S) and Attack-S (E,-D,S), alongside Spawn (D), Wander (-E,-S,-D), and Chase (S,-E,-D), so a sound heard while attacking can later lead to a chase]
Example FSM with Retreat
Events: E = Enemy Seen, S = Sound Heard, D = Die, L = Low Health
[FSM diagram: states Spawn (D; -E,-S,-L), Wander (-E,-D,-S,-L), Wander-L (-E,-D,-S,L), Chase (-E,-D,S,-L), Attack-E (E,-D,-S,-L), Attack-ES (E,-D,S,-L), Retreat-E (E,-D,-S,L), Retreat-ES (E,-D,S,L), and Retreat-S (-E,-D,S,L)]
Each feature with N values can require N times as many states.
Extended FSM: Save Values
Events: E = Enemy Seen, S = Sound Heard, D = Die, L = Low Health
[FSM diagram: states Spawn (D; -E,-S,-L), Wander (-E,-D,-S), Chase (S,-L), Attack (E,-L), and Retreat (L)]
Maintain memory of the current values of all events – a transition can test old event values as well as the event that just occurred (sketched below).
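A hedged sketch of the "save values" idea: each incoming event updates a remembered snapshot, and transitions are tested against that snapshot rather than only the event that just fired. The handler below is one illustrative reading of the diagram, not its exact transition set:

// Remembered values of all events, updated as events arrive.
struct EventMemory {
    bool enemySeen = false, soundHeard = false, lowHealth = false, dead = false;
};

enum class State { Spawn, Wander, Chase, Attack, Retreat };

// A sound-heard event can now be combined with the stored low-health and
// enemy-seen flags, which a plain FSM could only express by duplicating states.
State onSoundHeard(State s, EventMemory& mem) {
    mem.soundHeard = true;
    if (s == State::Wander && !mem.enemySeen && !mem.lowHealth)
        return State::Chase;
    return s;
}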
Augmented FSM: Action on Transition
Events: E = Enemy Seen, S = Sound Heard, D = Die, L = Low Health
[Same FSM diagram as above, with an Action box attached to one of the transitions]
Execute an action during the transition.
Hierarchical FSM
• Expand a state into its own FSM (sketched below).
[Diagram: top-level states Spawn, Wander, Attack, Chase, and Die with E/-E and S/-S transitions; the Wander state is expanded into its own FSM with sub-states Start, Turn Right, Go-through Door, and Pick-up Powerup]
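One hedged way to realize "expand a state into its own FSM": a state may own a child machine, and updating the outer machine descends into whichever sub-state is active. The structure below is illustrative:

#include <memory>
#include <string>
#include <vector>

struct FSM;

// A state is either a leaf behavior or a composite that owns a nested FSM.
struct StateNode {
    std::string name;
    std::unique_ptr<FSM> child;        // non-null for composite states such as Wander
};

struct FSM {
    std::vector<StateNode> states;
    int current = 0;

    void Update() {
        StateNode& s = states[current];
        if (s.child)
            s.child->Update();         // descend into the expanded state's own FSM
        // ...otherwise run this leaf state's action and check its transitions
    }
};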
Non-Deterministic Hierarchical FSM (Markov Model)
[Diagram: the Attack state is expanded into sub-states Start, Approach, Aim & Slide Right & Shoot, Aim & Slide Left & Shoot, and Aim & Jump & Shoot; arcs carry probabilities (.3, .3, .4) rather than events, and “No enemy” returns to Wander]
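A small sketch of picking the next attack sub-state with the probabilities shown in the diagram; the use of <random> and the enum names are assumptions:

#include <random>

enum class AttackMove { SlideRightShoot, SlideLeftShoot, JumpShoot };

// Weighted random choice (0.3 / 0.3 / 0.4) makes the tactic hard to predict.
AttackMove chooseAttackMove(std::mt19937& rng) {
    std::discrete_distribution<int> dist({ 0.3, 0.3, 0.4 });
    switch (dist(rng)) {
    case 0:  return AttackMove::SlideRightShoot;
    case 1:  return AttackMove::SlideLeftShoot;
    default: return AttackMove::JumpShoot;
    }
}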
Simple Implementation
• Compile into a 2D array indexed by state-name and event (a sketch follows):
• state-name := array[state-name][event]
• Use state-name to call the execution logic.
• Add buffers to queue up events in case simultaneous events arrive.
• Hierarchical:
• Create an array for every FSM.
• Keep a stack of states.
• Classify events according to the stack.
• Update the state that is sensitive to the current event.
[Diagram: transition table with states down one axis and events along the other]
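A hedged sketch of the table-driven form: the next state is a single array lookup indexed by the current state and the event. The particular states, events, and table entries below are illustrative:

enum State : int { Spawn, Wander, Attack, Chase, NUM_STATES };
enum Event : int { EnemySeen, EnemyLost, SoundHeard, SoundStopped, Die, NUM_EVENTS };

// nextState[state][event]; an entry equal to the current state means "stay put".
static const State nextState[NUM_STATES][NUM_EVENTS] = {
    /* Spawn  */ { Attack, Wander, Chase,  Spawn,  Spawn },
    /* Wander */ { Attack, Wander, Chase,  Wander, Spawn },
    /* Attack */ { Attack, Wander, Attack, Attack, Spawn },
    /* Chase  */ { Attack, Chase,  Chase,  Wander, Spawn },
};

// One array access per event; the returned state name selects the execution logic.
State handleEvent(State current, Event e) {
    return nextState[current][e];
}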
Extended & Augmented
• Use a C++ class for each state.
• Methods implement the state’s actions and transitions (sketched below).
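A minimal sketch of the class-based variant: each state is an object whose methods hold the per-state action and the transition test. The class and method names are illustrative:

struct Events { bool enemySeen = false, soundHeard = false, died = false; };

class State {
public:
    virtual ~State() = default;
    virtual void Act() = 0;                           // behavior executed while in this state
    virtual State* Transition(const Events& ev) = 0;  // returns the next state (may be this)
};

State& wanderState();   // accessors let the states refer to each other
State& attackState();

class WanderState : public State {
public:
    void Act() override { /* roll the dice, pick a direction, move */ }
    State* Transition(const Events& ev) override {
        return ev.enemySeen ? &attackState() : this;
    }
};

class AttackState : public State {
public:
    void Act() override { /* aim at the enemy and shoot */ }
    State* Transition(const Events& ev) override {
        return ev.enemySeen ? this : &wanderState();
    }
};

State& wanderState() { static WanderState s; return s; }
State& attackState() { static AttackState s; return s; }

An agent then holds a State* and, each tick, calls current->Act() followed by current = current->Transition(events).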
FSM Evaluation
• Advantages:
• Very fast – one array access
• Can be compiled into compact data structure
• Dynamic memory: current state
• Static memory: state diagram – array implementation
• Can create tools so non-programmers can build behaviors
• Non-deterministic FSM can make behavior unpredictable
• Disadvantages:
• Number of states can grow very fast
• Exponentially with the number of events: s = 2^e
• Number of arcs can grow even faster: a = s^2
• Hard to encode complex memories
• Propositional representation
• Difficult to encode “pick up the better weapon” or “attack the closest enemy”
General References
• AI
• DeLoura, Game Programming Gems, Charles River Media,
2000, Section 3.0 & 3.1, pp. 221-248.
• Russell and Norvig: Artificial Intelligence: A Modern
Approach, Prentice Hall, 1995.
• Nilsson, Artificial Intelligence: A New Synthesis, Morgan
Kaufmann, 1998.
• AI and Computer Games
• LaMothe: Tricks of the Windows Game Programming Gurus,
SAMS, 1999, Chapter 12, pp. 713-796.
• DeLoura, Game Programming Gems, Charles River Media,
2000, Section 3, pp. 219-350.
• www.gameai.com
• www.gamedev.net/