An Active Self-Optimizing
Multiplayer Gaming
Architecture
V. Ramakrishna, Max Robinson, Kevin Eustice
and Peter Reiher
Laboratory for Advanced Systems Research
University of California, Los Angeles
Fifth International Workshop on Active
Middleware Services
June 25th, 2003
Overview
Multiplayer games suffer from various problems
• Also representative of other distributed applications
Our system
• An infrastructure for networked multiplayer games
  • Route packets using a multicast tree
  • Infrastructure capable of modifying itself on the fly and in the face of changing network conditions
• Infrastructure is a middleware built using active networks; it is transparent to the application
2
Outline
Multiplayer Gaming
Project Objectives
System Architecture
Implementation
Performance Evaluation
Future Work and Conclusion
3
Introduction
Networked multiplayer games
• Hugely popular industry
• Support an increasing number of players
• DOOM (1993) – LAN-based game, used IPX
• Quake (1996) – first TCP/IP-based game, scales to more players than DOOM
• Massively multiplayer games
• EverQuest, StarCraft, Counter Strike, Diablo
• Hundreds of game worlds all over the Internet
• Each world supports thousands of players
4
Multiplayer Game Design Issues
Advances made in graphics and animation quality, which improve by leaps and bounds
• Only essential data must be delivered
• Network could become the bottleneck
Less work done to improve the networking infrastructure
• Improving response time
• Maintaining consistent game state
5
Game Infrastructure Models
Peer-to-Peer
Client-Server
Hybrid Approaches
6
Peer-to-Peer Architecture
7
Peer-to-Peer Architecture
Pros
• Optimal response time
• Potential for interest management
Cons
• A lot of redundant communication
• Doesn't scale well
• Poor administrative control
8
Client Server Architecture
9
Client Server Architecture
Pros
• Scales reasonably well
• Companies can retain administrative control
Cons
• Server eventually becomes a bottleneck
• Static topology
• Suboptimal position of server with respect to clients
11
Mirrored Server Architecture
12
Mirrored Server Architecture
Pros
• Scales well
• Uniformity in response time
• Allows administrative control
Cons
• Redundant communication
• Static topology
• Inconsistent game states at servers
13
Objectives
Build a generic network infrastructure for multiplayer games
• Minimize redundant data communication
• Dynamic and self-adjusting in the face of failure
Demonstrate usefulness of an overlay network of active nodes
• Allows design of generic middleware for both new and legacy applications
• Can also be used for related applications like distributed simulations
14
Gaming Infrastructure
(Dynamic Multicast)
Build a multicast tree spanning all player
nodes
• Based on some metric, such as link latency
Select a node located “centrally” with
respect to all tree nodes
• Mark this as the root or server
15
An Example
16
An Example
[Figure: multicast tree rooted at a central "root or server" node; branch nodes aggregate and duplicate game packets, and player nodes deaggregate them]
17
Features of the Infrastructure
Overlay active network
• Comprises all adapter nodes, including player nodes
• Requires state information to be maintained
Dynamism of the infrastructure
• Every active node monitors network conditions periodically
• If the current tree structure is found to be sub-optimal:
  • Modify the tree
  • Relocate the root to a suitable position
18
What are the gains?
Number of packet transmissions reduced
Tree is fault tolerant
No static server
More uniform response time
• Decreased work at routers
• Sensitive to changes in network conditions
19
Implementation
Prototype game infrastructure built
• Target game: DOOM, running on Linux
  • Peer-to-peer model, uses UDP
  • 4-player limit
  • Game proceeds in lock-step
• Active Networks Execution Environment
  • ANTS (Active Node Transfer System)
    • Maintains a cache at every node for storing packets
• "IPcept" kernel module used for transparent proxying and masquerading of sockets
20
Tree Construction and Monitoring
Initial tree formation
• Statically built
• Each active node registers with the root
• Branch nodes perform aggregation and duplication of "active" DOOM packets
• Player (client) nodes perform deaggregation
• ANTS routing table at every node
21
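As an illustration of the aggregation, duplication, and deaggregation steps above, here is a minimal Python sketch that packs several per-player updates into one payload and splits them back apart. The length-prefixed framing and function names are hypothetical, not the prototype's actual ANTS capsule code.

```python
# Illustrative only: a simple length-prefixed framing for combining several
# per-player game updates into one packet (aggregation at a branch node) and
# recovering them at a player node (deaggregation). The format is assumed,
# not the prototype's actual encoding.
import struct

def aggregate(updates: list[bytes]) -> bytes:
    """Pack updates as [count][len][bytes][len][bytes]... (network byte order)."""
    payload = struct.pack("!H", len(updates))
    for update in updates:
        payload += struct.pack("!H", len(update)) + update
    return payload

def deaggregate(payload: bytes) -> list[bytes]:
    """Split an aggregated payload back into the original updates."""
    (count,) = struct.unpack_from("!H", payload, 0)
    offset, updates = 2, []
    for _ in range(count):
        (length,) = struct.unpack_from("!H", payload, offset)
        offset += 2
        updates.append(payload[offset:offset + length])
        offset += length
    return updates

# Duplication at a branch node is simply forwarding the same aggregated
# payload down every child link of the multicast tree.
moves = [b"player1:move", b"player2:fire", b"player3:idle"]
assert deaggregate(aggregate(moves)) == moves
```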
Network Monitoring
Each node “pings” neighbors periodically
to get latency information; sends info to
root
Root calculates optimal tree and optimal
position of root in that tree
Tree replaced if necessary
• Root, and other adapters, relocated
• Packets routed through the new tree
22
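A minimal, self-contained sketch of this monitoring cycle follows. Real ping measurements are replaced by simulated latencies, and the decision to re-evaluate the tree is reduced to a simple threshold check; the interval, baseline values, and threshold are assumptions, not the prototype's parameters.

```python
# Simulated monitoring loop: probe overlay neighbors, flag degraded links.
# All constants here are assumed, not taken from the prototype.
import random
import time

# Latencies (msec) to this node's overlay neighbors when the current tree was
# built (illustrative values).
baseline = {"B": 20.0, "C": 35.0, "D": 15.0}

def probe(neighbor: str) -> float:
    """Stand-in for a real ping: baseline latency plus random jitter/drift."""
    return baseline[neighbor] * random.uniform(0.8, 2.0)

def monitoring_round(threshold: float = 1.5) -> dict:
    """Probe every neighbor and flag links that degraded past the threshold."""
    report = {nbr: probe(nbr) for nbr in baseline}
    degraded = [nbr for nbr, rtt in report.items()
                if rtt > threshold * baseline[nbr]]
    if degraded:
        # In the real system the report is sent to the root, which recomputes
        # the optimal tree and may relocate itself (see the next slide).
        print("Degraded links, root should re-evaluate the tree:", degraded)
    return report

for _ in range(3):      # a few periodic rounds
    monitoring_round()
    time.sleep(0.1)     # the prototype would wait much longer between rounds
```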
Tree Computation
Optimal spanning tree: Steiner tree
problem (NP-complete)
Calculate optimal source based shortest
path tree using Dijkstra’s algorithm
Place root at center of tree
23
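A rough sketch of this tree computation, assuming the root already knows the measured latency graph: Dijkstra's algorithm builds the shortest-path tree, and the root is placed at the node whose worst-case distance to any player is smallest, as one approximation of the tree center. The toy graph, player set, and helper names are illustrative.

```python
# Sketch: shortest-path tree via Dijkstra, root placed at an approximate center.
import heapq

def dijkstra(graph, src):
    """Return (distance, parent) maps for a weighted undirected graph."""
    dist, parent, pq = {src: 0.0}, {src: None}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, parent

def choose_center(graph, players):
    """Pick the node minimizing its worst-case latency to any player."""
    best, best_ecc = None, float("inf")
    for node in graph:
        dist, _ = dijkstra(graph, node)
        ecc = max(dist[p] for p in players)
        if ecc < best_ecc:
            best, best_ecc = node, ecc
    return best

# Toy latency graph (msec); players sit at A, D and E.
graph = {
    "A": {"B": 10, "C": 25},
    "B": {"A": 10, "C": 5, "D": 30},
    "C": {"A": 25, "B": 5, "E": 12},
    "D": {"B": 30},
    "E": {"C": 12},
}
players = ["A", "D", "E"]
root = choose_center(graph, players)
_, parent = dijkstra(graph, root)        # shortest-path tree rooted there
print("root:", root, "tree edges:", {v: p for v, p in parent.items() if p})
```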
Performance Evaluation – Overhead due to Middleware
Overhead introduced by the middleware layer
• Two active players, single link
  • Average = 4.1 msec
  • 93% of packets experience lower than average latency
  • Median = 1.75 msec
• Three nodes in a chain; end nodes are players, middle node is root
  • Median at players = 1.85 msec
  • Median at root = 1.5 msec
• Periodic spikes in overhead due to our network monitoring
24
Performance Evaluation – Overhead of Topology Change
[Figure: multicast tree before and after relocation, marking the old root and the new root]
Typical overhead of root relocation: 100-200 msec
Maximum overhead recorded: ~700 msec
25
Performance Evaluation
Simulation of the active gaming architecture;
taking measurements for large graphs
Network traffic: total number of packets over
links during a game step
Tree quality: average distance between
player nodes
26
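Below is a hedged sketch of how the network-traffic metric might be counted for the three architectures. The counting models (one packet per hop for each unicast; each multicast tree link crossed once upward for aggregation and once downward for duplication) are assumptions on my part and may not match the paper's exact accounting.

```python
# Assumed counting models for "packets crossing links in one game step".
from collections import deque

def hops(graph, src):
    """BFS hop counts from src over an unweighted topology graph."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def p2p_traffic(graph, players):
    """Every player unicasts its update directly to every other player."""
    return sum(hops(graph, a)[b] for a in players for b in players if a != b)

def client_server_traffic(graph, players, server):
    """Each player sends to the server, which replies to each player."""
    dist = hops(graph, server)
    return 2 * sum(dist[p] for p in players)

def multicast_traffic(tree_edges):
    """Aggregation up plus duplication down: each tree link is crossed twice."""
    return 2 * len(tree_edges)

# Toy path topology A-B-C-D-E with players at A, D, E and the server at B.
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}
players, tree_edges = ["A", "D", "E"], [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]
print(p2p_traffic(graph, players),                  # 16
      client_server_traffic(graph, players, "B"),   # 12
      multicast_traffic(tree_edges))                # 8
```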
Simulation
Used Georgia Tech topology generator to generate random graphs (250 nodes):
• 2 Transit-Stub graphs (Internet-like topology)
• 1 Random graph using Waxman model
• 1 Three-level Hierarchical graph
All nodes considered active
Multicast group size varying from 2 to 30, selected randomly; readings from 100 instances of each group size
27
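A rough stand-in for this simulation setup is sketched below, using networkx's Waxman generator in place of the GT-ITM tool (an assumption; the Waxman parameters and random seed are arbitrary). It samples random player groups and averages the player-to-player distance metric over 100 instances per group size.

```python
# Approximate simulation driver: random Waxman topology, random player groups,
# average pairwise player distance per group size. Parameters are assumed.
import random
import statistics
import networkx as nx

G = nx.waxman_graph(250, beta=0.4, alpha=0.1, seed=1)
# Work within the largest connected component so all distances are defined.
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
lengths = dict(nx.all_pairs_shortest_path_length(G))

def avg_player_distance(players):
    """Average hop distance over all unordered pairs of players."""
    pairs = [(a, b) for i, a in enumerate(players) for b in players[i + 1:]]
    return sum(lengths[a][b] for a, b in pairs) / len(pairs)

random.seed(1)
for size in (2, 10, 20, 30):
    samples = [avg_player_distance(random.sample(list(G.nodes), size))
               for _ in range(100)]          # 100 instances per group size
    print(size, round(statistics.mean(samples), 2))
```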
Network Traffic Comparison
[Figure: total number of packets vs. number of players (5-30), with 99% confidence intervals, comparing Peer-to-Peer, Client-Server, and Multicast]
28
Network Traffic Comparison (Client-Server vs Dynamic Multicast)
[Figure: total number of packets vs. number of players (5-30), with 99% confidence intervals, comparing Client-Server and Multicast]
29
Average Player-to-Player Distance
[Figure: average distance between player nodes vs. number of players (2-30), with 99% confidence intervals, comparing Peer-to-Peer and Multicast]
30
Related Work
Panda and Conductor – LASR, UCLA
• Application-transparent adaptation
Gathercast – [He2002]
• Packet aggregation paradigm
Multicasting using active networks, e.g. [Lehman98]
MiMaze gaming architecture – INRIA, France
• Uses IP multicasting
A distributed multiplayer game server system – [Cronin2001], University of Michigan
• Mirrored-server architecture
• Reliable multicasting; clients connect to nearest server
• Requires re-modeling of the game
31
Future Work
Make the system more fault tolerant;
recover from failure of tree root
Replicate server functionality
• Peer to peer communication between servers
• Reduces chance of bottleneck
Build individual game clusters based on
proximity
32
Conclusion
Demonstrated that adaptation of the packet distribution infrastructure can improve game performance
Proved the feasibility of using active networks to adapt game architectures
• Both new and legacy games
• By extension, this can be used for other classes of distributed applications
• Performance impact will be even greater for non-real-time applications
Proved that dynamically modifiable trees and relocatable servers are practical
• On-the-fly modifications have very small impact on performance
33
Thank You
Further questions?
Email:
[email protected]
Web Page:
http://www.cs.ucla.edu/~vrama
34