Transcript Document
Towards Virtual Networks for Virtual Machine Grid Computing
Ananth I. Sundararaj
Peter A. Dinda
Prescience Lab
Department of Computer Science
Northwestern University
http://virtuoso.cs.northwestern.edu
Outline
• Virtual machine grid computing
• Virtuoso system
• Networking challenges in Virtuoso
• Enter VNET
• VNET
• Adaptive virtual network
• Related Work
• Conclusions
• Current Status
Aim: Deliver arbitrary amounts of computational power to perform distributed and parallel computations.
[Figure: two paradigms. Traditional paradigm: grid computing via resource multiplexing using OS-level mechanisms. New paradigm: grid computing using virtual machines. Problem 1: complexity from the resource user's perspective. Problem 2: complexity from the resource owner's perspective. Solution: virtual machines. What are they? How to leverage them?]
Virtual Machines
Virtual machine monitors (VMMs):
• Raw machine is the abstraction
• VM represented by a single image
• VMware GSX Server
Virtual machine grid computing
• Approach: Lower level of abstraction
– Raw machines, not processes, jobs, RPC calls
R. Figueiredo, P. Dinda, J. Fortes, A Case For Grid Computing on Virtual Machines, ICDCS 2003
• Mechanism: Virtual machine monitors
• Our Focus: Middleware support to hide complexity
– Ordering, instantiation, migration of machines
– Virtual networking
– Remote devices
– Connectivity to remote files, machines
– Information services
– Monitoring and prediction
– Resource control
The Simplified Virtuoso Model
[Figure: the user orders a raw machine with specific hardware and performance; a basic software installation is available. The order produces a VM, and virtual networking ties the machine back to the user's home network (the user's LAN). Virtuoso continuously monitors and adapts.]
User's View in Virtuoso Model
[Figure: to the user, the VM appears to be attached directly to the user's LAN.]
Why VNET? A Scenario
[Figure: the user sits on a friendly LAN; the Virtual Machine the user has just bought sits on a foreign, hostile LAN, reachable only across an IP network.]
Why VNET? A Scenario
[Figure: VM traffic going out on the foreign, hostile LAN is blocked; a Host and a Proxy sit between the user's friendly LAN and the VM across the IP network.]
A machine is suddenly plugged into a foreign network. What happens?
• Does it get an IP address?
• Is it a routeable address?
• Does the firewall let its traffic through? To any port?
VNET: a bridge with long wires.
A Layer 2 Virtual Network for the User's Virtual Machines
• Why Layer 2?
– Protocol agnostic
– Mobility
– Simple to understand
– Ubiquity of Ethernet on end-systems
• What about scaling?
– Number of VMs limited (~1024 per user)
– One VNET per user
– Hierarchical routing possible because MAC addresses can be assigned hierarchically
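The hierarchical MAC assignment mentioned above can be sketched as follows. This is a minimal illustration, not VNET's actual scheme: the split of the address into a user-id prefix and a VM-id suffix, and the field widths, are assumptions made here for clarity.

```python
def vnet_mac(user_id: int, vm_id: int) -> str:
    """Build a locally administered MAC address whose bytes encode a
    user id and a VM id, so routes can aggregate on the user prefix.
    The 24/16-bit field split is an illustrative assumption."""
    assert 0 <= user_id < 2**24 and 0 <= vm_id < 2**16
    # 0x02 sets the locally-administered bit and clears the multicast bit.
    octets = [0x02,
              (user_id >> 16) & 0xFF, (user_id >> 8) & 0xFF, user_id & 0xFF,
              (vm_id >> 8) & 0xFF, vm_id & 0xFF]
    return ":".join(f"{o:02x}" for o in octets)

def user_prefix(mac: str) -> str:
    """A hierarchical route needs only the leading user octets."""
    return ":".join(mac.split(":")[:4])

print(vnet_mac(7, 1))  # 02:00:00:07:00:01
# All of one user's VMs share a routing prefix:
print(user_prefix(vnet_mac(7, 2)) == user_prefix(vnet_mac(7, 99)))  # True
```

Because every VM of a given user shares the same prefix, a router in the overlay can hold one rule per user instead of one per VM, which is what makes the ~1024-VMs-per-user scale manageable.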
VNET operation
[Figure: traffic outbound from the user's LAN. An Ethernet packet on the client's "eth0" is captured by a promiscuous packet filter at the VNET server on the Proxy, tunneled over a TCP/SSL connection across the IP network to the VNET server on the Host, and injected directly into the VM's interface ("eth0" on the "host only" network, vmnet0).]
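The tunneling step in the figure, a captured Ethernet frame carried over a TCP connection, can be sketched with simple length-prefixed framing. The 2-byte length header is an assumption for illustration only, not VNET's actual wire format.

```python
import struct

def frame_encode(eth_frame: bytes) -> bytes:
    """Prefix a raw Ethernet frame with its 2-byte length so the
    receiver can recover frame boundaries from the TCP byte stream."""
    assert len(eth_frame) <= 0xFFFF
    return struct.pack("!H", len(eth_frame)) + eth_frame

def frame_decode(stream: bytes) -> list:
    """Split a received byte stream back into individual frames."""
    frames, off = [], 0
    while off + 2 <= len(stream):
        (n,) = struct.unpack_from("!H", stream, off)
        frames.append(stream[off + 2 : off + 2 + n])
        off += 2 + n
    return frames

# Two toy "frames" survive a round trip through one byte stream.
wire = frame_encode(b"\xff\xff\xff\xff\xff\xff" + b"hello") + frame_encode(b"world")
print(frame_decode(wire)[1])  # b'world'
```

Framing is needed because TCP is a byte stream: without a length prefix (or similar delimiter), the receiving VNET server could not tell where one injected Ethernet packet ends and the next begins.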
Performance Evaluation
Main goal: convey the network management problem induced by VMs to the home network of the user. However, VNET's performance should be:
• In line with the physical network
• Comparable to other options
• Sufficient for the scenarios
Metrics:
• Latency. Why? Interactivity, low-throughput traffic. How? ping on small transfers, over hour-long intervals.
• Bandwidth. Why? Large transfers. How? ttcp with 1 GB of data and tuned socket buffers.
VNET test configuration
[Figure, wide-area configuration: the Client at Carnegie Mellon University, PA, behind 100 mbit switches and Firewall 1, reaches the Proxy, Host, and VM at Northwestern University, IL, through routers and the IP network (14 hops via Abilene).]
[Figure, local-area configuration: the Client, behind 100 mbit switches and Firewall 1, reaches the Proxy, Host, and VM through a local router, a 100 mbit switch, and Firewall 2.]
Average latency over WAN
[Chart: average latency (milliseconds, 0-40) for Client<->VM over the physical network, VNET, and VNET+SSL, broken down into Client-Proxy, Proxy-Host, and Host-VM segments. The Client is at Carnegie Mellon University, PA; the Proxy, Host, and VM are at Northwestern University, IL.]
Standard deviation of latency over WAN
What: VNET increases variability in latency.
Why: the TCP connection between VNET servers trades packet loss for increased delay.
[Chart: standard deviation of latency (milliseconds, 0-80) for Client<->VM over the physical network, VNET, and VNET+SSL.]
Bandwidth over WAN
Expectation: VNET to achieve throughput comparable to the physical network.
What do we see: VNET achieves lower than expected throughput.
Why: VNET's TCP connection is tricking TTCP's TCP connection.
[Chart: bandwidth (MB/s, 0-2) for Host<->Client over the physical network, and Client<->VM over VNET and VNET+SSL.]
VNET Overlay
[Figure: the Proxy+VNET on the user's friendly LAN connects across the IP network to Host 1 through Host 4, each running VNET on one of the foreign, hostile LANs 1 through 4, and each hosting one of VM 1 through VM 4.]
Bootstrapping the Virtual Network
[Figure: VMs attach to Host+VNETd servers, which connect to the Proxy+VNETd.]
• Star topology always possible
• Topology may change
• Links can be added or removed on demand
• Virtual machines can migrate
• Forwarding rules can change
• Forwarding rules can be added or removed on demand
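The on-demand link and forwarding-rule changes listed above can be sketched with a toy overlay model: links as an adjacency set and forwarding rules as a (vnetd, destination MAC) to next-hop table. The data layout and names are assumptions for illustration, not VNET's implementation.

```python
class Overlay:
    """Toy model of a VNET-like overlay: links between vnetds can be
    added or removed on demand, and forwarding rules map a destination
    MAC at a vnetd to the next-hop vnetd."""
    def __init__(self):
        self.links = set()   # frozensets of vnetd endpoint pairs
        self.rules = {}      # (vnetd, dst_mac) -> next-hop vnetd

    def add_link(self, a, b):
        self.links.add(frozenset((a, b)))

    def remove_link(self, a, b):
        self.links.discard(frozenset((a, b)))
        # Drop any forwarding rule that used the vanished link.
        self.rules = {k: v for k, v in self.rules.items()
                      if frozenset((k[0], v)) in self.links}

    def set_rule(self, vnetd, dst_mac, next_hop):
        assert frozenset((vnetd, next_hop)) in self.links
        self.rules[(vnetd, dst_mac)] = next_hop

# A star topology through the proxy is always possible ...
ov = Overlay()
ov.add_link("host1", "proxy")
ov.add_link("host2", "proxy")
ov.set_rule("host1", "mac-vm2", "proxy")
# ... but a direct link can be added on demand and the rule rerouted.
ov.add_link("host1", "host2")
ov.set_rule("host1", "mac-vm2", "host2")
print(ov.rules[("host1", "mac-vm2")])  # host2
```

The star through the proxy is the always-available fallback; direct host-to-host links are an optimization layered on top, which is why both links and rules must be removable as well as addable.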
[Figure: three layers of information and adaptation.]
• VM layer: application communication topology and traffic load; application processor load.
• VNETd layer: can collect all this information as a side effect of packet transfers and invisibly act: VM migration, topology changes, routing changes, reservations.
• Physical layer: network bandwidth and latency; sometimes topology.
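The "side effect of packet transfers" idea at the VNETd layer can be sketched as a passive traffic matrix: as frames are forwarded, count bytes per (source, destination) pair, then let the adapter query the heaviest flows. The class and method names here are illustrative, not part of VNET.

```python
from collections import defaultdict

class TrafficMatrix:
    """Accumulate bytes per (src, dst) pair as a side effect of
    forwarding, then report the heaviest flows to drive adaptation."""
    def __init__(self):
        self.byte_count = defaultdict(int)

    def observe(self, src, dst, frame_len):
        # Called once per forwarded frame; no extra probing traffic.
        self.byte_count[(src, dst)] += frame_len

    def heaviest(self, k=1):
        return sorted(self.byte_count, key=self.byte_count.get,
                      reverse=True)[:k]

tm = TrafficMatrix()
for _ in range(100):
    tm.observe("vm1", "vm2", 1500)   # a busy application flow
tm.observe("vm1", "vm3", 60)         # background chatter
print(tm.heaviest())  # [('vm1', 'vm2')]
```

Because the measurement rides on traffic the overlay must carry anyway, the application sees no probes and needs no modification, which is what makes the adaptation invisible.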
Related Work
• Collective / Capsule Computing (Stanford)
– VMM, migration/caching, hierarchical image files
• Denali (U. Washington)
– Highly scalable VMMs (1000s of VMMs per node)
• SODA and VIOLIN (Purdue)
– Virtual Server, fast deployment of services
• VPN
• Virtual LANs, IEEE
• Overlay Networks: RON, Spawning networks, Overcast
• Ensim
• Virtuozzo (SWSoft)
– Ensim competitor
• Available VMMs: IBM's VM, VMware, Virtual PC/Server, Plex/86, SIMICS, Hypervisor, VM/386
Conclusions
• There exists a strong case for grid computing using virtual machines
• VMs in the grid environment induce a challenging network management problem
• Described and evaluated a tool, VNET, that solves this problem
• Discussed the opportunities that the combination of VNET and VMs presents for exploiting an adaptive overlay network
Current Status
• Application traffic load measurement and topology inference [Ashish Gupta]
• Support for arbitrary topologies and forwarding rules
• Dynamic adaptation to improve performance
Current Status Snapshots
[Screenshots, including the pseudo proxy.]
For More Information
• Prescience Lab (Northwestern University): http://plab.cs.northwestern.edu
• Virtuoso: Resource Management and Prediction for Distributed Computing using Virtual Machines: http://virtuoso.cs.northwestern.edu
• VNET is publicly available from http://virtuoso.cs.northwestern.edu
Isn't It Going to Be Too Slow?
Small relative virtualization overhead for compute-intensive applications; relative overheads < 5%.

Application                       Resource             ExecTime (10^3 s)  Overhead
SpecHPC Seismic (serial, medium)  Physical             16.4               N/A
                                  VM, local            16.6               1.2%
                                  VM, Grid virtual FS  16.8               2.0%
SpecHPC Climate (serial, medium)  Physical             9.31               N/A
                                  VM, local            9.68               4.0%
                                  VM, Grid virtual FS  9.70               4.2%

Experimental setup. Physical: dual Pentium III 933MHz, 512MB memory, RedHat 7.1, 30GB disk. Virtual: VMware Workstation 3.0a, 128MB memory, 2GB virtual disk, RedHat 2.0. NFS-based grid virtual file system between UFL (client) and NWU (server).
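The overhead column in the table follows directly from the execution times; a quick check of the two "VM, local" rows:

```python
def overhead_pct(vm_time, physical_time):
    """Relative slowdown of a VM run versus the physical machine."""
    return 100.0 * (vm_time - physical_time) / physical_time

# SpecHPC Seismic: 16.6 vs 16.4 (x10^3 s)
print(round(overhead_pct(16.6, 16.4), 1))  # 1.2
# SpecHPC Climate: 9.68 vs 9.31 (x10^3 s)
print(round(overhead_pct(9.68, 9.31), 1))  # 4.0
```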
Isn't It Going To Be Too Slow?
[Chart: execution time for tasks on a physical machine versus a virtual machine under no load, light load, and heavy load (y-axis 0-3).]
Synthetic benchmark: exponentially distributed arrivals of compute-bound tasks; background load provided by playback of traces from PSC. Relative overheads < 10%.
Isn’t It Going To Be Too Slow?
• Virtualized NICs have very similar
bandwidth, slightly higher latencies
– J. Sugerman, G. Venkitachalam, B-H Lim, “Virtualizing I/O Devices
on VMware Workstation’s Hosted Virtual Machine Monitor”,
USENIX 2001
• Disk-intensive workloads (kernel build,
web service): 30% slowdown
– S. King, G. Dunlap, P. Chen, “OS support for Virtual Machines”,
USENIX 2003
However: May not scale with faster NIC or disk
Average latency over WAN
[Chart: per-segment latencies (Client-Proxy (C-P), Proxy-Host (P-H), Host-VM (H-VM)) in milliseconds (0-40) for the physical network, VMware options, and VNET options; the Proxy-Host segment dominates in every case.]
Inline with physical?
Physical = C-P + P-H + H-VM = 0.34 + 36.993 + 0.189 = 37.522 ms
VNET = 37.535 ms; 35.525 ms (with SSL)
Comparison with options:
VMware = 35.625 ms (NAT); 37.435 ms (bridged)
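The "inline with physical" check on this slide is just the sum of the per-hop latencies, compared against the measured end-to-end VNET figure:

```python
def end_to_end(c_p, p_h, h_vm):
    """Client->VM latency as the sum of Client-Proxy, Proxy-Host,
    and Host-VM segment latencies (all in milliseconds)."""
    return c_p + p_h + h_vm

physical = end_to_end(0.34, 36.993, 0.189)
print(round(physical, 3))         # 37.522
# VNET at 37.535 ms adds well under a millisecond over the physical path.
print(round(37.535 - physical, 3))  # 0.013
```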
Standard deviation of latency over WAN
What: VNET increases variability in latency.
Why: the TCP connection between VNET servers trades packet loss for increased delay.
[Chart: per-segment standard deviation of latency (milliseconds, 0-80) for the physical network, VMware, and VNET.]
Inline with physical?
Physical = C-P + P-H + H-VM = 1.11 + 18.702 + 0.095 = 19.907 ms
VNET = 77.287 ms; 40.763 ms (with SSL)
Bandwidth over WAN
Expect: VNET to achieve throughput comparable to the physical network.
What: VNET achieves lower than expected throughput.
Why: VNET's TCP connection is tricking TTCP's TCP connection.
[Chart: bandwidth (MB/s, 0-2) for local, physical, VMware bridged, VMware NAT, VNET, VNET+SSL, and VNET over SSH configurations; per-segment local values of 11.2, 27.9, and 207.6 MB/s are also shown.]
Inline with physical?
Physical = 1.93 MB/s
VNET = 1.22 MB/s; 0.94 MB/s (with SSL)