Plugging the Hypervisor Abstraction Leaks Caused by Virtual Networking
Alex Landau, David Hadas, Muli Ben-Yehuda
IBM Research – Haifa
25 May 2010
© 2010 IBM Corporation
Hypervisor leaks
 Original goal of hypervisors – complete replica of physical hardware
 Application running on host should be able to run in guest
 Host details leaked to guest
–Instruction set extensions
–Bridged networking
• Leaked IP address, subnet mask, etc.
–NAT
• Not suitable for many applications
Why are leaks bad?
 Why is that a problem?
–Checkpoint / restart
–Cloning
–Live migration
 Example:
–Guest acquires IP address from DHCP
–Guest is live-migrated to different data center
–Guest uses old IP address in new network
 Current solution:
–Defer problem to guests and network equipment
–E.g., VLANs
Packet flow today (in KVM)
[Diagram: two guests, each running a guest application on a socket interface and guest network stack in the guest kernel. One guest uses a network adapter driver backed by QEMU's emulated network adapter; the other uses a VIRTIO frontend backed by QEMU's VIRTIO backend. Each QEMU connects through a TAP virtual network interface to host network services in the host kernel (e.g., a bridge or VAN central services).]
How to avoid leaks?
 Hypervisor, not network, is responsible for avoiding leaks
 Guests should be:
–Offered an isolated virtual environment
–Independent of physical network characteristics (e.g., topology)
–Independent of physical location (e.g., IP addresses)
 Example:
–Guest should receive IP address independent of:
• Host running the guest
• Data center containing the host
• Network configuration of the host
Avoiding leaks – Encapsulation
 Guest produces Layer-2 frame
 Host encapsulates it in UDP packet
 Host finds destination host
–By peeking at destination (guest) MAC address
–And “somehow” finding destination host (sketched below)
 Host transmits UDP packet
 Receiver host receives UDP packet
 Receiver host decapsulates Layer-2 frame from UDP packet
 Receiver host passes Layer-2 frame to guest
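
A minimal C sketch of both directions of this path, assuming a pre-configured TAP device carrying the guest's frames; lookup_dest() is a hypothetical helper standing in for the “somehow” that maps a destination guest MAC to the host currently running that guest:

    /* Sketch only: tap_fd is an already-configured TAP device carrying
     * the guest's Layer-2 frames; udp_fd is a bound UDP socket.
     * lookup_dest() is hypothetical -- it stands in for the "somehow"
     * that maps a guest MAC to its current host. */
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #define FRAME_MAX 65536

    extern struct sockaddr_in lookup_dest(const uint8_t *dst_mac);

    void encapsulation_loop(int tap_fd, int udp_fd)
    {
        uint8_t frame[FRAME_MAX];
        ssize_t n;

        /* Guest produces a Layer-2 frame; it shows up on the TAP device. */
        while ((n = read(tap_fd, frame, sizeof(frame))) > 0) {
            /* Peek at the destination MAC -- the first 6 bytes of the
             * Ethernet frame -- and find the destination host. */
            struct sockaddr_in dst = lookup_dest(frame);

            /* Encapsulate: the whole frame becomes the UDP payload. */
            sendto(udp_fd, frame, (size_t)n, 0,
                   (struct sockaddr *)&dst, sizeof(dst));
        }
    }

    /* Receive side: decapsulate and hand the frame back to the guest. */
    void decapsulation_loop(int tap_fd, int udp_fd)
    {
        uint8_t frame[FRAME_MAX];
        ssize_t n;

        while ((n = recv(udp_fd, frame, sizeof(frame), 0)) > 0)
            write(tap_fd, frame, (size_t)n);   /* Layer-2 frame to guest */
    }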
Proposed packet flow – Dual Stack
[Diagram: the same two guests as before, unchanged inside. Each QEMU's emulated network adapter / VIRTIO backend now feeds a traffic-encapsulation layer, which hands packets through a glue guest stack and the socket interface to the host network stack and its network driver. An isolation boundary separates guest traffic from the host kernel.]
Performance
 Path from guest to wire is long
 Latencies are manifested in the form of:
–Packet copies
–VM exits and entries
–User/Kernel mode switches
–Host QEMU process scheduling
Large packets
 Transport and Network layers capable of up to 64KB packets
 Ethernet limit is 1500 bytes
–Ignoring jumbo frames
 But there is no Ethernet wire between guest and host!
 Set MTU to 64KB in guest (sketched below)
 64KB packets are transferred from guest to host
–Inhibit TCP/UDP checksum calculation and verification
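
A sketch of the guest-side MTU change, assuming a hypothetical interface name eth0; the shell equivalent is ip link set eth0 mtu 65535 (the exact maximum accepted depends on the driver):

    /* Sketch: raise the guest NIC's MTU toward 64KB instead of 1500.
     * "eth0" is an assumed interface name. */
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>

    int set_mtu(const char *ifname, int mtu)
    {
        struct ifreq ifr;
        int rc, fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0)
            return -1;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_mtu = mtu;                 /* e.g., 65535 instead of 1500 */
        rc = ioctl(fd, SIOCSIFMTU, &ifr);  /* classic MTU-setting ioctl */
        close(fd);
        return rc;
    }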
Large packets – Flow
 Application writes 64KB to TCP socket
 TCP, IP check MTU (=64KB) and create 1 TCP segment, 1 IP packet
 Guest virtual NIC driver copies entire 64KB frame to host
 Host writes 64KB frame into UDP socket (see sketch below)
 Host stack creates 1 64KB UDP packet
 If packet destination = VM on local host
–Transfer 64KB packet directly on the loopback interface
 If packet destination = other host
–Host NIC segments 64KB packet in hardware
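
On the host side, the whole 64KB frame goes out in a single system call; a sketch, reusing the UDP socket and destination from the encapsulation sketch:

    /* Sketch: one 64KB write -> one UDP packet, instead of the ~45
     * segments a 1460-byte MSS would require. If the destination is a
     * VM on this host, the packet travels whole over loopback;
     * otherwise the host NIC segments it in hardware on the way out. */
    #include <sys/socket.h>
    #include <netinet/in.h>

    ssize_t send_frame(int udp_fd, const void *frame, size_t len,
                       const struct sockaddr_in *dst)
    {
        return sendto(udp_fd, frame, len, 0,
                      (const struct sockaddr *)dst, sizeof(*dst));
    }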
CPU affinity and pinning
 QEMU process contains 2 threads
–CPU thread (actually, one CPU thread per guest vCPU)
–IO thread
 Linux process scheduler selects core(s) to run threads on
 The scheduler often made bad decisions
–Scheduling both threads on the same core
–Constantly rescheduling (core 0 -> 1 -> 0 -> 1 -> …)
 Solution/workaround – pin CPU thread to core 0, IO thread to core 1 (sketched below)
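
A sketch of the workaround using the GNU affinity API, with hypothetical cpu_thread and io_thread handles standing in for QEMU's threads:

    /* Sketch: pin a thread to one core so the scheduler can neither
     * co-locate the CPU and IO threads nor bounce them between cores. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    int pin_to_core(pthread_t thread, int core)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(core, &set);
        return pthread_setaffinity_np(thread, sizeof(set), &set);
    }

    /* Usage (hypothetical handles):
     *   pin_to_core(cpu_thread, 0);    -- vCPU thread on core 0
     *   pin_to_core(io_thread, 1);     -- IO thread on core 1   */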
Flow control
 Guest does not anticipate flow control at Layer-2
 Thus, host should not provide flow control
–Otherwise, adverse effects similar to TCP-in-TCP encapsulation arise
 Lacking flow control, host should have large enough socket buffers
 Example:
–Guest uses TCP
–Host buffers should be at least the guest TCP’s bandwidth × delay product (sketched below)
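
A sketch of the buffer sizing, with illustrative figures (1 Gbit/s bandwidth, 10 ms RTT) that are assumptions, not measurements from this work:

    /* Sketch: with no Layer-2 flow control, the host-side socket
     * buffers must absorb at least a bandwidth-delay product of data.
     * BDP = 125 MB/s * 0.010 s = 1.25 MB for the assumed figures. */
    #include <sys/socket.h>

    int size_socket_buffers(int udp_fd)
    {
        int bdp = (1000000000 / 8) / 100;  /* 1 Gbit/s * 10 ms, in bytes */

        /* The kernel caps these at net.core.rmem_max / wmem_max. */
        if (setsockopt(udp_fd, SOL_SOCKET, SO_RCVBUF, &bdp, sizeof(bdp)) < 0)
            return -1;
        return setsockopt(udp_fd, SOL_SOCKET, SO_SNDBUF, &bdp, sizeof(bdp));
    }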
Performance results
[Charts: throughput and receiver CPU utilization.]
Thank you!