Bandwidth-on-demand to reach the optimal throughput of media

Brecht Vermeulen
Stijn Eeckhaut, Stijn De Smet, Bruno Volckaert, Joachim Vermeir,
Filip De Turck, Piet Demeester (Ghent University – IBBT)
Ibrahim Habib, Zhaoming Li (City University of New York)
With acknowledgment to the IBBT FIPA & GEISHA projects, VRT, IBM & University of Antwerp
TNC 2007
Broadcaster workflow

Raw video material → ingest → media production → ingest → archiving → playout
p. 2
Brecht Vermeulen
TNC 2007
Tape-based workflow

Digital tapes → linear editing (rough cut) → local conversion to file → non-linear editing (final editing, voice-over) → playout
File-based workflow

Digital files, from camera tape or memory device, are ingested into central storage, which feeds the archive, the NLE clients (Windows or Apple) and playout.
Files: so?

Format   Codec       Mb/s    1 h video (GB)   Transfer of 1 h video over Gb/s (s)
SD       DV25        28.95   13.0             111
HD       DNxHD 2.2   220     99.0             843
HD       HDCAM       144     64.8             551
HD       HDCAM-SR    440     198.0            1685
Proxy    -           1.5     0.7              6
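The transfer-time column follows directly from the bitrates: one hour of video at a given Mb/s, moved over a single Gb/s link at roughly the 94% usable TCP payload rate discussed later in the talk. A quick sanity check of the table (the 940 Mb/s goodput is an assumption that matches the table's rounding):

```python
# Sanity-check the slide's table: size of one hour of video at a given
# bitrate, and the time to move it over one Gb/s link at ~94% goodput.
GOODPUT_MBPS = 940  # assumed usable TCP payload rate on a 1 Gb/s link

def one_hour_gb(bitrate_mbps):
    """Size in GB (10^9 bytes) of one hour of video at bitrate_mbps."""
    return bitrate_mbps * 3600 / 8 / 1000

def transfer_s(size_gb):
    """Seconds to move size_gb over one link at GOODPUT_MBPS."""
    return size_gb * 8 * 1000 / GOODPUT_MBPS

for codec, mbps in [("DV25", 28.95), ("DNxHD 2.2", 220),
                    ("HDCAM", 144), ("HDCAM-SR", 440), ("Proxy", 1.5)]:
    gb = one_hour_gb(mbps)
    print(f"{codec:10s} {gb:6.1f} GB  {transfer_s(gb):5.0f} s")
```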
Research issues:

- Optimal large-file transfer: network & server performance
- Offsite transcoding/rendering farms (& editing, voice-over, subtitles, ...)
- File-based archiving
- Disaster recovery
Contents

- Introduction
- Optimising server networking
  - TCP/IP offloading vs. CPU-based
  - FTP vs. NFS vs. CIFS
- Network-based vision
- Ongoing research
- Conclusion
TCP tuning options

- Adapt kernel TCP parameters (free)
  - Bigger receive window: more data in transit
  - Important if bandwidth × delay is high
  - Linux: rmem, wmem, tcp_rmem, tcp_wmem, mem, netdev_max_backlog
  - Windows registry: Tcp1323Opts=3, GlobalMaxTcpWindowSize, TcpWindowSize, AFD DefaultReceive(Send)Window
  - e.g. buffers and window at 4 MB
- Jumbo frames ($)
  - MTU 9000 bytes, ..., 16000 bytes
  - Not really a standard -> NICs and switches have to be tested
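The "bigger receive window" advice is the bandwidth-delay product: a sender can only keep bandwidth × RTT bytes in flight, so the window must be at least that large. A short sketch; the 1 Gb/s / 90 ms path is an illustrative assumption, while the 4 MB per-socket buffer mirrors the slide's example:

```python
import socket

# Bandwidth-delay product: bytes in flight on a path. The TCP receive
# window must be at least this large to keep the link full.
def bdp_bytes(bandwidth_bps, rtt_s):
    return bandwidth_bps * rtt_s / 8

# Illustrative long-distance path: 1 Gb/s at 90 ms RTT
print(f"needed window: {bdp_bytes(1e9, 0.090) / 1e6:.2f} MB")  # 11.25 MB

# Per-socket override, e.g. 4 MB as on the slide (the kernel-wide
# tcp_rmem/tcp_wmem limits must allow values this large):
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)
s.close()
```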
TCP offloading

- TCP checksum & segmentation offload ($)
  - On most modern, good NICs
  - Works with the standard kernel
  - Warning: some cards claim to do offloading, but it is done in the driver software
- Full TCP offload ($$)
  - Complete TCP/IP stack on the NIC (incl. retransmits, slow start, ...)
  - TCP setup/teardown still done by the host
    - Web-server short connections vs. long transfers
  - Problems with e.g. bonding
  - Kernel patch needed (Linux)
TCP offloading

[Diagram: protocol stack on a normal NIC vs. a full-offload NIC]
TCP offloading tests

- Back-to-back tests between AMD dual-Opteron systems (Opteron 246 @ 2 GHz)
  - Intel PRO/1000 NIC (4 x 1 Gb/s): TCP checksum & segmentation offload
  - Chelsio T204 TOE (4 x 1 Gb/s): full TCP offload (= TCP Offload Engine)
- TCP throughput measured with Iperf
  - Generates TCP streams on different interfaces
  - Transfers are memory-to-memory
- Limitations
  - PCI-X bus: 64 bit @ 133 MHz, ~1 GB/s
  - PCI-X is a half-duplex bus; PCI Express is a full-duplex point-to-point connection
  - Maximal (unidirectional) TCP efficiency: 94.1%, i.e. 941 Mb/s per link (99% for a 9000-byte MTU)
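The 94.1% figure is plain frame-overhead arithmetic: Ethernet adds 38 bytes per frame on the wire (preamble, start-of-frame delimiter, header, FCS, inter-frame gap), and the IPv4/TCP headers plus the timestamp option consume 52 bytes of the MTU:

```python
# Maximal TCP payload efficiency on Ethernet as a function of MTU.
def tcp_efficiency(mtu):
    wire_overhead = 7 + 1 + 14 + 4 + 12   # preamble, SFD, Eth header, FCS, IFG
    headers = 20 + 20 + 12                # IPv4 + TCP + timestamp option
    return (mtu - headers) / (mtu + wire_overhead)

print(f"MTU 1500: {tcp_efficiency(1500):.1%}")  # 94.1%
print(f"MTU 9000: {tcp_efficiency(9000):.1%}")  # 99.0%
```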
TCP offloading results

- Chelsio TOE vs. Intel PRO/1000 (MTU 1500)
  - 4 links unidirectional: 3.7 Gb/s vs. 2.7 Gb/s on the Intel NIC
  - 4 links bidirectional: 7 Gb/s vs. 3.2 Gb/s on the Intel NIC
  - Jumbo frames on the Intel: throughput up, CPU load down

[Charts: throughput (up to 8 Gb/s) and CPU load (up to 100%) for Intel vs. Chelsio at MTU 1500 and MTU 9000]
Protocol comparison setup

- Transfers between storage and memory
- GPFS fibre channel storage used
  - 360 MB/s write, 690 MB/s read from one server
  - i.e. 2.88 Gb/s write, 5.52 Gb/s read
Protocol comparison

- FTP > NFS > CIFS for reads
- FTP > CIFS > NFS for writes
- FTP with Chelsio close to GPFS performance
CIFS (synchronous) vs. latency: model

[Figure: modelled CIFS throughput as a function of network latency]
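The slide shows the model only as a figure; the underlying idea for a synchronous protocol like classic CIFS is that each block must be acknowledged before the next is sent, so every round trip caps throughput. A hedged sketch of such a model (the block size and link rate are illustrative assumptions, not the talk's exact parameters):

```python
# Simple model of a synchronous (stop-and-wait) file protocol such as
# classic CIFS: one block per request, and each request costs a full
# RTT plus the block's serialisation time on the link.
def sync_throughput_mbps(block_kb, rtt_ms, link_mbps):
    block_bits = block_kb * 1024 * 8
    serialise_s = block_bits / (link_mbps * 1e6)
    per_block_s = rtt_ms / 1000 + serialise_s
    return block_bits / per_block_s / 1e6

# 60 KB blocks on a 1 Gb/s link: throughput collapses as RTT grows
for rtt in [0.1, 1, 10, 50]:
    print(f"RTT {rtt:5.1f} ms -> {sync_throughput_mbps(60, rtt, 1000):7.1f} Mb/s")
```

This is why the latency issue matters so much for remote CIFS clients even when raw bandwidth is plentiful.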
Contents

- Introduction
- Optimising server networking
- Network-based vision
  - Broadcasters’ problems
  - Media grid farms
  - Archiving
  - Disaster recovery
- Ongoing research
- Conclusion
Broadcasters’ problems

- Broadcasters typically work together with production houses, remote studios, ...
- Storage and computing are not the core business of broadcasters -> outsource to datacenters?
- Networking seems THE solution, BUT...
  - FTP > NFS > CIFS plus the latency issue: but remember that the Windows editing clients use CIFS
  - HDCAM-SR: 440 Mb/s video codec
  - Storage bandwidth: both for archiving (and retrieving something from the archive) and for disaster recovery
  - Time-critical (news journals)
Media grid farms

- Editing in standard definition; rendering of HD (editing effects, cuts, ...) on rendering farms
- Problems:
  - Standard grid infrastructure is geared towards compute-intensive rather than storage/dataset-intensive tasks
  - For broadcasters, guarantees are needed on bandwidth and computing availability
  - Bandwidth to the rendering farms should be high, but can be by reservation (e.g. for non-live productions)
Archive: (S)ATA disk price evolution

[Chart: price per GB, 1999-2007; y-axis 0.0-10.0 €/GB]

            Max capacity   €/GB
SATA 2000   75 GB          7.4
SCSI 2000   73 GB          17.4
SATA 2007   750 GB         0.3
SCSI 2007   300 GB         2.6

Source: own purchase prices 1999-2007
Archive

- Cheaper disks and tape library systems enable:
  - Online/nearline file-based archiving
  - Storage management
Archive providers
Archive needs

- High bandwidth needed only when retrieving content
- Uploading of content may be slower
- Some content may be duplicated to two sites, other content to only one site
- Reservations for guaranteed bandwidth?
Disaster recovery

- Central storage is large, and production is done directly on it
- A total restore takes > 24 hours
- Solution: work on a remote copy?
  - Networking/server performance?
  - Client CIFS?
  - On-demand bandwidth guarantees for this?

[Diagram: archive and central storage feeding NLE clients and playout]
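The "> 24 hours" restore claim is easy to reproduce: divide the store size by the sustainable restore bandwidth. The 100 TB store size below is an illustrative assumption; the 360 MB/s write rate is the GPFS figure measured earlier in the talk:

```python
# Back-of-envelope restore time: store size / sustained write rate.
# 100 TB is an assumed central-store size; 360 MB/s is the GPFS
# single-server write rate from the protocol-comparison setup.
def restore_hours(size_tb, write_mb_s):
    return size_tb * 1e6 / write_mb_s / 3600

print(f"{restore_hours(100, 360):.1f} h")  # well over 24 hours
```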
Contents

- Introduction
- Optimising server networking
- Network-based vision
- Ongoing research
  - VPN between Ghent and New York
- Conclusion
VPN: Ghent – New York

- For now: only 100 Mb/s

[Diagram, figure provided by Dante: lightpath from IBBT Ghent over BELNET and GEANT2 via Amsterdam to HOPI and CUNY in New York. An LDP session exchanges VLAN 4003; ingress, transit and egress LSPs are configured per segment: HOPI-Ghent_Ghent_Ams, HOPI-Ghent_Ams_Nyc, HOPI-Ghent_Nyc_Ams, HOPI-Ghent_Ams_Ghent]
CVLSR

- CHEETAH Virtual Label Switching Router
- Linux control PC with a GMPLS engine
- Ethernet switch with bandwidth reservations
- Due to delays in setup and performance issues, research is still ongoing
- One possible way
Conclusions

- Demand from broadcasters:
  - Bandwidth and remote storage/computing
  - Large files
- Research:
  - Optimal configuration and tuning of protocol parameters and servers to actually use the bandwidth
  - Is bandwidth reservation a solution for distributing this functionality over the network?