Slide 1 - University of New South Wales
Multi-granular, multi-purpose
and multi-Gb/s monitoring on
off-the-shelf systems
TELE9752 Group 3
Agenda
• Introduction
• System Overview
• Performance Evaluation Results
• An Application Sample
• Related Work
• Conclusion
I. Introduction
What is it?
• Multi-granular
Identify an event from flow records, from packet data, or from both angles
• Multi-purpose
Run tasks in parallel, each with a different traffic-related purpose, sharing data at different granularities between applications
Why use it?
• Low-cost
• High performance on off-the-shelf systems
• Provides flexibility in the interaction between traffic capture and processing
Features
Network troubleshooting
• Traffic flows to monitor sudden changes (e.g. peaks)
• Flow traces (e.g. to identify troublesome agents)
• Traffic inspection for further problem diagnosis (e.g. lost packets)
Data Pre-processing
• e.g. provides flow records to all apps (skipping the flow-record creation step)
Performance
Conventional approach vs M3Omon
Optimization techniques
• Low-level hardware affinities (see the sketch after this list)
• NIC driver and default stack optimizations
• Software optimizations
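The low-level affinity idea can be pictured with a small user-space sketch (not M3Omon's code): pinning a capture thread to a fixed core with the standard Linux pthread_setaffinity_np call. The core number and the empty thread body are illustrative assumptions.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Placeholder for the thread that would service a NIC receive queue. */
static void *capture_loop(void *arg)
{
    (void)arg;
    return NULL;
}

int main(void)
{
    pthread_t tid;
    cpu_set_t set;
    int err;

    pthread_create(&tid, NULL, capture_loop, NULL);

    /* Pin the capture thread to core 2 (illustrative choice; in practice
     * the core would be picked on the NUMA node closest to the NIC). */
    CPU_ZERO(&set);
    CPU_SET(2, &set);
    err = pthread_setaffinity_np(tid, sizeof(set), &set);
    if (err != 0)
        fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(err));

    pthread_join(tid, NULL);
    return 0;
}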
Contributions
• API development for multi-granular applications
• Constructs data at different granularities, saving duplicated effort
• Works at multi-Gb/s rates after all optimizations
• Scalable, and available under an open-source license
II. System Overview
HPCAP
• Kernel-level module implementing a network traffic sniffer in real time.
• For each NIC to be monitored, a kernel-level thread is instantiated and assigned to its receive queue.
• For each new packet, the thread makes a copy into a kernel-level packet buffer.
• Packet data are accessed on a single-producer/multiple-consumer basis.
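The single-producer/multiple-consumer discipline can be sketched as follows. This is a simplified user-space illustration, not the HPCAP implementation; the slot count and slot size are assumptions. One producer copies each packet into a shared buffer, every listener keeps its own read counter, and the producer may only overwrite slots already consumed by the slowest listener.

#include <stdio.h>
#include <string.h>

#define BUF_SLOTS  16                     /* toy ring of 16 packet slots */
#define MAX_LISTEN 4

struct spmc_buf {
    char          slots[BUF_SLOTS][64];   /* copied packet data          */
    unsigned long wr;                     /* total packets written       */
    unsigned long rd[MAX_LISTEN];         /* per-listener read counters  */
    int           nlisten;
};

static unsigned long slowest(const struct spmc_buf *b)
{
    unsigned long min = b->wr;
    for (int i = 0; i < b->nlisten; i++)
        if (b->rd[i] < min)
            min = b->rd[i];
    return min;
}

/* Producer side: copy a packet in, unless that would overrun a listener. */
static int buf_write(struct spmc_buf *b, const char *pkt)
{
    if (b->wr - slowest(b) == BUF_SLOTS)
        return -1;                        /* full: drop or wait          */
    strncpy(b->slots[b->wr % BUF_SLOTS], pkt, sizeof(b->slots[0]) - 1);
    b->slots[b->wr % BUF_SLOTS][sizeof(b->slots[0]) - 1] = '\0';
    b->wr++;
    return 0;
}

/* Consumer side: each listener advances at its own pace. A real listener
 * would copy the data out before releasing the slot. */
static const char *buf_read(struct spmc_buf *b, int id)
{
    if (b->rd[id] == b->wr)
        return NULL;                      /* nothing new for this one    */
    return b->slots[b->rd[id]++ % BUF_SLOTS];
}

int main(void)
{
    struct spmc_buf b = { .nlisten = 2 };

    buf_write(&b, "pkt-0");
    buf_write(&b, "pkt-1");
    printf("listener 0 got %s\n", buf_read(&b, 0));
    printf("listener 1 got %s\n", buf_read(&b, 1));
    printf("listener 0 got %s\n", buf_read(&b, 0));
    return 0;
}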
M3Omon
• Consists of three sub-modules:
• Packet Dumper - reads fixed-size blocks of bytes (e.g. 1 MB) from the buffer and writes them to disk (see the dumper sketch after this slide).
- An independent periodic process (e.g. cron) is in charge of deleting old capture files when the volume is nearly full.
• Flow manager - flow reconstruction and statistics collection.
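A rough sketch of the dumper step referenced above: pull fixed-size 1 MB blocks out of the capture buffer and append them to capture files, opening a new file once a size limit is reached. The get_block() stub, the file naming and the toy 4 MB rotation limit are assumptions for illustration; deleting old files is left to the external cron job, as on the slide.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE (1024 * 1024)        /* 1 MB blocks, as on the slide  */
#define FILE_LIMIT (4 * BLOCK_SIZE)     /* toy rotation limit            */

/* Stub standing in for "read the next block from the kernel buffer". */
static size_t get_block(unsigned char *dst)
{
    memset(dst, 0, BLOCK_SIZE);         /* real code would copy packets  */
    return BLOCK_SIZE;
}

int main(void)
{
    unsigned char *block = malloc(BLOCK_SIZE);
    unsigned long  written = 0, file_no = 0;
    char           name[64];
    FILE          *out = NULL;

    if (block == NULL)
        return 1;

    for (int i = 0; i < 8; i++) {       /* bounded loop for the sketch   */
        if (out == NULL) {              /* open the next capture file    */
            snprintf(name, sizeof(name), "capture_%05lu.raw", file_no++);
            out = fopen(name, "wb");
            if (out == NULL) { perror("fopen"); return 1; }
            written = 0;
        }
        size_t n = get_block(block);
        fwrite(block, 1, n, out);       /* sequential, block-sized write */
        written += n;
        if (written >= FILE_LIMIT) {    /* rotate to a new file          */
            fclose(out);
            out = NULL;
        }
    }
    if (out != NULL)
        fclose(out);
    free(block);
    return 0;
}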
M3Omon
• Flow store - a table indexed with a hash over the 5-tuple, handling collisions with linked lists (sketched after this slide).
• Maintains a list of active flows, with each node containing a pointer to the flow record in the hash table.
• Periodically (e.g. every second) generates the MRTG statistics, both writing them to a file and sending them through a multicast socket.
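The flow store layout above can be pictured with this minimal sketch: a table indexed by a hash of the 5-tuple, collisions chained in linked lists, and a per-packet lookup-or-insert. The hash function, table size and trimmed record fields are assumptions; the real store additionally links records into the active-flow list used for export.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TABLE_SIZE 65536

struct five_tuple {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

struct flow_record {
    struct five_tuple   key;
    uint64_t            packets, bytes;
    struct flow_record *next;           /* collision chain               */
};

static struct flow_record *table[TABLE_SIZE];

static uint32_t hash_tuple(const struct five_tuple *t)
{
    /* Simple mixing hash over the 5-tuple (illustrative only). */
    uint32_t h = t->src_ip ^ (t->dst_ip * 2654435761u);
    h ^= ((uint32_t)t->src_port << 16) | t->dst_port;
    h ^= t->proto;
    return h % TABLE_SIZE;
}

static int same_tuple(const struct five_tuple *a, const struct five_tuple *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
           a->src_port == b->src_port && a->dst_port == b->dst_port &&
           a->proto == b->proto;
}

/* Find the record for a packet's 5-tuple, creating it on first sight. */
static struct flow_record *lookup_or_insert(const struct five_tuple *t)
{
    uint32_t idx = hash_tuple(t);
    for (struct flow_record *r = table[idx]; r != NULL; r = r->next)
        if (same_tuple(&r->key, t))
            return r;
    struct flow_record *r = calloc(1, sizeof(*r));
    r->key  = *t;
    r->next = table[idx];               /* push onto the chain           */
    table[idx] = r;
    return r;
}

int main(void)
{
    struct five_tuple t = { 0x0a000001, 0x0a000002, 1234, 80, 6 };
    struct flow_record *r = lookup_or_insert(&t);
    r->packets += 1;
    r->bytes   += 60;                   /* per-packet flow update        */
    printf("flow now has %llu packet(s)\n", (unsigned long long)r->packets);
    return 0;
}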
M3Omon
• Flow exporter - a separate thread exports flow records, writing them to disk and sending them through a multicast socket.
- Flows may be exported in either an extended NetFlow format or the standard IPFIX format.
- Each flow record contains: the 5-tuple, MAC addresses, first/last packet timestamps, counters of bytes and packets, average/standard deviation/minimum/maximum for both packet length and inter-arrival time, TCP statistics (e.g. counters of flags or the number of packets with TCP zero-window advertisements), the first 10 packet lengths and inter-arrival times and, if required, the first N bytes of payload, where N is configurable.
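As a rough illustration of the record listed above, the fields could be laid out as in the following struct. The field names, widths and the 64-byte payload cap are assumptions for the sketch, not M3Omon's actual in-memory or NetFlow/IPFIX on-wire layout.

#include <stdint.h>
#include <stdio.h>

#define FIRST_PKTS    10      /* first 10 packet lengths / inter-arrivals   */
#define PAYLOAD_BYTES 64      /* "first N bytes of payload", N configurable */

/* avg / standard deviation / minimum / maximum for a per-packet metric */
struct pkt_metric_stats {
    double   avg, stddev;
    uint64_t min, max;
};

struct m3_flow_record {
    /* identification */
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
    uint8_t  src_mac[6], dst_mac[6];

    /* timing and volume */
    uint64_t first_ts_us, last_ts_us;   /* first/last packet timestamps  */
    uint64_t packets, bytes;

    /* packet length and inter-arrival time distributions */
    struct pkt_metric_stats length, interarrival;

    /* TCP statistics */
    uint32_t tcp_flag_count[8];         /* per-flag packet counters      */
    uint32_t tcp_zero_window;           /* zero-window advertisements    */

    /* first packets of the flow */
    uint16_t first_lengths[FIRST_PKTS];
    uint32_t first_interarrivals_us[FIRST_PKTS];

    /* optional payload snippet */
    uint8_t  payload[PAYLOAD_BYTES];
    uint16_t payload_len;
};

int main(void)
{
    printf("record size: %zu bytes\n", sizeof(struct m3_flow_record));
    return 0;
}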
M3Omon's API
• Provides real-time and offline access to the data gathered by the system, namely: raw packets (PCAP format), MRTG statistics and flow records. It has been designed taking the de facto standard PCAP library as a reference.
• Real-time packet data - applications hook as HPCAP listeners and read packets using a packet-loop function similar to pcap_loop in the PCAP library.
• Exported flow records and MRTG data - applications loop over the records by subscribing to the corresponding multicast group.
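The last access mode can be illustrated with standard POSIX sockets: join the exporter's multicast group and loop over incoming records. The group address 239.0.0.1, the port 9752 and the 2 KB record buffer are placeholders, not the system's actual configuration.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    /* Bind to the port the exporter sends to. */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(9752);
    if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* Join the multicast group used by the flow exporter. */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.0.0.1");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                   &mreq, sizeof(mreq)) < 0) {
        perror("IP_ADD_MEMBERSHIP");
        return 1;
    }

    /* Loop over exported records, pcap_loop-style. */
    unsigned char record[2048];
    for (;;) {
        ssize_t n = recvfrom(sock, record, sizeof(record), 0, NULL, NULL);
        if (n <= 0)
            break;
        printf("received %zd-byte record\n", n);  /* hand off to the app */
    }
    close(sock);
    return 0;
}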
III. PERFORMANCE EVALUATION RESULTS
This table shows the mean throughput and the standard error of the mean when repeating the 10-minute experiments 50 times, for both applications and for fixed-size line-rate synthetic traffic. It also shows that both applications only lose packets in the worst-case scenario.
This table shows the mean and the standard error of the mean for both system throughput and packet loss when receiving the CAIDA trace at link speed, together with the performance obtained by the complete M3Omon system.
It also shows the overall performance when instantiating two forensic
(offline) applications—one for packets and one for flows—and using all of
the available cores for real-time flow record processing.
IV. An application sample: DETECTPRO
• It leverages M3Omon to monitor network traffic without being concerned about lower-level tasks.
• DetectPro reads aggregate statistics to diagnose both short-term and long-term changes and reports the corresponding alarms (see the sketch after this list).
• It selects and inspects packet traces
corresponding to the alarm period.
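A toy illustration of the aggregate-level check referenced above: track an exponentially weighted moving average of one MRTG counter (connections per second here) and raise an alarm when a sample deviates strongly from it. The smoothing factor, threshold and sample values are invented for the sketch and are not DetectPro's actual parameters.

#include <stdio.h>

#define ALPHA     0.1   /* EWMA smoothing factor                           */
#define THRESHOLD 3.0   /* alarm when a sample exceeds 3x the running mean */

int main(void)
{
    /* Pretend these arrive once per second from the MRTG multicast feed. */
    double conn_per_sec[] = { 100, 105, 98, 110, 102, 900, 104, 99 };
    int    n = (int)(sizeof(conn_per_sec) / sizeof(conn_per_sec[0]));
    double ewma = conn_per_sec[0];

    for (int t = 1; t < n; t++) {
        if (conn_per_sec[t] > THRESHOLD * ewma)
            printf("t=%d: alarm, %.0f connections/s vs running mean %.1f\n",
                   t, conn_per_sec[t], ewma);
        /* Update the running mean after the check. */
        ewma = ALPHA * conn_per_sec[t] + (1.0 - ALPHA) * ewma;
    }
    return 0;
}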
Anomalous increase
The number of connections has increased in this time interval, but the increase in the involved bytes and packets is not significant.
Hosts in the subnets represented as 40.10.0.0/16 and 238.138.39.0/24, in directions A and B respectively, generated a huge number of packets with the SYN flag set.
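The follow-up packet-level step can be sketched like this: walk the packets of the alarm interval and count those that carry SYN but not ACK, i.e. connection attempts rather than replies. The packet struct and the hard-coded sample feed are stand-ins for the traces served through the API.

#include <stdint.h>
#include <stdio.h>

#define TH_SYN 0x02
#define TH_ACK 0x10

struct pkt {
    uint32_t src_ip;        /* host byte order, for easy printing */
    uint8_t  tcp_flags;
};

int main(void)
{
    /* Toy feed: 10.0.0.5 sends SYN-only packets, 10.0.0.9 a SYN+ACK. */
    struct pkt pkts[] = {
        { 0x0a000005, TH_SYN }, { 0x0a000005, TH_SYN },
        { 0x0a000005, TH_SYN }, { 0x0a000009, TH_SYN | TH_ACK },
    };
    int n = (int)(sizeof(pkts) / sizeof(pkts[0]));
    int syn_only = 0;

    for (int i = 0; i < n; i++) {
        /* SYN set and ACK clear: a connection attempt, not a reply. */
        if ((pkts[i].tcp_flags & TH_SYN) && !(pkts[i].tcp_flags & TH_ACK)) {
            syn_only++;
            printf("%u.%u.%u.%u sent a SYN-only packet\n",
                   (unsigned)(pkts[i].src_ip >> 24),
                   (unsigned)((pkts[i].src_ip >> 16) & 0xff),
                   (unsigned)((pkts[i].src_ip >> 8) & 0xff),
                   (unsigned)(pkts[i].src_ip & 0xff));
        }
    }
    printf("%d SYN-only packet(s) in the alarm interval\n", syn_only);
    return 0;
}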
V. Related work
• Capturing engines: PacketShader, PF_RING, netmap, PFQ and DPDK.
• Systems: Tstat, TM (Time Machine).
• Hardware-accelerated monitoring center –
HAMOC.
• Applications: Blockmon, traffic classification, NIDS.
Conclusion
• Proposes a monitoring system architecture consisting of three main blocks:
– M3Omon
– HPCAP
– An API allowing multi-granular data access