Data Mining for Malicious Code Detection and Security Applications
Prof. Latifur Khan
Mr. Mehedy Masud
Dr. Mamoun Awad
Prof. Bhavani Thuraisingham
The University of Texas at Dallas
March 19, 2007
Lecture #18
Outline
• Overview of Data Mining
• Vision for Assured Information Sharing (our framework)
• Data Mining for Cyber Security Applications
  - Intrusion detection
  - Data mining for assembly code
  - Data mining for buffer overflow
  - Data mining for firewall policy checking
  - Data mining for email worm detection
• Privacy will be discussed in Lecture #19
Vision: Assured Information Sharing
[Figure: three components, each holding Data/Policy for Agency A, Agency B, and Agency C, publish their data/policy into a shared Data/Policy for Coalition.]
Partner types:
1. Friendly partners
2. Semi-honest partners
3. Untrustworthy partners
What is Data Mining?
Also known as: Information Harvesting, Knowledge Mining, Knowledge Discovery in Databases, Data Dredging, Data Archaeology, Data Pattern Processing, Database Mining, Knowledge Extraction, Siftware
"The process of discovering meaningful new correlations, patterns, and trends by sifting through large amounts of data, often previously unknown, using pattern recognition technologies and statistical and mathematical techniques" (Thuraisingham, Data Mining, CRC Press 1998)
What's going on in data mining?
• What are the technologies for data mining?
  - Database management, data warehousing, machine learning, statistics, pattern recognition, visualization, parallel processing
• What can data mining do for you?
  - Data mining outcomes: classification, clustering, association, anomaly detection, prediction, estimation, . . .
• How do you carry out data mining?
  - Data mining techniques: decision trees, neural networks, market-basket analysis, link analysis, genetic algorithms, . . .
• What is the current status?
  - Many commercial products mine relational databases
• What are some of the challenges?
  - Mining unstructured data, extracting useful patterns, web mining, and data mining security and privacy
Data Mining for Intrusion Detection: Problem
• An intrusion can be defined as "any set of actions that attempt to compromise the integrity, confidentiality, or availability of a resource".
• Attacks are:
  - Host-based attacks
  - Network-based attacks
• Intrusion detection systems are split into two groups:
  - Anomaly detection systems
  - Misuse detection systems
• Use audit logs
  - Capture all activities in networks and hosts
  - But the amount of data is huge!
Misuse Detection
[Figure: misuse detection example.]
Problem: Anomaly Detection
[Figure: anomaly detection example.]
Our Approach: Overview
[Figure: training data is partitioned by class, reduced by hierarchical clustering (DGSOT), and used to train the SVM; the resulting model is then applied to the testing data.]
DGSOT: Dynamically Growing Self-Organizing Tree
Our Approach: Hierarchical Clustering
[Figure: flow chart of hierarchical clustering combined with SVM training.]
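The idea behind the flow chart is to train the SVM on cluster representatives rather than on every training record, which is what cuts the training time. A minimal sketch of that idea follows; DGSOT itself is not publicly packaged, so k-means is used here purely as a hypothetical stand-in for the hierarchical clustering step, and the cluster count is illustrative.

    # Sketch only: k-means stands in for DGSOT; train the SVM on cluster
    # representatives instead of every training record.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def reduced_svm(X, y, clusters_per_class=50):
        X_red, y_red = [], []
        for label in np.unique(y):
            Xc = X[y == label]
            k = min(clusters_per_class, len(Xc))
            centers = KMeans(n_clusters=k, n_init=10).fit(Xc).cluster_centers_
            X_red.append(centers)
            y_red.append(np.full(k, label))
        model = SVC(kernel="rbf")               # RBF kernel, as in the experiments
        model.fit(np.vstack(X_red), np.concatenate(y_red))
        return model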
Results
Training Time, FP and FN Rates of Various Methods

Method                  Average Accuracy   Total Training Time   Average FP Rate (%)   Average FN Rate (%)
Random Selection        52%                0.44 hours            40                    47
Pure SVM                57.6%              17.34 hours           35.5                  42
SVM + Rocchio Bundling  51.6%              26.7 hours            44.2                  48
SVM + DGSOT             69.8%              13.18 hours           37.8                  29.8
Introduction: Detecting Malicious Executables using Data Mining
• What are malicious executables?
  - Harm computer systems
  - Virus, Exploit, Denial of Service (DoS), Flooder, Sniffer, Spoofer, Trojan, etc.
  - Exploit software vulnerabilities on a victim
  - May remotely infect other victims
  - Incur great loss. Example: the Code Red epidemic cost $2.6 billion
• Malicious code detection: traditional approach
  - Signature based
  - Requires signatures to be generated by human experts
  - So, not effective against "zero-day" attacks
State of the Art in Automated Detection
Automated detection approaches:
• Behavioural: analyse behaviours like source/destination address, attachment type, statistical anomaly, etc.
• Content-based: analyse the content of the malicious executable
  - Autograph (H. Ah-Kim, CMU): based on an automated signature generation process
  - N-gram analysis (Maloof, M.A. et al.): based on mining features and using machine learning
Our New Ideas (Khan, Masud and Thuraisingham)
• Content-based approaches consider only machine code (byte code).
• Is it possible to consider higher-level source code for malicious code detection?
• Yes: disassemble the binary executable and retrieve the assembly program
• Extract important features from the assembly program
• Combine with machine-code features
Feature Extraction
• Binary n-gram features
  - Sequence of n consecutive bytes of the binary executable
• Assembly n-gram features
  - Sequence of n consecutive assembly instructions
• System API call features
  - DLL function call information
The Hybrid Feature Retrieval Model
• Collect training samples of normal and malicious executables
• Extract features
• Train a classifier and build a model
• Test the model against test samples
Hybrid Feature Retrieval (HFR)
• Training
[Figure: HFR training flow.]
Hybrid Feature Retrieval (HFR)
• Testing
[Figure: HFR testing flow.]
Feature Extraction
Binary n-gram features
  - Features are extracted from the byte code in the form of n-grams, where n = 2, 4, 6, 8, 10 and so on.
Example:
  Given the 11-byte sequence 0123456789abcdef012345,
  the 2-grams (2-byte sequences) are: 0123, 2345, 4567, 6789, 89ab, abcd, cdef, ef01, 0123, 2345
  the 4-grams (4-byte sequences) are: 01234567, 23456789, 456789ab, ..., ef012345 and so on.
Problem:
  - Large dataset. Too many features (millions!).
Solution:
  - Use secondary memory, efficient data structures
  - Apply feature selection
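A minimal sketch of the sliding-window byte n-gram extraction described above (not the authors' exact implementation); each n-gram is kept as a hex-string feature and counted.

    from collections import Counter

    def byte_ngrams(data: bytes, n: int) -> Counter:
        """Count overlapping n-byte sequences in a binary blob."""
        grams = Counter()
        for i in range(len(data) - n + 1):
            grams[data[i:i + n].hex()] += 1
        return grams

    # The example from the slide: the 11-byte sequence 0123456789abcdef012345
    sample = bytes.fromhex("0123456789abcdef012345")
    print(byte_ngrams(sample, 2))   # {'0123': 2, '2345': 2, '4567': 1, ...}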
Feature Extraction
Assembly n-gram features
  - Features are extracted from the assembly programs in the form of n-grams, where n = 2, 4, 6, 8, 10 and so on.
Example:
  Three instructions: "push eax"; "mov eax, dword[0f34]"; "add ecx, eax"
  The 2-grams are:
  (1) "push eax"; "mov eax, dword[0f34]"
  (2) "mov eax, dword[0f34]"; "add ecx, eax"
Problem:
  - Same problem as binary
Solution:
  - Same solution
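For completeness, a short illustrative sketch of instruction-level n-grams over a disassembled listing (again, not the authors' exact feature encoding):

    def asm_ngrams(instructions, n=2):
        """Return overlapping n-instruction sequences as tuple features."""
        return [tuple(instructions[i:i + n]) for i in range(len(instructions) - n + 1)]

    listing = ["push eax", "mov eax, dword[0f34]", "add ecx, eax"]
    print(asm_ngrams(listing, 2))
    # [('push eax', 'mov eax, dword[0f34]'), ('mov eax, dword[0f34]', 'add ecx, eax')]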
Feature Selection
• Select the best K features
• Selection criterion: Information Gain
• The gain of an attribute A on a collection of examples S is given by

  Gain(S, A) = Entropy(S) - \sum_{v \in Values(A)} \frac{|S_v|}{|S|} Entropy(S_v)
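A small sketch of information-gain feature selection for binary (present/absent) n-gram features; function and variable names are illustrative.

    import numpy as np

    def entropy(y):
        _, counts = np.unique(y, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def info_gain(feature, y):
        """Gain(S, A) = Entropy(S) - sum_v |S_v|/|S| * Entropy(S_v)."""
        gain = entropy(y)
        for v in np.unique(feature):
            mask = feature == v
            gain -= mask.mean() * entropy(y[mask])
        return gain

    def select_best_k(X, y, k):
        gains = [info_gain(X[:, j], y) for j in range(X.shape[1])]
        return np.argsort(gains)[::-1][:k]      # indices of the top-K features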
Experiments
• Dataset
  - Dataset1: 838 malicious and 597 benign executables
  - Dataset2: 1082 malicious and 1370 benign executables
  - Malicious code collected from VX Heavens (http://vx.netlux.org)
• Disassembly
  - Pedisassem (http://www.geocities.com/~sangcho/index.html)
• Training, Testing
  - Support Vector Machine (SVM)
  - C-Support Vector Classifiers with an RBF kernel
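A sketch of the training/testing step: a C-Support Vector Classifier with an RBF kernel. scikit-learn is used here purely for illustration; the original work predates it and the exact parameter values are not given on the slide.

    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def evaluate(X, y):
        clf = SVC(C=1.0, kernel="rbf")           # C-SVC with RBF kernel
        return cross_val_score(clf, X, y, cv=5)  # accuracy per fold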
Results
[Charts: results comparing the three feature sets (figures not reproduced).]
HFS = Hybrid Feature Set
BFS = Binary Feature Set
AFS = Assembly Feature Set
Future Plans
• System calls:
  - Seem to be very useful
  - Need to consider the frequency of calls
  - Call sequence patterns (following program paths)
  - Actions immediately preceding or following a call
• Detect malicious code by program slicing
  - Requires analysis
Data Mining for Buffer Overflow: Introduction
• Goal
  - Intrusion detection
  - e.g.: worm attack, buffer overflow attack
• Main contribution
  - 'Worm' code detection by data mining coupled with 'reverse engineering'
  - Buffer overflow detection by combining data mining with static analysis of assembly code
Background
• What is a 'buffer overflow'?
  - A situation in which a fixed-size buffer is overflowed by a larger input.
• How does it happen?
  - Example (gets() copies the input with no bounds check):

        #include <stdio.h>
        int main(void) {
            char buff[100];
            gets(buff);    /* input longer than 100 bytes overruns buff on the stack */
            return 0;
        }

[Figure: the input string is copied into buff on the stack.]
Background (cont...)
• Then what?
[Figure: the oversized input overruns buff on the stack, the return address is overwritten, and the new return address points to the memory location holding the attacker's code.]
Background (cont...)
• So what?
  - The program may crash, or
  - The attacker can execute arbitrary code
• It can now
  - Execute any system function
  - Communicate with some host, download some 'worm' code and install it!
  - Open a backdoor to take full control of the victim
• How to stop it?
Background (cont...)
• Stopping buffer overflow
  - Preventive approaches
  - Detection approaches
• Preventive approaches
  - Finding bugs in source code. Problem: only works when source code is available.
  - Compiler extensions. Same problem.
  - OS/HW modification
• Detection approaches
  - Capture code-running symptoms. Problem: may require a long running time.
  - Automatically generate signatures of buffer overflow attacks.
CodeBlocker (Our approach)
• A detection approach
• Based on the observation:
  - Attack messages usually contain code while normal messages contain data.
• Main idea
  - Check whether a message contains code
• Problem to solve:
  - Distinguishing code from data
Severity of the problem
• It is not easy to recover the actual instruction sequence from a given string of bits
Our solution
• Apply data mining.
• Formulate the problem as a classification problem (code vs. data)
• Collect a set of training examples containing both kinds of instances
• Train the data with a machine learning algorithm to get a model
• Test this model against a new message
CodeBlocker Model
[Figure: the CodeBlocker processing pipeline.]
Feature Extraction
[Figure: feature extraction steps.]
Disassembly
• We apply the SigFree tool
  - implemented by Xinran Wang et al. (Penn State)
Feature extraction
• Features are extracted using
  - N-gram analysis
  - Control flow analysis
• N-gram analysis
  - What is an n-gram? A sequence of n instructions
  - Traditional approach: flow of control is ignored
[Figure: an assembly program and its corresponding instruction flow graph (IFG); the traditional 2-grams are 02, 24, 46, ..., CE.]
Feature extraction (cont...)
• Control-flow based N-gram analysis
  - What is an n-gram? A sequence of n instructions
  - Proposed control-flow based approach: flow of control is considered
[Figure: the same assembly program and IFG; following control flow, the 2-grams are 02, 24, 46, ..., CE, E6.]
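A sketch of the control-flow based idea: instead of sliding over the listing in address order, n-grams follow the edges of the instruction flow graph (IFG). The graph below is a made-up example, not the one from the slide.

    def cfg_2grams(ifg):
        """ifg maps an instruction id to the ids of its possible successors."""
        grams = set()
        for src, successors in ifg.items():
            for dst in successors:
                grams.add((src, dst))
        return grams

    # e.g. instruction 4 can fall through to 6 or jump back to 2
    ifg = {0: [2], 2: [4], 4: [6, 2], 6: []}
    print(sorted(cfg_2grams(ifg)))   # [(0, 2), (2, 4), (4, 2), (4, 6)]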
Feature extraction (cont...)
• Control flow analysis. Generated features:
  - Invalid Memory Reference (IMR)
  - Undefined Register (UR)
  - Invalid Jump Target (IJT)
• Checking IMR
  - Memory is referenced using register addressing and the register value is undefined
  - e.g.: mov ax, [dx + 5]
• Checking UR
  - Check whether the register value is set properly
• Checking IJT
  - Check whether the jump target falls on a valid instruction boundary
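An illustrative sketch of the UR/IMR checks: walk the disassembled instructions, track which registers have been assigned, and flag memory references made through a register that is still undefined. The instruction format ("mov ax, [dx + 5]") and the set of defining opcodes are simplified assumptions for the example.

    import re

    def check_imr(instructions):
        defined, flags = set(), []
        for ins in instructions:
            op, _, args = ins.partition(" ")
            dest, _, src = args.partition(",")
            mem = re.search(r"\[(\w+)", src or dest)
            if mem and mem.group(1) not in defined:
                flags.append((ins, "IMR/UR: register %s undefined" % mem.group(1)))
            if op in ("mov", "pop", "lea") and "[" not in dest:
                defined.add(dest.strip())        # destination register now defined
        return flags

    print(check_imr(["push eax", "mov ax, [dx + 5]"]))  # flags dx as undefined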
Putting it together
• Why n-gram analysis?
  - Intuition: in general, disassembled executables should have a different pattern of instruction usage than disassembled data.
• Why control flow analysis?
  - Intuition: real code should have no invalid memory references or invalid jump targets.
• Approach
  - Compute all possible n-grams
  - Select the best k of them
  - Compute the feature vector (binary vector) for each training example
  - Supply these vectors to the training algorithm
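A sketch of the binary feature-vector step: each message becomes a 0/1 vector over the k selected n-grams (1 if the n-gram occurs in the message). The selected n-grams shown are hypothetical.

    def to_binary_vector(message_ngrams, selected_ngrams):
        present = set(message_ngrams)
        return [1 if g in present else 0 for g in selected_ngrams]

    selected = [("push eax", "call"), ("xor eax, eax", "ret")]    # hypothetical top-k
    vector = to_binary_vector([("push eax", "call")], selected)   # -> [1, 0]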
Experiments
• Dataset
  - Real traces of normal messages
  - Real attack messages
  - Polymorphic shellcodes
• Training, Testing
  - Support Vector Machine (SVM)
Results
[Chart: detection results (figure not reproduced).]
CFBn: Control-Flow Based n-gram feature
CFF: Control-Flow Feature
Novelty, Advantages, Limitations, Future
• Novelty
  - We introduce the notion of control-flow based n-grams
  - We combine control flow analysis with data mining to distinguish code from data
  - Significant improvement over other methods (e.g. SigFree)
• Advantages
  - Fast testing
  - Signature-free operation
  - Low overhead
  - Robust against many obfuscations
• Limitations
  - Needs samples of attack and normal messages
  - May not be able to detect a completely new type of attack
• Future
  - Find more features
  - Apply dynamic analysis techniques
  - Semantic analysis
Analysis of Firewall Policy Rules Using Data Mining Techniques
• The firewall is the de facto core technology of today's network security
• First line of defense against external network attacks and threats
• A firewall controls or governs network access by allowing or denying incoming and outgoing network traffic according to firewall policy rules
• Manual definition of rules often results in anomalies in the policy
• Detecting and resolving these anomalies manually is a tedious and error-prone task
• Solutions:
  - Anomaly detection: a theoretical framework for the resolution of anomalies; a new algorithm simultaneously detects and resolves any anomaly present in the policy rules
  - Traffic mining: mine the traffic and detect anomalies
Traffic Mining
• To bridge the gap between what is written in the firewall policy rules and what is observed in the network, analyze the traffic and the packet logs – traffic mining
  - Network traffic trends may show that some rules are outdated or not used recently
[Figure: the firewall policy rules and the firewall log file are mined using frequency filtering and rule generalization to identify decaying and dominant rules, producing generic rules and edited firewall rules.]
Traffic Mining Results
1: TCP,INPUT,129.110.96.117,ANY,*.*.*.*,80,DENY
2: TCP,INPUT,*.*.*.*,ANY,*.*.*.*,80,ACCEPT
3: TCP,INPUT,*.*.*.*,ANY,*.*.*.*,443,DENY
4: TCP,INPUT,129.110.96.117,ANY,*.*.*.*,22,DENY
5: TCP,INPUT,*.*.*.*,ANY,*.*.*.*,22,ACCEPT
6: TCP,OUTPUT,129.110.96.80,ANY,*.*.*.*,22,DENY
7: UDP,OUTPUT,*.*.*.*,ANY,*.*.*.*,53,ACCEPT
8: UDP,INPUT,*.*.*.*,53,*.*.*.*,ANY,ACCEPT
9: UDP,OUTPUT,*.*.*.*,ANY,*.*.*.*,ANY,DENY
10: UDP,INPUT,*.*.*.*,ANY,*.*.*.*,ANY,DENY
11: TCP,INPUT,129.110.96.117,ANY,129.110.96.80,22,DENY
12: TCP,INPUT,129.110.96.117,ANY,129.110.96.80,80,DENY
13: UDP,INPUT,*.*.*.*,ANY,129.110.96.80,ANY,DENY
14: UDP,OUTPUT,129.110.96.80,ANY,129.110.10.*,ANY,DENY
15: TCP,INPUT,*.*.*.*,ANY,129.110.96.80,22,ACCEPT
16: TCP,INPUT,*.*.*.*,ANY,129.110.96.80,80,ACCEPT
17: UDP,INPUT,129.110.*.*,53,129.110.96.80,ANY,ACCEPT
18: UDP,OUTPUT,129.110.96.80,ANY,129.110.*.*,53,ACCEPT
Anomaly Discovery Result:
Rule 1, Rule 2:   ==> GENERALIZATION
Rule 1, Rule 16:  ==> CORRELATED
Rule 2, Rule 12:  ==> SHADOWED
Rule 4, Rule 5:   ==> GENERALIZATION
Rule 4, Rule 15:  ==> CORRELATED
Rule 5, Rule 11:  ==> SHADOWED
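An illustrative sketch of pairwise rule-relation checking (shadowing / generalization) on a simplified rule tuple; a real policy analyzer also handles port ranges, directions, and rule order more carefully, so treat the field layout and threshold-free logic below as assumptions.

    def matches_subset(a, b):
        """True if every field of rule a is covered by the corresponding field of b."""
        return all(fb == "*" or fa == fb for fa, fb in zip(a["match"], b["match"]))

    def relation(earlier, later):
        if matches_subset(later, earlier) and later["action"] != earlier["action"]:
            return "SHADOWED"        # the later rule can never take effect
        if matches_subset(earlier, later) and later["action"] != earlier["action"]:
            return "GENERALIZATION"  # the later rule generalizes the earlier exception
        return None

    r1 = {"match": ("TCP", "129.110.96.117", "80"), "action": "DENY"}
    r2 = {"match": ("TCP", "*", "80"), "action": "ACCEPT"}
    print(relation(r1, r2))   # GENERALIZATION, as in Rule 1 / Rule 2 above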
Worm Detection: Introduction
• What are worms?
  - Self-replicating programs; exploit software vulnerabilities on a victim; remotely infect other victims
• Evil worms
  - Severe effect; the Code Red epidemic cost $2.6 billion
• Goals of worm detection
  - Real-time detection
• Issues
  - Substantial volume of identical traffic, random probing
• Methods for worm detection
  - Count the number of sources/destinations; count the number of failed connection attempts
• Worm types
  - Email worms, Instant Messaging worms, Internet worms, IRC worms, File-sharing network worms
• Automatic signature generation possible
  - EarlyBird System (S. Singh, UCSD); Autograph (H. Ah-Kim, CMU)
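A sketch of the counting heuristic mentioned above: flag a source address that contacts an unusually large number of distinct destinations (random probing) within a time window. The threshold is illustrative.

    from collections import defaultdict

    def scan_suspects(flows, max_destinations=100):
        """flows: iterable of (src_ip, dst_ip) pairs observed in one time window."""
        dests = defaultdict(set)
        for src, dst in flows:
            dests[src].add(dst)
        return [src for src, d in dests.items() if len(d) > max_destinations]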
Email Worm Detection using Data Mining
Task: given some training instances of both "normal" and "viral" emails, induce a hypothesis to detect "viral" emails.
We used:
  - Naïve Bayes
  - SVM
[Figure: features are extracted from outgoing emails; a machine-learning classifier is trained on the training data to build the model, and the model labels test data as clean or infected.]
Assumptions
• Features are based on outgoing emails.
• Different users have different "normal" behaviour.
• Analysis should be on a per-user basis.
• Two groups of features
  - Per email (# of attachments, HTML in body, text/binary attachments)
  - Per window (mean words in body, variable words in subject)
• Total of 24 features identified
• Goal: identify "normal" and "viral" emails based on these features
Feature sets
• Per-email features
  = Binary-valued features
    Presence of HTML; script tags/attributes; embedded images; hyperlinks;
    Presence of binary, text attachments; MIME types of file attachments
  = Continuous-valued features
    Number of attachments; number of words/characters in the subject and body
• Per-window features
  = Number of emails sent; number of unique email recipients; number of unique sender addresses; average number of words/characters per subject, body; average word length;
    Variance in number of words/characters per subject, body; variance in word length
  = Ratio of emails with attachments
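A minimal sketch of per-email feature extraction using Python's standard email module; the feature names are illustrative stand-ins for a few of the 24 features listed above.

    import email

    def per_email_features(raw_message: str) -> dict:
        msg = email.message_from_string(raw_message)
        body = msg.get_payload() if not msg.is_multipart() else ""
        attachments = [p for p in msg.walk() if p.get_filename()]
        return {
            "has_html": int("<html" in raw_message.lower()),
            "num_attachments": len(attachments),
            "num_words_subject": len((msg.get("Subject") or "").split()),
            "num_words_body": len(str(body).split()),
        }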
Data Mining Approach
[Figure: test instances are passed through the SVM and Naïve Bayes classifiers and labeled clean or infected.]
Data set
• Collected from UC Berkeley.
  - Contains instances of both normal and viral emails.
• Six worm types:
  - bagle.f, bubbleboy, mydoom.m,
  - mydoom.u, netsky.d, sobig.f
• Originally six sets of data:
  - training instances: normal (400) + five worms (5 x 200)
  - testing instances: normal (1200) + the sixth worm (200)
• Problem: not balanced, no cross validation reported
• Solution: re-arrange the data and apply cross-validation (see the sketch below)
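A sketch of the re-arrangement, under the assumption that all normal and worm instances are pooled into one labeled set and evaluated with stratified cross-validation instead of the original fixed split:

    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.naive_bayes import GaussianNB

    def cv_accuracy(X, y, folds=5):
        cv = StratifiedKFold(n_splits=folds, shuffle=True, random_state=0)
        return cross_val_score(GaussianNB(), X, y, cv=cv).mean()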
Our Implementation and Analysis
• Implementation
  - Naïve Bayes: assume a "normal" (Gaussian) distribution of numeric and real data; smoothing applied
  - SVM: one-class SVM with a radial basis function kernel, using "gamma" = 0.015 and "nu" = 0.1
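For concreteness, scikit-learn equivalents of the two classifiers with the parameter settings quoted above (shown only as a sketch; the original experiments used their own implementations):

    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import OneClassSVM

    nb  = GaussianNB()                                        # Gaussian ("normal") Naive Bayes
    svm = OneClassSVM(kernel="rbf", gamma=0.015, nu=0.1)      # one-class SVM, RBF kernel

    # Usage, assuming feature matrices are available:
    # nb.fit(X_train, y_train); svm.fit(X_train_normal)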
• Analysis
  - NB alone performs better than the other techniques
  - SVM alone also performs better if parameters are set correctly
  - The mydoom.m and VBS.Bubbleboy data sets are not sufficient (very low detection accuracy in all classifiers)
  - The feature-based approach seems to be useful only when we have
    - identified the relevant features
    - gathered enough training data
    - implemented classifiers with the best parameter settings