
CS 5950/6030 Network Security
Class 15 (W, 10/5/05)
Leszek Lilien
Department of Computer Science
Western Michigan University
Based on Security in Computing, Third Edition, by Pfleeger and Pfleeger.
Using some slides courtesy of:
Prof. Aaron Striegel — at U. of Notre Dame
Prof. Barbara Endicott-Popovsky and Prof. Deborah Frincke — at U. Washington
Prof. Jussipekka Leiwo — at Vrije Universiteit (Free U.), Amsterdam, The Netherlands
Slides not created by the above authors are © by Leszek T. Lilien, 2005
Requests to use original slides for non-profit purposes will be gladly granted upon a written request.
3. Program Security
3.1. Secure Programs – Defining & Testing
3.2. Nonmalicious Program Errors
3.3. Malicious Code
3.3.1. General-Purpose Malicious Code (incl. Viruses)
a. Introduction
b. Kinds of Malicious Code
c. How Viruses Work – PART 1
c. How Viruses Work – PART 2
d. Virus Signatures
e. Preventing Virus Infections
f. Seven Truths About Viruses
g. Case Studies
h. Virus Removal and System Recovery After Infection
3.3.2. Targeted Malicious Code
a. Trapdoors
3.3.2. Targeted Malicious Code

Targeted = written to attack a particular system, a particular application, and for a particular purpose

 Many virus techniques apply
 Some new techniques as well
Outline:
a. Trapdoors
b. Salami attack
c. Covert channels
a. Trapdoors (1)


Original def:
Trapdoor / backdoor - A hidden computer flaw known to an
intruder, or a hidden computer mechanism (usually
software) installed by an intruder, who can activate the trap
door to gain access to the computer without being blocked
by security services or mechanisms.
A broader definition:
Trapdoor – an undocumented entry point to a module
 Inserted during code development
 For testing
 As a hook for future extensions
 As emergency access in case of s/w failure
Trapdoors (2)

 Testing:
 With stubs and drivers for unit testing (Fig. 3-10 p. 138)
 Testing with debugging code inserted into tested modules

 Major sources of trapdoors:
 Left-over (purposely or not) stubs, drivers, debugging code
 Some were used for testing, some left by accident
 Poor error checking
 E.g., allowing unacceptable input that causes buffer overflow
 Undefined opcodes in h/w processors

 Not all trapdoors are bad
 Some left purposely w/ good intentions — facilitate system maintenance/audit/testing
 May allow programmer to modify internal module variables
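The "left-over debugging code" source of trapdoors above can be sketched in a few lines. Everything here (the `DEBUG_PASSWORD` hook, the `authenticate` helper, the credential store) is a hypothetical illustration, not code from any real system:

```python
# Hypothetical sketch of a trapdoor left over from testing: a developer
# added a hard-coded debug password for unit testing and forgot to
# remove it before release.

USERS = {"alice": "s3cret"}          # normal credential store
DEBUG_PASSWORD = "letmein-test"      # leftover test hook = trapdoor

def authenticate(user: str, password: str) -> bool:
    if password == DEBUG_PASSWORD:   # undocumented entry point
        return True                  # bypasses all security checks
    return USERS.get(user) == password

# Legitimate login works...
assert authenticate("alice", "s3cret")
# ...but so does the trapdoor, for ANY account name:
assert authenticate("anyone", "letmein-test")
```

The trapdoor is invisible in normal use (legitimate logins behave exactly as specified), which is why testing against the specification alone does not reveal it.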
Class 14 Ended Here
b. Salami attack

Salami attack - merges bits of seemingly inconsequential data to yield powerful results

 Old example: interest calculation in a bank:
 Fractions of 1 ¢ "shaved off" n accounts and deposited in attacker's account
 Nobody notices/cares if 0.1 ¢ vanishes
 Can accumulate to a large sum

 Easy target for salami attacks: computer computations combining large numbers with small numbers
 Require rounding and truncation of numbers
 Relatively small amounts of error from these op's are accepted as unavoidable – not checked unless there is a strong suspicion
 Attacker can hide "salami slices" within the error margin
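The bank-interest example above can be sketched as follows. All names and numbers are made up for illustration; amounts are in cents, interest is truncated to whole cents, and the shaved-off fractions accumulate in the attacker's account:

```python
# Sketch of the interest-calculation salami attack: the truncation
# error that accounting tolerates is redirected to the attacker.
import math

def credit_interest(balances_cents, rate):
    """Credit interest to each account, truncating to whole cents;
    return (new_balances, attacker_gain_in_cents)."""
    attacker_gain = 0.0
    new_balances = []
    for b in balances_cents:
        exact = b * rate                # exact interest, fractional cents
        paid = math.floor(exact)        # truncated amount the customer sees
        attacker_gain += exact - paid   # the "salami slice" (< 1 cent)
        new_balances.append(b + paid)
    return new_balances, attacker_gain

# 100,000 accounts of $100.00 each: every slice is about 0.1 cent,
# too small for anyone to notice or care, yet across all accounts
# the attacker nets roughly $100 per interest run.
_, gain = credit_interest([10_000] * 100_000, 0.01001)
```

Each individual account is off by less than one cent, well inside the rounding error the slide says is "accepted as unavoidable", which is exactly where the attack hides.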
c. Covert Channels (CC) (1)

Outline:
i. Covert Channels - Definition and Examples
ii. Types of Covert Channels
iii. Storage Covert Channels
iv. Timing Covert Channels
v. Identifying Potential Covert Channels
vi. Covert Channels - Conclusions
CC – Definition and Examples (1)

 So far: we looked at malicious pgms that perform wrong actions
 Now: pgms that disclose confidential/secret info
 They violate confidentiality, secrecy, or privacy of info

 Covert channel = a program that leaks information

 Examples:
1) An old military radio communication network
 The busiest node is most probably the command center
2) A group of students preparing for an objective-type exam
 One student who understands the course material agrees to help the others (by some prearranged actions)
3) Secret ways spies recognize each other
 Holding a certain magazine in hand
 Exchanging a secret gesture when approaching each other
Covert Channels – Definition and Examples (2)

 How do programmers create covert channels?
 The attack is based on a Trojan horse
 The Trojan horse uses the covert channel to communicate the extracted data

 Example: pgm with Trojan horse using covert channel

 Should be:

   Protected
   Legitimate  <------ [ Service Pgm ] ------> User
   Data

 Is:

   Protected
   Legitimate  <------ [ Service Pgm    ] ------> User
   Data                [ with Trojan h. ]
                               |
                               | covert channel
                               v
                        Spy (watches secretly)

 (Spy - e.g., programmer who put Trojan into pgm; directly or via Spy Pgm)
How are covert channels created?
I.e., how is data hidden?

Example: leaked data hidden in output reports (or displays)
 Different 'marks' in the report:
 Varying report format
 Changing line length / changing number of lines per page
 Printing or not printing certain values, characters, or headings
 For example: changing the word TOTAL to TOTALS in a heading would not be noticed, but it conveys one bit of information — a 1-bit covert channel.
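The TOTAL/TOTALS trick above fits in a few lines of code. This is a minimal sketch with invented function names, not taken from the textbook:

```python
# Sketch of the 1-bit covert channel: the Trojan inside the report
# generator leaks one bit by choosing between the headings "TOTAL"
# and "TOTALS" -- a change no ordinary reader would notice.

def make_report(amount_cents: int, secret_bit: int) -> str:
    heading = "TOTALS" if secret_bit else "TOTAL"   # covert encoding
    return f"{heading}: {amount_cents / 100:.2f}"

def read_leaked_bit(report: str) -> int:
    # The spy decodes the bit from the heading alone;
    # the report's legitimate content is irrelevant to the channel.
    return 1 if report.startswith("TOTALS") else 0

assert read_leaked_bit(make_report(12345, 1)) == 1
assert read_leaked_bit(make_report(12345, 0)) == 0
```

One bit per report is slow, but repeated over many reports (or combined with other "marks" such as line lengths) the channel can leak arbitrary data.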
ii. Types of Covert Channels

 Types of covert channels:
 Storage covert channels
 Convey info by the presence or absence of an object in storage
 Timing covert channels
 Convey info by varying the speed at which things happen
iii. Storage Channels (1)

 Example of storage channel: file lock covert channel in a multiuser system
 A covert channel can signal one bit of information by whether or not a file is locked.

                      lock(1) / unlock(0)            Locked? yes: 1, no: 0
 [ Service program ] ---------------------> [ File ] < . . . . . . . . . . [ Spy's program ]
         |
 [ Protected data ]

 (The user can see the service program's output; the spy can see only whether the file is locked)
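As a rough sketch (with invented names, and an ordinary file's existence standing in for a real file lock), the channel in the figure can be simulated like this:

```python
# Simulation of the file-lock storage channel: the service program
# (with the Trojan) signals bit 1 by creating the "lock" and bit 0 by
# removing it; the spy's program only checks whether the lock exists --
# it never touches the protected data itself.
import os, tempfile

lock_path = os.path.join(tempfile.mkdtemp(), "report.lock")

def trojan_signal(bit: int) -> None:
    if bit:
        open(lock_path, "w").close()       # lock(1): file "locked"
    elif os.path.exists(lock_path):
        os.remove(lock_path)               # unlock(0)

def spy_read() -> int:
    return 1 if os.path.exists(lock_path) else 0   # Locked? yes: 1, no: 0

# Leak the confidential bits 1, 0, 1 one at a time:
leaked = []
for bit in (1, 0, 1):
    trojan_signal(bit)
    leaked.append(spy_read())
assert leaked == [1, 0, 1]
```

Note that no access-control rule is violated: the spy reads only public metadata (the lock status), which is exactly why storage channels slip past ordinary confidentiality mechanisms.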
iv. Timing Channels

 Timing channels are shared-resource channels in which the shared resource is time.

 Simple example of timing channel:
 Multiprogramming system "slices" processor time for programs running on the processor
 2 processes only: Trojan (pgm with Trojan) and Spy Pgm
 Trojan receives all odd slices; Spy Pgm receives all even slices
 Trojan signals Xk=1 by using its time slice, signals Xk=0 by abstaining from using its slice
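A real timing channel would use actual CPU time slices; the simplified simulation below (all names invented) just records which odd slices the Trojan used, which is the information the Spy Pgm can infer by observing system load during its own even slices:

```python
# Simplified simulation of the slice-based timing channel: the Trojan
# owns odd slices 1, 3, 5, ... and uses slice 2k+1 iff bit X_k = 1.

def trojan_schedule(bits):
    """Return per-slice usage for the Trojan's odd slices."""
    usage = {}
    for k, bit in enumerate(bits):
        usage[2 * k + 1] = bool(bit)   # use the slice iff the bit is 1
    return usage

def spy_decode(usage, n_bits):
    # The spy observes (e.g., via system responsiveness in its even
    # slices) whether each preceding odd slice was used.
    return [1 if usage.get(2 * k + 1, False) else 0 for k in range(n_bits)]

secret = [1, 0, 0, 1, 1]
assert spy_decode(trojan_schedule(secret), len(secret)) == secret
```

The key property is the same as in the storage case: the Trojan and spy never exchange data directly, only modulate and observe a shared resource (here, processor time).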
v. Identifying Potential Covert Channels (1)

 Covert channels are not easy to identify
 Otherwise they wouldn't be covert, right?

 Two techniques for locating covert channels:
1) Shared Resource Matrix
2) Information Flow Method
Identifying Potential Covert Channels (2)

1) The Shared Resource Matrix method
 A shared resource is the basis for a covert channel
 => identify shared resources and the processes reading/writing them

 Step 1: Construct the Shared Resource Matrix (SRM)
 Rows — resources
 Columns — processes that access them:
 R = observe resource, M = modify/set/create/delete resource

 Example:
                         Process 1   Process 2
 Lock                    R, M        R, M
 X (confidential data)   R
Identifying Potential Covert Channels (3)

 ...
                Pgm 1   Pgm 2
 Lock on FX     R, M    R, M
 X (confid.)    R

 Step 2: Look for the pattern:
                Pi      Pj
 Rm             M       R
 Rn             R

 Meaning of this pattern:
 Process Pj can get the value of resource Rn via process Pi
 (and a covert channel)

 Q: Do you see such a pattern in the SRM above?
Identifying Potential Covert Channels (4)

 ...
                Process 1   Process 2
 Lock on FX     R, M        R, M
 X (confid.)    R

 Step 2: Look for the pattern:
                Pi      Pj
 Rm             M       R
 Rn             R

 Meaning of this pattern:
 Process Pj can get the value of resource Rn via process Pi
 (and a covert channel)

 Q: Do you see such a pattern in the SRM above?
 A: Yes. Process 2 can get the value of X via Process 1
 (no surprise: Proc. 1 & 2 are the Trojan & Spy from the earlier example)
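Step 2 of the SRM method is mechanical enough to automate. The sketch below (a simplified illustration, not the textbook's algorithm) represents the matrix as nested dictionaries and scans for the M/R pattern from the slides:

```python
# Scan a Shared Resource Matrix for the covert-channel pattern:
# Pi modifies Rm and reads confidential Rn, while Pj reads Rm but
# cannot read Rn directly -- so Pj can learn Rn through Pi.

def find_covert_patterns(srm):
    procs = sorted({p for row in srm.values() for p in row})
    hits = []
    for rm in srm:                       # candidate signalling resource
        for rn in srm:                   # candidate confidential resource
            if rm == rn:
                continue
            for pi in procs:
                for pj in procs:
                    if pi == pj:
                        continue
                    if ("M" in srm[rm].get(pi, set())
                            and "R" in srm[rn].get(pi, set())
                            and "R" in srm[rm].get(pj, set())
                            and not srm[rn].get(pj)):  # Pj has no direct access
                        hits.append((pi, pj, rm, rn))
    return hits

# The matrix from the slides: the file lock and the confidential X.
srm = {
    "Lock on FX": {"Process 1": {"R", "M"}, "Process 2": {"R", "M"}},
    "X (confid.)": {"Process 1": {"R"}},
}
assert ("Process 1", "Process 2", "Lock on FX", "X (confid.)") in find_covert_patterns(srm)
```

On the example matrix the scan reports exactly the Trojan/Spy pair from the file-lock channel: Process 2 can learn X via Process 1 and the lock.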
Identifying Potential Covert Channels (5)

2) Information Flow Method
 Flow analysis of a pgm's syntax
 Can be automated within a compiler
 Identifies non-obvious flows of info between pgm statements

 Examples of flows of info between pgm stmts:
 B := A – an explicit flow from A to B
 B := A; C := B – an explicit flow from A to C (via B)
 IF C = 1 THEN B := A
 – an explicit flow from A to B
 – an implicit flow from C to B (bec. B can change iff C = 1)
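The three flows listed above can be traced with a minimal taint-tracking sketch (invented for illustration; real compiler-based flow analysis works on the syntax tree): each variable carries the set of sources that influenced it, assignments propagate explicit flows, and assignments guarded by an IF also pick up an implicit flow from the condition's variables.

```python
# Minimal taint tracking for the example statements on this slide.
taint = {"A": {"A"}, "B": set(), "C": {"C"}}

def assign(target, source, condition_vars=()):
    """Model `target := source`, possibly guarded by IF over condition_vars."""
    t = set(taint[source])             # explicit flow from the source
    for v in condition_vars:
        t |= taint[v]                  # implicit flow from the guard
    taint[target] = t

assign("B", "A")                       # B := A        -> A flows to B
assert taint["B"] == {"A"}
assign("C", "B")                       # C := B        -> A flows to C (via B)
assert taint["C"] == {"A"}

taint = {"A": {"A"}, "B": set(), "C": {"C"}}
assign("B", "A", condition_vars=["C"]) # IF C = 1 THEN B := A
assert taint["B"] == {"A", "C"}        # explicit from A, implicit from C
```

The implicit flow is the interesting case for covert channels: B's value reveals something about C even though no statement ever assigns C to B.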
Identifying Potential Covert Channels (6)

 More examples of flows of info between pgm stmts [textbook and J. Leiwo]
Identifying Potential Covert Channels (7)

 Steps of the Information Flow Method (IFM):
1) Analyze statements
2) Integrate results to see which outputs are affected by which inputs

 Variants of IFM:
1) IFM during compilation
2) IFM on design specs
Covert Channels - Conclusions

 Covert channels are a serious threat to confidentiality and thus security
 ("CIA" = security)

 Any virus/Trojan horse can create a covert channel

 In open systems — no way to prevent covert channels
 Very high security systems require a painstaking and costly design preventing (some) covert channels
 Analysis must be performed periodically as a high security system evolves
End of Class 15
Controls for Program Security

 How to control security of pgms during their development and maintenance

 Outline:
a. Introduction
b. Developmental controls for security
c. Operating system controls for security
d. Administrative controls for security
e. Conclusions
a. Introduction

 "Better to prevent than to cure"

 Three types of controls for security (against pgm flaws):
1) Developmental controls
2) OS controls
3) Administrative controls
b. Developmental Controls for Security (1)

 Nature of s/w development
 Collaborative effort, involves people with different skills
 Team of developers, each involved in ≥ 1 of the stages:
 Requirement specification
 do "X"
 do "X" and "nothing more"
 Design
 Implementation
 Testing
 Documenting at each stage
 Reviewing at each stage
 Managing system development thru all stages
 Maintaining deployed system (updates, patches, new versions, etc.)

 Both product and process contribute to overall quality
Developmental Controls for Security (2)

 Fundamental principles of s/w engineering:
1) Modularity
2) Encapsulation
3) Info hiding

1) Modularity
 Dividing a process into subtasks
 Advantages: maintenance, understandability, reuse, correctness, testing
 Modularity should improve correctness
 Fewer flaws => better security
Developmental Controls for Security (3)

2) Encapsulation
 Minimizing info sharing with other modules
 => Limited interfaces reduce the number of covert channels
 Well documented interfaces
 "Hiding what should be hidden and showing what should be visible."

3) Information hiding
 Module is a black box
 Well defined function and I/O
 Easy to know what a module does but not how it does it
 Reduces complexity, interactions, covert channels, ...
 => better security
Developmental Controls for Security (4)

 Techniques for building solid software:
1) Peer reviews
2) Hazard analysis
3) Testing
4) Good design
5) Risk prediction & management
6) Static analysis
7) Configuration management
8) Additional developmental controls
 ... all discussed below ...
Developmental Controls for Security (5)

1) Peer reviews - three types
 Reviews
 Informal
 Team of reviewers
 Gain consensus on solutions before development
 Walk-throughs
 Product is presented to the team to discover flaws in the design or document
 Inspections
 Detailed analysis of the product against a prepared list of concerns
 Various types of peer reviews can be highly effective
Developmental Controls for Security (6)

2) Hazard analysis
 = systematic techniques to expose potentially hazardous system states, including security vulnerabilities

 Components of HA:
 Hazard lists
 System-wide view (not just code)
 Begins Day 1
 Continues throughout the SDLC (= s/w dev't life cycle)
Developmental Controls for Security (7)

3) Testing – phases:
 Module/component/unit testing of indiv. modules
 Integration testing of interacting (sub)system modules
 (System) function testing – checking against the functions described by requirement specifications
 (System) acceptance testing – with the customer, against the customer's requirements — on seller's or customer's premises
 (System) installation testing – after installation on the customer's system
 Regression testing – after updates/changes to s/w, checking that the system still works without performance degradation

 Types of testing:
 Black box testing – testers can't examine code
 White box / clear box testing – testers can examine design and code, can see inside modules/system
Developmental Controls for Security (8)

 Configuration management
 = process of controlling system modifications during development and maintenance
 Offers security benefits by scrutinizing new/changed code

 Lessons from mistakes

 Proof of program correctness
 Shows that the program computes the particular result correctly, and does nothing beyond that
c. Operating System Controls for Security (1)

 Developmental controls are not always used
 OR: even if used, they are not foolproof
 => Need other, complementary controls, incl. OS controls

 Such OS controls can protect against some pgm flaws
Operating System Controls for Security (2)

 Trusted software
 – Code rigorously developed and analyzed so we can trust that it does all and only what the specification says
 Trusted code establishes a foundation upon which untrusted code runs
 Trusted code establishes a security baseline for the whole system
 In particular, the OS can be trusted s/w
Operating System Controls for Security (3)

 Key characteristics determining if OS code is trusted:
1) Functional correctness
 OS code is consistent with its specs
2) Enforcement of integrity
 OS keeps the integrity of its data and other resources even if presented with flawed or unauthorized commands
3) Limited privileges
 OS minimizes access to secure data/resources
 Trusted pgms must have "need to access" and proper access rights to use resources protected by the OS
 Untrusted pgms can't access resources protected by the OS
Operating System Controls for Security (4)

 Ways of increasing security if untrusted pgms are present:
1) Mutual suspicion
2) Confinement
3) Access log

1) Mutual suspicion between programs
 Distrust other pgms – treat them as if they were incorrect or malicious
 Pgm protects its interface data
 With data checks, etc.
Operating System Controls for Security (5)

2) Confinement
 OS can confine access to resources by a suspected pgm
 Example 1: strict compartmentalization
 Pgm can affect data and other pgms only within its compartment
 Can limit the spread of viruses
Operating System Controls for Security (6)

3) Audit log / access log
 Records who/when/how (e.g., for how long) accessed/used which objects
 Events logged: logins/logouts, file accesses, pgm executions, device uses, failures, repeated unsuccessful commands (e.g., many repeated failed login attempts can indicate an attack)
 Audit frequently for unusual events, suspicious patterns
 Forensics – investigation to find who broke laws, policies, or rules
d. Administrative Controls for Security (1)

 They prohibit or demand certain human behavior via policies, procedures, etc.
 They include:
1) Standards of program development
2) Security audits
3) Separation of duties
Administrative Controls for Security (2)

1) Standards and guidelines (S&G) for program development
 Capture experience and wisdom from previous projects
 Facilitate building higher-quality s/w (incl. more secure)
 They include:
 Design S&G – design tools, languages, methodologies
 S&G for documentation, language, and coding style
 Programming S&G – incl. reviews, audits
 Testing S&G
 Configuration mgmt S&G

2) Security audits
 Check compliance with S&G
 Scare a potentially dishonest programmer away from including illegitimate code (e.g., a trapdoor)
Administrative Controls for Security (3)

3) Separation of duties
 Break sensitive tasks into ≥ 2 pieces to be performed by different people (learned from banks)
 Example 1: modularity
 Different developers for cooperating modules
 Example 2: independent testers
 Rather than a developer testing her own code
e. Conclusions

 Developmental / OS / administrative controls help produce/maintain higher-quality (also more secure) s/w
 Art and science - no "silver bullet" solutions
 "A good developer who truly understands security will incorporate security into all phases of development."

 Summary (for Controls for Security):

 Control          Purpose                          Benefit
 Developmental    Limit mistakes;                  Produce better software
                  make malicious code difficult
 Operating        Limit access to system           Promotes safe sharing of info
 System
 Administrative   Limit actions of people          Improve usability, reusability,
                                                   and maintainability