Transcript Document
CS 5950 –
Computer Security and Information Assurance
Section 6: Program Security
This is the long version of Section 6.
It includes OPTIONAL slides that you may SKIP.
Dr. Leszek Lilien
Department of Computer Science
Western Michigan University
Slides based on Security in Computing, Third Edition, by Pfleeger and Pfleeger.
Using some slides courtesy of:
Prof. Aaron Striegel — course taught at U. of Notre Dame
Prof. Barbara Endicott-Popovsky and Prof. Deborah Frincke (U. Idaho) — taught at U. Washington
Prof. Jussipekka Leiwo — taught at Vrije Universiteit (Free U.), Amsterdam, The Netherlands
Slides not created by the above authors are © 2006 by Leszek T. Lilien
Requests to use original slides for non-profit purposes will be gladly granted upon a written request.
Program Security – Outline (1)
6.1. Secure Programs – Defining & Testing
a. Introduction
b. Judging S/w Security by Fixing Faults
c. Judging S/w Security by Testing Pgm Behavior
d. Judging S/w Security by Pgm Security Analysis
e. Types of Pgm Flaws
6.2. Nonmalicious Program Errors
a. Buffer overflows
b. Incomplete mediation
c. Time-of-check to time-of-use errors
d. Combinations of nonmalicious program flaws
Program Security – Outline (2)
6.3. Malicious Code
6.3.1. General-Purpose Malicious Code incl. Viruses
a. Introduction
b. Kinds of Malicious Code
c. How Viruses Work
d. Virus Signatures
e. Preventing Virus Infections
f. Seven Truths About Viruses
g. Case Studies
h. Virus Removal and System Recovery After Infection
6.3.2. Targeted Malicious Code
a. Trapdoors
b. Salami attack
c. Covert channels
Program Security – Outline (3)
6.4. Controls for Security
a. Introduction
b. Developmental controls for security
c. Operating System controls for security
d. Administrative controls for security
e. Conclusions
6. Program Security (1)
Program security –
The fundamental step in applying security to computing
Protecting programs is the heart of computer security
All kinds of programs: from applications, through OS and DBMS, to network s/w
Issues:
How to keep pgms free from flaws
How to protect computing resources from pgms with flaws
Issues of trust not considered:
How trustworthy is a pgm you buy?
How to use it in its most secure way?
Partial answers:
Third-party evaluations
Liability and s/w warranties
--SKIP-- Program Security (2)
Outline:
6.1. Secure Programs – Defining and Testing
6.2. Nonmalicious Program Errors
6.3. Malicious Code
6.3.1. General-Purpose Malicious Code incl. Viruses
6.3.2. Targeted Malicious Code
6.4. Controls Against Program Threats
6.1. Secure Programs - Defining & Testing
… Continued …
[cf. B. Endicott-Popovsky]
a. Introduction (1)
Pgm is secure if we trust that it provides/enforces:
Confidentiality
Integrity
Availability
What is „Program security?”
Depends on who you ask
user - fit for his task
programmer - passes all „her” tests
manager - conformance to all specs
Developmental criteria for program security include:
Correctness of security & other requirements
Correctness of implementation
Correctness of testing
Introduction (2)
Fault tolerance terminology:
Error - may lead to a fault
Fault - cause for deviation from intended function
Failure - system malfunction caused by fault
Note:
[cf. A. Striegel]
Faults - seen by „insiders” (e.g., programmers)
Failures - seen by „outsiders” (e.g., independent testers, users)
Error/fault/failure example:
Programmer’s indexing error, leads to buffer overflow fault
Buffer overflow fault causes system crash (a failure)
Two categories of faults w.r.t. duration
[cf. A. Striegel]
Permanent faults
Transient faults – can be much more difficult to diagnose
Basic approaches to having secure programs:
1) Judging s/w security by fixing pgm faults
Red Team / Tiger Team tries to crack s/w
If pgm withstands the attack => security is good
2) Judging s/w security by testing pgm behavior
Run tests to compare behavior vs. requirements (think testing in s/w engg)
Important: If a flaw is detected as a failure (an effect), look for the underlying fault (the cause)
Recall: fault seen by insiders, failure – by outsiders
If possible, detect faults before they become failures
Any kind of fault/failure can cause a security incident
=> we must consider security consequences for all kinds of detected faults/failures
Even inadvertent faults / failures
Inadvertent faults are the biggest source of security vulnerabilities exploited by attackers
Testing only increases probability of eliminating faults
[cf. B. Endicott-Popovsky]
3) Judging s/w security by pgm security analysis
Best approach to judging s/w security
Analyze what can go wrong
At every stage of program development!
From requirement definition to testing
After deployment
Configurations / policies / practices
[cf. B. Endicott-Popovsky]
--SKIP-- b. Judging S/w Security by Fixing Faults
An approach to judge s/w security: penetrate and patch
Red Team / Tiger Team tries to crack s/w
If s/w withstands the attack => security is good
Is this true? Rarely.
Too often developers try to quick-fix problems
discovered by Tiger Team
Quick patches often introduce new faults due to:
Pressure – causing narrow focus on fault, not
context
Non-obvious side effects
System performance requirements not allowing
for security overhead
[cf. A. Striegel]
--SKIP-- c. Judging S/w Security
by Testing Pgm Behavior (1)
Better approach to judging s/w security:
testing pgm behavior
Compare behavior vs. requirements (think testing/SW eng)
Program security flaw =
= inappropriate behavior caused by a pgm fault or failure
Flaw detected as a fault or a failure
Important: If flaw detected as a failure (an effect), look for
the underlying fault (the cause)
Recall: fault seen by insiders, failure – by outsiders
If possible, detect faults before they become failures
Note:
Textbook defines flaw-vulnerability-flaw in a circular way
– a terminology soup!
--SKIP-- Judging S/w Security by Testing Pgm Behavior (2)
Any kind of fault/failure can cause a security incident
Misunderstood requirements /
error in coding / typing error
In a single pgm / interaction of k pgms
Intentional flaws or accidental (inadvertent) flaws
Therefore, we must consider security consequences for all
kinds of detected faults/failures
Even inadvertent faults / failures
Inadvertent faults are the biggest source of security
vulnerabilities exploited by attackers
Even dormant faults
Eventually can become failures harming users
--SKIP-- Judging S/w Security by Testing Pgm Behavior (3)
Problems with pgm behavior testing
Limitations of testing
Complexity – malicious attacker’s best friend
Can’t test exhaustively
Testing checks what the pgm should do
Can’t test what the pgm should not do
i.e., can’t make sure that pgm does only what it should do –
nothing more
Too complex to model / to test
Exponential # of pgm states / data combinations
a faulty line hiding in 10 million lines of code
Evolving technology
New s/w technologies appear
Security techniques catching up with s/w technologies
[cf. A. Striegel]
++SKIP++ d. Judging S/w Security
by Pgm Security Analysis
Best approach to judging s/w security:
pgm security analysis
Analyze what can go wrong
At every stage of program development!
From requirement definition to testing
After deployment
Configurations / policies / practices
Protect against security flaws
Specialized security methods and techniques
Specialized security tools
E.g., specialized security meth/tech/tools for switching s/w
[cf. B. Endicott-Popovsky]
e. Types of Pgm Flaws
Taxonomy of pgm flaws:
1) Intentional
a) Malicious
b) Nonmalicious
2) Inadvertent
a) Validation error (incomplete or inconsistent)
e.g., incomplete or inconsistent input data
b) Domain error
e.g., using a variable value outside of its domain
c) Serialization and aliasing
serialization – e.g., in DBMSs or OSs
aliasing - one variable or some reference, when changed, has an
indirect (usually unexpected) effect on some other data
Note: ‘Aliasing’ not in computer graphics sense!
d) Inadequate ID and authentication (Section 4—on OSs)
e) Boundary condition violation
f) Other exploitable logic errors
[cf. B. Endicott-Popovsky]
6.2. Nonmalicious Program Errors
Nonmalicious program errors include:
a. Buffer overflows
b. Incomplete mediation
c. Time-of-check to time-of-use errors
d. Combinations of nonmalicious program flaws
a. Buffer Overflows (1)
Buffer overflow flaw — often inadvertent (=>nonmalicious)
but with serious security consequences
Many languages require buffer size declaration
C language statement: char sample[10];
Execute statement:
sample[i] = ‘A’; where i=10
Out of bounds (0-9) subscript – buffer overflow occurs
Some compilers don’t check for exceeding bounds
C does not perform array bounds checking.
Similar problem caused by pointers
No reasonable way to define limits for pointers
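A minimal C sketch of the out-of-bounds write above (illustrative only; the actual effect depends on the compiler and on what happens to sit next to the buffer in memory):

    #include <stdio.h>

    int main(void) {
        char sample[10];            /* valid indices: 0 through 9 */
        int i = 10;                 /* one past the end of the buffer */
        sample[i] = 'A';            /* out-of-bounds write: C performs no bounds check */
        /* 'A' lands in whatever memory follows sample[]: other data,
           saved registers, or even a return address */
        printf("wrote past the end of sample[]\n");
        return 0;
    }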
[cf. B. Endicott-Popovsky]
Buffer Overflows (2)
Where does ‘A’ go?
Depends on what is adjacent to ‘sample[10]’
Affects user’s data - overwrites user’s data
Affects user's code - changes user's instruction
Affects OS data - overwrites OS data
Affects OS code - changes OS instruction
This is a case of aliasing (cf. Slide 26)
[cf. B. Endicott-Popovsky]
Buffer Overflows (3)
Implications of buffer overflow:
Attacker can insert malicious data values/instruction
codes into „overflow space”
Supp. buffer overflow affects OS code area
Attacker code executed as if it were OS code
Attacker might need to experiment to see what
happens when he inserts A into OS code area
Can raise attacker’s privileges (to OS privilege level)
When A is an appropriate instruction
Attacker can gain full control of OS
[cf. B. Endicott-Popovsky]
Buffer Overflows (4)
Supp. buffer overflow affects a call stack area
A scenario:
Stack: [data][data][...]
Pgm executes a subroutine
=> return address pushed onto stack
(so subroutine knows where to return control to when finished)
Stack: [ret_addr][data][data][...]
Subroutine allocates dynamic buffer char sample[10]
=> buffer (10 empty spaces) pushed onto stack
Stack: [..........][ret_addr][data][data][...]
Subroutine executes: sample[i] = ‘A’ for i = 10
Stack: [..........][A][data][data][...]
Note: ret_address overwritten by A!
(Assumed: size of ret_address is 1 char)
Buffer Overflows (5)
Supp. buffer overflow affects a call stack area—CONT
Stack: [..........][A][data][data][...]
Subroutine finishes
Buffer for char sample[10] is deallocated
Stack: [A][data][data][...]
RET operation pops A from stack (considers it ret. addr.)
Stack: [data][data][...]
Pgm (which called the subroutine) jumps to A
=> shifts program control to where attacker wanted
Note: By playing with one's own pgm, attacker can specify
any „return address” for his subroutine
Upon subroutine return, pgm transfers control to
attacker’s chosen address A (even in OS area)
Next instruction executed is the one at address A
Could be 1st instruction of pgm that grants
highest access privileges to its „executor”
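A classic C illustration of this stack scenario (a hypothetical sketch: a real exploit must match the exact stack layout and return-address size, and modern compilers add defenses such as stack canaries):

    #include <string.h>

    void subroutine(const char *input) {
        char sample[10];
        /* no length check: a long input overruns sample[] and can overwrite
           the saved return address stored higher up the call stack */
        strcpy(sample, input);
    }   /* on return, control jumps to whatever now sits in the saved
           return-address slot, possibly an attacker-chosen address A */

    int main(void) {
        subroutine("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");   /* 32 'A's overflow the buffer */
        return 0;
    }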
Buffer Overflows (6)
Note:
[Wikipedia – aliasing]
C programming language specifications do not specify
how data is to be laid out in memory (incl. stack layout)
Some implementations of C may leave space between
arrays and variables on the stack, for instance, to
minimize possible aliasing effects.
Buffer Overflows (7)
Web server attack similar to buffer overflow attack:
pass very long string to web server (details: textbook, p.103)
Buffer overflows still common
Used by attackers
to crash systems
to exploit systems by taking over control
Large # of vulnerabilities due to buffer overflows
b. Incomplete Mediation (1)
Incomplete mediation flaw — often inadvertent (=>
nonmalicious) but with serious security consequences
Incomplete mediation:
Sensitive data are in exposed, uncontrolled condition
Example
URL to be generated by client’s browser to access server,
e.g.:
http://www.things.com/order/final&custID=101&part=555A&qy=20
&price=10&ship=boat&shipcost=5&total=205
Instead, user edits URL directly, changing price and total
cost as follows:
http://www.things.com/order/final&custID=101&part=555A&qy=20
&price=1&ship=boat&shipcost=5&total=25
User uses forged URL to access server
The server takes 25 as the total cost
--SKIP-- Incomplete Mediation (2)
Unchecked data are a serious vulnerability!
Possible solution: anticipate problems
Don’t let client return a sensitive result (like total)
that can be easily recomputed by server
Use drop-down boxes / choice lists for data input
Prevent user from editing input directly
Check validity of data values received from client
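A minimal server-side check in C following the advice above (illustrative only; field names mirror the URL example, and the unit price comes from the server's own catalog, never from the client):

    #include <stdio.h>

    /* Recompute the total on the server and reject any client-supplied mismatch. */
    int validate_order(int qty, int server_unit_price, int ship_cost, int client_total) {
        int expected = qty * server_unit_price + ship_cost;
        if (client_total != expected) {
            fprintf(stderr, "rejecting order: client total %d, expected %d\n",
                    client_total, expected);
            return 0;
        }
        return 1;
    }

    int main(void) {
        /* forged URL from the example: qty=20, real price=10, ship=5, total=25 */
        if (!validate_order(20, 10, 5, 25))
            printf("order rejected\n");
        return 0;
    }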
c. Time-of-check to Time-of-use Errors (1)
Time-of-check to time-of-use flaw — often inadvertent (=>
nonmalicious) but with serious security consequences
A.k.a. synchronization flaw / serialization flaw
TOCTTOU — mediation with “bait and switch” in the middle
Non-computing example:
Swindler shows buyer real Rolex watch (bait)
After buyer pays, switches real Rolex to a forged one
In computing:
Change of a resource (e.g., data) between time
access checked and time access used
Q: Any examples of TOCTTOU problems from
computing?
Time-of-check to Time-of-use Errors (2)
...
TOCTTOU — mediation with “bait and switch” in the middle
...
Q: Any examples of TOCTTOU problems from
computing?
A: E.g., DBMS/OS: serialization problem:
pgm1 reads value of X = 10
pgm1 adds X = X+ 5
pgm2 reads X = 10, adds 3 to X, writes X = 13
pgm1 writes X = 15
X ends up with value 15 – should be X = 18
--SKIP-- Time-of-check to Time-of-use Errors (3)
Prevention of TOCTTOU errors
Be aware of time lags
Use digital signatures and certificates to „lock” data
values after checking them
So nobody can modify them after check & before
use
Q: Any examples of preventing TOCTTOU from
DBMS/OS areas?
--SKIP-- Time-of-check to Time-of-use Errors (4)
Prevention of TOCTTOU errors
...
Q: Any examples of preventing TOCTTOU from
DBMS/OS areas?
A1: E.g., DBMS: locking to enforce proper serialization
(locks need not use signatures—fully controlled by DBMS)
In the previous example:
will force writing X = 15 by pgm 1, before pgm2
reads X (so pgm 2 adds 3 to 15)
OR:
will force writing X = 13 by pgm 2, before pgm1
reads X (so pgm 1 adds 5 to 13)
A2: E.g., DBMS/OS: any other concurrency control
mechanism enforcing serializability
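A minimal sketch of the same locking idea with a POSIX mutex in C (a DBMS uses its own lock manager, but the principle of holding the lock from check to use is the same; compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    static int X = 10;
    static pthread_mutex_t x_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Hold the lock from the read (check) to the write (use),
       so the other pgm cannot slip in between. */
    static void *add_to_x(void *arg) {
        int delta = *(int *)arg;
        pthread_mutex_lock(&x_lock);
        int tmp = X;                 /* time of check */
        tmp += delta;
        X = tmp;                     /* time of use */
        pthread_mutex_unlock(&x_lock);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int d1 = 5, d2 = 3;
        pthread_create(&t1, NULL, add_to_x, &d1);
        pthread_create(&t2, NULL, add_to_x, &d2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("X = %d\n", X);       /* always 18, whichever pgm runs first */
        return 0;
    }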
--SKIP-- d. Combinations of Nonmal. Pgm Flaws
The above flaws can be exploited in multiple steps by a
concerted attack
Nonmalicious flaws can be exploited to plant malicious flaws
(next)
6.3. Malicious Code
Malicious code or rogue pgm is written to exploit flaws in pgms
Malicious code can do anything a pgm can
Malicious code can change
Data
Other programs
Malicious code - „officially” defined by Cohen in 1984,
but virus behavior known since at least the 1970 Ware study for the
Defense Science Board (classified, made public in 1979)
Outline for this Subsection:
6.3.1. General-Purpose Malicious Code (incl. Viruses)
6.3.2. Targeted Malicious Code
6.3.1. General-Purpose Malicious Code
(incl. Viruses)
Outline
a. Introduction
b. Kinds of Malicious Code
c. How Viruses Work
d. Virus Signatures
e. Preventing Virus Infections
f. Seven Truths About Viruses
g. Case Studies
h. Virus Removal and System Recovery After Infection
[cf. B. Endicott-Popovsky]
a. Introduction
Viruses are prominent example of general-purpose malicious
code
Not „targeted” against any user
Attacks anybody with a given app/system/config/...
Viruses
Many kinds and varieties
Benign or harmful
Transferred even from trusted sources
Also from „trusted” sources that are negligent to update
antiviral programs and check for viruses
[cf. B. Endicott-Popovsky]
--REMIND YOURSELF – (from Section 1)
b. Kinds of Malicious Code (1)
[Diagram: kinds of malicious code - Trapdoors, Trojan Horses, Bacteria, Logic Bombs, Worms, Viruses]
[cf. Barbara Edicott-Popovsky and Deborah Frincke, CSSE592/492, U. Washington]
--REMIND YOURSELF – (from Section 1) b. Kinds of Malicious Code (2)
Trojan horse - A computer program that appears to have a
useful function, but also has a hidden and potentially
malicious function that evades security mechanisms,
sometimes by exploiting legitimate authorizations of a
system entity that invokes the program
Virus - A hidden, self-replicating section of computer
software, usually malicious logic, that propagates by
infecting (i.e., inserting a copy of itself into and becoming part of)
another program. A virus cannot run by itself; it requires
that its host program be run to make the virus active.
Worm - A computer program that can run independently,
can propagate a complete working version of itself onto
other hosts on a network, and may consume computer
resources destructively.
--REMIND YOURSELF – (from Section 1) Kinds of Malicious Code (3)
Bacterium - A specialized form of virus which does not attach to a specific file.
Usage obscure.
Logic bomb - Malicious [program] logic that activates when
specified conditions are met. Usually intended to cause
denial of service or otherwise damage system resources.
Time bomb - activates when specified time occurs
Rabbit – A virus or worm that replicates itself without limit
to exhaust resources
Trapdoor / backdoor - A hidden computer flaw known to an
intruder, or a hidden computer mechanism (usually
software) installed by an intruder, who can activate the trap
door to gain access to the computer without being blocked
by security services or mechanisms.
--SKIP-- Kinds of Malicious Code (4)
Above terms not always used consistently, esp. in popular
press
Combinations of the above kinds even more confusing
E.g., virus can be a time bomb
— spreads like virus, „explodes” when time occurs
Term „virus” often used to refer to any kind of malicious
code
When discussing malicious code, we’ll often say „virus”
for any malicious code
c. How Viruses Work (1)
Pgm containing virus must be executed to spread virus or
infect other pgms
Even one pgm execution suffices to spread virus widely
Virus actions: spread / infect
--SKIP-- Spreading – Example 1: Virus in a pgm on
installation CD
User activates pgm containing virus when she runs
INSTALL or SETUP
Virus installs itself in any/all executing pgms present in
memory
Virus installs itself in pgms on hard disk
From now on virus spreads whenever any of the infected
pgms (from memory or hard disk) executes
--SKIP-- How Viruses Work (2)
Spreading – Example 2: Virus in attachment to e-mail msg
User activates pgm containing virus (e.g. macro in MS
Word) by just opening the attachment
=> Disable automatic opening of attachments!!!
Virus installs itself and spreads ... as in Example 1...
Spreading – Example 3: Virus in downloaded file
File with pgm or document (.doc, .xls, .ppt, etc.)
You know the rest by now...
Document virus
Spreads via picture, document, spreadsheet, slide
presentation, database, ...
E.g., via .jpg, via MS Office documents .doc, .xls, .ppt, .mdb
Currently most common!
--SKIP-- How Viruses Work (3)
Kinds of viruses w.r.t. way of attaching to infected pgms
1) Appended viruses
Appends to pgm
Most often virus code precedes pgm code
Inserts its code before the 1st pgm instruction in
executable pgm file
Executes whenever program executed
2) Surrounding viruses
Surrounds program
Executes before and after infected program
Intercepts its input/output
Erases its tracks
The „after” part might be used to mask virus
existence
E.g. if surrounds „ls”, the „after” part removes listing of
virus file produced by „ls” so user can’t see it
... cont. ...
--SKIP-- How Viruses Work (4)
... cont. ...
3) Integrating viruses
Integrates into pgm code
Spread within infected pgms
4) Replacing viruses
Entirely replaces code of infected pgm file
--SKIP-- How Viruses Work (5)
(Replacing) virus V gains control over target pgm T by:
Overwriting T on hard disk
OR
Changing pointer to T with pointer to V
(textbook, Fig. 3-7)
OS has File Directory
File Directory has an entry that points to file with code for T
Virus replaces pointer to T’s file with pointer to V’s file
In both cases actions of V replace actions of T when user
executes what she thinks is „T”
How Viruses Work (6)
Characteristics of a ‘perfect’ virus (goals of virus writers)
Hard to detect
Not easily destroyed or deactivated
Spreads infection widely
Can reinfect programs
Easy to create
Machine and OS independent
How Viruses Work (7)
Virus hiding places
1) In bootstrap sector – best place for virus
Bec. virus gains control early in the boot process
Before detection tools are active!
[Fig.: boot sector layout before and after infection - cf. J. Leiwo & textbook]
2) In memory-resident pgms
TSR pgms (TSR = terminate and stay resident)
TSR pgms are most frequently used OS pgms or
specialized user pgms
=> good place for viruses (activated very often)
How Viruses Work (8)
3) In application pgms
Best for viruses: apps with macros
(MS Word, MS PowerPoint, MS Excel, MS Access, ...)
One macro: startup macro executed when app starts
Virus instructions attach to startup macro, infect
document files
Bec. doc files can include app macros (commands)
E.g., .doc files include macros for MS Word
Via data files infects other startup macros, etc. etc.
4) In libraries
Libraries used/shared by many pgms => spread virus
Execution of an infected library pgm infects the pgms that use it
5) In other widely shared pgms
Compilers / loaders / linkers
Runtime monitors
Runtime debuggers
Virus control pgms (!)
d. Virus Signatures (1)
Virus hides but can’t become invisible – leaves behind a virus
signature, defined by various patterns:
1) Storage patterns: must be stored somewhere/somehow
(maybe in pieces)
2) Execution patterns: executes in a particular way
3) Distribution patterns: spreads in a certain way
Virus scanners use virus signatures to detect viruses
(in boot sector, on hard disk, in memory)
Scanner can use file checksums to detect changes to files
Once scanner finds a virus, it tries to remove it
I.e., tries to remove all pieces of a virus V from target pgm T
Virus scanner and its database of virus signatures must be up-to-date to be effective!
Update and run daily!
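A toy C illustration of signature scanning (the 4-byte pattern is made up for illustration; a real scanner uses a large, frequently updated signature database plus wildcards and heuristics):

    #include <stdio.h>

    /* Hypothetical 4-byte storage pattern of some virus V (made-up bytes). */
    static const unsigned char SIG[4] = { 0xDE, 0xAD, 0xBE, 0xEF };

    /* Return 1 if the signature occurs anywhere in the file, else 0. */
    int file_contains_signature(const char *path) {
        unsigned char window[4] = { 0 };
        size_t filled = 0;
        int c, found = 0;
        FILE *f = fopen(path, "rb");
        if (!f) return 0;
        while (!found && (c = fgetc(f)) != EOF) {
            /* slide a 4-byte window over the file, one byte at a time */
            window[0] = window[1]; window[1] = window[2]; window[2] = window[3];
            window[3] = (unsigned char)c;
            if (filled < 4) filled++;
            if (filled == 4 && window[0] == SIG[0] && window[1] == SIG[1] &&
                window[2] == SIG[2] && window[3] == SIG[3])
                found = 1;
        }
        fclose(f);
        return found;
    }

    int main(int argc, char **argv) {
        if (argc > 1 && file_contains_signature(argv[1]))
            printf("virus signature found in %s\n", argv[1]);
        return 0;
    }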
Virus Signatures (2)
Detecting Virus Signatures (1)
Difficulty 1 — in detecting execution patterns:
Most of effects of virus execution (see next page) are
„invisible”
Bec. they are normal – any legitimate pgm could cause them
(hiding in a crowd)
=> can’t help in detection
--SKIP-- Virus Signatures (3)
Detecting Virus Signatures (2)
Virus Goal / How Achieved:
Attach to executable pgm: Modify file directory / Write to executable pgm file
Attach to data/control file: Modify directory / Rewrite data / Append to data / Append data to self
Remain in memory: Intercept interrupt by modifying interrupt handler address table / Load self in non-transient memory area
Infect disks: Intercept interrupt / Intercept OS call (e.g., to format disk) / Modify system file / Modify ordinary executable pgm
Conceal self: Intercept system calls that would reveal self and falsify results / Classify self as "hidden" file
Spread self: Infect boot sector / Infect systems pgm / Infect ordinary pgm / Infect data ordinary pgm reads to control its execution
Prevent deactivation: Activate before deactivating pgm and block deactivation / Store copy to reinfect after deactivation
[cf. textbook & B. Endicott-Popovsky]
Virus Signatures (4)
Detecting Virus Signatures (3)
Difficulty 2 — in finding storage patterns:
Polymorphic viruses:
changes from one „form” (storage pattern) to another
Simple virus always recognizable by a certain char pattern
Polymorphic virus mutates into variety of storage patterns
Examples of polymorphic virus mutations
Randomly repositions all parts of itself and randomly
changes all fixed data within its code
Repositioning is easy since (infected) files stored as chains of data
blocks - chained with pointers
Randomly intersperses harmless instructions throughout its
code
(e.g., add 0, jump to next instruction)
Encrypting virus: Encrypts its object code (each time with a
different/random key), decrypts code to run
... More below ...
--SKIP-- Virus Signatures (5)
Detecting Virus Signatures (4)
Encrypting virus structure (informal pseudo-code; the virus body is stored encrypted):

array decr_key;
procedure decrypt(virus_code, decr_key)
   ...
end /* decrypt */

begin /* virus V in target pgm T */
   decrypt(V, decr_key);
   infect: if infect_condition met then
      find new target pgms NT to infect;
      mutate V into V' for copying;
      encrypt V' with random key into V'';
      save new key in file for V'';
      attach V'' to NT;
      hide modification of NT (with stealth code of V);
   damage: if damage_condition met then
      execute damage_code of V
   else start T
end /* virus V in target pgm T */
--SKIP-- Virus Signatures (6)
Detecting Virus Signatures (5)
Encrypting virus: Encrypts its object code (each time with a
different/random key), decrypts code to run
Q: Is there any signature for encryption virus that a
scanner can see?
Hint: consider 3 parts of encryption virus:
„proper” virus code (infect/damage code)
decr_key
procedure decrypt
--SKIP-- Virus Signatures (7)
Detecting Virus Signatures (6)
...
Q: Is there any signature for encryption virus that a scanner can see?
A: Let’s see:
„proper” virus code – encrypted with random key –
polymorphic
decr_key – random key used to encrypt/decrypt –
polymorphic
procedure decrypt (or a pointer to a library decrypt procedure)
– unencrypted, static
=> procedure decrypt of V is its signature
visible to a scanner
But: Virus writer can use polymorphic techniques on
decryption code to make it „less visible” (to hide it)
Virus writers and scanner writers challenge each other
An endless game?
e. Preventing Virus Infections
Use commercial software from
trustworthy sources
But even this is not an absolute
guarantee of virus-free code!
Test new software on isolated computers
Open only safe attachments
Keep recoverable system image in safe place
Backup executable system files
Use virus scanners often (daily)
Update virus detectors daily
Databases of virus signatures change very often
[cf. B. Endicott-Popovsky]
No absolute guarantees even if you follow all the rules –
just much better chances of preventing a virus
f. Seven Truths About Viruses
Viruses can infect any platform
Viruses can modify “hidden” / “read only” files
Viruses can appear anywhere in system
Viruses spread anywhere sharing occurs
Viruses cannot remain in memory after a complete power off/power on reboot
But virus reappears if saved on disk (e.g., in the boot sector)
Viruses infect software that runs hardware
There are firmware viruses (if firmware writeable by s/w)
Viruses can be malevolent, benign, or benevolent
Hmmm...
Would you like a benevolent virus doing good things (like compressing
pgms to save storage) but without your knowledge?
[cf. B. Endicott-Popovsky]
--SKIP-- g. Case Studies (1)
The Internet Worm
Attacked on 11/2/1988
Invaded VAX and Sun-3 computers running versions of
Berkeley UNIX
Used their resources to attack still more computers
Within hours spread across the U.S
Infected hundreds / thousands of computers – serious
damage to Internet
Some uninfected networks were scared into disconnecting from
Internet => severed connections stopped necessary work
Made many computers unusable via resource exhaustion
Was a rabbit – supposedly by mistake, unintended by its writer
Perpetrator was convicted in 1990 ($10,000 fine + 400 hrs of
community service + 3-year suspended jail sentence)
Caused forming of the Computer Emergency Response Team (CERT) at Carnegie Mellon University
[cf. textbook & B. Endicott-Popovsky]
--SKIP-- Case Studies (2)
Other case studies [textbook – interesting reading]
The Brain (Pakistani) Virus (1986)
Code Red (2001)
Denial-of-service (DoS) attack on www.whitehouse.gov
Web Bugs (generic potentially malicious code on web
pages)
Placing a cookie on your hard drive
Cookie collects statistics on user’s surfing habits
Can be used to get your IP address, which can then be used to
target you for attack
Block cookies or delete cookies periodically (e.g., using browser
command; in MS IE: Tools>Internet Options-General:Delete
Cookies)
Tool: Bugnosis from Privacy Foundation – locates web bugs
h. Virus Removal and
System Recovery After Infection
Fixing a system after infection by virus V:
1) Disinfect (remove) viruses (using antivirus pgm)
Can often remove V from infected file for T w/o
damaging T
if V code can be separated from T code and V did
not corrupt T
Have to delete T if can’t separate V from T code
2) Recover files:
- deleted by V
- modified by V
- deleted during disinfection (by antivirus pgm)
=> need file backups!
Make sure to have backups of (at least) important files
6.3.2. Targeted Malicious Code
Targeted = written to attack a particular system, a
particular application, and for a particular purpose
Many virus techniques apply
Some new techniques as well
Outline:
a. Trapdoors
b. Salami attack
c. Covert channels
a. Trapdoors (1)
--SKIP this def.-- Original def:
Trapdoor / backdoor - A hidden computer flaw known to an
intruder, or a hidden computer mechanism (usually
software) installed by an intruder, who can activate the trap
door to gain access to the computer without being blocked
by security services or mechanisms.
A broader definition:
Trapdoor – an undocumented entry point to a module
Inserted during code development
For testing
As a hook for future extensions
As emergency access in case of s/w failure
Trapdoors (2)
Testing:
With stubs and drivers for unit testing (Fig. 3-10 p. 138)
Testing with debugging code inserted into tested
modules
Major sources of trapdoors:
Left-over (purposely or not) stubs, drivers, debugging code
Poor error checking
May allow programmer to modify internal module variables
E.g., allowing for unacceptable input that causes buffer overflow
Undefined opcodes in h/w processors
Some were used for testing, some random
Not all trapdoors are bad
Some left purposely w/ good intentions
— facilitate system maintenance/audit/testing
b. Salami attack
Salami attack - merges bits of seemingly inconsequential
data to yield powerful results
Old example: interest calculation in a bank:
Fractions of 1 ¢ „shaved off” n accounts and deposited in
attacker’s account
Nobody notices/cares if 0.1 ¢ vanishes
Can accumulate to a large sum
Easy target for salami attacks: Computer computations
combining large numbers with small numbers
Require rounding and truncation of numbers
Relatively small amounts of error from these op’s are
accepted as unavoidable – not checked unless a strong
suspicion
Attacker can hide „salami slices” within the error margin
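A worked example with illustrative numbers (not from the slides): shaving 0.1 ¢ off each of 1,000,000 accounts in a daily interest run yields 100,000 ¢ = $1,000 per day, roughly $365,000 per year, while any single account is short by only about 36 ¢ a year – well inside the accepted rounding error.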
c. Covert Channels (CC) (1)
--SKIP-- Outline:
i. Covert Channels - Definition and Examples
ii. Types of Covert Channels
iii. Storage Covert Channels
iv. Timing Covert Channels
v. Identifying Potential Covert Channels
vi. Covert Channels - Conclusions
i. CC – Definition and Examples (1)
So far: we looked at malicious pgms that perform wrong
actions
Now: pgms that disclose confidential/secret info
They violate confidentiality, secrecy, or privacy of info
Covert channels = channels of unwelcome disclosure of info
Extract/leak data clandestinely
Examples
1) An old military radio communication network
The busiest node is most probably the command center
Nobody is so naive nowadays
2) Secret ways spies recognize each other
Holding a certain magazine in hand
Exchanging a secret gesture when approaching each other
...
Covert Channels – Definition and Examples (2)
How do programmers create covert channels?
Providing pgm with built-in Trojan horse
Uses covert channel to communicate extracted data
Example: pgm w/ Trojan horse using covert channel
Should be:
   Protected Data <------ [ Service Pgm ] ------> Legitimate User

Is:
   Protected Data <------ [ Service Pgm w/ Trojan horse ] ------> Legitimate User
                                   |
                             covert channel
                                   |
                                  Spy
   (Spy - e.g., programmer who put Trojan into pgm; gets data directly or via a Spy Pgm)
Covert Channels – Definition and Examples (3)
How are covert channels created?
I.e., How leaked data are hidden?
Example: leaked data hidden in output reports (or displays)
Different ‘marks’ in the report: (cf. Fig. 3-12, p.143)
Varying report format
Changing line length / changing nr of lines per page
Printing or not certain values, characters, or headings
- each ‘mark’ can convey one bit of info
--SKIP-- Covert Channels – Definition and Examples (4)
Example – ctd.
How Trojan within pgm can leak a 4-bit value of a
protected variable X?
cf. Fig. 3-12, p.143
Trojan signals value of X as follows:
Bit-1 = 1 if >1 space follows ‘ACCOUNT CODE:’; 0 otherwise
Bit-2 = 1 if last digit in ‘seconds’ field is >5; 0 otherwise
Bit-3 = 1 if heading uses ‘TOTALS’; 0 otherwise (uses ‘TOTAL’)
Bit-4 = 1 if no space follows subtotals line; 0 otherwise
=> For the values as in this Fig,
Trojan signaled and spy got: X = ‘1101’
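A toy C sketch of two of the four report-format "marks" above (purely illustrative; the field names follow Fig. 3-12):

    #include <stdio.h>

    /* Trojan side: leak two bits of the secret value x through report formatting. */
    void print_report(int x) {
        int bit1 = (x >> 3) & 1;     /* most significant of the 4 secret bits */
        int bit3 = (x >> 1) & 1;
        /* Bit-1: an extra space after 'ACCOUNT CODE:' signals 1 */
        printf(bit1 ? "ACCOUNT CODE:  1234\n" : "ACCOUNT CODE: 1234\n");
        /* Bit-3: heading 'TOTALS' signals 1, 'TOTAL' signals 0 */
        printf(bit3 ? "TOTALS\n" : "TOTAL\n");
    }

    int main(void) {
        print_report(0xD);           /* secret value 1101 from the example */
        return 0;
    }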
ii. Types of Covert Channels
Types of covert channels
Storage covert channels
Convey info by presence or absence of an object in
storage
Timing covert channels
Convey info by varying the speed at which things
happen
--SKIP-- iii. Storage Channels (1)
Example of storage channel: file lock covert channel
Protected variable X has n bits: X1, ..., Xn
Trojan within Service Pgm leaks value of X
Trojan and Spy Pgm synchronized, so can „slice” time
into n intervals
File FX (not used by anybody else)
To signal that Xk=1, Trojan locks file FX for interval k (1≤
k ≤ n)
To signal that Xk=0, Trojan unlocks file FX for interval k
Spy Pgm tries to lock FX during each interval
If it succeeds during k-th interval, Xk = 0 (FX was unlocked)
Otherwise, Xk = 1 (FX was locked)
(see Fig. 3-13, 3-14 – p.144-145)
Q: Why FX should not be used by anybody else?
--SKIP-- Storage Channels (2)
Example of storage channel: file lock covert channel
...
Q: Why FX should not be used by anybody else?
A: Any other user locking/unlocking FX would interfere with
Trojan’s covert channel signaling.
Isn’t such bit-by-bit signaling too slow?
No – bec. computers are very fast!
E.g., 10-100 bits/millisecond (10K – 100K b/s) is very slow
for computers
It still can leak entire P&P textbook in just minutes
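A minimal sketch of the Trojan's side of the file-lock channel in C (POSIX open()/flock() assumed; interval synchronization with the Spy Pgm is waved away with sleep()):

    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    /* Signal the n bits of secret x, one bit per one-second interval:
       FX locked during interval k means bit k = 1, unlocked means bit k = 0. */
    void leak_bits(const char *fx_path, unsigned x, int n) {
        int fd = open(fx_path, O_RDWR | O_CREAT, 0600);
        if (fd < 0) return;
        for (int k = n - 1; k >= 0; k--) {
            int bit = (x >> k) & 1;
            if (bit) flock(fd, LOCK_EX);     /* Spy's non-blocking lock attempt fails -> 1 */
            sleep(1);                        /* one signaling interval */
            if (bit) flock(fd, LOCK_UN);     /* Spy's lock attempt succeeds -> 0 */
        }
        close(fd);
    }

The Spy Pgm would try flock(fd, LOCK_EX | LOCK_NB) once per interval and read the bit from whether the attempt fails.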
--SKIP-- Storage Channels (3)
Examples of covert storage channels (synchronized intervals!)
Covert channels can use:
File locks (discussed above)
Disk storage quota
To signal Xk=1, Trojan create enormous file (consuming
most of available disk space)
Spy Pgm attempts to create enormous file. If Spy fails
(bec. no disk space available), Xk = 1; otherwise, Xk = 0
Existence of a file
To signal Xk=1, Trojan creates file FX (even empty file)
Spy Pgm attempts to create file named FX. If Spy fails
(bec. FX already exists), Xk = 1; otherwise, Xk = 0
Other resources - similarly
--SKIP-- Storage Channels (4)
Covert storage channels require:
Shared resource
To indicate Xk=1 or Xk=0
Synchronized time
To know which bit is signaled:
in interval k, Xk is signaled
--SKIP-- iv. Timing Channels
Recall: Timing channels convey info by varying the speed
at which things happen
Simple example of timing channel:
Multiprogramming system „slices” processor time for
programs running on the processor
2 processes only: Trojan (Pgm w/ Trojan) and Spy Pgm
Trojan receives all odd slices (unless abstains)
Spy Pgm receives all even slices (unless abstains)
Trojan signals Xk=1 by using its time slice,
signals Xk=0 by abstaining from using its slice
see: Fig.3-15, p.147 – how ‘101’ is signaled
Details: Trojan takes Slice 1 (its 1st slice) signaling X1=1
Trojan abstains from taking Slice 3 (its 2nd slice) signaling X2=0
Trojan takes Slice 5 (its 3rd slice) signaling X3=1
--SKIP-- v. Identifying Potential Covert
Channels (1)
Covert channels are not easy to identify
Otherwise wouldn’t be covert, right?
Two techniques for locating covert channels:
1) Shared Resource Matrix
2) Information Flow Method
--SKIP-- Identifying Potential Covert Channels (2)
1) The Shared Resource Matrix method
Shared resource is basis for a covert channel
=> identify shared resources and processes
reading/writing them
Step 1: Construct Shared Resource Matrix
Rows — resources
Columns — processes that access them:
R = observe resource M = modify/set/create/delete resource
Example:
                Process 1    Process 2
Lock on FX      R, M         R, M
X (confid.)     R
--SKIP-- Identifying Potential Covert Channels (3)
...
                Pgm 1    Pgm 2
Lock on FX      R, M     R, M
X (confid.)     R

Step 2: Look for pattern:
                Pi       Pj
Rm              M        R
Rn              R

Meaning of this pattern: Process Pj can get value of Resource Rn via Process Pi (and a covert channel)
Q: Do you see such a pattern in the SRM above?
--SKIP-- Identifying Potential Covert Channels (4)
...
                Process 1    Process 2
Lock on FX      R, M         R, M
X (confid.)     R

Step 2: Look for pattern:
                Pi       Pj
Rm              M        R
Rn              R

Meaning of this pattern: Process Pj can get value of Resource Rn via Process Pi (and a covert channel)
Q: Do you see such a pattern in the SRM above?
A: Yes. Process 2 can get value of X via Process 1
(no surprise: Proc. 1 & 2 are Trojan & Spy
from earlier example)
--SKIP-- Identifying Potential Covert Channels (5)
2) Information Flow Method
Flow analysis of pgm’s syntax
Can be automated within a compiler
Identifies non-obvious flows of info between pgm
statements
Examples of flows of info between pgm stmts
B:= A – an explicit flow from A to B
B:= A; C:=B – an explicit flow from A to C (via B)
IF C=1 THEN B:=A
– an explicit flow from A to B
– an implicit flow from C to B (bec. B can change iff C=1)
--SKIP-- Identifying Potential Covert Channels (6)
More examples of flows of info between pgm stmts
[textbook and J. Leiwo]
--SKIP-- Identifying Potential Covert Channels (7)
Steps of Information Flow Method (IFM)
1) Analyze statements
2) Integrate results to see which outputs affected by which
inputs
Variants of IFM:
1) IFM during compilation
2) IFM on design specs
Covert Channels - Conclusions
Covert channels are a serious threat to confidentiality and
thus security
(„CIA” = security)
Any virus/Trojan horse can create a covert channel
In open systems — no way to prevent covert channels
Very high security systems require a painstaking and
costly design preventing (some) covert channels
Analysis must be performed periodically as high security
system evolves
6.4. Controls for Security
How to control security of pgms during their development
and maintenance
--SKIP-- Outline:
a. Introduction
b. Developmental controls for security
c. Operating system controls for security
d. Administrative controls for security
e. Conclusions
a. Introduction
„Better to prevent than to cure”
Preventing security flaws
We have seen a lot of possible security flaws
How to prevent (some of) them?
Software engineering concentrates on developing and
maintaining high-quality s/w
We’ll take a look at some techniques useful specifically
for developing/ maintaining secure s/w
Three types of controls for security (against pgm flaws):
1) Developmental controls
2) OS controls
3) Administrative controls
b. Developmental Controls for Security (1)
Nature of s/w development
Collaborative effort
Team of developers, each involved in 1 of stages:
Requirement specification
Regular req. specs: „do X”
Security req. specs: „do X and nothing more”
Design
Implementation
Testing
Documenting at each stage
Reviewing at each stage
Managing system development thru all stages
Maintaining deployed system (updates, patches, new versions,
etc.)
Both product and process contribute to overall quality
— incl. security dimension of quality
Developmental Controls for Security (2)
Fundamental principles of s/w engineering
1) Modularity
2) Encapsulation
3) Info hiding
1) Modularity
Modules should be:
Single-purpose - logically/functionally
Small - for a human to grasp
Simple - for a human to grasp
Independent – high cohesion, low coupling
High cohesion – highly focused on (single) purpose
Low coupling – free from interference from other modules
Modularity should improve correctness
Fewer flaws => better security
Developmental Controls for Security (3)
2) Encapsulation
Minimizing info sharing with other modules
=> Limited interfaces reduce # of covert channels
Well documented interfaces
„Hiding what should be hidden and showing what should
be visible.”
3) Information hiding
Module is a black box
Well defined function and I/O
Easy to know what module does but not how it does it
Reduces complexity, interactions, covert channels, ...
=> better security
Developmental Controls for Security (4)
Many techniques for building solid software
--SKIP--
1) Peer reviews
2) Hazard analysis
3) Testing
4) Good design
5) Risk prediction & management
6) Static analysis
7) Configuration management
8) Additional developmental controls
--SKIP--> ... Please read on your own ...
..Also see slides—all discussed below ...
[cf. B. Endicott-Popovsky]
--SKIP-- Developmental Controls for Security (5)
1) Peer reviews - three types
Reviews (informal)
Team of reviewers gains consensus on solutions before development
Walk-throughs
Developer walks team through code/document
Discover flaws in a single design document
Inspection
Formalized and detailed
Statistical measures used
Various types of peer reviews can be highly effective
[cf. B. Endicott-Popovsky]
--SKIP-- Developmental Controls for Security (6)
2) Hazard analysis
= systematic techniques to expose
potentially hazardous system states,
incl. security vulnerabilities
Components of HA
Hazard lists
What-if scenarios – identifies non-obvious hazards
System-wide view (not just code)
Begins Day 1
Continues throughout SDLC (= s/w dev’t life cycle)
Techniques
HAZOP – hazard and operability studies
FMEA – failure modes and effects analysis
FTA – fault tree analysis
[cf. B. Endicott-Popovsky]
--SKIP-- Developmental Controls for Security (7)
3) Testing – phases:
Module/component/unit testing of indiv. modules
Integration testing of interacting (sub)system modules
(System) function testing checking against requirement specs
(System) performance testing
(System) acceptance testing – with customer against
customer’s requirements — on seller’s or customer’s premises
(System) installation testing after installation on customer’s
system
Regression testing after updates/changes to s/w
Types of testing
Black Box testing – testers can’t examine code
White Box / Clear box testing – testers can examine design and
code, can see inside modules/system
Developmental Controls for Security (8)
4) Good design
Good design uses:
i. Modularity / encapsulation / info hiding
ii. Fault tolerance
iii. Consistent failure handling policies
iv. Design rationale and history
v. Design patterns
i. Using modularity / encapsulation / info hiding
- as discussed above
Developmental Controls for Security (9)
ii. Using fault tolerance for reliability and security
System tolerates component failures
System more reliable than any of its components
Different than for security, where system is as secure as its
weakest component
[cf. B. Endicott-Popovsky]
Fault-tolerant approach:
Anticipate faults (car: anticipate having a flat tire)
Active fault detection rather than passive fault detection
(e.g., by use of mutual suspicion: active input data checking)
Use redundancy (car: have a spare tire)
Isolate damage
Minimize disruption (car: replace flat tire, continue your trip)
Developmental Controls for Security (10)
Example 1: Majority voting (using h/w redundancy)
3 processors running the same s/w
E.g., in a spaceship
Result accepted if results of 2 processors agree
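A tiny C sketch of 2-out-of-3 majority voting (illustrative; in a real system r1, r2, r3 come from three independent processors):

    /* Return the value that at least two of the three redundant results agree on. */
    int majority_vote(int r1, int r2, int r3) {
        if (r1 == r2 || r1 == r3) return r1;
        if (r2 == r3) return r2;
        return r1;    /* no majority: a real system would raise an error here */
    }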
Example 2: Recovery Block (using s/w redundancy)
Primary Code - e.g., Quick Sort (new code, faster)
Secondary Code - e.g., Bubble Sort (well-tested code)
Acceptance Test - checks the result of the primary code; if it fails, the secondary code is run instead
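A compact C sketch of the recovery-block pattern (assumptions: the library qsort stands in for the "Quick Sort" primary code, the secondary code is a simple bubble sort, and the acceptance test just checks that the output is sorted):

    #include <stdlib.h>
    #include <string.h>

    static int cmp_int(const void *p, const void *q) {
        int a = *(const int *)p, b = *(const int *)q;
        return (a > b) - (a < b);
    }

    static void bubble_sort(int *a, int n) {           /* secondary: simple, well-tested code */
        for (int i = 0; i < n; i++)
            for (int j = 0; j + 1 < n - i; j++)
                if (a[j] > a[j + 1]) { int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
    }

    static int is_sorted(const int *a, int n) {        /* acceptance test */
        for (int i = 1; i < n; i++)
            if (a[i - 1] > a[i]) return 0;
        return 1;
    }

    void sort_with_recovery_block(int *a, int n) {
        int *saved = malloc(n * sizeof *a);            /* checkpoint the input state */
        if (!saved) return;
        memcpy(saved, a, n * sizeof *a);
        qsort(a, n, sizeof *a, cmp_int);               /* run primary code ("Quick Sort") */
        if (!is_sorted(a, n)) {                        /* acceptance test fails */
            memcpy(a, saved, n * sizeof *a);           /* restore previous state */
            bubble_sort(a, n);                         /* run secondary code */
        }
        free(saved);
    }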
--SKIP-- Developmental Controls for Security (11)
4) Good design – cont.2
iii. Using consistent failure handling policies
Each failure handled by one of 3 ways:
Retrying
Restore previous state, redo service using a different „path”
E.g., use secondary code instead of primary code
Correcting
Restore previous state, correct sth, run service using the same code as before
Reporting
Restore previous state, report failure to error handler, don’t rerun service
Example — How fault-tolerance enhances security
If security fault destroys important data (availability in CIA),
use f-t to revert to backup data set
--SKIP-- Developmental Controls for Security (12)
4) Good design – cont.3
iv. Using design rationale and history
Knowing it (incl. knowing design rationale and history
for security mechanisms) helps developers modifying or
maintaining system
v. Using design patterns
Knowing it enables looking for patterns showing what
works best in which situation
Developmental Controls for Security (13)
Value of Good Design
Easy maintenance
Understandability
Reuse
Correctness
Better testing
=> translates into (saving) BIG bucks !
[cf. B. Endicott-Popovsky]
--SKIP-- Developmental Controls for Security (14)
5) Risk prediction & management
Predict and manage risks involved in system development
and deployment
Make plans to handle unwelcome events should they
occur
Risk prediction/mgmt are esp. important for security
Bec. unwelcome and rare events can have security
consequences
Risk prediction/mgmt helps to select proper security
controls (e.g., proportional to risk)
--SKIP-- Developmental Controls for Security (15)
6) Static analysis
Before system is up and running, examine its design and
code to locate security flaws
More than peer review
Examines
Control flow structure
(sequence in which instructions are
executed, incl. iterations and loops)
Data flow structure (trail of data)
Data structures
Automated tools available
[cf. B. Endicott-Popovsky]
--SKIP-- Developmental Controls for Security (16)
7) Configuration management
= process of controlling system modifications during
development and maintenance
Offers security benefits by scrutinizing new/changed code
Problems with system modifications
One change interfering with another change
E.g., neutralizing it
Proliferation of different versions and releases
Older and newer
For different platforms
For different application environments (and/or customers
categories)
--SKIP-- Developmental Controls for Security (17)
Reasons for software modification
Corrective changes
To maintain control of system’s day-to-day functions
Adaptive changes
To maintain control over system’s modifications
Perfective changes
To perfect existing acceptable system functions
Preventive changes
To prevent system’s performance degradation to
unacceptable levels
--SKIP-- Developmental Controls for Security (18)
Activities involved in configuration management process
(performed by reps from developers, customers, users, etc.)
1) Baseline identification
Certain release/version (R/v) selected & frozen as
baseline
Other R’s/v’s described as changes to the baseline
2) Configuration control and configuration management
Coordinate separate but related v’s (versions) via:
Separate files - separate files for each R or v
Deltas - main v defined by „full files”
- other v’s defined by main v & deltas
(= difference files)
Conditional compilation
- single source code file F for all v’s
uses begin_version_Vx / end_version_Vx brackets
or begin_not_version_Vx / end_not_version_Vx brackets
- compiler produces each v from F
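A small C illustration of conditional compilation (the version names are made up; the preprocessor guards play the role of the begin_version_Vx / end_version_Vx brackets above):

    #include <stdio.h>

    void print_banner(void) {
    #ifdef VERSION_EU                    /* begin_version_EU */
        printf("Release 2.1 (EU build)\n");
    #else                                /* begin_not_version_EU */
        printf("Release 2.1 (US build)\n");
    #endif                               /* end_version_EU */
    }
    /* The compiler produces each version from the single source file, e.g.:
       cc -DVERSION_EU app.c   versus   cc app.c */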
--SKIP-- Developmental Controls for Security (19)
3) Configuration auditing
System must be audited regularly — to verify:
Baseline completeness and accuracy
Recording of changes
Accuracy of software documentation for systems in
the field
Performed by independent parties
4) Status accounting
Records info about system components
Where they come from (purchased, reused, written
from scratch)
Version
Change history
Pending change requests
--SKIP-- Developmental Controls for Security (20)
All 4 activities performed by
Configuration Control Board (CCB)
Includes reps from developers, customers, users
Reviews proposed changes, approves/rejects
Security benefits of configuration mgmt
Limits unintentional flaws
Limits malicious modifications
by protecting integrity of pgms and documentation
Thanks to:
careful reviewing/auditing, change mgmt
preventing changes (e.g., trapdoors) to system w/o acceptance
by CCB
--SKIP-- Developmental Controls for Security (21)
8) Additional developmental controls
8a) Learning from mistakes
Avoiding such mistakes in the future enhances security
8b) Proofs of program correctness
Formal methods to verify pgm correctness
Logic analyzer shows that:
initial assertions about inputs...
... through implications of pgm statements...
... lead to the terminal condition (desired output)
Problems with practical use of pgm correctness proofs
Esp. for large pgms/systems
Most successful for specific types of apps
E.g. for communication protocols & security policies
Even with all these developmental controls (1-8) –
still no security guarantees! [cf. B. Endicott-Popovsky]
c. Operating System Controls for Security (1)
Developmental controls not always used
OR:
Even if used, not foolproof
=> Need other, complementary controls, incl. OS controls
Such OS controls can protect against some pgm flaws
Operating System Controls for Security (2)
Trusted software
– code rigorously developed and analyzed so we can trust that
it does all and only what specs say
Trusted code establishes foundation upon which untrusted
code runs
Trusted code establishes security baseline for the whole system
In particular, OS can be trusted s/w
Operating System Controls for Security (3)
Key characteristics determining if OS code is trusted
1) Functional correctness
OS code consistent with specs
2) Enforcement of integrity
OS keeps integrity of its data and other resources even if
presented with flawed or unauthorized commands
3) Limited privileges
OS minimizes access to secure data/resources
Trusted pgms must have „need to access” and proper access rights
to use resources protected by OS
Untrusted pgms can’t access resources protected by OS
4) Appropriate confidence level
OS code examined and rated at appropriate trust level
Operating System Controls for Security (4)
Similar criteria used to establish if s/w other than OS can be
trusted
Ways of increasing security if untrusted pgms present:
1) Mutual suspicion
2) Confinement
3) Access log
1) Mutual suspicion between programs
Distrust other pgms – treat them as if they were
incorrect or malicious
Pgm protects its interface data
With data checks, etc.
Operating System Controls for Security (5)
2) Confinement
OS can confine access to resources by suspected pgm
Example 1: strict compartmentalization
Pgm can affect data and other pgms only within its
compartment
Example 2: sandbox for untrusted pgms
Can limit spread of viruses
Operating System Controls for Security (6)
3) Audit log / access log
Records who/when/how (e.g., for how long)
accessed/used which objects
Events logged: logins/logouts, file accesses, pgm executions,
device uses, failures, repeated unsuccessful commands (e.g.,
many repeated failed login attempts can indicate an attack)
Audit frequently for unusual events, suspicious patterns
It is a forensic measure, not a protective measure
Forensics – investigation to find who broke law,
policies, or rules (a posteriori, not a priori)
d. Administrative Controls for Security (1)
They prohibit or demand certain human behavior via
policies, procedures, etc.
They include:
1) Standards of program development
2) Security audits
3) Separation of duties
--SKIP-- Administrative Controls for Security (2)
1) Standards and guidelines for program development
Capture experience and wisdom from previous projects
Facilitate building higher-quality s/w (incl. more secure)
They include:
Design S&G – design tools, languages, methodologies
S&G for documentation, language, and coding style
Programming S&G - incl. reviews, audits
Testing S&G
Configuration mgmt S&G
2) Security audits
Check compliance with S&G
Scare a potentially dishonest programmer away from including illegitimate code (e.g., a trapdoor)
--SKIP-- Administrative Controls for Security (3)
3) Separation of duties
Break sensitive tasks into 2 pieces to be performed by
different people (learned from banks)
Example 1: modularity
Different developers for cooperating modules
Example 2: independent testers
Rather than developer testing her own code
...More (much) later...
e. Conclusions
(for Controls for Security)
Developmental / OS / administrative controls help
produce/maintain higher-quality (also more secure) s/w
Art and science - no „silver bullet” solutions
„A good developer who truly understands security will
incorporate security into all phases of development.”
[textbook, p. 172]
Summary: [cf. B. Endicott-Popovsky]

Control            | Purpose                                        | Benefit
Developmental      | Limit mistakes; make malicious code difficult  | Produce better software
Operating System   | Limit access to system                         | Promotes safe sharing of info
Administrative     | Limit actions of people                        | Improve usability, reusability and maintainability
The End of Section 6 (Ch. 3):
Program Security
This is the longer version of this Section
with OPTIONAL details (which you may SKIP)