Section 3: Program Security - Computer Science
CS 5950/6030 –
Computer Security and Information Assurance
Section 3: Program Security
Dr. Leszek Lilien
Department of Computer Science
Western Michigan University
Slides based on Security in Computing, Third Edition, by Pfleeger and Pfleeger.
Using some slides courtesy of:
Prof. Aaron Striegel — course taught at U. of Notre Dame
Prof. Barbara Endicott-Popovsky and Prof. Deborah Frincke (U. Idaho) — taught at U. Washington
Prof. Jussipekka Leiwo — taught at Vrije Universiteit (Free U.), Amsterdam, The Netherlands
Slides not created by the above authors are © 2006 by Leszek T. Lilien
Requests to use original slides for non-profit purposes will be gladly granted upon a written request.
Program Security – Outline (1)
3.1. Secure Programs – Defining & Testing
a. Introduction
b. Judging S/w Security by Fixing Faults
c. Judging S/w Security by Testing Pgm Behavior
d. Judging S/w Security by Pgm Security Analysis
e. Types of Pgm Flaws
3.2. Nonmalicious Program Errors
a. Buffer overflows
b. Incomplete mediation
c. Time-of-check to time-of-use errors
d. Combinations of nonmalicious program flaws
Section 3 – Computer Security and Information Assurance – Spring 2006
© 2006 by Leszek T. Lilien
2
Program Security – Outline (2)
3.3. Malicious Code
3.3.1. General-Purpose Malicious Code incl.
Viruses
a. Introduction
b. Kinds of Malicious Code
c. How Viruses Work
d. Virus Signatures
e. Preventing Virus Infections
f. Seven Truths About Viruses
g. Case Studies
h. Virus Removal and System Recovery After Infection
3.3.2. Targeted Malicious Code
a. Trapdoors
b. Salami attack
c. Covert channels
Program Security – Outline (3)
3.4. Controls for Security
a. Introduction
b. Developmental controls for security
c. Operating System controls for security
d. Administrative controls for security
e. Conclusions
3. Program Security (1)
Program security –
Our first step on how to apply security to computing
Protecting programs is the heart of computer security
All kinds of programs: apps, OS, DBMS, network s/w
Issues:
How to keep pgms free from flaws
How to protect computing resources from pgms with flaws
Issues of trust not considered:
How trustworthy is a pgm you buy?
How to use it in its most secure way?
Partial answers:
Third-party evaluations
Liability and s/w warranties
Program Security (2)
Outline:
3.1. Secure Programs – Defining and Testing
3.2. Nonmalicious Program Errors
3.3. Malicious Code
3.3.1. General-Purpose Malicious Code incl. Viruses
3.3.2. Targeted Malicious Code
3.4. Controls Against Program Threats
3.1. Secure Programs - Defining & Testing
Outline
a. Introduction
b. Judging S/w Security by Fixing Faults
c. Judging S/w Security by Testing Pgm Behavior
d. Judging S/w Security by Pgm Security Analysis
e. Types of Pgm Flaws
[cf. B. Endicott-Popovsky]
a. Introduction (1)
Pgm is secure if we trust that it provides/enforces:
Confidentiality
Integrity
Availability
What is „Program security?”
Depends on who you ask
user - fit for his task
programmer - passes all „her” tests
manager - conformance to all specs
Developmental criteria for program security include:
Correctness of security & other requirements
Correctness of implementation
Correctness of testing
Introduction (2)
Fault tolerance terminology:
Error - may lead to a fault
Fault - cause for deviation from intended function
Failure - system malfunction caused by fault
Note:
[cf. A. Striegel]
Faults - seen by „insiders” (e.g., programmers)
Failures - seen by „outsiders” (e.g., independent testers, users)
Error/fault/failure example:
Programmer’s indexing error, leads to buffer overflow fault
Buffer overflow fault causes system crash (a failure)
Two categories of faults w.r.t. duration
[cf. A. Striegel]
Permanent faults
Transient faults – can be much more difficult to diagnose
[cf. A. Striegel]
b. Judging S/w Security by Fixing Faults
An approach to judge s/w security:
penetrate and patch
Red Team / Tiger Team tries to crack s/w
If you withstand the attack => security is good
Is this true? Rarely.
Too often developers try to quick-fix problems
discovered by Tiger Team
Quick patches often introduce new faults due to:
Pressure – causing narrow focus on fault, not
context
Non-obvious side effects
System performance requirements not allowing
for security overhead
[cf. A. Striegel]
c. Judging S/w Security
by Testing Pgm Behavior (1)
Better approach to judging s/w security:
testing pgm behavior
Compare behavior vs. requirements (think testing/SW eng)
Program security flaw =
= inappropriate behavior caused by a pgm fault or failure
Flaw detected as a fault or a failure
Important: If flaw detected as a failure (an effect), look for
the underlying fault (the cause)
Recall: fault seen by insiders, failure – by outsiders
If possible, detect faults before they become failures
Note:
Textbook defines flaw–vulnerability–flaw in a circular way
– a terminology soup!
Judging S/w Security by Testing Pgm Behavior (2)
Any kind of fault/failure can cause a security incident
Misunderstood requirements /
error in coding / typing error
In a single pgm / interaction of k pgms
Intentional flaws or accidental (inadvertent) flaws
Therefore, we must consider security consequences for all
kinds of detected faults/failures
Even inadvertent faults / failures
Inadvertent faults are the biggest source of security
vulnerabilities exploited by attackers
Even dormant faults
Eventually can become failures harming users
Judging S/w Security by Testing Pgm Behavior (3)
Problems with pgm behavior testing
Limitations of testing
Complexity – malicious attacker’s best friend
Can’t test exhaustively
Testing checks what the pgm should do
Can’t test what the pgm should not do
i.e., can’t make sure that pgm does only what it should do –
nothing more
Too complex to model / to test
Exponential # of pgm states / data combinations
a faulty line hiding in 10 million lines of code
Evolving technology
New s/w technologies appear
Security techniques catching up with s/w technologies
[cf. A. Striegel]
d. Judging S/w Security
by Pgm Security Analysis
Best approach to judging s/w security:
pgm security analysis
Analyze what can go wrong
At every stage of program development!
After deployment
From requirement definition to testing
Configurations / policies / practices
Protect against security flaws
Specialized security methods and techniques
Specialized security tools
E.g., specialized security meth/tech/tools for switching s/w
[cf. B. Endicott-Popovsky]
e. Types of Pgm Flaws
Taxonomy of pgm flaws:
Intentional
Malicious
Nonmalicious
Inadvertent
Validation error (incomplete or inconsistent)
Domain error
e.g., incomplete or inconsistent input data
e.g., using a variable value outside of its domain
Serialization and aliasing
serialization – e.g., in DBMSs or OSs
aliasing - one variable or some reference, when changed, has an
indirect (usually unexpected) effect on some other data
Note: ‘Aliasing’ not in computer graphics sense!
Inadequate ID and authentication (Section 4—on OSs)
Boundary condition violation
Other exploitable logic errors
Section 3 – Computer Security and Information Assurance – Spring 2006
[cf. B.© Endicott-Popovsky]
2006 by Leszek T. Lilien
15
3.2. Nonmalicious Program Errors
Outline
a. Buffer overflows
b. Incomplete mediation
c. Time-of-check to time-of-use errors
d. Combinations of nonmalicious program flaws
a. Buffer Overflows (1)
Buffer overflow flaw — often inadvertent (=>nonmalicious)
but with serious security consequences
Many languages require buffer size declaration
C language statement: char sample[10];
Execute statement:
sample[i] = ‘A’; where i=10
Out of bounds (0-9) subscript – buffer overflow occurs
Some compilers don’t check for exceeding bounds
C does not perform array bounds checking.
Similar problem caused by pointers
No reasonable way to define limits for pointers
[cf. B. Endicott-Popovsky]
Buffer Overflows (2)
Where does ‘A’ go?
Depends on what is adjacent to ‘sample[10]’
Affects user's data – overwrites user's data
Affects user's code – changes user's instruction
Affects OS data – overwrites OS data
Affects OS code – changes OS instruction
This is a case of aliasing (cf. Slide 26)
Section 3 – Computer Security and Information Assurance – Spring 2006
[cf. ©
B.2006
Endicott-Popovsky]
by Leszek T. Lilien
18
Buffer Overflows (3)
Implications of buffer overflow:
Attacker can insert malicious data values/instruction
codes into „overflow space”
Supp. buffer overflow affects OS code area
Attacker code executed as if it were OS code
Attacker might need to experiment to see what
happens when he inserts A into OS code area
Can raise attacker’s privileges (to OS privilege level)
When A is an appropriate instruction
Attacker can gain full control of OS
[cf. B. Endicott-Popovsky]
Buffer Overflows (4)
Supp. buffer overflow affects a call stack area
A scenario:
Stack: [data][data][...]
Pgm executes a subroutine
=> return address pushed onto stack
(so subroutine knows where to return control to when finished)
Stack: [ret_addr][data][data][...]
Subroutine allocates dynamic buffer char sample[10]
=> buffer (10 empty spaces) pushed onto stack
Stack: [..........][ret_addr][data][data][...]
Subroutine executes: sample[i] = ‘A’ for i = 10
Stack: [..........][A][data][data][...]
Note: ret_address overwritten by A!
(Assumed: size of ret_address is 1 char)
Buffer Overflows (5)
Supp. buffer overflow affects a call stack area—CONT
Stack: [..........][A][data][data][...]
Subroutine finishes
Buffer for char sample[10] is deallocated
Stack: [A][data][data][...]
RET operation pops A from stack (considers it ret. addr.)
Stack: [data][data][...]
Pgm (which called the subroutine) jumps to A
=> shifts program control to where attacker wanted
Note: By playing with one's own pgm, attacker can specify
any „return address” for his subroutine
Upon subroutine return, pgm transfers control to
attacker’s chosen address A (even in OS area)
Next instruction executed is the one at address A
Could be 1st instruction of pgm that grants
highest access privileges to its „executor”
Buffer Overflows (6)
Note:
[Wikipedia – aliasing]
C programming language specifications do not specify
how data is to be laid out in memory (incl. stack layout)
Some implementations of C may leave space between
arrays and variables on the stack, for instance, to
minimize possible aliasing effects.
Buffer Overflows (7)
Web server attack similar to buffer overflow attack:
pass very long string to web server (details: textbook, p.103)
Buffer overflows still common
Used by attackers
to crash systems
to exploit systems by taking over control
Large # of vulnerabilities due to buffer overflows
b. Incomplete Mediation (1)
Incomplete mediation flaw — often inadvertent (=>
nonmalicious) but with serious security consequences
Incomplete mediation:
Sensitive data are in exposed, uncontrolled condition
Example
URL to be generated by client’s browser to access server,
e.g.:
http://www.things.com/order/final&custID=101&part=555A&qy=20
&price=10&ship=boat&shipcost=5&total=205
Instead, user edits URL directly, changing price and total
cost as follows:
http://www.things.com/order/final&custID=101&part=555A&qy=20
&price=1&ship=boat&shipcost=5&total=25
User uses forged URL to access server
The server takes 25 as the total cost
Incomplete Mediation (2)
Unchecked data are a serious vulnerability!
Possible solution: anticipate problems
Don’t let client return a sensitive result (like total)
that can be easily recomputed by server
Use drop-down boxes / choice lists for data input
Prevent user from editing input directly
Check validity of data values received from client
c. Time-of-check to Time-of-use Errors (1)
Time-of-check to time-of-use flaw — often inadvertent (=>
nonmalicious) but with serious security consequences
A.k.a. synchronization flaw / serialization flaw
TOCTTOU — mediation with “bait and switch” in the middle
Non-computing example:
Swindler shows buyer real Rolex watch (bait)
After buyer pays, switches real Rolex to a forged one
In computing:
Change of a resource (e.g., data) between time
access checked and time access used
Q: Any examples of TOCTTOU problems from
computing?
Time-of-check to Time-of-use Errors (2)
...
TOCTTOU — mediation with “bait and switch” in the middle
...
Q: Any examples of TOCTTOU problems from
computing?
A: E.g., DBMS/OS: serialization problem:
pgm1 reads value of X = 10
pgm1 adds X = X+ 5
pgm2 reads X = 10, adds 3 to X, writes X = 13
pgm1 writes X = 15
X ends up with value 15 – should be X = 18
Time-of-check to Time-of-use Errors (3)
Prevention of TOCTTOU errors
Be aware of time lags
Use digital signatures and certificates to „lock” data
values after checking them
So nobody can modify them after check & before
use
Q: Any examples of preventing TOCTTOU from
DBMS/OS areas?
Time-of-check to Time-of-use Errors (4)
Prevention of TOCTTOU errors
...
Q: Any examples of preventing TOCTTOU from
DBMS/OS areas?
A1: E.g., DBMS: locking to enforce proper serialization
(locks need not use signatures—fully controlled by DBMS)
In the previous example:
will force writing X = 15 by pgm 1, before pgm2
reads X (so pgm 2 adds 3 to 15)
OR:
will force writing X = 13 by pgm 2, before pgm1
reads X (so pgm 1 adds 5 to 13)
A2: E.g., DBMS/OS: any other concurrency control
mechanism enforcing serializability
d. Combinations of Nonmal. Pgm Flaws
The above flaws can be exploited in multiple steps by a
concerted attack
Nonmalicious flaws can be exploited to plant malicious flaws
(next)
3.3. Malicious Code
Malicious code or rogue pgm is written to exploit flaws in pgms
Malicious code can do anything a pgm can
Malicious code can change
data
other programs
Malicious code was „officially” defined by Cohen in 1984,
but virus behavior was known since at least Ware's 1970 study for
the Defense Science Board (classified, made public in 1979)
Outline for this Subsection:
3.3.1. General-Purpose Malicious Code (incl. Viruses)
3.3.2. Targeted Malicious Code
3.3.1. General-Purpose Malicious Code
(incl. Viruses)
Outline
a. Introduction
b. Kinds of Malicious Code
c. How Viruses Work
d. Virus Signatures
e. Preventing Virus Infections
f. Seven Truths About Viruses
g. Case Studies
h. Virus Removal and System Recovery After Infection
[cf. B. Endicott-Popovsky]
a. Introduction
Viruses are prominent example of general-purpose malicious
code
Not „targeted” against any user
Attacks anybody with a given app/system/config/...
Viruses
Many kinds and varieties
Benign or harmful
Transferred even from trusted sources
Also from „trusted” sources that are negligent to update
antiviral programs and check for viruses
[cf. B. Endicott-Popovsky]
b. Kinds of Malicious Code (1)
[remember Introduction?]
(Diagram: kinds of malicious code attaching to files – trapdoors, Trojan horses, bacteria, logic bombs, worms, viruses)
[cf. Barbara Endicott-Popovsky and Deborah Frincke, CSSE592/492, U. Washington]
b. Kinds of Malicious Code (2)
Trojan horse - A computer program that appears to have a
useful function, but also has a hidden and potentially
malicious function that evades security mechanisms,
sometimes by exploiting legitimate authorizations of a
system entity that invokes the program
Virus - A hidden, self-replicating section of computer
software, usually malicious logic, that propagates by
infecting (i.e., inserting a copy of itself into and becoming part of)
another program. A virus cannot run by itself; it requires
that its host program be run to make the virus active.
Worm - A computer program that can run independently,
can propagate a complete working version of itself onto
other hosts on a network, and may consume computer
resources destructively.
Kinds of Malicious Code (3)
Bacterium - A specialized form of virus which does not attach to a specific file.
Usage obscure.
Logic bomb - Malicious [program] logic that activates when
specified conditions are met. Usually intended to cause
denial of service or otherwise damage system resources.
Time bomb - activates when specified time occurs
Rabbit – A virus or worm that replicates itself without limit
to exhaust resource
Trapdoor / backdoor - A hidden computer flaw known to an
intruder, or a hidden computer mechanism (usually
software) installed by an intruder, who can activate the trap
door to gain access to the computer without being blocked
by security services or mechanisms.
Kinds of Malicious Code (4)
Above terms not always used consistently, esp. in popular
press
Combinations of the above kinds even more confusing
E.g., virus can be a time bomb
— spreads like virus, „explodes” when time occurs
Term „virus” often used to refer to any kind of malicious
code
When discussing malicious code, we’ll often say „virus”
for any malicious code
c. How Viruses Work (1)
Pgm containing virus must be executed to spread virus or
infect other pgms
Even one pgm execution suffices to spread virus widely
Virus actions: spread / infect
Spreading – Example 1: Virus in a pgm on installation CD
User activates pgm containing virus when she runs
INSTALL or SETUP
Virus installs itself in any/all executing pgms present in
memory
Virus installs itself in pgms on hard disk
From now on virus spreads whenever any of the infected
pgms (from memory or hard disk) executes
How Viruses Work (2)
Spreading – Example 2: Virus in attachment to e-mail msg
User activates pgm containing virus (e.g., macro in MS Word) by just opening the attachment
=> Disable automatic opening of attachments!!!
Virus installs itself and spreads ... as in Example 1...
Spreading – Example 3: Virus in downloaded file
File with pgm or document (.doc, .xls, .ppt, etc.)
You know the rest by now...
Document virus
Spreads via picture, document, spreadsheet, slide
presentation, database, ...
E.g., via .jpg, via MS Office documents .doc, .xls, .ppt, .mdb
Currently most common!
How Viruses Work (3)
Kinds of viruses w.r.t. way of attaching to infected pgms
1) Appended viruses
Appends to pgm
Most often virus code precedes pgm code
Inserts its code before the 1st pgm instruction in
executable pgm file
Executes whenever program executed
2) Surrounding viruses
Surrounds program
Executes before and after infected program
Intercepts its input/output
Erases its tracks
The „after” part might be used to mask virus
existence
E.g. if surrounds „ls”, the „after” part removes listing of
virus file produced by „ls” so user can’t see it
... cont. ...
How Viruses Work (4)
... cont. ...
3) Integrating viruses
Integrates into pgm code
Spread within infected pgms
4) Replacing viruses
Entirely replaces code of infected pgm file
How Viruses Work (5)
(Replacing) virus V gains control over target pgm T by:
Overwriting T on hard disk
OR
Changing pointer to T with pointer to V
(textbook, Fig. 3-7)
OS has File Directory
File Directory has an entry that points to file with code for T
Virus replaces pointer to T’s file with pointer to V’s file
In both cases actions of V replace actions of T when user
executes what she thinks is „T”
How Viruses Work (6)
Characteristics of a ‘perfect’ virus (goals of virus writers)
Hard to detect
Not easily destroyed or deactivated
Spreads infection widely
Can reinfect programs
Easy to create
Machine and OS independent
How Viruses Work (7)
Virus hiding places
1) In bootstrap sector – best place for virus
Bec. virus gains control early in the boot process
Before detection tools are active!
(Figures: disk layout before and after boot-sector infection)
[Fig. cf. J. Leiwo & textbook]
2) In memory-resident pgms
TSR pgms (TSR = terminate and stay resident)
Most frequently used OS pgms or specialized user pgms
=> good place for viruses (activated very often)
...cont...
How Viruses Work (8)
...cont...
3) In application pgms
Best for viruses: apps with macros
(MS Word, MS PowerPoint, MS Excel, MS Access, ...)
One macro: startup macro executed when app starts
Virus instructions attach to startup macro, infect
document files
Bec. doc files can include app macros (commands)
E.g., .doc files include macros for MS Word
Via data files infects other startup macros, etc. etc.
4) In libraries
Libraries used/shared by many pgms => spread virus
Execution of infected library pgm infects other pgms
5) In other widely shared pgms
Compilers / loaders / linkers
Runtime monitors
Runtime debuggers
Virus control pgms (!)
d. Virus Signatures (1)
Virus hides but can’t become invisible – leaves behind a virus
signature, defined by patterns:
1) Storage patterns : must be stored somewhere/somehow
(maybe in pieces)
2) Execution patterns: executes in a particular way
3) Distribution patterns: spreads in a certain way
Virus scanners use virus signatures to detect viruses
(in boot sector, on hard disk, in memory)
Scanner can use file checksums to detect changes to files
Once scanner finds a virus, it tries to remove it
i.e., tries to remove all pieces of a virus V from target pgm T
Virus scanner and its database of virus signatures must be up-to-date to be effective!
Update and run daily!
Virus Signatures (2)
Detecting Virus Signatures (1)
Difficulty 1 — in detecting execution patterns:
Most of effects of virus execution (see next page) are
„invisible”
Bec. they are normal – any legitimate pgm could cause them
(hiding in a crowd)
=> can't help in detection
Virus Signatures (3)
Detecting Virus Signatures (2)
Virus Goal: How Achieved
Attach to executable pgm: Modify file directory / Write to executable pgm file
Attach to data/control file: Modify directory / Rewrite data / Append to data / Append data to self
Remain in memory: Intercept interrupt by modifying interrupt handler address table / Load self in non-transient memory area
Infect disks: Intercept interrupt / Intercept OS call (e.g., to format disk) / Modify system file / Modify ordinary executable pgm
Conceal self: Intercept system calls that would reveal self and falsify results / Classify self as “hidden” file
Spread self: Infect boot sector / Infect systems pgm / Infect ordinary pgm / Infect data ordinary pgm reads to control its execution
Prevent deactivation: Activate before deactivating pgm and block deactivation / Store copy to reinfect after deactivation
[cf. textbook & B. Endicott-Popovsky]
Virus Signatures (4)
Detecting Virus Signatures (3)
Difficulty 2 — in finding storage patterns:
Polymorphic viruses:
changes from one „form” (storage pattern) to another
Simple virus always recognizable by a certain char pattern
Polymorphic virus mutates into variety of storage patterns
Examples of polymorphic virus mutations
Randomly repositions all parts of itself and randomly
changes all fixed data within its code
Repositioning is easy since (infected) files stored as chains of data
blocks - chained with pointers
Randomly intersperses harmless instructions throughout its
code
(e.g., add 0, jump to next instruction)
Encrypting virus: Encrypts its object code (each time with a
different/random key), decrypts code to run
... More below ...
Virus Signatures (5)
Detecting Virus Signatures (4)
Encrypting virus structure
(informal pseudo-code; V's body is stored encrypted)

array decr_key;
procedure decrypt(virus_code, decr_key)
   ...
end /* decrypt */

begin /* virus V in target pgm T */
   decrypt(V, decr_key);
infect:
   if infect_condition met then
      find new target pgms NT to infect;
      mutate V into V’ for copying;
      encrypt V’ with random key into V”;
      save new key in file for V”;
      attach V” to NT;
      hide modification of NT (with stealth code of V);
damage:
   if damage_condition met then
      execute damage_code of V
   else start T
end /* virus V in target pgm T */
Virus Signatures (6)
Detecting Virus Signatures (5)
Encrypting virus: Encrypts its object code (each time with a
different/random key), decrypts code to run
Q: Is there any signature for encryption virus that a
scanner can see?
Hint: consider 3 parts of encryption virus:
„proper” virus code (infect/damage code)
decr_key
procedure decrypt
Virus Signatures (7)
Detecting Virus Signatures (6)
...
Q: Is there any signature for encryption virus that a
scanner can see?
A: Let's see:
„proper” virus code – encrypted with random key –
polymorphic
decr_key – random key used to encrypt/decrypt –
polymorphic
procedure decrypt (or a pointer to a library decrypt procedure)
– unencrypted, static
=> procedure decrypt of V is its signature
visible to a scanner
But: Virus writer can use polymorphic techniques on
decryption code to make it „less visible” (to hide it)
Virus writers and scanner writers challenge each other
An endless game?
e. Preventing Virus Infections
Preventing Virus Infections
Use commercial software from
trustworthy sources
But even this is not an absolute
guarantee of virus-free code!
Test new software on isolated computers
Open only safe attachments
Keep recoverable system image in safe place
Backup executable system files
Use virus scanners often (daily)
Update virus detectors daily
Databases of virus signatures change very often
[cf. B. Endicott-Popovsky]
No absolute guarantees even if you follow all the rules –
just much better chances of preventing a virus
f. Seven Truths About Viruses
Viruses can infect any platform
Viruses can modify “hidden” / “read only” files
Viruses can appear anywhere in system
Viruses spread anywhere sharing occurs
Viruses cannot remain in memory after a complete power off / power on reboot
Viruses infect software that runs hardware
But virus reappears if saved on disk (e.g., in the boot sector)
There are firmware viruses (if firmware writeable by s/w)
Viruses can be malevolent, benign, or benevolent
Hmmm...
Would you like a benevolent virus doing good things (like compressing
pgms to save storage) but without your knowledge?
[cf. B. Endicott-Popovsky]
g. Case Studies (1)
The Internet Worm
Attacked on 11/2/1988
Invaded VAX and Sun-3 computers running versions of
Berkeley UNIX
Used their resources to attack still more computers
Within hours spread across the U.S.
Infected hundreds / thousands of computers – serious
damage to Internet
Some uninfected networks were scared into disconnecting from
Internet => severed connections stopped necessary work
Made many computers unusable via resource exhaustion
Was a rabbit – supposedly a mistake, unintended by its writer
Perpetrator was convicted in 1990 ($10,000 fine + 400 hrs of
community service + 3-year suspended jail sentence)
Caused forming Computer Emergency Response Team
(CERT) at CMU
[cf. textbook & B. Endicott-Popovsky]
Case Studies (2)
Other case studies [textbook – interesting reading]
The Brain (Pakistani) Virus (1986)
Code Red (2001)
Denial-of-service (DoS) attack on www.whitehouse.gov
Web Bugs (generic potentially malicious code on web
pages)
Placing a cookie on your hard drive
Cookie collects statistics on user’s surfing habits
Can be used to get your IP address, which can then be used to
target you for attack
Block cookies or delete cookies periodically (e.g., using browser
command; in MS IE: Tools>Internet Options-General:Delete
Cookies)
Tool: Bugnosis from Privacy Foundation – locates web bugs
h. Virus Removal and
System Recovery After Infection
Fixing a system after infection by virus V:
1) Disinfect (remove) viruses (using antivirus pgm)
Can often remove V from infected file T w/o
damaging T
if V code can be separated from T code and V did
not corrupt T
Have to delete T if V can’t be separated from T code
2) Recover files:
- deleted by V
- modified by V
- deleted during disinfection (by antivirus pgm)
=> need file backups!
Make sure to have backups of (at least) important files
3.3.2. Targeted Malicious Code
Targeted = written to attack a particular system, a
particular application, and for a particular purpose
Many virus techniques apply
Some new techniques as well
Outline:
a. Trapdoors
b. Salami attack
c. Covert channels
a. Trapdoors (1)
Original def:
Trapdoor / backdoor - A hidden computer flaw known to an
intruder, or a hidden computer mechanism (usually
software) installed by an intruder, who can activate the trap
door to gain access to the computer without being blocked
by security services or mechanisms.
A broader definition:
Trapdoor – an undocumented entry point to a module
Inserted during code development
For testing
As a hook for future extensions
As emergency access in case of s/w failure
Trapdoors (2)
Testing:
With stubs and drivers for unit testing (Fig. 3-10 p. 138)
Testing with debugging code inserted into tested
modules
May allow programmer to modify internal module variables
Major sources of trapdoors:
Left-over (purposely or not) stubs, drivers, debugging code
Poor error checking
E.g., allowing for unacceptable input that causes buffer overflow
Some were used for testing, some random
Undefined opcodes in h/w processors
Not all trapdoors are bad
Some left purposely w/ good intentions
— facilitate system maintenance/audit/testing
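A minimal sketch of how a left-over debugging shortcut becomes a trapdoor; all names (check_login, the magic maintenance account) are hypothetical illustration, not a real system:

```python
# Sketch: a debugging shortcut the developer forgot to remove
# acts as a trapdoor -- a hidden, undocumented entry point.

USERS = {"alice": "s3cret"}

def check_login(user, password):
    # Left-over testing code: the magic maintenance account
    # bypasses the normal password check entirely.
    if user == "maint_debug":            # the trapdoor
        return True
    return USERS.get(user) == password
```

An auditor reading only the documentation would never learn that "maint_debug" logs in with any password, which is exactly why code reviews and configuration management matter.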
b. Salami attack
Salami attack - merges bits of seemingly inconsequential
data to yield powerful results
Old example: interest calculation in a bank:
Fractions of 1 ¢ „shaved off” n accounts and deposited in
attacker’s account
Nobody notices/cares if 0.1 ¢ vanishes
Can accumulate to a large sum
Easy target for salami attacks: Computer computations
combining large numbers with small numbers
Require rounding and truncation of numbers
Relatively small amounts of error from these op’s are
accepted as unavoidable – not checked unless a strong
suspicion
Attacker can hide „salami slices” within the error margin
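The interest-shaving idea can be sketched as follows; the balances, rate, and truncation-to-whole-cents policy are hypothetical assumptions for illustration:

```python
from decimal import Decimal

def pay_interest(balances_cents, rate):
    """Truncate each account's interest to whole cents and
    collect the shaved fractions (the 'salami slices')."""
    attacker = Decimal(0)
    paid = {}
    for acct, cents in balances_cents.items():
        exact = Decimal(cents) * Decimal(str(rate))
        truncated = int(exact)            # drop the fraction of a cent
        paid[acct] = truncated
        attacker += exact - truncated     # slice goes to the attacker
    return paid, attacker
```

Each slice stays inside the accepted rounding-error margin, so no single account holder notices, yet over many accounts the attacker's total grows.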
c. Covert Channels (CC) (1)
Outline:
i. Covert Channels - Definition and Examples
ii. Types of Covert Channels
iii. Storage Covert Channels
iv. Timing Covert Channels
v. Identifying Potential Covert Channels
vi. Covert Channels - Conclusions
i. CC – Definition and Examples (1)
So far: we looked at malicious pgms that perform wrong
actions
Now: pgms that disclose confidential/secret info
They violate confidentiality, secrecy, or privacy of info
Covert channels = channels of unwelcome disclosure of info
Extract/leak data clandestinely
Examples
1) An old military radio communication network
The busiest node is most probably the command center
Nobody is so naive nowadays
2) Secret ways spies recognize each other
Holding a certain magazine in hand
Exchanging a secret gesture when approaching each other
...
Covert Channels – Definition and Examples (2)
How do programmers create covert channels?
Providing pgm with built-in Trojan horse
Uses covert channel to communicate extracted data
Example: pgm w/ Trojan horse using covert channel

Should be:
Protected Data <------[ Service Pgm ]------> Legitimate User

Is:
Protected Data <------[ Service Pgm w/ Trojan h. ]------> Legitimate User
                               |
                               | covert channel
                               v
                              Spy

(Spy - e.g., programmer who put Trojan into pgm;
directly or via Spy Pgm)
Covert Channels – Definition and Examples (3)
How are covert channels created?
I.e., how are leaked data hidden?
Example: leaked data hidden in output reports (or displays)
Different ‘marks’ in the report: (cf. Fig. 3-12, p.143)
Varying report format
Changing line length / changing nr of lines per page
Printing or not certain values, characters, or headings
- each ‘mark’ can convey one bit of info
Covert Channels – Definition and Examples (4)
Example – ctd.
How Trojan within pgm can leak a 4-bit value of a
protected variable X?
cf. Fig. 3-12, p.143
Trojan signals value of X as follows:
Bit-1 = 1 if >1 space follows ‘ACCOUNT CODE:’; 0 otherwise
Bit-2 = 1 if last digit in ‘seconds’ field is >5; 0 otherwise
Bit-3 = 1 if heading uses ‘TOTALS’; 0 otherwise (uses ‘TOTAL’)
Bit-4 = 1 if no space follows subtotals line; 0 otherwise
=> For the values as in this Fig.,
Trojan signaled and spy got: X = ‘1101’
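A sketch of how such formatting "marks" could encode and decode 4 bits; the report layout below is simplified and hypothetical, only loosely following the four marks in the figure:

```python
def encode_report(bits):
    """Hide 4 bits in formatting quirks of a report (simplified)."""
    assert len(bits) == 4
    line1 = "ACCOUNT CODE:" + ("  " if bits[0] else " ") + "1234"
    line2 = "TIME 10:23:0" + ("7" if bits[1] else "3")   # seconds digit >5 ?
    line3 = ("TOTALS" if bits[2] else "TOTAL") + " 99"
    line4 = "SUBTOTALS" + ("" if bits[3] else " ")       # trailing space?
    return [line1, line2, line3, line4]

def decode_report(lines):
    """The spy recovers the bits from the report's formatting alone."""
    b0 = 1 if "ACCOUNT CODE:  " in lines[0] else 0
    b1 = 1 if int(lines[1][-1]) > 5 else 0
    b2 = 1 if "TOTALS" in lines[2] else 0
    b3 = 1 if not lines[3].endswith(" ") else 0
    return [b0, b1, b2, b3]
```

To a legitimate reader every variant of the report looks equally valid, which is what makes the channel covert.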
ii. Types of Covert Channels
Types of covert channels
Storage covert channels
Convey info by presence or absence of an object in
storage
Timing covert channels
Convey info by varying the speed at which things
happen
iii. Storage Channels (1)
Example of storage channel: file lock covert channel
Protected variable X has n bits: X1, ..., Xn
Trojan within Service Pgm leaks value of X
Trojan and Spy Pgm synchronized, so can „slice” time
into n intervals
File FX (not used by anybody else)
To signal that Xk=1, Trojan locks file FX for interval k (1≤
k ≤ n)
To signal that Xk=0, Trojan unlocks file FX for interval k
Spy Pgm tries to lock FX during each interval
If it succeeds during k-th interval, Xk = 0 (FX was unlocked)
Otherwise, Xk = 1 (FX was locked)
(see Fig. 3-13, 3-14 – p.144-145)
Q: Why FX should not be used by anybody else?
Storage Channels (2)
Example of storage channel: file lock covert channel
...
Q: Why FX should not be used by anybody else?
A: Any other user locking/unlocking FX would interfere with
Trojan’s covert channel signaling.
Isn’t such bit-by-bit signaling too slow?
No – bec. computers are very fast!
E.g., 10-100 bits/millisecond (10K – 100K b/s) is very slow
for computers
It still can leak entire P&P textbook in just minutes
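A simulation of the file-lock channel; real OS file locking is replaced here by a flag, and the synchronized intervals are implicit in the loop iterations:

```python
def leak_via_lock(bits):
    """Simulate the file-lock covert channel: in interval k the
    Trojan locks FX iff bit Xk is 1; the Spy tries to lock FX
    and infers the bit from whether its attempt succeeds."""
    received = []
    for bit in bits:                   # interval k
        fx_locked = (bit == 1)         # Trojan's action on FX
        spy_got_lock = not fx_locked   # Spy's lock attempt
        received.append(0 if spy_got_lock else 1)
    return received
```

The Spy never reads X directly; it reconstructs X one bit per interval purely from the shared resource's state.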
Storage Channels (3)
Examples of covert storage channels (synchronized intervals!)
Covert channels can use:
File locks (discussed above)
Disk storage quota
To signal Xk=1, Trojan creates an enormous file (consuming
most of available disk space)
Spy Pgm attempts to create enormous file. If Spy fails
(bec. no disk space available), Xk = 1; otherwise, Xk = 0
Existence of a file
To signal Xk=1, Trojan creates file FX (even empty file)
Spy Pgm attempts to create file named FX. If Spy fails
(bec. FX already exists), Xk = 1; otherwise, Xk = 0
Other resources - similarly
Storage Channels (4)
Covert storage channels require:
Shared resource
To indicate Xk=1 or Xk=0
Synchronized time
To know which bit is signaled:
in interval k, Xk is signaled
iv. Timing Channels
Recall: Timing channels convey info by varying the speed
at which things happen
Simple example of timing channel:
Multiprogramming system „slices” processor time for
programs running on the processor
2 processes only: Trojan (Pgm w/ Trojan) and Spy Pgm
Trojan receives all odd slices (unless abstains)
Spy Pgm receives all even slices (unless abstains)
Trojan signals Xk=1 by using its time slice,
signals Xk=0 by abstaining from using its slice
see: Fig.3-15, p.147 – how ‘101’ is signaled
Details: Trojan takes Slice 1 (its 1st slice) signaling X1=1
Trojan abstains from taking Slice 3 (its 2nd slice) signaling X2=0
Trojan takes Slice 5 (its 3rd slice) signaling X3=1
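The time-slice channel above can be simulated deterministically; the scheduler is reduced to a list of slices, and the "trojan"/"spy"/"idle" labels are illustrative:

```python
def timing_channel(bits):
    """Simulate the time-slice covert channel: Trojan owns odd
    slices, Spy owns even slices. Trojan uses its slice iff the
    current bit is 1; Spy decodes each bit from whether the
    preceding odd slice was actually consumed."""
    schedule = []                          # who ran each slice
    it = iter(bits)
    for slice_no in range(1, 2 * len(bits) + 1):
        if slice_no % 2 == 1:              # Trojan's slice
            bit = next(it)
            schedule.append("trojan" if bit == 1 else "idle")
        else:                              # Spy's slice
            schedule.append("spy")
    # Spy decodes from the odd slices it observed
    decoded = [1 if schedule[k] == "trojan" else 0
               for k in range(0, len(schedule), 2)]
    return schedule, decoded
```

For bits ‘101’ this reproduces the figure's behavior: slice 1 used, slice 3 skipped, slice 5 used.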
v. Identifying Potential Covert Channels (1)
Covert channels are not easy to identify
Otherwise wouldn’t be covert, right?
Two techniques for locating covert channels:
1) Shared Resource Matrix
2) Information Flow Method
Identifying Potential Covert Channels (2)
1) The Shared Resource Matrix method
Shared resource is basis for a covert channel
=> identify shared resources and processes
reading/writing them
Step 1: Construct Shared Resource Matrix
Rows — resources
Columns — processes that access them:
R = observe resource M = modify/set/create/delete resource
Example:

             Process 1   Process 2
Lock on FX   R, M        R, M
X (confid.)  R
Identifying Potential Covert Channels (3)
...

             Process 1   Process 2
Lock on FX   R, M        R, M
X (confid.)  R

Step 2: Look for this pattern:

      Pi   Pj
Rm    M    R
Rn    R

Meaning of this pattern:
Process Pj can get value of
Resource Rn via Process Pi
(and a covert channel)
Q: Do you see such a pattern in SRM above?
Identifying Potential Covert Channels (4)
...

             Process 1   Process 2
Lock on FX   R, M        R, M
X (confid.)  R

Step 2: Look for this pattern:

      Pi   Pj
Rm    M    R
Rn    R

Meaning of this pattern:
Process Pj can get value of
Resource Rn via Process Pi
(and a covert channel)
Q: Do you see such a pattern in SRM above?
A: Yes. Process 2 can get value of X via Process 1
(no surprise: Proc. 1 & 2 are Trojan & Spy
from earlier example)
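The Step 2 pattern search can be automated; a sketch, where the matrix encoding and function name are assumptions of this example:

```python
def find_covert_pairs(matrix):
    """matrix: {resource: {process: set of access modes 'R','M'}}.
    Report (Pi, Pj, Rm, Rn) where Pi modifies shared Rm and reads
    confidential Rn while Pj reads Rm -- the covert-channel pattern."""
    hits = []
    for rm, accesses in matrix.items():
        for rn in matrix:
            if rm == rn:
                continue
            for pi, modes_i in accesses.items():
                # Pi must modify Rm and observe Rn
                if "M" not in modes_i or "R" not in matrix[rn].get(pi, set()):
                    continue
                for pj, modes_j in accesses.items():
                    if pj != pi and "R" in modes_j:   # Pj observes Rm
                        hits.append((pi, pj, rm, rn))
    return hits
```

Run on the SRM above, it reports exactly the Trojan/Spy pair: Process 2 can learn X via Process 1 through the lock on FX.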
Identifying Potential Covert Channels (5)
2) Information Flow Method
Flow analysis of pgm’s syntax
Can be automated within a compiler
Identifies non-obvious flows of info between pgm
statements
Examples of flows of info between pgm stmts
B:= A – an explicit flow from A to B
B:= A; C:=B – an explicit flow from A to C (via B)
IF C=1 THEN B:=A
– an explicit flow from A to B
– an implicit flow from C to B (bec. B can change iff C=1)
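A sketch of the implicit flow in the IF example, showing how an observer who sees only B can recover information about C (names hypothetical):

```python
def leak_bit(a, c):
    """Implicit flow: B is assigned only when C == 1, so
    observing B reveals information about C as well as A."""
    b = 0
    if c == 1:
        b = a          # explicit flow A -> B, implicit flow C -> B
    return b

def recover_c(a, b):
    """An observer who knows A (with a != 0) infers C from B alone,
    even though no statement ever copied C into B."""
    return 1 if b == a else 0
```

This is why flow analysis must track control dependences, not just assignments.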
Identifying Potential Covert Channels (6)
More examples of flows of info between pgm stmts
[textbook and J. Leiwo]
Identifying Potential Covert Channels (7)
Steps of Information Flow Method (IFM)
1) Analyze statements
2) Integrate results to see which outputs affected by which
inputs
Variants of IFM:
1) IFM during compilation
2) IFM on design specs
Covert Channels - Conclusions
Covert channels are a serious threat to confidentiality and
thus security
(„CIA” = security)
Any virus/Trojan horse can create a covert channel
In open systems — no way to prevent covert channels
Very high security systems require a painstaking and
costly design preventing (some) covert channels
Analysis must be performed periodically as high security
system evolves
3.4. Controls for Security
How to control security of pgms during their development
and maintenance
Outline:
a. Introduction
b. Developmental controls for security
c. Operating system controls for security
d. Administrative controls for security
e. Conclusions
a. Introduction
„Better to prevent than to cure”
Preventing security flaws
We have seen a lot of possible security flaws
How to prevent (some of) them?
Software engineering concentrates on developing and
maintaining quality s/w
We’ll take a look at some techniques useful specifically
for developing/ maintaining secure s/w
Three types of controls for security (against pgm flaws):
1) Developmental controls
2) OS controls
3) Administrative controls
b. Developmental Controls for Security (1)
Nature of s/w development
Collaborative effort
Team of developers, each involved in 1 of stages:
Requirement specification
Regular req. specs: „do X”
Security req. specs: „do X and nothing more”
Design
Implementation
Testing
Documenting at each stage
Reviewing at each stage
Managing system development thru all stages
Maintaining deployed system (updates, patches, new versions,
etc.)
Both product and process contribute to overall quality
— incl. security dimension of quality
Developmental Controls for Security (2)
Fundamental principles of s/w engineering
1) Modularity
2) Encapsulation
3) Info hiding
1) Modularity
Modules should be:
Single-purpose - logically/functionally
Small - for a human to grasp
Simple - for a human to grasp
Independent – high cohesion, low coupling
High cohesion – highly focused on (single) purpose
Low coupling – free from interference from other modules
Modularity should improve correctness
Fewer flaws => better security
Developmental Controls for Security (3)
2) Encapsulation
Minimizing info sharing with other modules
=> Limited interfaces reduce # of covert channels
Well documented interfaces
„Hiding what should be hidden and showing what should
be visible.”
3) Information hiding
Module is a black box
Well defined function and I/O
Easy to know what module does but not how it does it
Reduces complexity, interactions, covert channels, ...
=> better security
Developmental Controls for Security (4)
Techniques for building solid software
1) Peer reviews
2) Hazard analysis
3) Testing
4) Good design
5) Risk prediction & management
6) Static analysis
7) Configuration management
8) Additional developmental controls
... Please read on your own ...
... Also see slides – all discussed below ...
[cf. B. Endicott-Popovsky]
Developmental Controls for Security (5)
1) Peer reviews - three types
Review (informal)
Team of reviewers
Gain consensus on solutions before development
Walk-through
Developer walks team through code/document
Discover flaws in a single design document
Inspection
Formalized and detailed
Statistical measures used
Various types of peer reviews can be highly effective
[cf. B. Endicott-Popovsky]
Developmental Controls for Security (6)
2) Hazard analysis
= systematic techniques to expose
potentially hazardous system states,
incl. security vulnerabilities
Components of HA
Hazard lists
What-if scenarios – identifies non-obvious hazards
System-wide view (not just code)
Begins Day 1
Continues throughout SDLC (= s/w dev’t life cycle)
Techniques
HAZOP – hazard and operability studies
FMEA – failure modes and effects analysis
FTA – fault tree analysis
[cf. B. Endicott-Popovsky]
Developmental Controls for Security (7)
3) Testing – phases:
Module/component/unit testing of indiv. modules
Integration testing of interacting (sub)system modules
(System) function testing checking against requirement specs
(System) performance testing
(System) acceptance testing – with customer against
customer’s requirements — on seller’s or customer’s premises
(System) installation testing after installation on customer’s
system
Regression testing after updates/changes to s/w
Types of testing
Black Box testing – testers can’t examine code
White Box / Clear box testing – testers can examine design and
code, can see inside modules/system
Developmental Controls for Security (8)
4) Good design
Good design uses:
i. Modularity / encapsulation / info hiding
ii. Fault tolerance
iii. Consistent failure handling policies
iv. Design rationale and history
v. Design patterns
i. Using modularity / encapsulation / info hiding
- as discussed above
Developmental Controls for Security (9)
4) Good design – cont.1a
ii. Using fault tolerance for reliability and security
System tolerates component failures
System more reliable than any of its components
Different than for security, where system is as secure as its
weakest component
[cf. B. Endicott-Popovsky]
Fault-tolerant approach:
Anticipate faults (car: anticipate having a flat tire)
Active fault detection rather than passive fault detection
(e.g., by use of mutual suspicion: active input data checking)
Use redundancy (car: have a spare tire)
Isolate damage
Minimize disruption (car: replace flat tire, continue your trip)
Developmental Controls for Security (10)
4) Good design – cont.1b
Example 1: Majority voting (using h/w redundancy)
3 processors running the same s/w
E.g., in a spaceship
Result accepted if results of 2 processors agree
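A sketch of the 2-out-of-3 vote, assuming the three processors' results are directly comparable values:

```python
from collections import Counter

def majority_vote(results):
    """Accept a result iff at least 2 of the 3 redundant
    processors agree; otherwise signal failure with None."""
    value, count = Counter(results).most_common(1)[0]
    return value if count >= 2 else None
```

A single faulty processor is outvoted; only two simultaneous faults defeat the scheme.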
Example 2: Recovery Block (using s/w redundancy)
Primary code runs first (e.g., Quick Sort – new code, faster)
Result checked by an acceptance test
If test fails, secondary code runs instead
(e.g., Bubble Sort – well-tested code)
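A sketch of the recovery-block structure; the deliberately buggy "primary" and the sorting task are stand-ins for the Quick Sort / Bubble Sort example:

```python
def recovery_block(data):
    """Recovery block: run primary code, check its result with an
    acceptance test, fall back to well-tested secondary code."""
    def acceptance_test(out):
        return out == sorted(data)       # is the output really sorted?

    def primary(xs):                     # stand-in for new, faster code
        raise RuntimeError("bug in new quick sort")

    def secondary(xs):                   # stand-in for old, trusted code
        return sorted(xs)                # (plays the role of bubble sort)

    try:
        out = primary(list(data))
        if acceptance_test(out):
            return out, "primary"
    except Exception:
        pass                             # primary failed; fall through
    out = secondary(list(data))
    assert acceptance_test(out)
    return out, "secondary"
```

The acceptance test is the crucial piece: without it, a wrong-but-quiet primary result would be accepted.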
Developmental Controls for Security (11)
4) Good design – cont.2
iii. Using consistent failure handling policies
Each failure handled in one of 3 ways:
Retrying
Restore previous state, redo service using different „path”
E.g., use secondary code instead of primary code
Correcting
Restore previous state, correct sth, run service using the
same code as before
Reporting
Restore previous state, report failure to error handler, don’t
rerun service
Example — How fault-tolerance enhances security
If security fault destroys important data (availability in CIA),
use f-t to revert to backup data set
Developmental Controls for Security (12)
4) Good design – cont.3
iv. Using design rationale and history
Knowing it (incl. knowing design rationale and history
for security mechanisms) helps developers modifying or
maintaining system
v. Using design patterns
Knowing it enables looking for patterns showing what
works best in which situation
Developmental Controls for Security (13)
Value of Good Design
Easy maintenance
Understandability
Reuse
Correctness
Better testing
=> translates into (saving) BIG bucks !
[cf. B. Endicott-Popovsky]
Developmental Controls for Security (14)
5) Risk prediction & management
Predict and manage risks involved in system development
and deployment
Make plans to handle unwelcome events should they
occur
Risk prediction/mgmt are esp. important for security
Bec. unwelcome and rare events can have security
consequences
Risk prediction/mgmt helps to select proper security
controls (e.g., proportional to risk)
Developmental Controls for Security (15)
6) Static analysis
Before system is up and running, examine its design and
code to locate security flaws
More than peer review
Examines
Control flow structure
(sequence in which instructions are
executed, incl. iterations and loops)
Data flow structure (trail of data)
Data structures
Automated tools available
[cf. B. Endicott-Popovsky]
Developmental Controls for Security (16)
7) Configuration management
= process of controlling system modifications during
development and maintenance
Offers security benefits by scrutinizing new/changed code
Problems with system modifications
One change interfering with another change
E.g., neutralizing it
Proliferation of different versions and releases
Older and newer
For different platforms
For different application environments (and/or customers
categories)
Developmental Controls for Security (17)
Reasons for software modification
Corrective changes
To maintain control of system’s day-to-day functions
Adaptive changes
To maintain control over system’s modifications
Perfective changes
To perfect existing acceptable system functions
Preventive changes
To prevent system’s performance degradation to
unacceptable levels
Developmental Controls for Security (18)
Activities involved in configuration management process
(performed by reps from developers, customers, users, etc.)
1) Baseline identification
Certain release/version (R/v) selected & frozen as
baseline
Other R’s/v’s described as changes to the baseline
2) Configuration control and configuration management
Coordinate separate but related v’s (versions) via:
Separate files - separate files for each R or v
Deltas - main v defined by „full files”
- other v’s defined by main v & deltas
(= difference files)
Conditional compilation
- single source code file F for all v’s
uses begin_version_Vx / end_version_Vx brackets
or begin_not_version_Vx / end_not_version_Vx brackets
- compiler produces each v from F
Developmental Controls for Security (19)
3) Configuration auditing
System must be audited regularly — to verify:
Baseline completeness and accuracy
Recording of changes
Accuracy of software documentation for systems in
the field
Performed by independent parties
4) Status accounting
Records info about system components
Where they come from (purchased, reused, written
from scratch)
Version
Change history
Pending change requests
Developmental Controls for Security (20)
All 4 activities performed by
Configuration Control Board (CCB)
Includes reps from developers, customers, users
Reviews proposed changes, approves/rejects
Security benefits of configuration mgmt
Limits unintentional flaws
Limits malicious modifications
by protecting integrity of pgms and documentation
Thanks to:
careful reviewing/auditing, change mgmt
preventing changes (e.g., trapdoors) to system w/o acceptance
by CCB
Developmental Controls for Security (21)
8) Additional developmental controls
8a) Learning from mistakes
Avoiding such mistakes in the future enhances security
8b) Proofs of program correctness
Formal methods to verify pgm correctness
Logic analyzer shows that:
initial assertions about inputs...
... through implications of pgm statements...
... lead to the terminal condition (desired output)
Problems with practical use of pgm correctness proofs
Esp. for large pgms/systems
Most successful for specific types of apps
E.g. for communication protocols & security policies
Even with all these developmental controls (1-8) –
still no security guarantees! [cf. B. Endicott-Popovsky]
c. Operating System Controls for Security (1)
Developmental controls not always used
OR:
Even if used, not foolproof
=> Need other, complementary controls, incl. OS controls
Such OS controls can protect against some pgm flaws
Operating System Controls for Security (2)
Trusted software
– code rigorously developed and analyzed so we can trust that
it does all and only what specs say
Trusted code establishes foundation upon which untrusted
code runs
Trusted code establishes security baseline for the whole
system
In particular, OS can be trusted s/w
Operating System Controls for Security (3)
Key characteristics determining if OS code is trusted
1) Functional correctness
OS code consistent with specs
2) Enforcement of integrity
OS keeps integrity of its data and other resources even
if presented with flawed or unauthorized commands
3) Limited privileges
OS minimizes access to secure data/resources
Trusted pgms must have „need to access” and proper
access rights to use resources protected by OS
Untrusted pgms can’t access resources protected by OS
4) Appropriate confidence level
OS code examined and rated at appropriate trust level
Operating System Controls for Security (4)
Similar criteria used to establish if s/w other than OS can be
trusted
Ways of increasing security if untrusted pgms present:
1) Mutual suspicion
2) Confinement
3) Access log
1) Mutual suspicion between programs
Distrust other pgms – treat them as if they were
incorrect or malicious
Pgm protects its interface data
With data checks, etc.
Operating System Controls for Security (5)
2) Confinement
OS can confine access to resources by suspected pgm
Example 1: strict compartmentalization
Pgm can affect data and other pgms only within its
compartment
Example 2: sandbox for untrusted pgms
Can limit spread of viruses
Operating System Controls for Security (6)
3) Audit log / access log
Records who/when/how (e.g., for how long)
accessed/used which objects
Events logged: logins/logouts, file accesses, pgm
executions, device uses, failures, repeated
unsuccessful commands (e.g., many repeated failed
login attempts can indicate an attack)
Audit frequently for unusual events, suspicious patterns
A forensic measure, not a protective measure
Forensics – investigation to find who broke law,
policies, or rules
...Much more on OS controls soon...
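A sketch of auditing a log for one of the suspicious patterns mentioned above; the log format and failure threshold are assumptions of this example:

```python
def flag_suspicious_logins(log, threshold=3):
    """Scan an audit log (list of (user, event) tuples) and flag
    users with `threshold` or more consecutive failed logins --
    a pattern that can indicate a password-guessing attack."""
    flagged, streak = set(), {}
    for user, event in log:
        if event == "login_failed":
            streak[user] = streak.get(user, 0) + 1
            if streak[user] >= threshold:
                flagged.add(user)
        elif event == "login_ok":
            streak[user] = 0      # a success resets the streak
    return flagged
```

As the slide notes, this detects and documents misuse after the fact; it does not by itself block the attacker.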
d. Administrative Controls for Security (1)
They prohibit or demand certain human behavior via
policies, procedures, etc.
They include:
1) Standards of program development
2) Security audits
3) Separation of duties
Administrative Controls for Security (2)
1) Standards and guidelines for program development
Capture experience and wisdom from previous projects
Facilitate building higher-quality s/w (incl. more secure)
They include:
Design S&G – design tools, languages, methodologies
S&G for documentation, language, and coding style
Programming S&G - incl. reviews, audits
Testing S&G
Configuration mgmt S&G
2) Security audits
Check compliance with S&G
Scare potential dishonest programmer from including
illegitimate code (e.g., a trapdoor)
Administrative Controls for Security (3)
3) Separation of duties
Break sensitive tasks into 2 pieces to be performed by
different people (learned from banks)
Example 1: modularity
Different developers for cooperating modules
Example 2: independent testers
Rather than developer testing her own code
...More (much) later...
e. Conclusions
(for Controls for Security)
Developmental / OS / administrative controls help
produce/maintain higher-quality (also more secure) s/w
Art and science - no „silver bullet” solutions
„A good developer who truly understands security will
incorporate security into all phases of development.”
[textbook, p. 172]
Summary: [cf. B. Endicott-Popovsky]

Control          Purpose                    Benefit
Developmental    Limit mistakes;            Produce better software
                 make malicious code
                 difficult
Operating        Limit access to system     Promotes safe sharing
System                                      of info
Administrative   Limit actions of people    Improve usability,
                                            reusability and
                                            maintainability
End of:
Section 3: Program Security