Reference Monitors
(Credits: Slides from
Gollmann)
Agenda
- Reference monitor, security kernel, and TCB
- Status information & controlled invocation
- Security features in microprocessors
- Placing the reference monitor
- Confused deputy problem
- Memory management and access control
- Historic examples, to keep matters simple
2
Security Mechanisms
- How can computer systems enforce operational policies in practice?
- Questions that have to be answered:
  - Where should access control be located? (Second Fundamental Design Decision)
  - Are there any additional security requirements your solution forces you to consider?
- The following definitions are taken from the Orange Book.
3
Reference Monitor (RM)
- Reference monitor: an access control concept that refers to an abstract machine that mediates all accesses to objects by subjects.
- Security kernel: the hardware, firmware, and software elements of a TCB that implement the reference monitor concept. It must mediate all accesses, be protected from modification, and be verifiable as correct.
4
Placing the RM
- Hardware: access control mechanisms in microprocessors.
- Operating system kernel: e.g. a hypervisor, i.e. a virtual machine that emulates the host computer it is running on, or the Nexus operating system considered for Microsoft’s NGSCB architecture.
- Operating system: e.g. access control in Unix and Windows 2000.
- Services layer: access control in database systems, the Java Virtual Machine, the .NET Common Language Runtime, or the CORBA middleware architecture.
- Application: security checks in the application code to address application-specific requirements.
5
RM – Design Choices
[Figure: three placements of the RM, each sitting above the kernel. Kernel supported (e.g. in the O/S): program runs on top of the RM. Interpreter: the RM interprets the program. Modified application (IRM): the RM is in-lined into the program itself.]
6
Trusted Computing Base (TCB)
- The totality of protection mechanisms within a computer system – including hardware, firmware, and software – the combination of which is responsible for enforcing a security policy.
- A TCB consists of one or more components that together enforce a unified security policy over a product or system.
- The ability of the TCB to correctly enforce a security policy depends solely on the mechanisms within the TCB and on the correct input by system administrative personnel of parameters related to the security policy.
7
Operating System Integrity
- Assume that your O/S prevents unauthorized access to resources (as long as it works as intended).
- To bypass protection, an attacker may try to disable the security controls by modifying the O/S.
- Whatever your initial concern was, you are now facing an integrity problem. The O/S is not only the arbitrator of access requests, it is itself an object of access control.
- New security policy: users must not be able to modify the operating system.
- This generic security policy needs strong and efficient support.
8
Operating System Integrity
- To make life more complicated, you have to address two competing requirements:
  - Users should be able to use (invoke) the O/S.
  - Users should not be able to misuse the O/S.
- Two important concepts commonly used to achieve these goals are:
  - status information
  - controlled invocation, also called restricted privilege
- These concepts can be used in any layer of an IT system, be it application software, O/S, or hardware.
9
Modes of Operation
- To protect itself, an O/S must be able to distinguish computations ‘on behalf’ of the O/S from computations ‘on behalf’ of a user.
- A status flag allows the system to work in different modes:
  - Intel 80x86: two status bits and four modes
  - Unix distinguishes between user and superuser (root)
- Example: to stop users from writing directly to memory and corrupting the logical file structure, the O/S could grant write access to memory locations only if the processor is in supervisor mode.
10
Why are such Modes Useful?
- E.g., to stop users from writing directly to memory and corrupting the logical file structure, the O/S could grant write access to memory locations only if the processor is in supervisor mode.
- We continue our example: a user wants to execute an operation requiring supervisor mode, e.g. write to memory.
- To deal with this request, the processor has to switch between modes, but how should this switch be performed?
11
Controlled Invocation
- Example continued: a user wants to write to memory (requires supervisor mode).
- The system now has to switch between modes, but how should this switch be performed?
- Simply changing the status bit to supervisor mode would give all supervisor privileges to the user without any control over what the user actually does.
- Thus, the system should only perform a predefined set of operations in supervisor mode and then return to user mode before handing control back to the user.
- Let’s refer to this process as controlled invocation.
12
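The idea above can be sketched as a toy machine. This is a minimal, hypothetical model (class and method names are invented for illustration): direct writes from user mode are refused, and the only way into supervisor mode is a trap entry that performs one predefined operation and clears the status bit again before returning.

```python
# Toy model of controlled invocation (illustrative, not a real ISA):
# the only way into supervisor mode is a trap that runs a predefined
# operation and drops back to user mode before returning control.
class ToyCPU:
    def __init__(self):
        self.supervisor = False      # the status bit
        self.memory = {}

    def write_mem(self, addr, value):
        # the reference monitor check: writing requires supervisor mode
        if not self.supervisor:
            raise PermissionError("write denied in user mode")
        self.memory[addr] = value

    def trap_write(self, addr, value):
        # controlled invocation: enter supervisor mode, perform ONLY the
        # predefined operation, then clear the status bit again
        self.supervisor = True
        try:
            self.write_mem(addr, value)
        finally:
            self.supervisor = False   # always return to user mode

cpu = ToyCPU()
cpu.trap_write(0x10, 42)              # succeeds via controlled invocation
print(cpu.memory[0x10], cpu.supervisor)   # 42 False
try:
    cpu.write_mem(0x20, 7)            # direct write from user mode fails
except PermissionError as e:
    print(e)
```

Note the `finally` clause: even if the privileged operation fails, the machine returns to user mode, which is exactly the point of controlled invocation.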
Core Security Mechanisms
[Figure: system layers, top to bottom — applications, services, operating system, OS kernel, hardware.]
13
Why Mechanisms at the Core?
- For security evaluation at a higher level of assurance.
- Security mechanisms in a given layer can be compromised from a layer below.
- To evaluate security, you must check that security mechanisms cannot be bypassed.
- The more complex a system, the more difficult this check becomes. At the core of a system you may find simple structures which are amenable to thorough analysis.
14
Why Mechanisms at the Core?
- Putting security mechanisms into the core of the system can reduce performance overheads caused by security.
- Processor performance depends on the right choice and efficient implementation of a generic set of operations that is most useful to the majority of users. The same holds for security mechanisms.
- Note: some sources assume that TCBs and security kernels must enforce multi-level security policies.
15
Computer Architecture
- Simple schematic description:
  - central processing unit (CPU)
  - memory
  - bus connecting CPU and memory
  - input/output devices
[Figure: CPU, memory, and I/O devices connected by a bus.]
16
Core CPU Components
- Registers: general purpose registers and dedicated registers like:
  - program counter: points to the memory location containing the next instruction to be executed.
  - stack pointer: points to the top of the system stack.
  - status register: allows the CPU to keep essential state information.
- Arithmetic Logic Unit (ALU): executes instructions given in a machine language; executing an instruction may also set bits in the status register.
17
Memory Structures
Security characteristics of different types of memory:
- RAM (random access memory): read/write memory; no guarantee of integrity or confidentiality.
- ROM (read-only memory): provides integrity but not confidentiality; ROM may store (part of) the O/S.
- EPROM (erasable & programmable read-only memory): could store parts of the O/S or cryptographic keys; technologically more sophisticated attacks threaten security.
- WROM (write-once memory): memory contents are frozen once and for all, e.g. by blowing a fuse placed on the write line; WROM could hold cryptographic keys or audit logs.
18
Memory Structures
- Volatile memory loses its contents when power is switched off.
  - Memory contents are still present after a short power loss.
  - They can be reconstructed by special electronic techniques if power has been switched off for some time.
  - To counter such attacks, memory has to be overwritten repeatedly with suitable bit patterns.
- Non-volatile (permanent) memory keeps its content when power is switched off. If an attacker has direct access to memory bypassing the CPU, cryptographic or physical measures are needed to protect sensitive data.
  - E.g., a light sensor in a tamper-resistant module may detect an attempted manipulation and trigger the deletion of the data kept in the module.
19
Processes and Threads
- Process: a program in execution, consisting of executable code, data, and the execution context, e.g. the contents of certain CPU registers.
  - A process has its own address space and communicates with other processes only through O/S primitives.
  - Logical separation of processes serves as a basis for security.
  - A context switch between processes can be an expensive operation.
- Threads: strands of execution within a process. Threads share an address space to avoid the overheads of a full context switch, but they also avoid potential security controls.
- Processes and threads are important units of control for the O/S, and for security. They are the ‘subjects’ of access control.
20
Traps – Interrupts
- The CPU deals with interruptions of execution created by errors in the program, user requests, hardware failure, etc., through exceptions, interrupts, and traps. These terms refer to different types of events; we use trap as the generic term.
- A trap is a special input to the CPU that includes an address (interrupt vector) in an interrupt vector table giving the location of the program (interrupt handler) that deals with the condition specified in the trap.
- When a trap occurs, the CPU saves its current state on the stack and then executes the interrupt handler, taking control away from the user.
- The interrupt handler has to restore the CPU to a proper state, e.g. by clearing the supervisor status bit, before returning control to the user.
21
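The save–dispatch–restore cycle described above can be sketched in a few lines. This is a hypothetical model (the handler, vector numbers, and state fields are invented for illustration): the trap saves the CPU state, looks up the handler in the vector table, and the handler restores the saved state before returning.

```python
# Toy sketch of trap dispatch (names are illustrative, not a real ISA).
saved_states = []   # stands in for the system stack

def divide_error_handler(cpu):
    cpu["supervisor"] = True          # the handler runs privileged
    result = "handled divide error"
    # restore the CPU to a proper state (clearing the supervisor bit)
    # before control goes back to the user
    cpu.update(saved_states.pop())
    return result

interrupt_vector_table = {0: divide_error_handler}   # vector -> handler

def trap(cpu, vector):
    saved_states.append(dict(cpu))    # save current state "on the stack"
    handler = interrupt_vector_table[vector]
    return handler(cpu)

cpu = {"supervisor": False, "pc": 0x100}
print(trap(cpu, 0))         # handled divide error
print(cpu["supervisor"])    # False -- the saved state was restored
```

The security-relevant detail is the last step: if the handler forgot to restore the saved status bit, the user would keep supervisor privileges, which is exactly the failure discussed on the next slide.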
Interrupt Vectors
[Figure: a TRAP #n instruction selects an interrupt vector in the interrupt vector table; the vector points to the interrupt handler in memory.]
22
Interrupting Interrupts
- A further interrupt may arrive while the CPU deals with a current interrupt, so the CPU may have to interrupt the current interrupt handler.
- Improper handling of such a situation can cause security failures. Consider a system where a user can interrupt the execution of a program by typing CTRL-C, so that the CPU returns to the O/S prompt with the status bit of the current process. A user could then enter supervisor mode by interrupting the execution of an O/S call.
- The interrupt table is a particularly interesting point of attack and has to be protected adequately. Redirecting pointers is a very efficient way of compromising the integrity of the O/S.
23
Example: Intel 80x86
- Support for access control at machine language level, based on protection rings.
- A two-bit field in the status register gives four privilege levels; Unix and Windows 2000 use levels 0 (O/S) and 3 (user).
- Privilege levels can only be changed through POPF.
- Processes can only access objects in their ring or in outer rings; processes can invoke subroutines only within their ring; processes need gates to execute procedures in an inner ring.
- Information about system objects like memory segments, access control tables, or gates is stored in descriptors. The privilege level of an object is stored in the DPL field of its descriptor.
24
Intel 80x86 – Access Control
- Descriptors are held in a descriptor table and accessed via selectors.
- Selector: a 16-bit field containing the index of the object’s entry in the descriptor table and a requested privilege level (RPL) field; only the O/S has access to selectors.
- Current privilege level (CPL): the code segment register stores the selector of the current process; access control decisions can be made by comparing CPL (subject) and DPL (object).
[Figure: the selector’s INDEX field picks the object’s descriptor, with its DPL field, from the descriptor table; the selector also carries the RPL.]
25
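The CPL/DPL comparison reduces to a one-line check. A minimal sketch (function name is illustrative), using the 80x86 convention that ring 0 is the most privileged, so "subject at least as privileged as object" means CPL ≤ DPL:

```python
# Ring-based access check: a subject may access objects in its own ring
# or in outer rings. On the 80x86, lower ring number = more privileged,
# so the check is CPL <= DPL.
def access_allowed(cpl, dpl):
    return cpl <= dpl

print(access_allowed(cpl=0, dpl=3))   # O/S touching a user object: True
print(access_allowed(cpl=3, dpl=0))   # user touching an O/S object: False
```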
Intel 80x86: Controlled Invocation
- Gate: a system object pointing to a procedure, where the gate has a privilege level different from that of the procedure it points to.
- Gates allow execute-only access to procedures in an inner ring.
[Figure: an outer ring procedure calls an inner ring procedure through a gate.]
26
Intel 80x86: Controlled Invocation
- A subroutine call saves state information about the calling process and the return address on the stack.
  - Should this stack be in the inner ring? That violates the security policy forbidding writes to an inner ring.
  - Should this stack be in the outer ring? The return address could be manipulated from the outer ring.
- Therefore, part of the stack (how much is described in the gate’s descriptor) is copied to a more privileged stack segment.
27
A Loophole?
- When invoking a subroutine through a gate, the CPL changes to the level of the code the gate is pointing to; on returning from the subroutine, the CPL is restored to that of the calling process.
- The outer-ring process may ask the inner-ring procedure to copy an inner-ring object to the outer ring; this will not be prevented by any of the mechanisms presented so far, nor does it violate the stated security policy.
- Known as a luring attack, or as the confused deputy problem.
28
Remedy
- To take into account the level of the calling process, use the adjust requested privilege level (ARPL) instruction.
- This instruction changes the RPL fields of all selectors to the CPL of the calling process.
- The system then compares the RPL (in the selector) and the DPL (in the descriptor) of an object when making access control decisions.
29
Comparing RPL and DPL
[Figure: the selector’s INDEX field picks the object’s descriptor from the descriptor table; the selector’s RPL field is compared with the descriptor’s DPL field.]
30
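The remedy can be sketched as two small functions (names are illustrative, not real instructions): ARPL never lowers the RPL below the caller's CPL, and the access check then uses the less privileged of CPL and RPL (remember: a higher ring number means less privilege).

```python
# Sketch of the confused-deputy remedy on the 80x86 ring model.
def arpl(rpl, caller_cpl):
    # a selector can never claim more privilege than its original caller:
    # raise RPL to the caller's CPL if needed (larger number = less privilege)
    return max(rpl, caller_cpl)

def request_allowed(cpl, rpl, dpl):
    # the effective privilege of a request is the weaker of CPL and RPL
    effective = max(cpl, rpl)
    return effective <= dpl

# An outer-ring (CPL 3) process calls an inner-ring (CPL 0) procedure
# through a gate, passing a selector for a ring-0 object (DPL 0):
rpl = arpl(rpl=0, caller_cpl=3)                 # RPL adjusted to 3
print(request_allowed(cpl=0, rpl=rpl, dpl=0))   # False: deputy not confused
```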
Security Mechanisms in O/S
- The O/S manages access to data and resources; a multitasking O/S interleaves the execution of processes belonging to different users. It has to
  - separate user space from O/S space,
  - logically separate users,
  - restrict the memory objects a process can access.
- Logical separation of users at two levels:
  - file management deals with logical memory objects,
  - memory management deals with physical memory objects.
- For security, this distinction is important.
31
Segments and Pages
- Segmentation divides memory into logical units of variable lengths.
  + A division into logical units is a good basis for enforcing a security policy.
  − Units of variable length make memory management more difficult.
- Paging divides memory into pages of equal length.
  + Fixed-length units allow efficient memory management.
  − Paging is not a good basis for access control, as pages are not logical units. One page may contain objects requiring different protection. Page faults can create a covert channel.
32
A Covert Channel
- When a process accesses a logical object stored on more than one page, a page fault occurs whenever a new page is requested.
- A covert channel exists if page faults are observable.
- Consider a password scheme where the password entered is compared character by character with the reference password stored in memory. Access is denied the moment an incorrect match is found.
- If a password is stored across a page boundary, then observing a page fault indicates that the piece of the password on the first page has been guessed correctly. If the attacker can control where the password is stored on the page, password guessing becomes easy.
33
Exploiting the Covert Channel
[Figure: the attacker shifts the guessed password across the boundary between page 1 and page 2, one character per guess:
1st guess: P | ASSWORD
2nd guess: PA | SSWORD
3rd guess: PAS | SWORD
4th guess: PASS | WORD
…]
34
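The attack above can be simulated directly. This is a toy model (the secret, page layout, and function names are invented for illustration): the check stops at the first mismatch, the attacker observes only whether page 2 was touched, and a fault confirms that everything on page 1 was correct.

```python
# Toy simulation of the page-fault covert channel: placing a guess so
# that exactly k characters lie on page 1 reveals whether those k
# characters are right.
SECRET = "PASSWORD"        # illustrative reference password

def check(guess, on_page1):
    """Compare character by character; report whether page 2 was touched."""
    page2_touched = False
    for i, (g, s) in enumerate(zip(guess, SECRET)):
        if i >= on_page1:
            page2_touched = True      # observable page fault
        if g != s:
            return False, page2_touched
    return guess == SECRET, page2_touched

# Guess one character at a time: a page fault confirms the prefix.
recovered = ""
for pos in range(len(SECRET)):
    for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
        guess = recovered + c + "?" * (len(SECRET) - pos - 1)
        ok, faulted = check(guess, on_page1=pos + 1)
        if ok or faulted:             # page 2 touched => prefix correct
            recovered += c
            break
print(recovered)   # PASSWORD
```

Instead of 26^8 guesses, the attacker needs at most 26 guesses per character: the covert channel turns an exponential search into a linear one.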
Memory Protection
- The O/S controls access to data objects in memory.
- A data object is represented by a collection of bits stored in certain memory locations.
- Access to a logical object is ultimately translated into access operations at machine language level.
- Three options for controlling access to memory:
  - the operating system modifies the addresses it receives from user processes;
  - the operating system constructs the effective addresses from relative addresses it receives from user processes;
  - the operating system checks whether the addresses it receives from user processes are within given bounds.
35
Address Sandboxing (Modification)
Don’t memorize
- An address consists of a segment identifier and an offset. When the operating system receives an address, it sets the correct segment identifier as follows:
- Bitwise AND of the address with mask_1 clears the segment identifier; bitwise OR with mask_2 sets the segment identifier to the intended value SEG_ID.
[Figure: address = (seg_id, offset); mask_1 = (0…0, 1…1) is applied with bitwise AND, mask_2 = (SEG_ID, 0…0) with bitwise OR; the effective address is (SEG_ID, offset).]
36
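The masking scheme can be written out concretely. A minimal sketch assuming 16-bit addresses where the top 4 bits are the segment identifier and the low 12 bits the offset (the widths and the value of SEG_ID are illustrative):

```python
# Address sandboxing by masking: AND clears the segment identifier,
# OR forces it to the intended value.
SEG_SHIFT = 12
MASK_1 = 0x0FFF                  # 0...0 1...1 : clears the segment id
SEG_ID = 0x7                     # the segment the process is confined to
MASK_2 = SEG_ID << SEG_SHIFT     # SEG_ID 0...0 : sets the segment id

def sandbox(address):
    return (address & MASK_1) | MASK_2

addr = 0x3ABC                    # process asks for segment 3, offset 0xABC
print(hex(sandbox(addr)))        # 0x7abc -- same offset, forced into segment 7
```

Whatever segment the process names, the effective address always lands in segment SEG_ID, so no comparison or branch is needed on the access path.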
Relative Addressing
- Clever use of addressing modes can keep processes out of forbidden memory areas.
- Fence registers: base register addressing keeps users out of O/S space; the fence register points to the top of the user space.
- Bounds registers define the bottom of the user space. Base and bounds registers allow separating program space from data space.
[Figure: the offset is added to the base address; the fence register marks the boundary between user space and O/S space in memory.]
37
Function codes
- Motorola 68000 function codes indicate processor status so that the address decoder may select between user and supervisor memory or between data and programs.

FC2 FC1 FC0
 0   0   0   (undefined, reserved)
 0   0   1   user data
 0   1   0   user program
 0   1   1   (undefined, reserved)
 1   0   0   (undefined, reserved)
 1   0   1   supervisor data
 1   1   0   supervisor program
 1   1   1   interrupt acknowledge
38
General Lessons
- The ability to distinguish between data and programs is a useful security feature, providing a basis for protecting programs from modification.
- From a more abstract point of view, memory has been divided into different regions. Access control can then refer to the location a data object or program comes from.
- This can serve as a first example of location-based access control. Distributed systems or computer networks may use location-based access control at the level of network nodes.
39
Tagged Architectures
- Tagged architectures indicate the type of each memory object.
[Figure: each memory word carries a tag (e.g. INT, OP, STR, …) alongside its data.]
40
Summary
- Security policies can be enforced in any layer of a computer system.
- Mechanisms at lower layers tend to be more generic and are universally applied to all “applications” above, but might not quite match the requirements of the application.
- Mechanisms at upper layers are more application-specific, but applications have to be secured individually.
- This fundamental dilemma is a recurring theme in information security.
41
Bell-LaPadula Model
Why Security Models?
- When we have implemented a security policy, do we know that it will (and can) be enforced?
- E.g., if policies get too intricate, contradictory rules may apply to a given access request.
- To prove that a policy will be enforced, the policy has to be formally specified.
- A security model is a formal description of a security policy.
43
Security Models
- Models are used in high assurance security evaluations (smart cards are currently a fruitful area of application).
- Models are important historic milestones in computer security (e.g. Bell-LaPadula).
- The models presented today are not recipes for security but can be a starting point when you have to define a model yourself.
44
Agenda
- State machine models (short review)
- Bell-LaPadula (BLP) model
  - BLP security policies
  - BLP Basic Security Theorem
  - Tranquility
  - Covert channels
- Case study: Multics interpretation of BLP
45
State Machine Models
- State machines (automata) are abstract models that record relevant features, like the security of a computer system, in their state.
- States change at discrete points in time, e.g. triggered by a clock or an input event.
- State machine models have numerous applications in computer science: processor design, programming languages, or security.
46
Examples
- Switch:
  - two states, ‘on’ and ‘off’
  - one input, ‘press’
- Ticket machine:
  - state: ticket requested, money to be paid
  - inputs: ticket requests, coins
  - output: ticket, change
- Microprocessors:
  - state: register contents
  - inputs: machine instructions
47
Basic Security Theorems
- To design a secure system with the help of state machine models:
  - define the state set so that it captures ‘security’,
  - check that all state transitions starting in a ‘secure’ state yield a ‘secure’ state,
  - check that the initial state of the system is ‘secure’.
- Security is then preserved by all state transitions. The system will always be ‘secure’.
- This Basic Security Theorem has been derived without any definition of ‘security’!
48
Bell-LaPadula Model (BLP)
- State machine model developed in the 1970s for the analysis of MLS operating systems.
- Subjects and objects are labeled with security levels that form a partial ordering.
- The policy: no information flow from ‘high’ security levels down to ‘low’ security levels (confidentiality).
- Only considers information flows that occur when a subject observes or alters an object.
- Access permissions are defined through an access control matrix and security levels.
49
Constructing the State Set
1. All current access operations:
  - an access operation is described by a triple (s,o,a), s ∈ S, o ∈ O, a ∈ A
    E.g.: (Alice, fun.com, read)
  - the set of all current access operations is an element of P(S × O × A)
    E.g.: {(Alice, fun.com, read), (Bob, fun.com, write), …}
50
Constructing the State Set
2. Current assignment of security levels:
  - maximal security level: fS: S → L (L … labels)
  - current security level: fC: S → L
  - classification: fO: O → L
- The security level of a user is the user’s clearance.
- The current security level allows subjects to be downgraded temporarily (more later).
- F ⊆ LS × LS × LO is the set of security level assignments; f = (fS, fC, fO) denotes an element of F.
51
Constructing the State Set
3. Current permissions:
  - defined by the access control matrix M.
  - M is the set of access control matrices.
- The state set of BLP: V = B × M × F
  - B is our shorthand for P(S × O × A)
  - b denotes a set of current access operations
  - a state is denoted by (b,M,f)
52
BLP Policies
- Discretionary Security Property (ds-property): access must be permitted by the access control matrix: if (s,o,a) ∈ b, then a ∈ Mso.
- Simple Security Property (ss-property, no read-up): if (s,o,a) ∈ b and access is in observe mode, then fS(s) ≥ fO(o).
- The ss-property is a familiar policy for controlling access to classified paper documents.
53
On Subjects
- In the ss-property, subjects act as observers.
- In a computer system, subjects are processes and have no memory of their own.
- Subjects have access to memory objects.
- Subjects can act as channels by reading one memory object and transferring information to another memory object.
- In this way, data may be declassified improperly.
54
Subjects as Channels
[Figure: a subject observes a high object and alters a low object — an illegal information flow to a lower level.]
55
Star Property
- *-Property (star property, no write-down): if (s,o,a) ∈ b and access is in alter mode, then fC(s) ≤ fO(o); also, if subject s has access to object o in alter mode, then fO(o’) ≤ fO(o) for all objects o’ accessed by s in observe mode.
- The very first version of BLP did not have the *-property.
- Mandatory BLP policies: ss-property and *-property.
56
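The three BLP properties translate directly into code. A minimal checker for a state (b, M, f), assuming integer security levels (higher = more sensitive) and the access modes 'observe' and 'alter'; all function and variable names are illustrative:

```python
# Minimal BLP secure-state checker (levels are ints, higher = more sensitive).
def ds_ok(b, M):
    # ds-property: every current access is permitted by the matrix M
    return all(a in M.get((s, o), set()) for (s, o, a) in b)

def ss_ok(b, f_S, f_O):
    # ss-property (no read-up): clearance dominates the observed object
    return all(f_S[s] >= f_O[o] for (s, o, a) in b if a == "observe")

def star_ok(b, f_C, f_O):
    # *-property (no write-down): altered object dominates current level...
    if not all(f_C[s] <= f_O[o] for (s, o, a) in b if a == "alter"):
        return False
    # ...and dominates every object the same subject observes
    return all(f_O[o2] <= f_O[o1]
               for (s1, o1, a1) in b if a1 == "alter"
               for (s2, o2, a2) in b if a2 == "observe" and s1 == s2)

def secure(b, M, f_S, f_C, f_O):
    return ds_ok(b, M) and ss_ok(b, f_S, f_O) and star_ok(b, f_C, f_O)

f_S = {"alice": 1}; f_C = {"alice": 1}; f_O = {"hi.doc": 1, "lo.doc": 0}
M = {("alice", "hi.doc"): {"observe"}, ("alice", "lo.doc"): {"alter"}}
b = {("alice", "hi.doc", "observe"), ("alice", "lo.doc", "alter")}
print(secure(b, M, f_S, f_C, f_O))   # False: alice would act as a write-down channel
```

The example state is rejected by the *-property alone: every individual access is permitted by M, but together they would let alice move high information into a low object.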
Blocking the Channel
[Figure: the subject observes a high object and tries to alter a low object; the *-property blocks the alter access.]
57
No Write-Down
- The *-property prevents high-level subjects from sending legitimate messages to low-level subjects.
- Two ways to escape from this restriction:
  - Temporarily downgrade the high-level subject; hence the current security level fC; BLP subjects have no memory of their own!
  - Exempt trusted subjects from the *-property: redefine the *-property and demand it only for subjects that are not trusted.
58
Trusted Subjects
- Trusted subjects may violate security policies! Distinguish between trusted subjects and trustworthy subjects.
59
Basic Security Theorem
- A state is secure if all current access tuples (s,o,a) are permitted by the ss-, *-, and ds-properties.
- A state transition is secure if it goes from a secure state to a secure state.
- Basic Security Theorem: if the initial state of a system is secure and if all state transitions are secure, then the system will always be secure.
60
Basic Security Theorem
- This Basic Security Theorem has nothing to do with the BLP security policies, only with state machine modeling.
61
BLP & Security
- Construct a system with an operation downgrade that
  - downgrades all subjects and objects to system low, and
  - enters all access rights in all positions of the access control matrix.
- As a result, any state of this system is secure in the BLP model.
- Should such a system be regarded as secure?
  - McLean: no, everybody is allowed to do everything.
  - Bell: yes, if downgrade was part of the system specification.
62
Tranquility
- There are no BLP policies for changing access control data.
- BLP assumes tranquility: access control data do not change.
- Operational model: users get clearances and objects are classified following given rules.
- The system is set up to enforce MLS policies for the given clearances and classifications.
- Changes to clearances and classifications require external input.
63
Covert Channels
- Communication channels that allow transfer of information in a manner that violates the system’s security policy:
  - storage channels: e.g. through operating system messages, file names, etc.
  - timing channels: e.g. through monitoring system performance.
- Orange Book: 100 bits per second is ‘high’ bandwidth for storage channels; there is no upper limit on timing channels.
64
Covert Channels
- The bandwidth of some covert channels can be reduced by reducing the performance of the system.
- Covert channels are not detected by BLP modeling.
65
Applying BLP—Multics
- Multics was designed to be a secure, reliable, …, multi-user O/S.
- Multics became too cumbersome for some project members, who then created something much simpler, viz. Unix.
- The history of the two systems illustrates the relation between commercial success and the balance between usability and security.
- We will sketch how the Bell-LaPadula model can be used in the design of a secure O/S.
66
Multics Interpretation of BLP
- The inductive definition of security in BLP makes it relatively easy to check whether a system is secure.
- To show that Multics is secure, we have to find a description of the O/S which is consistent with BLP, and verify that all state transitions are secure.
67
Subjects
- Subjects in Multics are processes. Each subject has a descriptor segment containing information about the process.
- The security levels of subjects are kept in a process level table and a current-level table.
- The active segment table records all active processes; only active processes have access to an object.
68
Objects
- For each object the subject currently has access to, there is a segment descriptor word (SDW) in the subject’s descriptor segment.
- The SDW contains the name of the object, a pointer to the object, and flags for read, execute, and write access.
[Figure: an SDW holds a segment_id, a pointer, and the flags, e.g. r: on, e: off, w: on.]
69
Directories
- Objects are memory segments, I/O devices, …
- Objects are organized hierarchically in a directory tree; directories are again segments.
- Information about an object, like its security level or its access control list (ACL), is kept in the object’s parent directory.
- Changing an object’s access control parameters and creating or deleting an object require write or append access rights to the parent directory.
70
Compatibility
- To access an object, a process has to traverse the directory tree from the root directory to the target object.
- If any directory in this path is not accessible to the process, the target object is not accessible either.
- Compatibility: the security level of an object must always dominate the security level of its parent directory.
71
BLP State in Multics
- Current access b: stored in the SDWs in the descriptor segments of the active processes; the active processes are found in the active segment table. The descriptor segment base register (DSBR) points to the descriptor segment of the current process.
- Level function f: the security levels of the subjects are stored in the process level table and the current-level table; the security level of an object is stored in its parent directory.
- Access control matrix M: represented by the ACLs; for each object, the ACL is stored in its parent directory; each ACL entry specifies a process and the access rights the process has on that object.
72
[Figure: the DSBR points to the descriptor segment of the current process; an SDW (segment-id, pointer, w:off r:on e:off) points to the object; the current-level table gives the current process level LC, the parent directory gives the object level LO, and the check LC ≥ LO is made.]
73
MAC in Multics
- Multics access attributes for data segments, with translation to BLP access rights:
  - read → r
  - execute → e, r
  - read & write → w
  - write → a
74
The -property for Multics

For any SDW in the descriptor segment of an
active process, the current level of the
process



dominates the level of the segment if the read or
execute flags are on and the write flag is off,
is dominated by the level of the segment if the
read flag is off and the write flag is on,
is equal to the level of the segment if the read flag
is on and the write flag is on.
75
Kernel Primitives
- Kernel primitives are the input operations in Multics.
- Example: the get-read primitive requests read access to an object.
- It takes as its parameters a process-id and a segment-id.
- If the state transitions in an abstract model of the Multics kernel preserve the BLP security policies, then the BLP Basic Security Theorem proves the ‘security’ of Multics.
76
Conditions for get-read
- The O/S has to check whether
  - the ACL of segment-id, stored in the segment’s parent directory, lists process-id with read permission,
  - the security level of process-id dominates the security level of segment-id,
  - process-id is a trusted subject, or the current security level of process-id dominates the security level of segment-id.
- If all three conditions are met, access is permitted and an SDW in the descriptor segment of process-id is added/updated.
77
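The three get-read checks can be condensed into one predicate. A minimal sketch with integer levels (higher = more sensitive); the data structures and names are illustrative stand-ins for the Multics tables described above:

```python
# Sketch of the get-read conditions: ACL check, maximal-level check,
# and trusted-subject-or-current-level check.
def get_read(process, segment, acl, f_S, f_C, f_O, trusted):
    return ("read" in acl.get((process, segment), set())   # 1. ACL permits read
            and f_S[process] >= f_O[segment]               # 2. maximal level dominates
            and (process in trusted                        # 3. trusted subject, or
                 or f_C[process] >= f_O[segment]))         #    current level dominates

acl = {("p1", "seg1"): {"read"}}
f_S, f_C, f_O = {"p1": 2}, {"p1": 1}, {"seg1": 1}
print(get_read("p1", "seg1", acl, f_S, f_C, f_O, trusted=set()))   # True
```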
More Kernel Primitives
- release-read: release an object; the read flag in the corresponding SDW is turned off; if thereafter no flag is on, the SDW is removed from the descriptor segment.
- give-read: grant read access to another process (DAC).
- rescind-read: withdraw a read permission given to another process.
78
More Kernel Primitives
- create-object: create an object; the O/S has to check that write access on the object’s directory segment is permitted and that the security level of the segment dominates the security level of the process.
- change-subject-current-security-level: the O/S has to check that no security violations are created by the change.
- This kernel primitive, as well as the primitive change-object-security-level, was not intended for implementation (tranquility).
79
Aspects of BLP
- The descriptive capability of its state machine model: can be used for other properties, e.g. for integrity.
- Its access control structures, the access control matrix and security levels: can be replaced by other structures, e.g. by S × S × O to capture ‘delegation’.
- The actual security policies, the ss-, *-, and ds-properties: can be replaced by other policies (see the Biba model).
- A specific application of BLP, e.g. its Multics interpretation.
80
Limitations of BLP
- Restricted to confidentiality.
- No policies for changing access rights; a complete general downgrade is secure; BLP was intended for systems with static security levels.
- BLP contains covert channels: a low subject can detect the existence of high objects when it is denied access.
- Sometimes it is not sufficient to hide only the contents of objects; their existence may also have to be hidden.
81
Security Models—Introduction
- The Bell-LaPadula model was designed to capture a specific ‘military’ security policy.
- At one time it was treated as ‘the model of security’.
- However, security requirements depend on the application; many applications do not need multi-level security.
- We will now look at models for ‘commercial’ integrity policies.
- We will also examine some theoretical foundations of access control.
82
Agenda
- Biba model
- Chinese Wall model
- Clark-Wilson model
- Harrison-Ruzzo-Ullman model
  - Reminder: Turing machines & decidability, NP-completeness
- Information flow models
- Enforcement monitors
83
Biba Model
- Integrity policies prohibit the corruption of ‘clean’ high-level entities by ‘dirty’ low-level entities.
  - Clean and dirty are shorthand for high integrity and low integrity.
  - The concrete meaning of integrity levels is application dependent.
- Subjects and objects are labelled with elements from a lattice (L, ≤) of integrity levels by functions fS: S → L and fO: O → L.
- Information may only flow downwards in the integrity lattice; only information flows caused directly by access operations are considered.
- Biba model: a state machine model similar to BLP; there is no single high-level integrity policy.
84
Biba With Static Integrity Levels


Simple Integrity Property (no write-up): If
subject s can modify (alter) object o, then
fS(s) ≥ fO(o).
Integrity ⋆-Property: If subject s can read
(observe) object o, then s can have write
access to some other object o’ only if fO(o) ≥
fO(o’).

Invoke Property: A ‘dirty’ subject s1 must not
touch a ‘clean’ object indirectly by invoking
s2: Subject s1 can invoke subject s2 only if
fS(s1) ≥ fS(s2).
85
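The three static Biba rules can be sketched as predicate checks over integer integrity levels. This is a toy illustration: the subject/object names and level values are invented, and integers stand in for a general lattice (higher = cleaner).

```python
# Toy Biba check with integer integrity levels (higher = higher integrity).
# All names and level assignments are illustrative, not from the model.
f_S = {"installer": 2, "browser": 0}          # subject integrity levels
f_O = {"system_config": 2, "download": 0}     # object integrity levels

def can_modify(s, o):
    # Simple Integrity Property (no write-up): fS(s) >= fO(o)
    return f_S[s] >= f_O[o]

def can_write_after_read(o_read, o_write):
    # Integrity *-Property: having read o_read, write o_write
    # only if fO(o_read) >= fO(o_write)
    return f_O[o_read] >= f_O[o_write]

def can_invoke(s1, s2):
    # Invoke Property: fS(s1) >= fS(s2)
    return f_S[s1] >= f_S[s2]

assert can_modify("installer", "download")        # clean may write down
assert not can_modify("browser", "system_config") # dirty may not write up
```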
Biba: Dynamic Integrity Levels



Low watermark policies automatically adjust
levels (as in the Chinese Wall model):
Subject Low Watermark Policy: Subject s can
read (observe) an object o at any integrity
level. The new integrity level of s is
g.l.b.(fS(s),fO(o)).
Object Low Watermark Policy: Subject s can
modify (alter) an object o at any integrity
level. The new integrity level of o is
g.l.b.(fS(s),fO(o)).
86
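The subject low watermark policy can be sketched in a few lines, using min as the greatest lower bound on a totally ordered set of integer levels (a simplification of a general lattice; the object names and levels are made up):

```python
# Subject low watermark: reading drags the subject's level down to
# g.l.b.(fS(s), fO(o)); with totally ordered integer levels, g.l.b. is min.
f_O = {"log": 1, "untrusted_input": 0}   # illustrative object levels

class Subject:
    def __init__(self, level):
        self.level = level

    def read(self, obj):
        # observation is always permitted; the subject is demoted
        self.level = min(self.level, f_O[obj])

s = Subject(level=2)
s.read("log")               # level drops to 1
s.read("untrusted_input")   # level drops to 0: s is now 'dirty'
```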
Biba for Protection Rings



Ring Property: A ‘dirty’ subject s1 may invoke a ‘clean’
tool s2 to touch a ‘clean’ object:
Subject s1 can read objects at all integrity levels,
modify objects o with fS(s1) ≥ fO(o), and invoke a
subject s2 only if fS(s1) ≤ fS(s2).
The ring property is the opposite of the invoke
property!
Captures integrity protection in operating systems
based on protection rings.
87
Chinese Wall Model


In financial institutions analysts deal with a
number of clients and have to avoid conflicts
of interest.
Components:






subjects: analysts
objects: data item for a single client
company datasets: y: O → C gives for each object
its company dataset
conflict of interest classes: companies that are
competitors; x: O → P(C) gives for each object o
the companies with a conflict of interest on o
‘labels’: company dataset + conflict of interest
class
sanitized information: no access restrictions
88
Chinese Wall Model – Policies

Simple Security Property: Access is only
granted if the object requested



is in the same company dataset as an object
already accessed by that subject;
does not belong to any of the conflict of interest
classes of objects already accessed by that
subject.
Formally:


N = (Nso), s ∈ S, o ∈ O, a Boolean matrix; Nso = true if s
has accessed o;
ss-property: subject s gets access to object o only
if for all objects o’ with Nso’ = true, y(o) = y(o’) or
y(o) ∉ x(o’).
89
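The ss-property can be transcribed directly from the formal statement: track the set of objects a subject has accessed (its row of N) and grant a new access only if every previously accessed object is in the same company dataset or outside the conflict of interest class. The company names and objects below are invented for illustration.

```python
# Chinese Wall ss-property sketch. y maps objects to company datasets,
# x maps objects to their conflict of interest class (a set of datasets).
# Example data is hypothetical: OilA and OilB compete, the Bank does not.
y = {"oilA_report": "OilA", "oilB_report": "OilB", "bank_report": "Bank"}
x = {"oilA_report": {"OilA", "OilB"},
     "oilB_report": {"OilA", "OilB"},
     "bank_report": {"Bank"}}

accessed = set()  # objects this subject has accessed (the row N_s)

def ss_allows(o):
    # grant access only if for all o' already accessed:
    # y(o) == y(o')  or  y(o) not in x(o')
    return all(y[o] == y[o2] or y[o] not in x[o2] for o2 in accessed)

def access(o):
    if ss_allows(o):
        accessed.add(o)
        return True
    return False

assert access("oilA_report")       # first access is always allowed
assert access("bank_report")       # Bank is not in conflict with OilA
assert not access("oilB_report")   # blocked: OilB conflicts with OilA
```

Note how the analyst's rights shrink dynamically: before the first access any object was reachable, afterwards the whole OilB dataset is walled off.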
Chinese Wall: ⋆-Property



Indirect information flow: A and B are competitors having
accounts with the same Bank.
Analyst_A, dealing with A and the Bank, updates the Bank
portfolio with sensitive information about A.
Analyst_B, dealing with B and the Bank, now has access to
information about a competitor.
[Diagram: A and B are in the same conflict of interest class; Analyst_A reads A’s dataset and writes to the Bank dataset; Analyst_B reads the Bank dataset and writes to B’s dataset.]
90
Chinese Wall: ⋆-Property

⋆-Property: A subject s is permitted write
access to an object only if s has no read access
to any object o’ that is in a different
company dataset and is unsanitized.


subject s gets write access to object o only if s has
no read access to an object o’ with y(o) ≠ y(o’) and
x(o’) ≠ {}
Access rights of subjects change dynamically
with every access operation.
91
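A sketch of the ⋆-property check, with an explicit read set per analyst; sanitized objects are modelled by an empty conflict of interest class, so they never block a write. The datasets and object names are made up for illustration.

```python
# Chinese Wall *-property sketch: s may write o only if s has read no
# object o' from a different company dataset that is unsanitized.
# Data is hypothetical; an empty x(o') marks sanitized information.
y = {"a_data": "A", "bank_data": "Bank", "public": "Bank"}
x = {"a_data": {"A", "B"},   # unsanitized: non-empty conflict class
     "bank_data": {"Bank"},
     "public": set()}        # sanitized: empty conflict class

reads = {"a_data", "public"}  # objects this analyst has read

def star_allows_write(o):
    # blocked if some read o' has y(o') != y(o) and x(o') non-empty
    return not any(y[o2] != y[o] and x[o2] for o2 in reads)

assert star_allows_write("a_data")        # same dataset as the read
assert not star_allows_write("bank_data") # the a_data read blocks it
```

The second assertion is exactly the Analyst_A scenario: having read A's (unsanitized) data, writing to the Bank dataset would create an indirect flow, so it is refused.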
Chinese Wall: ⋆-Property
[Diagram: as on the previous slide, but the ⋆-property now blocks Analyst_A’s write to the Bank dataset (after reading A’s dataset) and Analyst_B’s write to B’s dataset (after reading the Bank dataset).]
92
Clark-Wilson Model


Addresses security requirements of
commercial applications. ‘Military’ and
‘commercial’ are shorthand for different ways
of using computers.
Emphasis on integrity



internal consistency: properties of the internal
state of a system
external consistency: relation of the internal state
of a system to the outside world.
Mechanisms for maintaining integrity: well-formed transactions & separation of duties.
93
Clark-Wilson: Access Control



Subjects & objects are ‘labeled’ with programs.
Programs serve as intermediate layer between
subjects and objects.
Access control:



define access operations (transformation procedures)
that can be performed on each data item (data
types).
define the access operations that can be performed
by subjects (roles).
Note the difference between a general purpose
operating system (BLP) and an application
oriented IT system (Clark-Wilson).
94
Access Control in CW
[Diagram: a user, after authentication and authorization, invokes a TP; the TP appends to a Log, operates on the CDIs it is certified for (CDIa, CDIb) under integrity and permission checks, and any UDI input must be validated.]
95
CW: Certification Rules

Five certification rules suggest how one should check
that the security policy is consistent with the
application requirements.





CR1: IVPs (initial verification procedures) must ensure that
all CDIs (constrained data items) are in a valid state when
the IVP is run.
CR2: TPs (transformation procedures) must be certified to be
valid, i.e. valid CDIs must always be transformed into valid
CDIs. Each TP is certified to access a specific set of CDIs.
CR3: Access rules must satisfy any separation of duties
requirements.
CR4: All TPs must write to an append-only log.
CR5: Any TP that takes a UDI (unconstrained data item) as
input must either convert the UDI into a CDI or reject the
UDI and perform no transformation at all.
96
CW: Enforcement Rules

Describe mechanisms within the computer
system that should enforce the security policy:




ER1: For each TP maintain and protect the list of
entries (CDIa,CDIb,...) giving the CDIs the TP is
certified to access.
ER2: For each user maintain and protect the list of
entries (TP1, TP2,...) specifying the TPs the user can
execute.
ER3: The system must authenticate each user
requesting to execute a TP.
ER4: Only subjects that may certify an access rule for a
TP may modify the respective list; this subject must
not have execute rights on that TP.
97
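Rules ER1–ER3 amount to two protected lookup tables consulted before a TP is allowed to run. A minimal sketch, with invented users, TPs, and CDIs (authentication is stubbed out as a set membership test):

```python
# Clark-Wilson enforcement sketch: a TP runs only for an authenticated
# user who is authorized for that TP, and only on its certified CDIs.
# All table contents below are hypothetical examples.
certified_cdis = {"post_entry": {"ledger", "journal"}}   # ER1
user_tps = {"alice": {"post_entry"}}                     # ER2
authenticated = {"alice"}                                # ER3 (stub)

def run_tp(user, tp, cdis):
    if user not in authenticated:
        raise PermissionError("user not authenticated")
    if tp not in user_tps.get(user, set()):
        raise PermissionError("user not authorized for this TP")
    if not set(cdis) <= certified_cdis.get(tp, set()):
        raise PermissionError("TP not certified for these CDIs")
    return "ok"  # here the real TP would transform the CDIs and log

assert run_tp("alice", "post_entry", ["ledger"]) == "ok"
```

ER4 would be enforced outside this function: whoever edits `certified_cdis` or `user_tps` must not also appear among the users allowed to execute the affected TP.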
Reminder



Is it better to have a very expressive policy
language or a simple language where it is
easier to check the impact of policy decisions?
In an expressive language, it is easier to
capture the intended policy (what we think we
want) but more difficult to verify that the
policy actually does what it is supposed to do.
In a simple language we may not be able to
capture our intentions precisely, but there are
efficient means to check what the policy does.
98
Access Control



Scenario: security policy given as access control
matrix; beyond read/write operations there are also
operations for changing access rights (discretionary
access control) and for creating new subjects and
objects.
Question: Given a policy, can we answer the question
“Will this particular principal ever be allowed to
access this resource?”
Access rights can change so we have to do more
than simply check the current access control matrix.
99
Harrison-Ruzzo-Ullman Model


Harrison-Ruzzo-Ullman model (HRU, 1976): defines
authorization systems where we can explore answers
to our question.
Components of the HRU model:
 set of subjects S
 set of objects O
 set of access rights R
access matrix M = (Mso), s ∈ S, o ∈ O: entry Mso is a
subset of R defining the rights subject s has on
object o
100
Primitive Operations in HRU


Six primitive operations for
manipulating subjects,
objects, and the access
matrix:
 enter r into Mso
 delete r from Mso
 create subject s
 delete subject s
 create object o
 delete object o
Commands in HRU model
(examples):
command create_file(s,f)
  create f
  enter o into Ms,f
  enter r into Ms,f
  enter w into Ms,f
end
command grant_read(s,p,f)
  if o in Ms,f
  then enter r in Mp,f
end
101
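The two example commands can be transcribed almost one-to-one; a sketch where the matrix is a dictionary from (subject, object) pairs to sets of rights. Here "o" is the owner right, as in the slide's commands, and the subject and file names are hypothetical.

```python
# HRU sketch: access matrix plus the two example commands.
M = {}  # (subject, object) -> set of rights

def enter(r, s, o):          # primitive: enter r into M[s,o]
    M.setdefault((s, o), set()).add(r)

def create_file(s, f):       # command create_file(s,f)
    M[(s, f)] = set()        # primitive: create object f
    enter("o", s, f)         # creator becomes owner ...
    enter("r", s, f)         # ... and may read
    enter("w", s, f)         # ... and write

def grant_read(s, p, f):     # command grant_read(s,p,f)
    if "o" in M.get((s, f), set()):   # condition: o in M[s,f]
        enter("r", p, f)              # then enter r into M[p,f]

create_file("alice", "report")
grant_read("alice", "bob", "report")
assert M[("bob", "report")] == {"r"}
```

Safety analysis asks whether any sequence of such commands can ever place a right into a matrix cell that should never hold it; for the unrestricted model that question is undecidable.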
HRU Policies



Policy management question: Is this subject
allowed to access this object?
The HRU access matrix describes the state of
the system; commands effect changes in the
access matrix.
HRU can model policies for allocating access
rights; to verify compliance with a given policy,
you have to check that no undesirable access
rights can be granted.
102
‘Leaking’ of Rights in HRU



An access matrix M is said to leak the right r
if there exists a command c that adds r into a
position of the access matrix that previously
did not contain r.
M is safe with respect to the right r if no
sequence of commands can transform M into
a state that leaks r.
Do not expect the meaning of ‘leak’ and
‘safe’ to match your own intuition.
103
Safety Properties of HRU




The safety problem cannot be tackled in its
full generality.
Theorem. Given an access matrix M, a right
r, and a set of commands, verifying the
safety of M with respect to r is undecidable.
There does not exist a general algorithm that
answers our policy question for all instances
of the HRU model.
For restricted models, the chances of success
are better.
104
Restricted Models



Mono-operational commands contain a single
operation:
Theorem. Given a mono-operational authorization
system, an access matrix M, and a right r, verifying
the safety of M with respect to r is decidable.
With two operations per command, the safety
problem is undecidable.
Limiting the size of the authorization system also
makes the safety problem tractable.
Theorem. The safety problem for arbitrary
authorization systems is decidable if the number of
subjects is finite.
105
HRU – Summary




Do not memorize the details of HRU.
Do not memorize the individual undecidability
theorems for the HRU variants.
Remember the important message: The more
expressive the security model, the more difficult it is
to verify security.
You don’t have to know the basics of computational
complexity theory for this course but it helps to
appreciate the challenges in formally verifying
security.
106
Information Flow Models



Similar framework as BLP: objects are labeled with
security classes (form a lattice), information may flow
upwards only.
Information flow described in terms of conditional
entropy (equivocation, from information theory).
Information flows from x to y if we learn something
about x by observing y:




explicit information flow: y:= x
implicit information flow: IF x=0 THEN y:=1
covert channels
Proving security is undecidable.
107
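The implicit flow on the slide can be made concrete: after the conditional runs, observing y reveals whether x was 0, even though x was never copied into y. A toy illustration:

```python
# Implicit information flow: y's final value depends on a branch on x,
# so a low observer of y learns something about the high variable x.
def leak(x):
    y = 0
    if x == 0:     # implicit flow: control flow depends on x
        y = 1      # no direct assignment y := x anywhere
    return y

assert leak(0) == 1
assert leak(7) == 0   # observing y distinguishes x == 0 from x != 0
```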
Non-interference Models



A group of users, using a certain set of
commands, is non-interfering with another
group of users if what the first group does
with those commands has no effect on what
the second group of users can see.
Take a state machine where low users only
see outputs relating to their own inputs. High
users are non-interfering with low users if the
low users see the same no matter whether the
high users had been providing inputs or not.
Active research area in formal methods.
108
Execution Monitors
Security Policies (again)




Three classes of security policies:
Access control: restricts what operations
principals can perform on objects.
Information flow: restricts what principals can
infer about objects from observing system
behaviour.
Availability: restrict principals from denying
others the use of a resource.
109
Execution Monitoring



The practicality of a security policy depends
on whether it is enforceable and at what cost.
Execution Monitoring (EM): enforcement
mechanisms that monitor execution steps of
a target system and terminate the target’s
execution if it is about to violate the security
policy being enforced.
EM includes security kernels, reference
monitors, firewalls, and most other operating
system enforcement mechanisms.
110
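An execution monitor can be sketched as a wrapper that watches each step of the target and halts it before the first step that would violate the policy. The step names and the example policy (no 'send' after a 'read_secret') are hypothetical:

```python
# Execution monitor sketch: terminate the target as soon as the next
# step would violate the policy (a safety property over prefixes).
def monitor(steps, violates):
    seen = []
    for step in steps:
        if violates(seen, step):
            return ("terminated", seen)   # halt before the bad step
        seen.append(step)
    return ("completed", seen)

def no_send_after_secret(seen, step):
    # policy: once a secret has been read, sending is forbidden
    return step == "send" and "read_secret" in seen

assert monitor(["read_secret", "compute", "send"],
               no_send_after_secret)[0] == "terminated"
assert monitor(["send", "compute"],
               no_send_after_secret)[0] == "completed"
```

Note that the decision at each step uses only the prefix seen so far; this is precisely the limitation discussed on the following slides.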
Beyond EM



Enforcement mechanisms that use more
information than is available from observing
the steps of a target’s execution
are outside EM.
Information provided to an EM mechanism is
insufficient to predict future steps the target
might take, alternative possible executions, or
all possible target executions.
Compilers and theorem-provers that analyze a
static representation of a target to deduce
information about all of its possible executions
are not EM mechanisms.
111
Beyond EM




Mechanisms that modify a target before
executing it are also outside EM.
The modified target must be equivalent to the
original, except for aborting executions that
violate the security policy of interest.
A definition of equivalence is thus required to
analyze this class of mechanisms.
In-line reference monitors and reflection
techniques fall in this category.
112
Executions & Properties





We are interested in the executions of a target
system.
Executions are sequences of steps, e.g. machine
instructions.
We use Ψ to denote the set of all executions, finite
and infinite, of our target system.
Let σ[..i] denote the first i steps of σ.
A set Γ of executions is called a property if
membership of an element is determined by the
element alone, not by other elements.
113
EM & Properties





A security policy must be a property to have
an enforcement mechanism in EM.
Not every security policy is a property.
Some security policies cannot be defined as a
predicate on individual executions.
Information flow policies: information flows
from “high” to “low” if a low user can
somehow detect actions by a high user.
We have to compare executions where the
high user is active with executions where the
high user is inactive.
114
EM & Properties




Not every property is EM enforceable.
Enforcement mechanisms in EM cannot look
into the future when making decisions on an
execution.
Consider an execution that reaches a state
that satisfies the security policy but goes
through “insecure” states.
An EM has to prohibit such an insecure prefix
of a secure execution.
115
Safety & Liveness




In the HRU model, we were talking about “safe”
access matrices.
In discussions about security you may find further
references to safety & liveness.
Safety properties: nothing bad can happen.
Liveness properties: something good will happen
eventually.
116
EM & Safety



Non EM-Enforceable Security Policies: If the
set of executions for a security policy is not a
safety property, then that policy does not
have an enforcement mechanism from EM.
EM enforcement mechanisms enforce security
policies that are safety properties.
It is not the case that all safety properties
have EM enforcement mechanisms.
117
Safety & Security Policies




Access control defines safety properties: partial
executions that end with attempting an unacceptable
operation will be prohibited.
Information flow does not define sets that are
properties; so information flow cannot be a safety
property and in turn cannot be enforced by EM.
Availability is not a safety property: any partial
execution can be extended in a way that allows a
principal to access the resource.
Availability defined in respect to a Maximum Waiting
Time (MWT) is a safety property; once an execution
has waited beyond MWT, any extension will also wait
beyond MWT.
118
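The maximum-waiting-time refinement fits the monitor style above: "a request has waited more than MWT steps" is checkable on a finite prefix, which is exactly what makes it a safety property. A toy sketch with an invented event encoding:

```python
# Availability with a Maximum Waiting Time is a safety property: a
# prefix in which a request has waited > MWT steps is already bad,
# and no extension of the trace can repair it.
MWT = 3  # illustrative bound, measured in trace steps

def violates_mwt(trace):
    waiting_since = None
    for i, step in enumerate(trace):
        if step == "request":
            waiting_since = i
        elif step == "grant":
            waiting_since = None          # request was served in time
        if waiting_since is not None and i - waiting_since > MWT:
            return True                   # bad prefix detected
    return False

assert not violates_mwt(["request", "tick", "grant"])
assert violates_mwt(["request", "tick", "tick", "tick", "tick"])
```

Plain availability, by contrast, cannot be checked this way: any finite prefix might still be extended by a grant, so no prefix is ever conclusively bad.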
The 3rd Design Principle





If you design complex systems that can only be
described by complex models, finding proofs of
security becomes difficult.
In the worst case (undecidability), no universal
algorithm exists that verifies security in all cases.
If you want verifiable security properties, you are
better off with a security model of limited complexity.
Such a model may not describe all desirable security
properties, but you may gain efficient methods for
verifying ‘security’.
In turn, you are advised to design simple systems
that can be adequately described in the simple model.
119
The more expressive a security model
is, both with respect to the security
properties and the systems it can
describe, the more difficult it is
usually to verify security properties.
120
Summary




The theoretical foundations for access control are
relevant in practice.
It helps to know which complexity class your policy
language and enforcement algorithm put you in.
Powerful description languages may leave you with
undecidable enforcement problems.
Much of current efforts on policy languages in ‘trust
management’ and web services access control
revolves around these issues.
121