Lecture 5 - The University of Texas at Dallas

Security Architecture and Design
Dr. Bhavani Thuraisingham
The University of Texas at Dallas (UTD)
June 2011
Domain Agenda
• System and Components Security
• Architectural Security Concepts and Models
• Information Systems Evaluation Models
Domain Agenda
• System and Components Security
– Architectural Concepts and Definitions
• Architectural Security Concepts and Models
• Information Systems Evaluation Models
Common Security Architecture Terms
• Information Security Management System
• Information Security Architecture
• Best Practice
• Architecture
• Blueprint
• Framework
• Infrastructure
Objectives of Enterprise Security Architecture
• Guidance
• Strategically aligned business and security decisions
• Provide security-related guidance
• Apply security best practices
• Define security zones
Benefits of an Enterprise Security Architecture
• Consistently manage risk
• Reduce the costs of managing risk
• Accurate security-related decisions
• Promote interoperability, integration and ease-of-access
• Provide a frame of reference
Characteristics of a Good Security Architecture
• Strategic
• Holistic
• Multiple implementations
Effects of Poor Architectural Planning
• Inability to efficiently support new business services
• Unidentified security vulnerabilities
• Increased frequency and visibility of security breaches
• Poorly understood or coordinated compliance requirements
• Poor understanding of security goals and objectives
Enterprise Security Architecture Components
• Common Architecture Language
• Architecture Model
• Zachman Framework
Zachman Framework
• Complete overview of IT business alignment
• Two-dimensional
• Intent
• Scope
• Principles
SABSA
• What are the business requirements?
– Contextual
– Conceptual
– Logical
– Physical
– Component
ISO 7498-2
• Part 2 of the OSI reference model: the security architecture
• About secure communications
• NOT an implementation
ISO/IEC 42010:2007
• Systems and software engineering
• Recommended practice for architectural description of software-intensive systems
The Open Group Architecture Framework
• Governance
• Business
• Application
• Data
• Technology
Department of Defense Architecture Framework
• OMB A-130 requirement
• All view
• Operational view
• Systems view
• Technical standards view
Which Framework is Right?
• Starting place
• Culture
• Template
System and Component Security
• Components that provide basic security services
• Hardware components
• Software components
CPU and Processor Privilege States
• Supervisor state
• Problem state
CPU Process States
• Running
• Ready
• Blocked
• Masked/interruptible
Common Computer Architecture Layers
• Application programs
• Utilities
• Operating system
• Computer hardware
Common Computer Architecture
• Program execution
• Access to input/output devices
• Controlled access to files and data
• Error detection and response
• Accounting and tracking
• Access for maintenance and troubleshooting
Hardware: Computers
• Mainframe
• Minicomputer
• Desktop / server
• Laptop / notebook
• Embedded
Hardware: Communication Devices
• Modem
• Network Interface Card (NIC)
Hardware: Printers
• Network-aware
• More than output device
• Full operating systems
Hardware: Wireless
• Network interface card
• Access point
• Ethernet bridge
• Router
• Range extender
Input/Output (I/O) Devices
• I/O Controller
• Managing memory
• Hardware
• Operating system
Firmware: Pre-programmed Chips
• ROMs (Read-only memory)
• PROMs (Programmable read-only memory)
• EPROMs (Erasable, programmable, read-only memory)
• EEPROMs (Electrically erasable, programmable, read-only memory)
• Field Programmable Gate Arrays (FPGAs)
• Flash chips
Software: Operating System
• Hardware control
• Hardware abstraction
• Resource manager
CPU and OS Support for Applications
• Applications were originally self-contained
• OS capable of accommodating more than one application at a time
CPU and OS Support for Applications Today
• Today’s applications are portable
• Execute multiple process threads
• Threads (a short sketch follows)
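As a small illustration, the following Python sketch (thread count and names are illustrative, not from the lecture) shows one process running several threads that all share that process's address space:

    import os
    import threading

    def worker(n: int) -> None:
        # Every thread prints the same PID: all of them live inside
        # the address space of one process.
        print(f"thread {n} running in process {os.getpid()}")

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()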
Operating Systems Support for Applications
• Multi-tasking
• Multi-programming
• Multi-processing
• Multi-processor
• Multi-core
Software: Vendor
• Commercial off the shelf (COTS)
• Function first
• Evaluation
Software: Custom
• Minimal scripting
• Business application
• System life cycle
Software: Customer-relationship Management Systems
• Business to customer interactions
• Tracking habits
Systems Architecture Approaches
• Open
• Closed
• Dedicated
• Single level
• Multi-level
• Embedded
Architectures: Middleware
• Interoperability
• Post implementation
• Distributed
Types of System Memory Resources
• CPU registers
• Cache
• Main memory
• Swap space
• Disk storage
Requirements for Memory Management
• Relocation
• Protection
• Sharing
Three Types of Memory Addressing
• Logical
• Relative
• Physical
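To make the "relative" mode concrete, here is a minimal Python sketch, using hypothetical base and limit register values, of how a relative address issued by a program is relocated to a physical address while the limit check provides protection:

    # Hypothetical register contents for one loaded process.
    BASE_REGISTER = 0x4000   # where the process was placed in memory
    LIMIT_REGISTER = 0x1000  # size of the process's partition

    def translate(relative_addr: int) -> int:
        """Relocate a relative address; fault if it escapes the partition."""
        if not 0 <= relative_addr < LIMIT_REGISTER:
            raise MemoryError("protection fault: address outside partition")
        return BASE_REGISTER + relative_addr

    print(hex(translate(0x0042)))  # -> 0x4042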
Memory Protection Benefits
• Memory reference
• Different data classes
• Users can share access
• Users cannot generate addresses
Virtual Memory
• Extends apparent memory
• Paging includes
– Splitting physical memory
– Splitting programs (processes)
– Allocating the required number of page frames
• Swapping (a paging sketch follows)
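The paging steps above can be sketched in a few lines of Python; the page size and page-table contents here are hypothetical:

    PAGE_SIZE = 4096
    page_table = {0: 7, 1: 3, 2: 9}  # page number -> frame number

    def translate(logical_addr: int) -> int:
        """Split a logical address into (page, offset) and map the page
        to its physical frame; a missing entry models a page fault that
        would trigger swapping."""
        page, offset = divmod(logical_addr, PAGE_SIZE)
        if page not in page_table:
            raise LookupError(f"page fault: page {page} must be swapped in")
        return page_table[page] * PAGE_SIZE + offset

    print(translate(5000))  # page 1, offset 904 -> frame 3 -> 13192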
Virtual Machines
• Mimic the architecture of the actual system
• Provided by the operating system
Domain Agenda
• System and Components Security
• Architectural Security Concepts and Models
• Information Systems Evaluation Models
Ring Protection
0. O/S Kernel
1. I/O
2. Utilities
3. User Apps
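A minimal Python sketch of this ring model follows; the access rule (a caller may only reach rings no more privileged than its own) is the standard one, but the function and table names are illustrative:

    RINGS = {0: "O/S Kernel", 1: "I/O", 2: "Utilities", 3: "User Apps"}

    def may_access(caller_ring: int, target_ring: int) -> bool:
        """Lower ring number = more privilege; outer rings cannot
        reach inward directly (they must go through controlled gates)."""
        return caller_ring <= target_ring

    print(may_access(3, 0))  # False: a user app cannot touch the kernel
    print(may_access(0, 3))  # True: the kernel can service any ring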
Layering and Data Hiding
• Layering
• Data Hiding
Privilege Levels
• Identifying, authenticating and authorizing subjects
• Subjects of higher trust
• Subjects with lower trust
Process Isolation
• Object's integrity
• Prevents interaction
• Independent states
• Process isolation method
Security Architecture
• Security critical components of the system
• Trusted Computing Base
• Reference Monitor and Security Kernel
• Security Perimeter
• Security Policy
• Least Privilege
Trusted Computing Base (TCB)
• Trusted Computing Base
– Hardware
– Firmware
– Software
– Processes
– Inter-process communications
• Simple and Testable
Trusted Computing Base (TCB)
• Enforces security policy
• Monitors four basic functions
– Process activation
– Execution domain switching
– Memory protection
– Input/output operations
Trusted Computing Base (TCB)
• The trusted computing base (TCB) of a computer system is the set of all
hardware, firmware, and/or software components that are critical to its
security, in the sense that bugs or vulnerabilities occurring inside the TCB
might jeopardize the security properties of the entire system. By contrast,
parts of a computer system outside the TCB must not be able to misbehave in
a way that would leak any more privileges than are granted to them in
accordance with the security policy.
• The careful design and implementation of a system's trusted computing base
is paramount to its overall security. Modern operating systems strive to reduce
the size of the TCB so that an exhaustive examination of its code base (by
means of manual or computer-assisted software audit or program verification)
becomes feasible.
Reference Monitor Concept
• Abstract machine concept
– Must be tamper-proof
– Always invoked
– Verifiable
• Security kernel
• Subject
• Object
Reference Monitor and Security Kernel
• In operating systems architecture, a reference monitor is a tamperproof,
always-invoked, and small-enough-to-be-fully-tested-and-analyzed module
that controls all software access to data objects or devices (verifiable).
• The reference monitor verifies that the request is allowed by the access
control policy.
• For example, Windows 3.x and 9x operating systems were not built with a
reference monitor, whereas the Windows NT line, which also includes
Windows 2000 and Windows XP, was designed to contain a reference monitor,
although it is not clear that its properties (tamperproof, etc.) have ever been
independently verified, or what level of computer security it was intended to
provide.
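As a toy illustration of the concept (not how Windows implements it), the Python sketch below funnels every subject-to-object request through one small, always-invoked check against a hypothetical policy table:

    POLICY = {
        ("alice", "payroll.db"): {"read"},
        ("bob", "payroll.db"): {"read", "write"},
    }

    def reference_monitor(subject: str, obj: str, mode: str) -> None:
        """The single choke point: permit only what the policy allows."""
        if mode not in POLICY.get((subject, obj), set()):
            raise PermissionError(f"{subject} may not {mode} {obj}")

    reference_monitor("bob", "payroll.db", "write")     # permitted
    # reference_monitor("alice", "payroll.db", "write") # raises PermissionError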
Domain Agenda
• System and Components Security
• Architectural Security Concepts and Models
– Security Models
• Information Systems Evaluation Models
Bell-LaPadula Confidentiality Model
• Hierarchical state machine model
• Three fundamental modes
• Secure state
• Defines access rules
Bell and LaPadula
• A system state is defined to be "secure" if the only permitted access modes of
subjects to objects are in accordance with a security policy. To determine whether a
specific access mode is allowed, the clearance of a subject is compared to the
classification of the object (more precisely, to the combination of classification and
set of compartments, making up the security level) to determine if the subject is
authorized for the specific access mode. The clearance/classification scheme is
expressed in terms of a lattice. The model defines two mandatory access control
(MAC) rules and one discretionary access control (DAC) rule with three security
properties:
– The Simple Security Property - a subject at a given security level may not read an
object at a higher security level (no read-up).
– The *-property (read "star"-property) - a subject at a given security level must not
write to any object at a lower security level (no write-down). The *-property is also
known as the Confinement property.
– The Discretionary Security Property - use of an access matrix to specify the
discretionary access control.
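The two mandatory rules fit in a few lines of code. Below is a hedged Python sketch over a simple linear lattice; compartments are omitted and the level names are illustrative:

    LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "TopSecret": 3}

    def blp_read_ok(subject: str, obj: str) -> bool:
        """Simple Security Property: no read-up."""
        return LEVELS[subject] >= LEVELS[obj]

    def blp_write_ok(subject: str, obj: str) -> bool:
        """*-property (confinement): no write-down."""
        return LEVELS[subject] <= LEVELS[obj]

    print(blp_read_ok("Secret", "TopSecret"))      # False: read-up denied
    print(blp_write_ok("Secret", "Confidential"))  # False: write-down denied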
Biba
• In general, preservation of data integrity has three goals:
– Prevent data modification by unauthorized parties
– Prevent unauthorized data modification by authorized parties
– Maintain internal and external consistency (i.e. data reflects the real world)
• Biba security model is directed toward data integrity (rather than
confidentiality) and is characterized by the phrase: "no read down, no write up".
This is in contrast to the Bell-LaPadula model which is characterized by the
phrase "no write down, no read up".
• The Biba model defines a set of security rules similar to the Bell-LaPadula
model. These rules are the reverse of the Bell-LaPadula rules:
– The Simple Integrity Axiom states that a subject at a given level of integrity must not
read an object at a lower integrity level (no read down).
– The * (star) Integrity Axiom states that a subject at a given level of integrity must not
write to any object at a higher level of integrity (no write up).
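A companion Python sketch shows the Biba axioms as the duals of the Bell-LaPadula checks above (integrity level names are illustrative):

    INTEGRITY = {"Low": 0, "Medium": 1, "High": 2}

    def biba_read_ok(subject: str, obj: str) -> bool:
        """Simple Integrity Axiom: no read-down."""
        return INTEGRITY[subject] <= INTEGRITY[obj]

    def biba_write_ok(subject: str, obj: str) -> bool:
        """* Integrity Axiom: no write-up."""
        return INTEGRITY[subject] >= INTEGRITY[obj]

    print(biba_read_ok("High", "Low"))   # False: low-integrity data is not read
    print(biba_write_ok("Low", "High"))  # False: high-integrity data is protected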
Clark Wilson Model
• The Clark-Wilson integrity model provides a foundation for specifying and
analyzing an integrity policy for a computing system.
• The model is primarily concerned with formalizing the notion of information
integrity.
• Information integrity is maintained by preventing corruption of data items in a
system due to either error or malicious intent.
• An integrity policy describes how the data items in the system should be kept
valid from one state of the system to the next and specifies the capabilities of
various principals in the system.
• The model defines enforcement rules and certification rules.
• The model’s enforcement and certification rules define data items and
processes that provide the basis for an integrity policy. The core of the model
is based on the notion of a transaction.
Clark Wilson Model
• A well-formed transaction is a series of operations that transition a system from
one consistent state to another consistent state.
• In this model the integrity policy addresses the integrity of the transactions.
• The principle of separation of duty requires that the certifier of a transaction
and the implementer be different entities.
• The model contains a number of basic constructs that represent both data items
and processes that operate on those data items. The key data type in the Clark-Wilson model is a Constrained Data Item (CDI). An Integrity Verification
Procedure (IVP) ensures that all CDIs in the system are valid at a certain state.
Transactions that enforce the integrity policy are represented by Transformation
Procedures (TPs). A TP takes as input a CDI or Unconstrained Data Item (UDI)
and produces a CDI. A TP must transition the system from one valid state to
another valid state. UDIs represent system input (such as that provided by a
user or adversary). A TP must guarantee (via certification) that it transforms all
possible values of a UDI to a “safe” CDI.
Clark Wilson Model
• At the heart of the model is the notion of a relationship between an
authenticated principal (i.e., user) and a set of programs (i.e., TPs) that operate
on a set of data items (e.g., UDIs and CDIs). The components of such a relation,
taken together, are referred to as a Clark-Wilson triple. The model must also
ensure that different entities are responsible for manipulating the relationships
between principals, transactions, and data items. As a short example, a user
capable of certifying or creating a relation should not be able to execute the
programs specified in that relation.
• The model consists of two sets of rules: Certification Rules (C) and Enforcement
Rules (E). The nine rules ensure the external and internal integrity of the data
items. To paraphrase these:
• C1—When an IVP is executed, it must ensure the CDIs are valid. C2—For some
associated set of CDIs, a TP must transform those CDIs from one valid state to
another. Since we must make sure that these TPs are certified to operate on a
particular CDI, we must have E1 and E2.
Clark Wilson Model
• E1—System must maintain a list of certified relations and ensure only TPs certified to run on a CDI change that CDI. E2—System must associate a user with each TP and set of CDIs. The TP may access the CDI on behalf of the user if it is "legal." This requires keeping track of triples (user, TP, {CDIs}) called "allowed relations."
• C3—Allowed relations must meet the requirements of "separation of duty." We need authentication to keep track of this.
• E3—System must authenticate every user attempting a TP. Note that this is per TP request, not per login. For security purposes, a log should be kept.
• C4—All TPs must append to a log enough information to reconstruct the operation. When information enters the system it need not be trusted or constrained (i.e. can be a UDI). We must deal with this appropriately.
• C5—Any TP that takes a UDI as input may only perform valid transactions for all possible values of the UDI. The TP will either accept (convert to CDI) or reject the UDI. Finally, to prevent people from gaining access by changing qualifications of a TP:
• E4—Only the certifier of a TP may change the list of entities associated with that TP.
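A toy Python sketch of rules E1 and E2, using hypothetical user, TP and CDI names: the system keeps a set of certified (user, TP, {CDIs}) triples and rejects any request that does not match one.

    ALLOWED_RELATIONS = {
        ("clerk", "post_payment", frozenset({"ledger", "accounts"})),
    }

    def run_tp(user: str, tp: str, cdis: frozenset) -> None:
        """E1/E2: only a certified (user, TP, {CDIs}) triple may proceed."""
        if (user, tp, cdis) not in ALLOWED_RELATIONS:
            raise PermissionError("no certified relation for this triple")
        # E3/C4 would additionally authenticate the user and append to a log.
        print(f"{user} runs {tp} on {sorted(cdis)}")

    run_tp("clerk", "post_payment", frozenset({"ledger", "accounts"}))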
Other Fundamental Models
• Information flow model
• Non-interference model
• State machine
• Lattice-based model
• Graham-Denning
• Harrison-Ruzzo-Ullman result
Domain Agenda
• System and Components Security
• Architectural Security Concepts and Models
• Information Systems Evaluation Models
Evaluation Roles
• Buyer/Customer
• Seller/Vendor
• Lab/Certifier
Documents & Organizations
• TCSEC (U.S. DoD)
• ITSEC (European Union)
• Common Criteria (ISO Standard 15408)
TCSEC or Orange Book
• DoD-centric
• Security and functionality
• Product evaluation
Secure System Evaluation: TCSEC
• Trusted Computer System Evaluation Criteria (TCSEC) is a United States
Government Department of Defense (DoD) standard that sets basic
requirements for assessing the effectiveness of computer security controls built
into a computer system. The TCSEC was used to evaluate, classify and select
computer systems being considered for the processing, storage and retrieval of
sensitive or classified information.
• The TCSEC, frequently referred to as the Orange Book, is the centerpiece of the
DoD Rainbow Series publications. It was initially issued in 1983 by the National
Computer Security Center (NCSC), an arm of the National Security Agency, and
then updated in 1985.
• TCSEC was replaced by the Common Criteria international standard, originally
published in 2005.
Secure System Evaluation: TCSEC
• Policy - The security policy must be explicit, well-defined and enforced by the
computer system. There are two basic security policies:
• Mandatory Security Policy - Enforces access control rules based directly on an
individual's clearance, authorization for the information and the confidentiality
level of the information being sought. Other indirect factors are physical and
environmental. This policy must also accurately reflect the laws, general policies
and other relevant guidance from which the rules are derived.
– Marking - Systems designed to enforce a mandatory security policy must
store and preserve the integrity of access control labels and retain the labels
if the object is exported.
• Discretionary Security Policy - Enforces a consistent set of rules for controlling
and limiting access based on identified individuals who have been determined
to have a need-to-know for the information.
Secure System Evaluation: TCSEC
• Accountability - Individual accountability regardless of policy must be enforced.
A secure means must exist to ensure the access of an authorized and competent
agent which can then evaluate the accountability information within a
reasonable amount of time and without undue difficulty. There are three
requirements under the accountability objective:
– Identification - The process used to recognize an individual user.
– Authentication - The verification of an individual user's authorization to specific
categories of information.
– Auditing - Audit information must be selectively kept and protected so that actions
affecting security can be traced to the authenticated individual.
• The TCSEC defines four divisions: D, C, B and A where division A has the highest
security. Each division represents a significant difference in the trust an
individual or organization can place on the evaluated system. Additionally,
divisions C, B and A are broken into a series of hierarchical subdivisions called
classes: C1, C2, B1, B2, B3 and A1.
Secure System Evaluation: TCSEC
• Assurance: The computer system must contain hardware/software mechanisms
that can be independently evaluated to provide sufficient assurance that the
system enforces the above requirements. By extension, assurance must include
a guarantee that the trusted portion of the system works only as intended. To
accomplish these objectives, two types of assurance are needed with their
respective elements:
– Assurance Mechanisms (Operational Assurance): System Architecture, System
Integrity, Covert Channel Analysis, Trusted Facility Management and Trusted
Recovery
– Life-cycle Assurance: Security Testing, Design Specification and Verification,
Configuration Management and Trusted System Distribution
ITSEC
• International origin
• ITSEM
• Functionality
• Assurance
Secure System Evaluation: ITSEC
• The Information Technology Security Evaluation Criteria (ITSEC) is a structured
set of criteria for evaluating computer security within products and systems.
The ITSEC was first published in May 1990 in France, Germany, the Netherlands,
and the United Kingdom based on existing work in their respective countries.
Following extensive international review, Version 1.2 was subsequently
published in June 1991 by the Commission of the European Communities for
operational use within evaluation and certification schemes.
• Levels E0 – E6
Common Criteria
• Origins
• ISO
• Documents
• EAL 1-7
• PP (Protection Profile)
• TOE (Target of Evaluation)
• ST (Security Target)
Secure System Evaluation: Common Criteria
• The Common Criteria for Information Technology Security Evaluation
(abbreviated as Common Criteria or CC) is an international standard (ISO/IEC
15408) for computer security certification.
• Common Criteria is a framework in which computer system users can specify
their security functional and assurance requirements, vendors can then
implement and/or make claims about the security attributes of their products,
and testing laboratories can evaluate the products to determine if they
actually meet the claims. In other words, Common Criteria provides assurance
that the process of specification, implementation and evaluation of a
computer security product has been conducted in a rigorous and standard
manner.
• Levels: EAL 1 – EAL 7 (Evaluation Assurance Levels)
EAL = $
In natural language:
1. A user has only one password and is assigned only one role.
2. A valid password is a case-sensitive string of six to eight single-byte alphanumeric characters.
3. A user must set up a password at first-time login.
4. The system allows the authenticated user to change his password.
5. The new password cannot be the same as the old one.
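As an illustration only, here is a minimal Python sketch of rules 2, 4 and 5; the function names and error handling are assumptions, not part of any evaluated product:

    def valid_password(pw: str) -> bool:
        """Rule 2: a case-sensitive string of 6-8 single-byte
        alphanumeric characters."""
        return 6 <= len(pw) <= 8 and pw.isascii() and pw.isalnum()

    def change_password(old_pw: str, new_pw: str) -> None:
        """Rules 4 and 5: the authenticated user may change the password,
        but not to the same value as before."""
        if not valid_password(new_pw):
            raise ValueError("password must be 6-8 alphanumeric characters")
        if new_pw == old_pw:
            raise ValueError("new password must differ from the old one")

    change_password("abc123", "XYZ98765")  # ok: valid format and different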
Comparison of Evaluation Levels

Common Criteria   US TCSEC                                 European ITSEC
--                D: Minimal Protection                    E0
EAL 1             --                                       --
EAL 2             C1: Discretionary Security Protection    E1
EAL 3             C2: Controlled Access Protection         E2
EAL 4             B1: Labeled Security Protection          E3
EAL 5             B2: Structured Protection                E4
EAL 6             B3: Security Domains                     E5
EAL 7             A1: Verified Design                      E6
Certification and Accreditation
• Certification and Accreditation (C&A) is a process for implementing
information security. It is a systematic procedure for evaluating, describing,
testing and authorizing systems prior to or after a system is in operation.
• Certification is a comprehensive assessment of the management, operational,
and technical security controls in an information system, made in support of
security accreditation, to determine the extent to which the controls are
implemented correctly, operating as intended, and producing the desired
outcome with respect to meeting the security requirements for the system.
• Accreditation is the official management decision given by a senior agency
official to authorize operation of an information system and to explicitly
accept the risk to agency operations (including mission, functions, image, or
reputation), agency assets, or individuals, based on the implementation of an
agreed-upon set of security controls.
Popular Management Frameworks
• ISO 27001
• ITIL
• COSO
• CMMI
ISO 27001 – Stages in Implementing an ISMS
1. Define information security policy
2. Define scope of ISMS
3. Perform risk assessment
4. Manage risks
5. Select controls
6. Prepare statement of applicability
IT Infrastructure Library (ITIL)
• Focuses on IT services
• Seven main sections
• Supporting products
Committee of Sponsoring Organizations
• Emphasizes the importance of identifying and managing risks
– Process
– People
– Reasonable assurance
– Objectives
Capability Maturity Model
• Developed by SEI
• Based on TQM concepts
• Framework for improving process
• Benefits
Open vs. Closed System
• Open systems allow users to reuse, edit, manipulate, and contribute to the system development
– Open source software is an example of open systems
• Licensed to the public
– Freeware is also an example of open systems
• Closed systems permit users to use the system as it is
Some Security Threats
• Buffer Overflow
• Maintenance Hooks
• Time of check / Time of use attacks
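The last item lends itself to a short illustration. In the minimal Python sketch below (the file path is hypothetical), the gap between the access check and the open is the race window an attacker can exploit, for example by swapping the file for a symbolic link:

    import os

    path = "/tmp/report.txt"

    # Vulnerable pattern: time-of-check ...
    if os.access(path, os.R_OK):
        # ... race window: the file can be replaced here ...
        with open(path) as f:   # ... time-of-use
            data = f.read()

    # Safer pattern: skip the separate check and handle failure at use time.
    try:
        with open(path) as f:
            data = f.read()
    except OSError:
        data = None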