Operating System Security
CS 136
Computer Security
Peter Reiher
October 18, 2011
Outline
• What does the OS protect?
• Authentication for operating systems
• Memory protection
– Buffer overflows
Introduction
• Operating systems provide the lowest layer
of software visible to users
• Operating systems are close to the hardware
– Often have complete hardware access
• If the operating system isn’t protected, the
machine isn’t protected
• Flaws in the OS generally compromise all
security at higher levels
Why Is OS Security So Important?
• The OS controls access to application
memory
• The OS controls scheduling of the processor
• The OS ensures that users receive the
resources they ask for
• If the OS isn’t doing these things securely,
practically anything can go wrong
• So almost all other security systems must
assume a secure OS at the bottom
Single User Vs. Multiple User
Machines
• Most of today’s computers support a
single user
• Some computers are still multi-user
– Often specialized servers
• Single user machines often run multiple
processes, though
– Often through downloaded code
• Increasing numbers of embedded machines
– Effectively no (human) user
Trusted Computing
• Since OS security is vital, how can we
be sure our OS is secure?
• Partly a question of building in good
security mechanisms
• But also a question of making sure
you’re running the right OS
– And it’s unaltered
• That’s called trusted computing
Booting Issues
• A vital element of trusted computing
• The OS usually isn’t present in
memory when the system powers up
– And isn’t initialized
• Something has to get that done
• That’s the bootstrap program
• Security is a concern here
The Bootstrap Process
• Bootstrap program is usually very
short
• Located in easily defined place
• Hardware finds it, loads it, runs it
• Bootstrap then takes care of initializing
the OS
Security and Bootstrapping
• Most machine security relies on OS being
trustworthy
• That implies you must run the OS you think
you run
• The bootstrap loader determines which OS
to run
• If it’s corrupted, you’re screwed
• Bootkit attacks (e.g., the Evil Maid attack)
Practicalities of Bootstrap
Security
• Most systems make it hard to change
bootstrap loader
– But must have enough flexibility to load
different OSes
– From different places on machine
• Malware likes to corrupt the bootstrap
• Trusted computing platforms can help
secure bootstrapping
TPM and Bootstrap Security
• Trusted Platform Module (TPM)*
– Special hardware designed to
improve OS security
• Proves OS was booted with a particular
bootstrap loader
– Using tamperproof HW and
cryptographic techniques
*Confusingly, “TPM” also refers to “technical protection mechanisms,” i.e., technical copyright protections
TPM and the OS Itself
• Once the bootstrap loader is operating,
it uses TPM to check the OS
• Essentially, ensures that expected OS
was what got booted
• If expected OS is trusted, then your
system is “secure”
– Or, at least, “trusted”
TPM and Applications
• The TPM can be asked by the OS to
check applications
– Again, ensuring they are of a certain
version
• TPM can produce remotely verifiable
attestations of applications
• Remote machine can be sure which
web server you run, for example
This Should Sound Familiar . . .
• Remember transitive trust from an
earlier lecture?
• The use of TPM to verify modules up
the stack is a form of transitive trust
• Higher levels trusted because lower
levels vouch for them
Transitive Trust in TPM
• You trust the app, because the OS says
to trust it
• You trust the OS, because the
bootstrap says to trust it
• You trust the bootstrap, because
somebody claims it’s OK
• You trust the whole chain, because you
trust the TPM hardware’s attestations
What Is TPM Really Doing?
• Essentially, securely hashing software
• Then checking to see if hashes match
securely stored versions
• Uses its own keys and hardware
– Which are tamper-resistant
• Its public key allows others to cryptographically
verify its assertions
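To make the hash-and-compare step concrete, here is a minimal software sketch of a measurement check, using OpenSSL's SHA-256 in place of the TPM's hardware engine. The file path and the stored reference digest are assumptions for illustration, and a real TPM extends measurements into its Platform Configuration Registers rather than doing a simple in-memory compare.

#include <openssl/sha.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Reference digest recorded when the loader was provisioned (made up here). */
static const unsigned char expected_digest[SHA256_DIGEST_LENGTH] = { 0 };

/* Hash a boot component and compare it to the stored reference value. */
int measurement_matches(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return 0;

    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    fseek(f, 0, SEEK_SET);

    unsigned char *image = malloc(len);
    if (image == NULL || fread(image, 1, len, f) != (size_t)len) {
        fclose(f);
        free(image);
        return 0;
    }
    fclose(f);

    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(image, (size_t)len, digest);   /* measure the component */
    free(image);

    /* Continue booting only if the measurement matches. */
    return memcmp(digest, expected_digest, sizeof(digest)) == 0;
}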
What Can You Do With TPM?
• Be sure you’re running particular
versions of software
• Provide remote sites with guarantees of
what you did locally
• Digital rights management
• All kinds of other stuff
TPM Controversy
• TPM provides guarantees to remote parties
– Takes security out of the hands of machine’s
owner
• Could be used coercively
– E.g., web pages only readable by browser X
– Documents only usable with word processor Y
• Much of original motivation came from digital
rights management community
• Only “guarantees” what got run
Trust vs. Security
• TPM doesn’t guarantee security
– It (to some extent) verifies trust
• It doesn’t mean the OS and apps are secure,
or even non-malicious
• It just verifies that they are versions you
have said you trust
• Offers some protection against tampering
with software
• But doesn’t prevent other bad behavior
Status of TPM
• Hardware widely installed
– Not widely used
• Microsoft Bitlocker uses it
– When available
• A secure Linux boot loader and OS
work with it
• Some specialized software uses TPM
Authentication in Operating
Systems
• The OS must authenticate all user
requests
– Otherwise, can’t control access to
critical resources
• Human users log in
– Locally or remotely
• Processes run on their behalf
– And request resources
In-Person User Authentication
• Authenticating the physically present
user
• Most frequently using password
techniques
• Sometimes biometrics
• To verify that a particular person is
sitting in front of keyboard and screen
Remote User Authentication
• Many users access machines remotely
• How are they authenticated?
• Most typically by password
• Sometimes via public key crypto
• Sometimes at OS level, sometimes by a
particular process
– In latter case, what is their OS identity?
– What does that imply for security?
Process Authentication
• Successful login creates a primal process
– Under ID of user who logged in
• The OS securely ties a process control block to the
process
– Not under user control
– Contains owner’s ID
• Processes can fork off more processes
– Usually child process gets same ID as parent
• Usually, special system calls can change a
process’ ID
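A small POSIX sketch of that inheritance: a child created with fork() carries the same user IDs as its parent, and those IDs live in kernel data structures the process cannot edit directly.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    printf("parent runs as uid %d\n", (int)getuid());

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: inherits the parent's real and effective user IDs. */
        printf("child runs as uid %d (euid %d)\n",
               (int)getuid(), (int)geteuid());
        return 0;
    }
    wait(NULL);

    /* Changing these IDs requires special system calls such as setuid(),
       which the kernel only permits for privileged (or set-UID) processes. */
    return 0;
}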
For Example,
• Process X wants to open file Y for read
• File Y has read permissions set for user
Bill
• If process X belongs to user Bill,
system ties the open call to that user
• And file system checks ID in open
system call to file system permissions
• Other syscalls (e.g., RPC) similar
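From the process's point of view the check is invisible; it just calls open(), and the kernel compares its user ID against file Y's permission bits. A minimal sketch, with a hypothetical file path standing in for Y:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical stand-in for file Y. */
    int fd = open("/home/bill/notes.txt", O_RDONLY);
    if (fd < 0) {
        if (errno == EACCES)
            printf("denied: this process's user ID lacks read permission\n");
        else
            printf("open failed: %s\n", strerror(errno));
        return 1;
    }

    /* Reaching here means the file system found read permission for our ID. */
    close(fd);
    return 0;
}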
Protecting Memory
• What is there to protect in memory?
• Page tables and virtual memory
protection
• Special security issues for memory
• Buffer overflows
What Is In Memory?
• Executable code
– Integrity required to ensure secure
operations
• Copies of permanently stored data
– Secrecy and integrity issues
• Temporary process data
– Mostly integrity issues
Mechanisms for Memory
Protection
• Most general purpose systems provide some
memory protection
– Logical separation of processes that run
concurrently
• Usually through virtual memory methods
• Originally arose mostly for error
containment, not security
Paging and Security
• Main memory is divided into page frames
• Every process has an address space divided
into logical pages
• For a process to use a page, it must reside in
a page frame
• If multiple processes are running, how do
we protect their frames?
Protection of Pages
• Each process is given a page table
– Translation of logical addresses into
physical locations
• All addressing goes through page table
– At unavoidable hardware level
• If the OS is careful about filling in the page
tables, a process can’t even name other
processes’ pages
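A toy model of that translation step, with invented sizes and a simplified page table entry; a real MMU does this in hardware on every reference, but the security property is the same: a process can only reach frames its own table maps.

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define NUM_PAGES 16u                /* tiny address space, for illustration */

typedef struct {
    bool     valid;                  /* is this logical page mapped at all?  */
    bool     writable;               /* per-page protection bit              */
    uint32_t frame;                  /* physical frame number                */
} pte_t;

typedef struct {
    pte_t entry[NUM_PAGES];          /* one table per process                */
} page_table_t;

/* Translate one process's logical address; anything the table doesn't
   map comes back as a fault (false), so other processes' frames are
   simply unnameable. */
bool translate(const page_table_t *pt, uint32_t vaddr,
               bool is_write, uint32_t *paddr)
{
    uint32_t page   = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;

    if (page >= NUM_PAGES || !pt->entry[page].valid)
        return false;                /* page fault */
    if (is_write && !pt->entry[page].writable)
        return false;                /* protection fault */

    *paddr = pt->entry[page].frame * PAGE_SIZE + offset;
    return true;
}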
Page Tables and Physical Pages
[Figure: Process A's and Process B's page tables mapping into physical page frames. Any address Process A names goes through its own (green) table; any address Process B names goes through its own (blue) table. They can't even name each other's pages.]
Security Issues of Page Frame
Reuse
• A common set of page frames is shared by
all processes
• The OS switches ownership of page frames
as necessary
• When a process acquires a new page frame,
it used to belong to another process
– Can the new process read the old data?
Reusing Pages
[Figure: Process B deallocates a page, returning its frame to the shared pool of physical page frames. What happens now if Process A requests a page? Can Process A now read Process B's deallocated data?]
Strategies for Cleaning Pages
• Don't bother
• Zero on deallocation
• Zero on reallocation (sketched below)
• Zero on use
• Clean pages in the background
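As a sketch of the "zero on reallocation" option, here is a toy frame allocator that scrubs a recycled frame before handing it to its next owner. The frame pool and its bookkeeping are invented simplifications of what a real kernel keeps.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE  4096u
#define NUM_FRAMES 8u

static uint8_t frames[NUM_FRAMES][PAGE_SIZE];          /* toy "physical memory" */
static int     frame_is_free[NUM_FRAMES] = {1, 1, 1, 1, 1, 1, 1, 1};

uint8_t *allocate_frame(void)
{
    for (unsigned i = 0; i < NUM_FRAMES; i++) {
        if (frame_is_free[i]) {
            frame_is_free[i] = 0;
            /* Scrub whatever the previous owner left behind. */
            memset(frames[i], 0, PAGE_SIZE);
            return frames[i];
        }
    }
    return NULL;                                        /* no free frames */
}

void free_frame(uint8_t *frame)
{
    /* "Don't bother" at deallocation time; the scrub happens on reuse. */
    frame_is_free[(frame - &frames[0][0]) / PAGE_SIZE] = 1;
}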
Special Interfaces to Memory
• Some systems provide a special interface to
memory
• If the interface accesses physical memory,
– And doesn’t go through page table
protections,
– Then attackers can read the physical
memory
– Letting them figure out what’s there and
find what they’re looking for
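One real example of such an interface on some Unix-like systems is /dev/mem, which exposes physical memory to processes allowed to open it. The sketch below assumes a system where that access isn't locked down (modern kernels usually restrict or disable it), and the offset is arbitrary.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Physical memory as a file; ordinarily only root (if anyone) can open it. */
    int fd = open("/dev/mem", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    unsigned char buf[64];
    lseek(fd, 0x1000, SEEK_SET);          /* arbitrary physical offset */
    ssize_t n = read(fd, buf, sizeof(buf));
    printf("read %zd bytes of physical memory, bypassing page tables\n", n);

    close(fd);
    return 0;
}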
Buffer Overflows
• One of the most common causes for
compromises of operating systems
• Due to a flaw in how operating
systems handle process inputs
– Or a flaw in programming languages
– Or a flaw in programmer training
– Depending on how you look at it
What Is a Buffer Overflow?
• A program requests input from a user
• It allocates a temporary buffer to hold
the input data
• It then reads all the data the user
provides into the buffer, but . . .
• It doesn’t check how much data was
provided
For Example,
#include <stdio.h>

int main() {
    char name[32];
    printf("Please type your name: ");
    gets(name);                 /* reads input with no length check */
    printf("Hello, %s", name);
    return 0;
}
• What if the user enters more than 32 characters?
Well, What If the User Does?
• Code continues reading data into memory
• The first 32 bytes go into name buffer
– Allocated on the stack
– Close to record of current function
• The remaining bytes go onto the stack
– Right after name buffer
– Overwriting current function record
– Including the instruction pointer
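A rough picture of the layout being overwritten, assuming a conventional x86-style, downward-growing stack; exact offsets depend on the compiler and architecture.

/* Stack near the gets(name) call (downward-growing stack assumed):
 *
 *   higher addresses
 *   +---------------------------+
 *   | caller's stack frame      |
 *   +---------------------------+
 *   | return address            |  <-- excess input bytes land here
 *   +---------------------------+
 *   | saved frame pointer       |
 *   +---------------------------+
 *   | char name[32]             |  <-- input starts here and fills upward
 *   +---------------------------+
 *   lower addresses
 *
 * gets() keeps copying until it sees a newline, so input longer than 32
 * bytes walks over the saved frame pointer and then the return address.
 */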
Why Is This a Security Problem?
• The attacker can cause the function to
“return” to an arbitrary address
• But all attacker can do is run different code
than was expected
• He hasn’t gotten into anyone else’s
processes
– Or data
• So he can only fiddle around with his own
stuff, right?
Is That So Bad?
• Well, yes
• That’s why a media player can write
configuration and data files
• Unless roles and access permissions set
up very carefully, a typical program
can write all its user’s files
The Core Buffer Overflow
Security Issue
• Programs often run on behalf of others
– But using your identity
• Maybe OK for you to access some data
• But is it OK for someone who you’re
running a program for to access it?
– Downloaded programs
– Users of web servers
– Many other cases
Using Buffer Overflows to
Compromise Security
• Carefully choose what gets written into
the instruction pointer
• So that the program jumps to
something you want to do
– Under the identity of the program
that’s running
• Such as, execute a command shell
• Usually attacker provides this code
Effects of Buffer Overflows
• A remote or unprivileged local user runs a
program with greater privileges
• If buffer overflow is in a root program, it
gets all privileges, essentially
• Can also overwrite other stuff
– Such as heap variables
• Common mechanism to allow attackers to
break into machines
Stack Overflows
• The most common kind of buffer overflow
• Intended to alter the contents of the stack
• Usually by overflowing a dynamic variable
• Usually with intention of jumping to exploit
code
– Though it could instead alter parameters
or variables in other frames
– Or even variables in current frame
Heap Overflows
• Heap is used to store dynamically
allocated memory
• Buffers kept there can also overflow
• Generally doesn’t offer direct ability to
jump to arbitrary code
• But potentially quite dangerous
What Can You Do With Heap
Overflows?
• Alter variable values
• “Edit” linked lists or other data structures
• If heap contains list of function pointers,
can execute arbitrary code
• Generally, heap overflows are harder to
exploit than stack overflows
• But they exist
– E.g., Microsoft CVE-2007-0948
• Allowed VM to escape confinement
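A deliberately broken sketch of the kind of layout that makes this possible: a heap object with a buffer sitting just before a function pointer. Whether two such fields (or two separate allocations) really end up adjacent depends on the allocator, so this is illustrative rather than a recipe.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct request {
    char name[32];
    void (*handler)(void);            /* called once the name is filled in */
};

static void normal_handler(void)
{
    puts("handling request");
}

void process(const char *input)
{
    struct request *r = malloc(sizeof(*r));
    if (r == NULL)
        return;
    r->handler = normal_handler;

    strcpy(r->name, input);           /* BUG: no length check; a long input
                                         overwrites r->handler              */
    r->handler();                     /* now jumps wherever the overflow
                                         pointed it                         */
    free(r);
}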
Are Buffer Overflows Common?
• You bet!
• Weekly occurrences in major
systems/applications
– Mostly stack overflows
• Probably one of the most common
security bugs
Some Recent Buffer Overflows
• OpenSSH
– Popular security software
• Adobe Flash Player and Shockwave player
– Fourth class in a row where Adobe had a
recent buffer overflow
• Windows Bluetooth stack
• 23 in September 2011 alone
– In code written by everyone from IBM
and Apple to tiny software shops
Fixing Buffer Overflows
• Write better code (check input lengths, avoid dangerous
language features, etc.; see the sketch after this list)
• Use programming languages that prevent them
• Add OS controls that prevent overwriting the stack
• Put things in different places on the stack, making it hard
to find the return pointer (e.g., Microsoft ASLR)
• Don’t allow execution from places in memory where
buffer overflows occur (e.g., Windows DEP)
– Or don’t allow execution of writable pages
• Why aren’t these things commonly done?
– Sometimes they are
• When not, presumably because programmers and
designers neither know nor care about security
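As one sketch of the "write better code" option, here is the earlier gets() example reworked with a bounded read, so input longer than the buffer is truncated instead of spilling onto the stack.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char name[32];

    printf("Please type your name: ");
    if (fgets(name, sizeof(name), stdin) == NULL)   /* reads at most 31 chars */
        return 1;
    name[strcspn(name, "\n")] = '\0';               /* drop the trailing newline */

    printf("Hello, %s\n", name);
    return 0;
}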
Return-Oriented Programming
• A technique that allows buffer
overflows to succeed
• Even in the face of many defenses
– E.g., not allowing execution of code
on stack
– Or marking all pages as either write
or execute, not both
Basic Idea Behind Return-Oriented Programming
• Use buffer overflows just to alter
control flow
– By overwriting stack frame return
addresses
• Return to a piece of code already lying
around
– Which does what you want
That Doesn’t Sound Very Likely
• How likely is it that there’s code in
memory that does exactly what an
attacker wants?
• Well, perhaps there isn’t one piece of
code that does that
• But maybe he can stitch together
several code segments . . .
Practical Return-Oriented
Programming
• Use a target piece of code you know is
likely to be in memory
– Such as standard C libraries
• Build a “compiler” that converts what you
want to do into binary code segments
– Chosen from your target
• Demonstrable that you can build any
program you want this way
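Purely as an illustration of the idea (every address and comment below is invented), a return-oriented payload is just a sequence of addresses of short instruction fragments that already exist in loaded code, laid down where the smashed stack's return addresses go, so each fragment's final ret pops the next one.

/* Conceptual shape of a return-oriented payload (64-bit addresses assumed,
 * all values made up). Each entry points at a short sequence already
 * present in loaded code (e.g., a standard C library) that ends in "ret",
 * so control hops from one fragment to the next as the stack is popped. */
#include <stdint.h>

uint64_t gadget_chain[] = {
    0x00007f1200011223,   /* fragment that loads a register, then ret  */
    0x00007f12000a4455,   /* data value the previous fragment consumes */
    0x00007f1200033667,   /* fragment that makes the desired call      */
};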
How Practical is Return-Oriented
Programming?
• Clearly challenging
• But has been used to hack a “secure”
voting machine
– In a research setting
• Can build tools that make it a lot easier
• Currently a threat from sophisticated
sources only