RM-week5 - Lyle School of Engineering


Week 5 – Special Topics
Risk Management Best
Practices
Traps, Alarms & Escapes

From Navy Best Practices
• Traps – thinking you have risks covered just by following procedures
• Alarms – assumptions that cause trouble
• Consequences – what happens if nothing is done
Risk Checklist
• Personnel
  • Availability
  • Experience Levels
  • Mix of Disciplines
• Requirements
  • Definition
  • Stability
  • Complexity
• Resource Availability
  • Facilities
  • Hardware
  • Personnel
  • Funding Profile
  • Communications
• Technology
  • Availability
  • Maturity Levels
  • System Complexity
  • Operating Environment
• Design
  • Methods
  • Complexity
  • Software Tools
  • Testing, Modeling
  • Language
  • Operational Interfaces
  • Hardware
  • Sensitivities to Other Risks
• Estimating Error
  • Schedule
  • Cost
  • Number of Critical Path Items
[Checklist matrix: each potential risk item above is marked by its area of impact – Technical Performance, Supportability, Programmatics, Cost, Schedule]
Risk Management Training
Why RM Training is Needed
• Weak backgrounds in risk management
• Mistaken concepts, such as:
  • Assessment alone is RM – planning & monitoring get skipped
  • Mitigation is the only handling strategy
  • All risk can be eliminated
  • Focus on performance, omitting cost & schedule
Tailoring RM Training
• Adjust to different project team groups
  • Sr. management
  • Working-level engineers
• Address issues each group is likely to face
• Address tailoring RM activities to meet program needs – not one-size-fits-all
Software Risk Management
SW Risk as a Special Case
• Virtual
  • Can’t physically touch/feel to assess
  • History of latent defects
• Many interfaces
  • Hardware-to-software
  • Operating system-to-operating code
  • Incompatible combinations
• Update frequency
  • Multiple versions
  • Upward/downward compatibility
Taxonomy of SW Risk
Risk groupings (project level) and their risk issues:
• Project Attributes
  • Requirements – excessive, immature, unstable, unrealistic
  • Lack of user involvement
  • Underestimation of complexity or dynamic nature
  • Performance – errors, quality
  • Unrealistic cost or schedule
• Management
  • Ineffective project management
• Engineering
  • Ineffective integration, assembly, test
  • Unanticipated difficulties across user interface
• Work Environment
  • Immature design, process, technologies
  • Inadequate configuration control
  • Inappropriate methods, inaccurate metrics
  • Poor training
• Other
  • Legal, contractual issues
  • Obsolescence (incl. excessive duration)
  • Difficulties with subcontracted items
  • Unanticipated maintenance & support costs
Boehm’s Top 10 Software Risks (1 of 2)
Risk Item → Management Techniques
• Personnel shortfalls → Staffing with top talent, tailoring processes to skill mix, training, peer reviews, team building
• Unrealistic schedules and budgets → Design to cost, business case analysis, reuse, requirements descoping, incremental development, adding schedule and budget
• COTS; external components → Qualification testing, benchmarking, prototyping, compatibility analysis, vendor analysis
• Requirements mismatch; gold plating → Mission analysis, CONOPS formulation, user surveys, prototyping, early user’s manuals
• User interface mismatch → Prototyping, scenarios, user characterization (functionality, style, workload)
Boehm’s Top 10 Software Risks (2 of 2)
Risk Item → Management Techniques
• Architecture, performance, quality → Architecture trade-off analysis, simulations, benchmarking, modeling, prototyping
• Requirements changes → Change thresholds, incremental development, information hiding
• Legacy software → Restructuring, design recovery, wrappers, phase-out analysis
• Externally-performed tasks → Reference checking, pre-award audits, award fee contracts, competitive design or prototyping, team building
• Straining computer science capabilities → Technical analysis, cost-benefit analysis, prototyping
Identification Strategies
• Review schedule & networks
• Review cost estimation parameters
• Perform interviews
• Revisit lessons learned
• Develop a risk taxonomy
• Brainstorm and play “what if”
• Re-sort the watch list (e.g., by source)
Identification – Schedule Risks
• Use the planning or baseline schedule
• Evaluate the activity network view
• Look for nodes with (a sketch of this scan follows the list):
  • High fan-in (many activities terminate at a single node)
  • High fan-out (many activities emanate from a single node)
  • No predecessors
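This scan is easy to automate. A minimal Python sketch, assuming the activity network is given as a simple edge list (the node names and the fan-in/fan-out threshold are illustrative, not from the slide):

```python
from collections import defaultdict

# Activity network as (predecessor, successor) edges -- illustrative data.
edges = [("A", "D"), ("B", "D"), ("C", "D"), ("D", "E"),
         ("D", "F"), ("D", "G"), ("H", "I")]

fan_in = defaultdict(int)   # number of activities terminating at each node
fan_out = defaultdict(int)  # number of activities emanating from each node
nodes = set()

for pred, succ in edges:
    fan_out[pred] += 1
    fan_in[succ] += 1
    nodes.update((pred, succ))

THRESHOLD = 3  # assumed cutoff for "high" fan-in/fan-out

high_fan_in = [n for n in sorted(nodes) if fan_in[n] >= THRESHOLD]
high_fan_out = [n for n in sorted(nodes) if fan_out[n] >= THRESHOLD]
no_predecessors = [n for n in sorted(nodes) if fan_in[n] == 0]

print("High fan-in:", high_fan_in)          # ['D']
print("High fan-out:", high_fan_out)        # ['D']
print("No predecessors:", no_predecessors)  # ['A', 'B', 'C', 'H']
```

Nodes flagged this way are schedule risk candidates: a high fan-in node can be delayed by any one of its many predecessors, and a no-predecessor node may indicate a missing dependency.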
ID – Cost Risk Drivers
• Consider specific areas of concern that can lead to problems:
  • Personnel experience, availability
  • Requirement complexity, firmness
  • Scheduling and prediction of task and partition times
  • Hardware requirements, interfaces, constraints
ISO Risk Probability Table
0.1 – Low
  Maturity Factor: Technology exists and can be used “as is”
  Complexity Factor: Simple relative to current environment
  Dependency Factor: Entirely within project control
  Stability Factor: External factors will not make any changes
0.3 – Moderate
  Maturity Factor: Technology requires minor change before use (<25%)
  Complexity Factor: Minor complexity relative to current environment
  Dependency Factor: Depends on existing product supplied from outside organization
  Stability Factor: External factors will make minor changes (<25%)
0.5 – High
  Maturity Factor: Technology requires major change before use (<50%)
  Complexity Factor: Moderately complex relative to current environment
  Dependency Factor: Depends on supply and modification of existing product from outside organization
  Stability Factor: External factors will make major changes (<50%)
0.7 – Very High
  Maturity Factor: Technology requires significant design and engineering before use (<75%)
  Complexity Factor: Significantly complex relative to current environment
  Dependency Factor: Depends on new development from outside organization
  Stability Factor: External factors will make significant changes (<75%)
0.9 – Extremely High
  Maturity Factor: State of the art, some research done
  Complexity Factor: Extremely complex relative to current environment
  Dependency Factor: Depends on finding development from outside organization
  Stability Factor: External factors will make constant changes
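The table scores each of the four factors on the same 0.1–0.9 ordinal scale, but the slide does not say how the four scores combine into a single probability. A minimal sketch assuming a simple unweighted average (an assumption for illustration, not necessarily the ISO prescription):

```python
# Ordinal factor scores from the ISO Risk Probability Table (0.1 to 0.9).
# Aggregating by unweighted average is an assumption for illustration;
# a program may weight the factors differently.
FACTORS = ("maturity", "complexity", "dependency", "stability")

def risk_probability(scores: dict) -> float:
    """Combine the four factor scores into one probability estimate."""
    return sum(scores[f] for f in FACTORS) / len(FACTORS)

scores = {"maturity": 0.5, "complexity": 0.3,
          "dependency": 0.7, "stability": 0.3}
print(f"P = {risk_probability(scores):.2f}")  # P = 0.45
```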
ISO Risk Consequence Table
0.1 – Low
  Technical Factor: Small reduction in technical performance
  Cost Factor: Budget estimates not exceeded, some transfer of money
  Schedule Factor: Negligible impact on program; slight development schedule change compensated by available schedule slack
0.3 – Minor
  Technical Factor: Small reduction in technical performance
  Cost Factor: Cost estimates exceed budget by 1 to 5%
  Schedule Factor: Minor slip in schedule (less than 1 month), some adjustment in milestones required
0.5 – Moderate
  Technical Factor: Some reduction in technical performance
  Cost Factor: Cost estimates increased by 5 to 20%
  Schedule Factor: Small slip in schedule
0.7 – Significant
  Technical Factor: Significant degradation in technical performance
  Cost Factor: Cost estimates increased by 20 to 50%
  Schedule Factor: Development schedule slip in excess of 3 months
0.9 – Catastrophic
  Technical Factor: Technical goal cannot be achieved
  Cost Factor: Cost estimates increased in excess of 50%
  Schedule Factor: Large schedule slip that affects segment milestones or has possible effect on system milestones
ISO Risk Contour
[Figure: ISO-Risk Contours – probability (0.0–1.0) on the vertical axis vs. consequence (0.0–1.0) on the horizontal axis, with contour lines for R = .1, .2, .3, .4, .5, .6, .7, .8, .9, .95, .98]
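The contour shape is consistent with the common risk-factor formulation R = P + C − P·C, under which high probability or high consequence alone drives R toward 1. Assuming that formula (it is not confirmed on the slide), a minimal sketch:

```python
def risk_factor(p: float, c: float) -> float:
    """Risk factor R = P + C - P*C; an assumed formulation consistent
    with the ISO-risk contour chart, not confirmed by the slide."""
    return p + c - p * c

# A few points relative to the contours:
print(risk_factor(0.45, 0.50))  # 0.725 -- between the R = .7 and .8 contours
print(risk_factor(0.90, 0.90))  # 0.99  -- beyond the R = .98 contour
print(risk_factor(0.10, 0.10))  # 0.19  -- just inside the R = .2 contour
```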
SW Risk Handling
• Avoidance – de-scoping objectives
• Assumption – latent defects
• Control – user acceptance testing
• Transfer – from software to firmware or hardware
SW Metrics
[Chart: Count (y-axis) vs. Time (x-axis) of software risk items, new risks, and closed risks]
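A chart like this falls out of simple bookkeeping on the watch list. A minimal sketch (the periods and counts are illustrative) that derives the open-item count from new and closed risks per reporting period:

```python
# Per-period counts of newly opened and closed risks -- illustrative data.
periods = ["Jan", "Feb", "Mar", "Apr"]
new_risks = [5, 3, 2, 1]
closed_risks = [0, 2, 4, 3]

open_items = []
running = 0
for new, closed in zip(new_risks, closed_risks):
    running += new - closed  # open count is the cumulative difference
    open_items.append(running)

for period, n, c, total in zip(periods, new_risks, closed_risks, open_items):
    print(f"{period}: new={n} closed={c} open={total}")
# Jan: new=5 closed=0 open=5 ... Apr: new=1 closed=3 open=2
```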
Commercial vs. DoD/NASA Perspective on Risk Management
Commercial vs. Gov’t Perspective
• Different market conditions
• Different best practices
• Different likelihoods for similar issues
• As always, tailor RM to program needs
Market Differences – how is risk impacted?
Commercial vs. DoD/NASA:
• Many small buyers vs. fewer large buyers
• Many small suppliers vs. typically few suppliers of a given item
• Market sets price vs. oligopoly pricing, biased to available budget
• Free movement in/out of market vs. barriers to entry
• Prices set by marginal costs vs. prices proportional to total cost
• Once funding secured, usually stable vs. possible unanticipated disruptions to funding
• Capacity to supply adjusts to demand vs. moderate to large excess capacity
SW Development Best Practices – how is risk impacted?
Commercial vs. DoD/NASA:
• Evolutionary upgrades of existing systems vs. little reuse, many unique systems
• Heavy buyer involvement (as team member) vs. formal development model with buyer oversight
• Informal reviews vs. very formal reviews
• Heavy user involvement vs. limited user involvement (buyer involved)
• Based on one or more industry stds vs. gov’t and industry stds
• Prototyping common vs. limited prototyping
Risk Category Likelihood (Commercial / DoD-NASA)
• Cost: Highly Likely / Highly Likely – whenever new development is required
• Design: Possible / Likely – degree of design enhancement required
• Integration: Possible / Likely – driven by complexity
• Support: Possible / Likely – commercial life cycles generally short
• Manufacturing: Likely / Likely – varies with production rate & resource availability
• Technology: Possible / Likely – increases as programs push the state of the art
• Management: Possible / Possible – depends on program complexity, performance expectations
• Political: Unlikely / Likely – external issue to the program
Overview of Risk Management Tools
Cautions in Tool Selection
• A good tool for one organization may not be a good match for another
  • Tailor RM to the program needs
• The tool should never dictate the process
  • Define the process, then choose a compatible tool
  • Be compatible with program culture
Effective Use of a Tool
• RM is more than using an RM tool
• The tool must efficiently & effectively integrate into the program
  • Resources required
  • Level of detail, complexity
  • Focus of tool – e.g., program phase
RM Database Considerations
• Provide sufficient configuration control
  • Accessible to all team members
  • Ability to accept anonymous comments
• Support program needs
  • Reporting
  • Monitoring
  • Captures lessons learned
  • Fulfills contractual requirements
• Balance costs/value
Tools Comparison
• @Risk & Crystal Ball – licensed software
  • Monte Carlo simulation add-ins for Excel (a sketch of the technique follows this list)
  • Select desired distribution function & define parameters
  • Provide data and generate a plausible distribution function
  • Provides statistics and graphical output
  • User provides risk analysis structure
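What these add-ins do can be illustrated outside Excel. A minimal Monte Carlo sketch in Python; the cost elements, their triangular-distribution parameters, and the sample size are illustrative, not taken from either product:

```python
import random

random.seed(1)  # reproducible runs

# Cost elements modeled as triangular distributions:
# (low, most likely, high) -- illustrative parameters.
elements = {
    "design":      (80, 100, 150),
    "integration": (40,  60, 120),
    "test":        (20,  30,  70),
}

N = 10_000
totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in elements.values())
    for _ in range(N)
)

# Percentiles of total cost, as a Monte Carlo add-in would report.
for pct in (10, 50, 80, 95):
    print(f"P{pct}: {totals[N * pct // 100]:.1f}")
```

The percentile spread, rather than a single-point estimate, is what the simulation buys you: it shows how much schedule or budget reserve a given confidence level requires.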
Probability-Consequence Screening
• Developed by the Air Force
  • Risk events assigned a probability & consequence for performance, schedule & cost
  • Position in consequence screening matrix determines risk score (a sketch of this step follows the list)
  • User assigns Hi, Med, Low ranges
  • Generates reports and graphical output
  • www.pmcop.dau.mil
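A minimal sketch of the screening step. The range boundaries are user-assigned in the tool; the cutoffs below are illustrative, not the tool’s defaults:

```python
def screen(probability: float, consequence: float) -> str:
    """Assign a Hi/Med/Low risk score from position in a
    probability-consequence matrix. The cutoffs are user-assigned;
    these particular values are illustrative."""
    if probability >= 0.7 and consequence >= 0.7:
        return "High"
    if probability >= 0.4 or consequence >= 0.4:
        return "Medium"
    return "Low"

print(screen(0.9, 0.8))  # High
print(screen(0.3, 0.5))  # Medium
print(screen(0.2, 0.1))  # Low
```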
Risk Matrix
• Excel-based model
  • Collects inputs in watch-list format
  • Uses best-practices (ordinal) breakout for probability & consequence
  • Orders events by Borda rank & assigns risk level (a sketch of Borda ranking follows this list)
  • Generates action plan reports and graphical output
  • www.mitre.org
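Borda ranking orders risks by counting, for each risk and each criterion, how many other risks it outranks. A minimal sketch assuming two criteria, probability and consequence (the risk names and scores are illustrative):

```python
# Each risk scored on (probability, consequence) -- illustrative data.
risks = {
    "R1": (0.7, 0.9),
    "R2": (0.5, 0.5),
    "R3": (0.9, 0.3),
}

def borda_count(name: str) -> int:
    """Sum over criteria of the number of risks this one outranks."""
    total = 0
    for criterion in range(2):
        total += sum(
            1 for other, scores in risks.items()
            if other != name and risks[name][criterion] > scores[criterion]
        )
    return total

ranked = sorted(risks, key=borda_count, reverse=True)
print(ranked)  # ['R1', 'R3', 'R2'] -- highest Borda count first
```

Because it aggregates ordinal positions rather than multiplying raw scores, Borda ranking avoids false precision when probability and consequence are only known on an ordinal scale.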
Risk Radar
• Access-based, licensed software
  • Can establish standard values for risk categorization
  • Manual or automatic risk prioritization
  • Complies with ISO, SEI CMMI & Government standards
  • Generates detailed, summary & metrics reports
  • Demo available: www.iceincusa.com/products_tools.htm
TRIMS – Technical Risk ID & Mitigation
• Knowledge-based system
  • Utilizes SEI & Navy Best Practices to collect data on past experiences
  • Measures technical risk rather than cost & schedule
  • Most applicable to design efforts
  • Can tailor categories, templates & questions
  • Generates status, next-action & overdue-action reports
  • www.bmpcoe.org
DSM – Design Structure Matrix
• Knowledge & simulation-based tool
  • Assesses complexity of dependency relationships between project tasks (a sketch of the matrix follows this list)
  • Measures risk in terms of schedule impact
  • Most applicable to design efforts
  • Ongoing development effort at MIT
  • Generates suggested sub-team groupings and probability curves for task duration ranges
  • www.dsmweb.org
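The matrix itself is simple to build. A minimal sketch that constructs a DSM from task dependencies and flags mutually dependent (coupled) task pairs, which are the iteration-prone, schedule-risky ones (the task names are illustrative):

```python
# Task dependencies: task -> tasks whose output it needs (illustrative).
depends_on = {
    "layout":       ["requirements", "thermal"],
    "thermal":      ["layout"],  # coupled with layout
    "wiring":       ["layout"],
    "requirements": [],
}

tasks = sorted(depends_on)
n = len(tasks)
idx = {t: i for i, t in enumerate(tasks)}

# DSM convention here: dsm[i][j] = 1 means task i depends on task j.
dsm = [[0] * n for _ in range(n)]
for task, deps in depends_on.items():
    for dep in deps:
        dsm[idx[task]][idx[dep]] = 1

# Marks on both sides of the diagonal imply an iteration loop.
coupled = [(tasks[i], tasks[j])
           for i in range(n) for j in range(i + 1, n)
           if dsm[i][j] and dsm[j][i]]
print("Coupled task pairs:", coupled)  # [('layout', 'thermal')]
```

Coupled pairs like these are natural candidates for the tool’s suggested sub-team groupings, since the tasks must be worked concurrently rather than in sequence.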
Final Exam
• Closed book, closed notes
• You have 90 minutes for the exam
• Any questions?
• Turn in Part II of your project according to the schedule discussed last week