Transcript Document
R&D SDM 1
Metrics
How to measure and assess software
engineering?
2009
Theo Schouten
Contents
•What are “software metrics”
•Why is measurement important?
•Software Quality
•Qualitative measures
•Quantitative measures
•Final remarks
Book chapter 15
Metrics
•What is a metric?
–“A quantitative measure of the degree to which a system,
component or process possesses a given attribute” (IEEE
Software Engineering Standards 1993) : Software Quality
•Different from
–Measure (the size of a system or component, etc.)
–Measurement (act of determining a measure)
•Metrics
–Qualitative Metrics
–Quantitative Metrics
Why important, difficult
Why is measurement important?
•to characterize
•to evaluate
•to predict
•to improve
Why is measurement difficult?
–No “exact” measure (‘measure the unmeasurable’,
subjective factors)
–Dependent on technical environment
–Dependent on organizational environment
–Dependent on application and ‘fitness for use’
Quality definition
(degree of) conformance to:
•explicitly stated functional and performance requirements
•explicitly documented development standards
•implicit characteristics that are expected of all professionally
developed software.
Requirements are the foundation to measure software quality
Standards define the development criteria for software
engineering
Software quality should conform to explicit and implicit
requirements
Software quality attributes
Software Quality Factors (McCall, 1977):
•Product Operation: Correctness, Reliability, Usability, Integrity, Efficiency
•Product Revision: Maintainability, Flexibility, Testability
•Product Transition: Portability, Reusability, Interoperability
Software Quality: Qualitative Measures
McCall: metrics that affect or influence the software quality factors:
–Software quality factors are the dependent variables, the metrics are the
independent variables
–Metrics: auditability, accuracy, communication commonality,
completeness, consistency, data commonality, error tolerance,
execution efficiency, expandability, generality, hardware
independence, instrumentation, modularity, operability, security,
self-documentation, simplicity, software system independence,
traceability, training.
–Software quality factor = c1·m1 + c2·m2 + … + cn·mn
–cn is a regression coefficient based on empirical data, mn the value of metric n
–The software quality factor gives an indication of the quality of the
software
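As a minimal sketch of this weighted-sum model (the metric scores and regression coefficients below are hypothetical, not McCall's empirical values):

    # Hypothetical sketch of McCall's weighted-sum model.
    # Metric scores (0-10) and regression coefficients are invented for
    # illustration; real coefficients are derived from empirical data.

    def quality_factor(scores, coefficients):
        """Compute F = c1*m1 + c2*m2 + ... + cn*mn."""
        return sum(c * m for c, m in zip(coefficients, scores))

    # Example: a maintainability indication from three metric scores.
    scores = [7, 8, 5]              # simplicity, consistency, self-documentation
    coefficients = [0.4, 0.3, 0.3]
    print(round(quality_factor(scores, coefficients), 2))  # 0.4*7 + 0.3*8 + 0.3*5 = 6.7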
McCall Matrix
FURPS, Quality Factors
•Developed at Hewlett-Packard (Grady, Caswell, 1987)
•Functionality:
–Feature set and capability of the system
–Generality of the functions
–Security of the overall system
•Usability:
–Human factors (aesthetics, consistency and documentation)
•Reliability:
–Frequency and severity of failure
–Accuracy of output
–MTTF (mean time to failure)
–Failure recovery and predictability
•Performance:
–Speed, response time, resource consumption, throughput and efficiency
•Supportability:
–Extensibility, Maintainability, Configurability, Etc.
ISO 9126 Quality Factors
6 key quality attributes, each with several sub-attributes
•Functionality
•Reliability
•Usability
•Efficiency
•Maintainability
•Portability
These are also often not directly measurable, but they give ideas for
indirect measures and checklists.
defect: the nonfulfilment of intended usage requirements
nonconformity: the nonfulfilment of specified requirements
ISO 9126 has been superseded by the new SQuaRE project, ISO 25000:2005
Quantitative Metrics
Desired attributes of Metrics (Ejiogu, 1991)
–Simple and computable
–Empirical and intuitively persuasive
–Consistent and objective
–Consistent in the use of units and dimensions
–Independent of programming language, so directed at models (analysis,
design, test, etc.) or structure of program
–Effective mechanism for quality feedback
Types of Metrics:
–Size oriented
•Focused on the size of the software (LinesOfCode, Errors, Defects, size
of documentation, etc.)
•independent of programming language?
–Function oriented
•Focused on the realization of a function of a system
Product Metrics Landscape
•Metrics for the analysis model
•Metrics for the design model
•Metrics for the source code
•Metrics for testing
See chapter 15.2.6
Function Oriented Metrics
Function Point Analysis (FPA)
a method for the measurement of the final functions of an information system
from the perspective of an end user
–a method to determine the size of a system on the basis of a functional specification
–the method is independent of programming language and operational environment
–its empirical parameters, however, are not
–Usable for
•estimate cost or effort to design, code and test the software
•predict number of components or Lines of Code
•predict the number of errors encountered during testing
•determining a ‘productivity measure’ after delivery
Function Point Analysis
The basis is the Function Point Index (FPI) of the system to be built:
Amount of work = Function Point Index * Resource factor
Resource factor depends on:
–development environment
–experience of project team with environment and tools
–size of the team
Benchmarks and measurements of previous projects are used to estimate the
resource factor.
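A minimal sketch of this estimate, with invented numbers (in practice the resource factor is calibrated from benchmarks and measurements of previous projects):

    # Hypothetical effort estimate: amount of work = FPI * resource factor.
    # Both numbers below are invented for illustration.
    function_point_index = 250      # FPI of the system to be built
    resource_factor = 0.8           # person-days of work per function point

    amount_of_work = function_point_index * resource_factor
    print(f"estimated effort: {amount_of_work} person-days")  # 200.0 person-days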
Function Point Index
•How do you calculate the FPI of a system to be built?
•Three steps are used:
1. Determine the System Attributes
2. Determine the 'Value Adjustment'
3. Determine the Function Point Index
FPA: System Attributes
• Count the number of each ‘system attribute’ :
1. User (Human or other system) External Inputs (EI)
2. User External Outputs (EO)
3. User External Inquiries (EQ)
4. Internal Logical Master Files (MF)
5. Interfaces to other systems (IF)
[Diagram: external user transactions (EI, EO and EQ) flow between the user and the system's internal logical master files (MFs); interface transactions (IF) connect the system environment to other systems.]
FPA: System Attributes weighting
•Determine per system attribute how difficult it is:
–Low
–Medium
–High
•Use the following matrix to determine the weighting factor:
System Attribute            Low   Medium   High
User External Input           3      4       6
User External Output          4      5       7
User External Inquiry         3      4       6
User Logical Master File      7     10      15
Interfaces                    5      7      10
•Calculate the weighted sum of the system attributes:
the ‘Unadjusted Function Point Index’ (UFPI)
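A minimal sketch of this weighted sum (the attribute counts are invented for illustration; the weights are those of the matrix above):

    # Sketch of the Unadjusted Function Point Index (UFPI) calculation.
    # Weights come from the matrix above; the counts are hypothetical.

    WEIGHTS = {                      # (low, medium, high)
        "EI": (3, 4, 6),             # User External Inputs
        "EO": (4, 5, 7),             # User External Outputs
        "EQ": (3, 4, 6),             # User External Inquiries
        "MF": (7, 10, 15),           # Internal Logical Master Files
        "IF": (5, 7, 10),            # Interfaces to other systems
    }
    COMPLEXITY = {"low": 0, "medium": 1, "high": 2}

    def ufpi(counts):
        """counts: {attribute: {complexity level: number of occurrences}}"""
        return sum(
            n * WEIGHTS[attr][COMPLEXITY[level]]
            for attr, levels in counts.items()
            for level, n in levels.items()
        )

    # Hypothetical system: 5 simple inputs, 2 medium outputs, 1 complex master file.
    example_counts = {"EI": {"low": 5}, "EO": {"medium": 2}, "MF": {"high": 1}}
    print(ufpi(example_counts))      # 5*3 + 2*5 + 1*15 = 40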
FPA: Value Adjustment
•The UFPI needs to be adapted to the environment in which the system has to
operate. The 'degree of influence' (a value between 0 and 5) is determined for
each of the 14 'value adjustment' factors:
–Data Communications
–Distributed Processing
–Performance Objectives
–Tight Configuration
–Transaction Volume
–On-line Data Entry
–End User Efficiency
–Logical File Updates
–Complex Processing
–Design for Re-usability
–Conversion and Installation Ease
–Operational Ease
–Multiple Site Implementation
–Ease of Change and Use
FPA: final function point index
1. Total sum of the 'degree of influence' (DI) (0-70)
2. Determine the Value Adjustment: VA = 0.65 + (0.01 * DI) (0.65-1.35)
3. Function Point Index = VA * UFPI
Historical data can then be used, e.g.
• 1 FP -> 60 lines of code in object oriented language
• 12 FP's are produced in 1 person-month
• 3 errors per FP during analysis, etc.
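Putting the three steps together in a small sketch (the UFPI and DI values below are invented; the historical ratios are the example figures above):

    # Sketch of the final Function Point Index calculation and the use of
    # historical data. UFPI and DI are hypothetical example values.

    ufpi = 40                    # Unadjusted Function Point Index
    di = 30                      # total degree of influence, 0..70

    va = 0.65 + 0.01 * di        # Value Adjustment, ranges from 0.65 to 1.35
    fpi = va * ufpi              # Function Point Index
    print(fpi)                   # 0.95 * 40 = 38.0

    # Applying the historical ratios mentioned above:
    loc_estimate = fpi * 60      # ~60 lines of OO code per function point
    effort_months = fpi / 12     # ~12 FPs produced per person-month
    errors_analysis = fpi * 3    # ~3 errors per FP found during analysis
    print(loc_estimate, round(effort_months, 1), errors_analysis)
    # 2280.0 3.2 114.0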
Example
[Diagram of a User Interaction Module and its environment. User inputs: password, panic button, on/off. User inquiries: zone inquiry, sensor inquiry. Outputs to the user: messages, sensor status; an alarm alert goes to external monitoring. Internal data: system configuration data (password, sensors, etc.). Interface functions to the sensors and alarm: test sensor, zone setting, on/off.]
Unadjusted Function Point Index
Determine the number of system attributes:
1. User External Inputs (EI)
2. User External Outputs (EO)
3. User External Inquiries (EQ)
4. User Logical Master Files (MF)
5. Interfaces to other systems (IF)
System Attribute            Number   Low   Medium   High   Total
User External Input            3      3      4       6       9
User External Output           2      4      5       7       8
User External Inquiry          2      3      4       6       6
User Logical Master File       1      7     10      15       7
Interfaces                     4      5      7      10      20
Total                                                        50
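Since every attribute in this example is rated at 'low' complexity, the total of 50 unadjusted function points can be checked directly:

    # Check of the example UFPI: all attributes rated at 'low' complexity.
    counts = {"EI": 3, "EO": 2, "EQ": 2, "MF": 1, "IF": 4}
    low_weights = {"EI": 3, "EO": 4, "EQ": 3, "MF": 7, "IF": 5}

    ufpi = sum(n * low_weights[attr] for attr, n in counts.items())
    print(ufpi)   # 3*3 + 2*4 + 2*3 + 1*7 + 4*5 = 50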
End remarks
•Metrics are useful for a relative view of a system, not an absolute
view.
•Metrics have qualitative and quantitative aspects.
•Realize that you are trying to measure the unmeasurable
•Use metrics to determine the functionality of a system on the basis of
the wishes of the end user
•Use metrics:
–to characterize
–to evaluate
–to predict
–to improve