Acknowledgments  xv

1. Introduction  1
…  1
…  2
…  3
…  5
…  6
|
|
2. What to Measure  7
2.1 Method 1: The Goal Question Metrics Approach  9
2.2 Method 2: Decision Maker Model  10
2.3 Method 3: Standards Driven Metrics  10
2.4 Extension to GQM: Metrics Mechanism  11
2.5 What to Measure Is a Function of Time  12
…  12
…  13
…  13
…  13
|
3. Measurement Fundamentals  15
3.1 Initial Measurement Exercise  15
3.2 The Challenge of Measurement  16
3.3 Models  16
3.3.1 Text Models  16
3.3.2 Diagrammatic Models  18
3.3.3 Algorithmic Models  18
3.3.4 Model Examples: Response Time  18
3.3.5 The Pantometric Paradigm: How to Measure Anything  19
3.4 Meta-Model for Metrics  20
3.5 The Power of Measurement  21
3.6 Measurement Theory  22
3.6.1 Introduction to Measurement Theory  22
3.6.2 Measurement Scales  23
3.6.3 Measures of Central Tendency and Variability  24
3.6.3.1 Measures of Central Tendency  25
3.6.3.2 Measures of Variability  25
3.6.4 Validity and Reliability of Measurement  27
3.6.5 …  28
3.7 Accuracy Versus Precision and the Limits of Software Metrics  30
…  31
…  31
…  33
…  33
|
|
4. Measuring Size  34
4.1 Physical Measurements of Software  34
4.1.1 Measuring Lines of Code  35
4.1.2 Language Productivity Factor  35
4.1.3 Counting Reused and Refactored Code  37
4.1.4 Counting Nonprocedural Code Length  39
4.1.5 Measuring the Length of Specifications and Design  39
4.2 Measuring Functionality  40
4.2.1 Function Points  41
4.2.1.1 Counting Function Points  41
4.2.1.2 Function Point Example  45
4.2.1.3 Converting Function Points to Physical Size  47
4.2.1.4 Converting Function Points to Effort  47
4.2.1.5 Other Function Point Engineering Rules  48
4.2.1.6 Function Point Pros and Cons  49
…  50
…  51
…  51
…  52
…  53
|
|
5. Measuring Complexity  54
5.1 Structural Complexity  55
5.1.1 Size as a Complexity Measure  55
5.1.1.1 System Size and Complexity  55
5.1.1.2 Module Size and Complexity  56
5.1.2 Cyclomatic Complexity  58
5.1.3 …  63
5.1.4 Information Flow Metrics  65
5.1.5 …  67
5.1.5.1 Maintainability Index  67
5.1.5.2 The Agresti–Card System Complexity Metric  69
5.1.6 Object-Oriented Design Metrics  71
5.1.7 Structural Complexity Summary  73
5.2 Conceptual Complexity  73
5.3 Computational Complexity  74
…  75
…  75
…  77
…  78
|
|
6. Estimating Effort  79
6.1 Effort Estimation: Where Are We?  80
6.2 Software Estimation Methodologies and Models  81
6.2.1 …  82
6.2.1.1 Work and Activity Decomposition  82
6.2.1.2 System Decomposition  83
6.2.1.3 The Delphi Methods  84
6.2.2 Using Benchmark Size Data  85
6.2.2.1 Lines of Code Benchmark Data  85
6.2.2.2 Function Point Benchmark Data  87
6.2.3 Estimation by Analogy  88
6.2.3.1 Traditional Analogy Approach  89
6.2.3.2 …  91
6.2.4 Proxy Point Estimation Methods  91
6.2.4.1 Meta-Model for Effort Estimation  91
6.2.4.2 …  92
6.2.4.3 …  94
6.2.4.4 Use Case Sizing Methodologies  95
6.2.5 …  101
6.2.6 …  103
6.2.6.1 …  103
6.2.6.2 Estimating Project Duration  105
6.2.6.3 Tool-Based Models  105
6.3 …  107
6.4 …  108
6.4.1 Targets Versus Estimates  108
6.4.2 The Limitations of Estimation: Why?  109
6.4.3 Estimate Uncertainties  109
6.5 Estimating Early and Often  112
…  113
…  114
…  116
…  116
|
7. In Praise of Defects: Defects and Defect Metrics  118
7.1 Why Study and Measure Defects?  118
7.2 Faults Versus Failures  119
7.3 Defect Dynamics and Behaviors  120
7.3.1 Defect Arrival Rates  120
7.3.2 Defects Versus Effort  120
7.3.3 Defects Versus Staffing  120
7.3.4 Defect Arrival Rates Versus Code Production Rate  121
7.3.5 Defect Density Versus Module Complexity  122
7.3.6 Defect Density Versus System Size  122
7.4 Defect Projection Techniques and Models  123
7.4.1 Dynamic Defect Models  123
7.4.1.1 …  124
7.4.1.2 Exponential and S-Curves Arrival Distribution Models  127
7.4.1.3 Empirical Data and Recommendations for Dynamic Models  128
7.4.2 Static Defect Models  129
7.4.2.1 Defect Insertion and Removal Model  129
7.4.2.2 Defect Removal Efficiency: A Key Metric  130
7.4.2.3 Static Defect Model Tools  132
7.5 Additional Defect Benchmark Data  133
7.5.1 Defect Data by Application Domain  133
7.5.2 Cumulative Defect Removal Efficiency (DRE) Benchmark  134
7.5.3 SEI Levels and Defect Relationships  134
7.5.4 …  135
7.5.5 A Few Recommendations  135
7.6 Cost Effectiveness of Defect Removal by Phase  136
7.7 Defining and Using Simple Defect Metrics: An Example  136
7.8 Some Paradoxical Patterns for Customer Reported Defects  139
7.9 Answers to the Initial Questions  140
…  140
…  141
…  142
…  142
|
8. Software Reliability Measurement and Prediction  144
8.1 Why Study and Measure Software Reliability?  144
8.2 …  144
8.3 …  145
8.4 Failure Severity Classes  145
8.5 …  146
8.6 The Cost of Reliability  147
8.7 Software Reliability Theory  148
8.7.1 Uniform and Random Distributions  148
8.7.2 The Probability of Failure During a Time Interval  150
8.7.3 F(t): The Probability of Failure by Time T  151
8.7.4 R(t): The Reliability Function  151
8.7.5 Reliability Theory Summarized  152
8.8 …  152
8.8.1 …  152
8.8.2 Predicting Number of Defects Remaining  154
8.9 Failure Arrival Rates  155
8.9.1 Predicting Failure Arrival Rates Using Historical Data  155
8.9.2 Engineering Rules for MTTF  156
8.9.3 …  157
8.9.4 Operational Profile Testing  158
8.9.5 Predicting Reliability Summary  161
8.10 …  161
8.11 System Configurations: Probability and Reliability  161
8.12 Answers to the Initial Questions  163
…  164
…  164
…  165
…  166
|
9. Response Time and Availability  167
9.1 Response Time Measurements  168
9.2 …  170
9.2.1 Availability Factors  172
9.2.2 …  173
9.2.3 Complexities in Measuring Availability  173
9.2.4 Software Rejuvenation  174
9.2.4.1 …  175
9.2.4.2 Classification of Faults  175
9.2.4.3 Software Rejuvenation Techniques  175
9.2.4.4 Impact of Rejuvenation on Availability  176
…  177
…  178
…  179
…  180

10. Measuring Progress  181
10.1 …  182
10.2 …  185
10.3 …  187
10.4 Defects Discovery and Closure  188
10.4.1 …  189
10.4.2 …  190
10.5 Process Effectiveness  192
…  194
…  195
…  196
…  196

11. Outsourcing  197
11.1 …  197
11.2 Defining Outsourcing  198
11.3 Risk Management and Outsourcing  201
11.4 Metrics and the Contract  203
…  206
…  206
…  207
…  207

12. Financial Measures for the Software Engineer  208
12.1 It's All About the Green  208
12.2 …  209
12.3 Building the Business Case  209
12.3.1 Understanding Costs  210
12.3.1.1 …  210
12.3.1.2 …  210
12.3.1.3 …  211
12.3.1.4 Capital Versus Expense  213
12.3.2 Understanding Benefits  216
12.3.3 Business Case Metrics  218
12.3.3.1 Return on Investment  218
12.3.3.2 …  219
12.3.3.3 Cost/Benefit Ratio  220
12.3.3.4 Profit and Loss Statement  221
12.3.3.5 …  222
12.3.3.6 …  223
12.4 Living the Business Case  224
…  224
…  227
…  228
…  230

13. Benchmarking  231
13.1 What Is Benchmarking?  231
13.2 …  232
13.3 …  232
13.4 Identifying and Obtaining a Benchmark  233
13.5 Collecting Actual Data  233
13.6 …  234
13.7 …  234
…  236
…  236
…  236
…  237

14. Presenting Metrics Effectively to Management  238
14.1 Decide on the Metrics  239
14.2 …  240
14.3 …  243
14.4 Drilling for Information  243
14.5 Example for the Big Cheese  247
…  249
…  250
…  250
…  251
…  251

Index  252