
Status

This document is a Draft Specification.

Introduction

Estimation plays an important role in several phases of the software development lifecycle. We’ll focus on the following use cases:

  • Initiating
    • Develop the initial project parameters of scope, cost, schedule, and quality and see if they align with business owner expectations.
    • Ideally base the estimate on similar completed projects.
  • Monitoring and Controlling
    • Capture metrics and compare them with control limits.
    • Take corrective action when an unacceptable variance occurs.
  • Reestimating
    • Periodically, e.g. at the end of each iteration, reestimate the project based on the actuals.
  • Closing
    • Capture the final project metrics and record them for use by future projects.
  • Calibrating
    • Analyse past projects to define estimation model parameters that reflect the actual capabilities of a development organization.

Each of these use cases is described in more detail below.

Initiating

A project proposal requires a business case that includes cost and schedule estimates together with their associated risks. The risk is typically expressed as a decile step function of the probability distribution of the estimated quantity, e.g. duration, cost, or size.
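
As an illustration, a decile step function can be represented as ten cumulative-probability points for the estimated quantity. The sketch below uses hypothetical duration values; neither the values nor the helper function are defined by this specification.

    # A sketch (hypothetical values) of a risk estimate expressed as a decile
    # step function: for each cumulative probability, the value that the
    # estimated quantity (here, duration in months) should not exceed.
    duration_deciles = {
        0.1: 7.0,   # 10% chance of finishing within 7 months
        0.2: 7.6,
        0.3: 8.1,
        0.4: 8.5,
        0.5: 9.0,   # median duration
        0.6: 9.5,
        0.7: 10.1,
        0.8: 10.9,
        0.9: 12.0,  # 90% chance of finishing within 12 months
    }

    def value_at_confidence(deciles, p):
        """Return the smallest tabulated value whose cumulative probability covers p."""
        for prob in sorted(deciles):
            if prob >= p:
                return deciles[prob]
        return max(deciles.values())

    print(value_at_confidence(duration_deciles, 0.8))  # 10.9 months at 80% confidence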

The main actors are the business owner and the estimator. For future reference, a business owner is the requestor of the proposed functionality, and an estimator is the person responsible for estimating the project. The estimator is skilled in the use of estimation models and tools, but may not have detailed domain knowledge of the project being estimated. The estimator works with individuals who understand the project scope, complexity, architecture, and team capabilities to establish the assumptions upon which the estimate is based. These assumptions, e.g. project duration, peak staffing, new and changed lines of code, etc., become part of the project proposal and are compared with business owner expectations for alignment. Proposal risk can be determined from the degree of alignment between the estimate and the business owner expectations.

The estimator pulls the project parameters from the proposal, works with an estimation tool, and attaches the estimates to the proposal. The estimate typically provides the time and cost elements of the business case. The uncertainty in these estimates contributes to the overall investment risk of the project. The cost and risk are then used by the portfolio manager to judge the desirability of the project. Projects that have high risk must also have a high return on investment (ROI) in order to be attractive.

Monitoring and Controlling

All projects have measurable properties, i.e. metrics, that can be monitored throughout the project lifecycle in order to assess progress, health, and risk. The project plan describes how generic metrics such as labor expense, staffing levels, and task completion are expected to change over the lifecycle of the project. Since these metrics cannot be predicted with complete accuracy, a certain amount of variance is normal. Project managers control projects by monitoring these metrics, detecting unacceptable variances, diagnosing the root cause, and taking corrective action.

Software projects have a large additional set of metrics, such as code size, defect rate, test success, etc., that provide additional insight into the project as it evolves. It is therefore natural to treat the estimates for these software metrics as also being part of the project plan.

A software metric estimate is a time series of values that predicts the evolution of a metric over some part of the project lifecycle. The estimate includes both the predicted (most likely) value and the error bounds (high and low values) that establish the control limits for the measurement. Project actuals for these metrics are collected automatically from software tool repositories, e.g. defect metrics from change management systems, source code size metrics from software configuration management systems, effort metrics from time accounting systems, and schedule metrics from project management tools.
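
As a minimal sketch, such an estimate could be held as a time series of (most likely, low, high) points. The class and field names below are illustrative assumptions, not definitions from this specification.

    from dataclasses import dataclass
    from datetime import date

    # A minimal sketch of a metric estimate as a time series of (most likely,
    # low, high) points. The class and field names are illustrative only.
    @dataclass
    class EstimatePoint:
        when: date
        likely: float   # predicted (most likely) cumulative value
        low: float      # lower control limit
        high: float     # upper control limit

    # Estimated cumulative defect count over part of the project lifecycle.
    defect_estimate = [
        EstimatePoint(date(2014, 3, 1), 200, 170, 230),
        EstimatePoint(date(2014, 4, 1), 650, 580, 720),
        EstimatePoint(date(2014, 5, 1), 1400, 1260, 1540),
    ]

    # The matching actuals would be collected automatically, e.g. by querying a
    # change management system for the cumulative defect count on each date.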

The main actor is the project manager or an analyst in a project office. Here we regard team leads who are responsible for monitoring and controlling their teams as effectively acting in the capacity of a project manager.

The project manager or analyst periodically compares the actuals to the estimates. They may use business intelligence techniques such as dashboards and charts with automatic alerts to detect unacceptable variances. The control limits can be used to colour dashboard widget ranges as green, yellow, and red to indicate the degree to which the control limits are approached or exceeded. If the actuals diverge from the estimates, then the project manager diagnoses the cause and takes corrective actions.
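
One possible mapping from control limits to dashboard colours is sketched below. The 90% "approaching" threshold is an illustrative assumption, not a value defined by this specification.

    # A sketch of mapping an actual value against its control limits to a
    # dashboard colour. The 90% "approaching" threshold is an illustrative
    # choice, not a value defined here.
    def widget_colour(actual, likely, low, high, approach=0.9):
        if actual < low or actual > high:
            return "red"                                  # control limit exceeded
        if actual < likely - (likely - low) * approach:
            return "yellow"                               # approaching the lower limit
        if actual > likely + (high - likely) * approach:
            return "yellow"                               # approaching the upper limit
        return "green"                                    # comfortably within limits

    print(widget_colour(1980, likely=2000, low=1800, high=2200))  # green
    print(widget_colour(2190, likely=2000, low=1800, high=2200))  # yellow
    print(widget_colour(2250, likely=2000, low=1800, high=2200))  # red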

Business intelligence techniques such as drill down and drill through allow the project manager to explore the metric space and understand the root cause of variances. Drilling down into the data enables the project manager to isolate the source of the variance. Estimates are typically performed using a coarse-grained product breakdown structure. In order to establish estimates and control limits below this level of granularity, components could use average values from their parent component. For example, suppose the total defect estimate for a 100 KLOC component is 2000 +/- 200. The average defect density is therefore 20 defects per KLOC. If this component is composed of two child components of sizes 70 KLOC and 30 KLOC, then their defect totals could be estimated at 1400 +/- 140 and 600 +/- 60. If 80 defects were reported for the 30 KLOC component, then an alert would be raised even though the density for the parent component might be within the control limits.
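
The proration described above can be sketched as follows; the numbers mirror the example in the text, and the component names are hypothetical.

    # A sketch of the proration described in the text: child components inherit
    # their parent's estimated defect density, and their control limits scale in
    # proportion. Component names are hypothetical.
    parent = {"size_kloc": 100, "defects": 2000, "tolerance": 200}

    density = parent["defects"] / parent["size_kloc"]        # 20 defects per KLOC
    rel_tolerance = parent["tolerance"] / parent["defects"]  # +/- 10%

    children = {"child_a": 70, "child_b": 30}                # sizes in KLOC

    limits = {}
    for name, kloc in children.items():
        expected = density * kloc
        tolerance = expected * rel_tolerance
        limits[name] = (expected - tolerance, expected + tolerance)
        print(name, expected, "+/-", tolerance)              # 1400 +/- 140, 600 +/- 60

    # 80 reported defects for the 30 KLOC child falls outside its derived limits
    # (540 to 660), so an alert is raised for that component.
    low, high = limits["child_b"]
    print(not (low <= 80 <= high))                           # True -> alert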

Iterative projects have an ongoing need for estimation of the time-to-complete (TTC). Ideally, each iteration should reduce the uncertainty of on-time delivery. This uncertainty is measured by the variance in the current estimate of the TTC. Iterative programs begin with an initial iteration plan, ideally designed to reduce the TTC as rapidly as possible. At each iteration boundary the iteration plan is reset based on the results achieved to date and the remaining variance in the TTC. Hence, the project team needs an ongoing view of the variance in the estimate of the TTC, along with the remaining program content (components, work items, …) that contributes most to the variance.

Some large programs, essentially teams of project teams, do iteration planning at the program level at a fixed frequency. Each team then adopts a higher-frequency iteration plan or an agile process to ensure it meets its commitments to the current iteration. In this case, the team estimates of TTC need to be rolled up into the overall program estimate. In particular, the relationship between the program estimate variance and the teams’ estimate variances is needed to support the program iterations. In some cases the teams may adopt bottom-up estimation methods based on their work items, while the program uses top-down approaches. This calls for a method of relating bottom-up and top-down estimates.
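
One way to relate team-level and program-level TTC uncertainty is to combine the team distributions. The Monte Carlo sketch below assumes independent, roughly normal team estimates and a program that completes when its slowest team completes; these are simplifying assumptions for illustration, not requirements of this specification.

    import random
    import statistics

    # A sketch of rolling team TTC estimates up to a program-level estimate.
    # Assumes the teams work in parallel, their estimates are independent and
    # roughly normal, and the program completes when its slowest team completes.
    # Team names and figures are hypothetical.
    teams = {                 # (mean, standard deviation) of each team's TTC in weeks
        "ui":     (12.0, 2.0),
        "server": (16.0, 3.0),
        "test":   (14.0, 2.5),
    }

    samples = [
        max(random.gauss(mean, sd) for mean, sd in teams.values())
        for _ in range(10_000)
    ]

    print("program TTC mean:    ", round(statistics.mean(samples), 1))
    print("program TTC variance:", round(statistics.variance(samples), 1))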

Reestimating

Software metric estimates are not goals – they are simply a tool to make project outcomes more predictable. For example, the estimated defect discovery rate for a project depends on both the defect injection rate and the efficiency of the defect removal processes. If, in the course of a project, few defects are discovered, this could be good news or bad news. It’s good news if the development team is injecting fewer defects and bad news if the testing is inadequate. The project manager must determine which is the case. If in fact the testing is good and the defect injection rate is low, then the defect estimates should be redone. The actual defect rates affect the cost of the project and the downstream support costs.
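
A small sketch of this diagnosis, using hypothetical rates: the same shortfall in discovered defects is consistent with either a lower injection rate or a lower removal efficiency.

    # A sketch of the diagnosis described above, with hypothetical rates. The
    # number of discovered defects depends on both the injection rate and the
    # removal (test) efficiency, so a shortfall alone does not identify the cause.
    size_kloc = 50

    def discovered(injection_per_kloc, removal_efficiency):
        return size_kloc * injection_per_kloc * removal_efficiency

    print(discovered(20, 0.85))  # estimate: 850 defects expected to be found

    # Only about 500 defects were actually found. Two very different explanations
    # predict roughly the same count:
    print(discovered(12, 0.85))  # good news: fewer defects injected (~510)
    print(discovered(20, 0.50))  # bad news: inadequate testing (500)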

Estimates should therefore be revised to incorporate actuals so that they reflect reality better.

The main actor is either the project manager, if the project assumptions are still valid, or the estimator if the project assumptions need to be updated. In both cases we need to incorporate the actuals to produce a better estimate to completion.

At selected points in the lifecycle of the project, e.g. the end of an iteration, the project manager or estimator pulls the project parameters and actuals into an estimation tool, produces an updated estimate, and revises the project plan. This new plan becomes the baseline against which future actuals are compared.
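
As one common illustration of folding actuals into a revised estimate, the earned value formula EAC = AC + (BAC − EV) / CPI can be applied; this is a sketch of a well-known technique with hypothetical figures, not the method mandated by this specification.

    # A sketch of revising the cost estimate from actuals using earned value
    # management, one common reestimation technique (illustrative figures).
    bac = 1_000_000   # budget at completion from the current baseline
    ev = 400_000      # earned value: budgeted cost of the work actually performed
    ac = 500_000      # actual cost of that work to date

    cpi = ev / ac                    # cost performance index = 0.8
    eac = ac + (bac - ev) / cpi      # estimate at completion = 1,250,000
    print(cpi, round(eac))

    # The revised estimate (with recomputed error bounds) becomes the new
    # baseline against which future actuals are compared.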

As the project progresses it will accumulate a baseline history. The project manager may also wish to view and compare the baselines to determine whether the estimates are converging and the uncertainty is being reduced. If the project is under control, then the estimates should converge. At the end of the project, all the metrics will have attained their actual values, so the uncertainty is zero.

See Reestimation use case description.

Closing

The safest way to estimate a project is to base it on a similar completed project. For example, a quarterly maintenance release of a product will likely be similar to the previous quarterly maintenance release of the same product if it is executed by the same team. Estimation models should therefore incorporate the actual performance of the development organization on past projects.

The main actor is the estimator or metrics analyst since knowledge of estimation models, software metrics and tools is required.

When a project finishes, the project manager formally closes it. This may involve collecting summary data. At this point it may be of interest to analyse the project by benchmarking it against the other projects in the historical database (a simple comparison is sketched after the questions below):

  • Did the project perform better than the norm?
  • If the project improved or got worse, what was the cause?
  • Have there been any significant trends?
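
A simple benchmarking sketch, assuming the historical database can supply a comparable metric (here, defect density) for past projects; the data values are hypothetical.

    import statistics

    # A sketch of benchmarking a closed project against the historical database
    # using a z-score on defect density. All values are hypothetical.
    historical_density = [18.5, 22.0, 19.7, 24.1, 20.3, 21.6, 17.9]  # defects/KLOC
    this_project = 16.2

    mean = statistics.mean(historical_density)
    stdev = statistics.stdev(historical_density)
    z = (this_project - mean) / stdev
    print(round(z, 2))  # negative: fewer defects per KLOC than the historical norm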

After the project manager closes the project, the estimator pulls the final project data into a metrics repository and the estimation tool and recalibrates the model. The updated model is now ready for use in estimating future projects.

Calibrating

Estimation models contain parameters that can be selected to more closely describe the actual performance of a development organization. The process of selecting the best parameters is known as calibration. This task is performed by assembling a database of past projects, classifying them into groups that share similar characteristics, and computing the set of model parameters that fit each group best.

The main actor is the estimator since skill with an estimation tool is required. The estimator collects the historical data, loads it into an estimation tool, and performs the classification. The estimation tool then applies algorithms to compute the model parameters that best fit the data. Each group of similar projects will have its own set of model parameters.
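
As an illustration, many parametric estimation models relate effort to size by effort = a · size^b, and calibration then reduces to fitting a and b for each project group. The model form and sample data below are assumptions for this sketch, not requirements of this specification.

    import math

    # A sketch of calibrating a model of the form effort = a * size**b for one
    # group of similar past projects, using least squares in log space. The model
    # form and the sample data are illustrative assumptions.
    projects = [(12, 30), (25, 80), (40, 150), (60, 260)]  # (KLOC, person-months)

    xs = [math.log(size) for size, _ in projects]
    ys = [math.log(effort) for _, effort in projects]
    n = len(projects)
    x_mean, y_mean = sum(xs) / n, sum(ys) / n

    b = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
    a = math.exp(y_mean - b * x_mean)
    print(round(a, 2), round(b, 2))  # calibrated parameters for this project group

    # A future 50 KLOC project in the same group would then be estimated as:
    print(round(a * 50 ** b), "person-months")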

Calibration is typically done when a development organization adopts an estimation tool. If the tool is not calibrated with the organization’s own data, its users must rely on model parameters computed by the tool vendor from data collected at other organizations, which may not accurately reflect the organization’s capabilities. Calibration is therefore highly desirable.

After the initial bulk calibration, an organization should incrementally update the model parameters when new projects are completed. See the Closing use case.

Comments

Added the business owner to the Initiating scenario and brought other actors, such as analysts in project offices and metrics analysts, into the Controlling and Closing scenarios.

Added Calibrating use case.