Early estimates of schedule, resourcing and cost are required for most software projects. Despite its criticality to successful project completion, software estimation is most often done poorly, and often arbitrarily. How can early project estimation be improved?
Despite many years of research into the topic, many (probably most) projects exceed their budget and schedule. According to Capers Jones 1, roughly 38% of projects of about 1000 function points (FP) are delayed or cancelled; for projects around 10K FP, the figure rises to roughly 72%. See the full table below for the probability of selected outcomes.
| Size (FP) | Early | On Time | Delayed | Cancelled |
|-----------|-------|---------|---------|-----------|
To understand this table, you will need to know a little about converting function points into your language of choice. LOC-to-FP conversion tables have been published 2 3 4; however, using backfiring to produce function point counts is known to be inaccurate. We will accept this limitation for the purpose of this post. I have published selected values from 4 below.
To keep calculations simple, assume Java/C++ averages 50 LOC/FP. A 50K LOC Java program is therefore about 1000 FP, with about a 60% chance of on-time delivery.
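As a quick sketch, the backfiring conversion is just a ratio; the 50 LOC/FP figure for Java/C++ is the assumption made above, and real ratios vary by language and counting method:

```python
def loc_to_fp(loc, loc_per_fp=50):
    """Backfire a LOC count into an approximate function point count.

    loc_per_fp is language-specific; 50 LOC/FP for Java/C++ is the
    rough figure assumed in this post.
    """
    return loc / loc_per_fp

print(loc_to_fp(50_000))  # 50K LOC of Java -> 1000.0 FP
```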
The most common form of software estimation is still expert-based, usually drawing on similar projects from memory or on intuition, often with little evidence to back up the validity of the estimate. Expert-based opinion is:
- often wrong
- hard to validate
- rarely revisited, so lessons learned are usually not learned
Assuming an initial estimate is needed early in the project lifecycle, and given that the cone of uncertainty means it need not be precise, what data can we use?
Initial software sizing can be performed by category or by analogy. See my previous post on Fast Software Sizing.
Effort and Schedule
Although software estimation is a complex activity based on many factors (the COCOMO model lists more than 20), various rules of thumb can be used early in the project. Jones' rule of thumb for staff size is that the size in FP divided by 150 gives the number of personnel for the application. For our example of 1000 FP, this gives ~6.7 people.
staff = FP size / 150
The schedule is approximated by:
schedule (months) = FP ^ 0.4
Although 0.4 is the average exponent, web-based software uses an exponent of 0.35. For our example, using 0.35, the schedule is ~11.2 months.
Staff effort is then calculated by multiplying the schedule by the staff size; for our example this gives ~75 staff-months. If we assume staff cost $72K/year, then the project will cost ~$450K. Of course this excludes typical overhead costs and other factors. Productivity on this project would then be 1000 FP / 75 staff-months = 13.3 FP/month. For comparison, the U.S. average was ~13.6 for 1000 FP applications 5.
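The whole rule-of-thumb chain can be sketched in a few lines; the $72K/year staff cost is the assumption used above, and the exponent defaults to Jones' 0.4 average:

```python
def estimate(fp, exponent=0.4, annual_cost=72_000):
    """Early estimate from Jones' rules of thumb (a sketch, not a model)."""
    staff = fp / 150                   # staff = FP size / 150
    schedule = fp ** exponent          # schedule (months) = FP ^ exponent
    effort = staff * schedule          # staff-months
    cost = effort * annual_cost / 12   # staff-months x monthly cost
    productivity = fp / effort         # FP per staff-month
    return staff, schedule, effort, cost, productivity

staff, schedule, effort, cost, prod = estimate(1000, exponent=0.35)
print(f"staff ~{staff:.1f}, schedule ~{schedule:.1f} months, "
      f"effort ~{effort:.0f} staff-months, cost ~${cost:,.0f}, "
      f"productivity ~{prod:.1f} FP/month")
```

For 1000 FP with the web exponent of 0.35, this reproduces the figures above: ~6.7 staff, ~11.2 months, ~75 staff-months, ~$450K, ~13.3 FP/month.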
A word of warning: the staffing in this example differs noticeably from table 3-22 in 5, where the average staff for a web-based application of 1000 FP is 3.9, and the schedule is 10 months in table 3-25. Initially I suspected this was primarily due to the lower staff size required for web-based applications. However, table 3-30 in 5 lists productivity for web-based applications as 25.6 FP/month, almost double the 13.3 FP/month figure given above. If we use this higher productivity figure, then the effort for a 1000 FP web-based application is 1000 / 25.6 ≈ 39.1 staff-months. The full table of productivity for various application types and sizes is given below.
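Working backwards from a measured productivity figure is even simpler; the 25.6 FP/month value for web applications is the table 3-30 figure cited above:

```python
def effort_from_productivity(fp, fp_per_month):
    """Effort (staff-months) implied by a productivity figure (FP/month)."""
    return fp / fp_per_month

# 1000 FP at the 25.6 FP/month web-application productivity figure
print(round(effort_from_productivity(1000, 25.6), 1))  # ~39.1 staff-months
```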
All estimation should be done using several techniques; ideally their results converge and reinforce one another, negating the bias of any single technique.
Despite producing fairly imprecise results, early software sizing is possible and useful. It is a nice supplement to formal estimation models and expert opinion early in the lifecycle.
1 Jones, Estimating Software Costs: Bringing Realism to Estimating, 2007
4 Jones, Software Engineering Best Practices, 2010
5 Jones, Applied Software Measurement: Global Analysis of Productivity and Quality, 2008
6 Infosys, Practical Software Estimation
7 Molokken, A Review of Surveys on Software Effort Estimation
8 The Data & Analysis Center for Software (DACS), Fast Function Points Overview
9 McConnell, Software Estimation: Demystifying the Black Art, 2006