September 22, 2011

The Cone of Uncertainty

  —Applying the cone of uncertainty to software estimates.

I have been re-reading Mike Cohn's book "Agile Estimating and Planning" in an effort to deepen my understanding of Agile practices for estimating software. The first chapter, "The Purpose of Planning," introduces the cone of uncertainty to frame the discussion of software estimates and provides data on the accuracy of estimates across the various phases of a project.

I was intrigued because the cone and the approach used by the PMI differ. The cone treats estimate accuracy as symmetric (on a multiplicative scale), with initial estimates ranging from 0.25x to 4x of the actual cost of delivering a project. The PMI treats initial estimates as asymmetric, placing their accuracy between -25% and +75%.
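To make the difference concrete, here is a small sketch (my own illustration, not from Cohn or the PMI) that computes the range each model implies for a hypothetical initial estimate of 100 days:

```python
# Illustrative comparison of the two models' initial-estimate ranges.
# The 100-day estimate is hypothetical; the multipliers come from the
# cone of uncertainty (0.25x-4x) and the PMI (-25% to +75%).

def cone_range(estimate, factor=4.0):
    """Cone of uncertainty: the actual may fall anywhere from
    estimate / factor to estimate * factor (symmetric on a log scale)."""
    return (estimate / factor, estimate * factor)

def pmi_range(estimate, low=-0.25, high=0.75):
    """PMI-style asymmetric range: -25% to +75% around the estimate."""
    return (estimate * (1 + low), estimate * (1 + high))

estimate = 100  # hypothetical initial estimate, in days

print(cone_range(estimate))  # (25.0, 400.0)
print(pmi_range(estimate))   # (75.0, 175.0)
```

The cone's initial range is far wider: a 16x spread from low to high, versus roughly 2.3x for the PMI's rough order of magnitude.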

Some research on the cone led to insights that weren't clear from reading chapter one. The most important is that the cone of uncertainty represents the best you can do when estimating software. It says nothing about how badly you can do.

A good resource is Brad Appleton's blog post on The Cone of Uncertainty. It is an old post from 2006, but of the links it provides, the following are still live and useful.

Coding Horror says:
An important--and difficult--concept is that The Cone of Uncertainty represents the best-case accuracy that is possible to have in software estimates at different points in a project. It is easily possible to do worse. It isn't possible to be more accurate; it's only possible to be more lucky.
Best-case estimates are created by expert estimators; worst-case estimates are not considered. Furthermore, you must be diligent in reassessing and improving your estimates, otherwise your cone of uncertainty becomes a cloud. The cloud of uncertainty is the familiar situation of being 99% of the way through a project and staying there for a long time.

Construx says:
If you’re working on a project that does a full development cycle each iteration—that is, requirements definition through release—then you’ll go through a miniature Cone on each iteration. Before you do the requirements work for the iteration, you’ll be at the “Approved Product Definition” part of the Cone, subject to 4x variability from high to low estimates. With short iterations (less than a month), you can move from “Approved Product Definition” to “Requirements Complete” and “User Interface Design Complete” in a few days, reducing your variability from 4x to 1.6x. If your schedule is fixed, the 1.6x variability will apply to the specific features you can deliver in the time available rather than to the effort or schedule. 
In other words, you have to go through a full development cycle in each iteration. Doing so can reduce estimate variability from 4x to 1.6x. For a fixed schedule, that 1.6x variability applies to the features that can be delivered in the available time rather than to effort or schedule.
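The 4x and 1.6x figures in the Construx quote are high-to-low ratios at particular cone milestones. A short sketch using the milestone multipliers from Steve McConnell's version of the cone (the numbers are from that model, not mine) shows where they come from:

```python
# Milestone multipliers from McConnell's Cone of Uncertainty:
# (name, low multiplier, high multiplier) relative to actual effort.
MILESTONES = [
    ("Initial concept",             0.25, 4.00),
    ("Approved product definition", 0.50, 2.00),
    ("Requirements complete",       0.67, 1.50),
    ("UI design complete",          0.80, 1.25),
    ("Detailed design complete",    0.90, 1.10),
]

for name, low, high in MILESTONES:
    # "Variability" in the Construx quote is the high-to-low ratio.
    print(f"{name:30s} range {low}x-{high}x  variability {high / low:.1f}x")
```

Running this shows a 4.0x ratio at "Approved product definition" and a 1.6x ratio at "UI design complete," matching the reduction the quote describes for a short iteration.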

Most teams reach a compromise: they establish most of the requirements up front and use iterations to tackle the remaining phases of the project. This approach tends to reduce the variability of estimates to ±25%.

I think it is important to emphasize that the cone only shows the best accuracy you can achieve, and only with expert estimators who continually reassess their estimates as the project progresses. Worst-case estimates can always be much worse than what the cone shows.
