Thursday 16 October 2014

All Features Implemented does not equal Project Success

Dear Junior

Setting goals for agile projects is trickier than setting goals for waterfall projects. As I mentioned in an earlier letter, for waterfall projects it is possible to set the goal in the form of a feature list to be delivered. This approach has several drawbacks - it does not guide the thousands of micro-decisions that are made along the way, it gives no sense of purpose to support drive or inner motivation, and it gives no guidance for evaluating whether the money, time, and effort were well spent.

However, despite all these problems, and although the approach is not advisable, it is still possible. The project management mindset and tools of waterfall projects can be applied to report project status and to follow up vis-à-vis a feature list. It is also possible to evaluate success by checking whether everything on the feature list has been implemented.

Well, I still think it would be better to evaluate whether the project created some value.

Now, ridiculous as this seems, it is still the industry standard.

The 1995 CHAOS report from the Standish Group might be one of the most cited reports in system development. According to its statistics, only 16% of software projects succeed. In the 2013 report this figure has been updated to 39%. And they collect data from tens of thousands of projects, so the figures should be pretty reliable.

Now, this seems scarily low, but only until you check out their definition of success.
"Resolution Type 1, or project success: The project is completed on-time and on-budget, with all features and functions as initially specified."
OK. Not a word about actually getting value for the effort. Remember the on-line bookstore that wanted others-have-bought recommendations to increase the number of books customers buy. Now, imagine they implemented the others-have-bought recommendations, but the customers did not buy more books anyway. Is this project a success?

Well, you have spent lots of money building something, but made no money from it. To me that sounds like money, time, and effort down the drain - not like a success.

To the Standish Group, it is considered a success.

On the flip side, imagine the project realising that a simplified "search similar" feature would increase the number of books each customer buys. Imagine further that this feature would be much easier to implement. So, the project decides to implement that feature instead. And the number of books sold per customer increases by 0.4 on average.

Spending less money, time, and effort on something other than what was originally envisioned, but still getting the benefit - that sounds like success to me.

To the Standish Group, it is considered a failure.

Thus, the figures 16% and 39% actually say nothing at all about the state of software projects.

Now, the Standish Group is probably filled with smart people. Why did they not evaluate software projects according to some more meaningful metric instead? Surely "generated the business value specified upon funding" would be more interesting.

My guess is that too few projects actually stated an envisioned business effect, so analysing those projects would have given so little data that it would not have been possible to draw any significant conclusions.

So, instead of measuring something that would actually matter ("fulfilled the envisioned business effect"), they just measured something that could be measured.

Now, from the second example, where the team implemented a different feature than the one originally envisioned, it is also clear that this approach is an absolutely worthless way of evaluating agile projects.

Yours

   Dan

PS The way to measure agile projects is to set a target for a business effect.

PPS The 1995 CHAOS report has been republished and is openly available for academic purposes. It can be found at http://www.projectsmart.co.uk/docs/chaos-report.pdf. The 2013 version can be found at http://www.versionone.com/assets/img/files/CHAOSManifesto2013.pdf