27 Feb

Aram Petrosyan posed an excellent set of questions in response to my popular article "YOU DON'T NEED STORY POINTS", and I was prompted to turn my answer into a new blog post to give it greater visibility for my readers.

My original post describes an estimation/forecasting model which agile teams can use as an alternative to story points and velocity. It is based on the concepts of:

  • T-shirt sizes for all backlog items above the "business commitment line" (the line above which items are expected to be delivered, rather than remaining optional)

  • Measuring throughput (aka story count rather than story points), and

  • Gauging the variance of the throughput to ensure the forecasts incorporate uncertainty and confidence levels
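For readers who like to see the arithmetic, here is a minimal sketch of the throughput-with-variance idea in Python. The sprint history, the number of remaining items and the choice of one standard deviation either side of the mean are illustrative assumptions on my part rather than part of the model itself:

    from statistics import mean, stdev

    # Historical throughput: completed backlog items per sprint (illustrative data).
    throughput_history = [6, 9, 4, 7, 8, 5, 7, 6]

    # Items above the "business commitment line" still waiting to be delivered.
    remaining_items = 42

    avg = mean(throughput_history)
    sd = stdev(throughput_history)

    # Optimistic and pessimistic rates, one standard deviation either side of the mean.
    optimistic = avg + sd
    pessimistic = max(avg - sd, 0.1)  # guard against a non-positive rate

    print(f"Average throughput: {avg:.1f} items per sprint (sd {sd:.1f})")
    print(f"Optimistic finish:  ~{remaining_items / optimistic:.1f} sprints")
    print(f"Pessimistic finish: ~{remaining_items / pessimistic:.1f} sprints")

Projecting those two rates forward on a chart is what produces the optimistic and pessimistic lines that Aram's comment below refers to.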

Henrik Kniberg described essentially the same model in his seminal "Agile Product Ownership in a Nutshell" video. I thank Henrik for this brilliant and widely admired introduction to agile ways of working, along with his influence on my thinking in this area and on my work in general.

Here is Aram's comment on my original post:

Hi Neil,
Regarding pessimistic and optimistic projections… This representation is better from the perspective that there is an implicit acknowledgment of a probabilistic outcome; however, there is still no mention of the percentage probability of the future outcome falling between the two lines. You might assume that the high-low lines were drawn using the “mean plus and minus one standard deviation” (which is a dangerous assumption) and, therefore, that the percentage range between the two lines is 68% (even more dangerous). Even if our historical Throughput was normally distributed, and the standard deviation range really was 68%, don’t you think we can do better than a forecast with a 68% probability? Further, what if we wanted to know the probability of each discrete possible outcome? How would this chart visualize that? Therefore, I suggest using MCS (Monte Carlo) with its Results Histogram.

Thanks Aram! I’m actually really glad someone has raised this statistical topic in a public forum, and I feel the need to address it likewise.

To be brutally honest with myself and my readers, I am certainly no expert on statistics or statistical forecasting (check out Troy Magennis's work if you're looking for a real guru in that area). However, with the knowledge I do have from my Uni days, and the reading (and practical work) I have done on the topic during my career, I'm pretty sure that, from a pure estimation/forecasting point of view, Aram is spot on.

There is statistical frailty — or at least risks and assumptions to be aware of — in a throughput with variance model such as the one I am describing. Aram is further correct that — given a good enough dataset — using Monte Carlo forecasting would provide a more explicit view of probability and uncertainty, and be a more statistically sound way to estimate a project’s outcome.
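As a rough illustration of what Aram is suggesting, the sketch below resamples the same illustrative sprint history at random, many thousands of times, to build a results histogram and read off confidence levels. Tools such as Troy Magennis's do this far more rigorously; the data, trial count and percentile here are my own assumptions:

    import random
    from collections import Counter

    throughput_history = [6, 9, 4, 7, 8, 5, 7, 6]  # illustrative story counts per sprint
    remaining_items = 42
    trials = 10_000

    results = []
    for _ in range(trials):
        done, sprints = 0, 0
        while done < remaining_items:
            done += random.choice(throughput_history)  # resample a past sprint at random
            sprints += 1
        results.append(sprints)

    # Results histogram: how often each possible outcome (sprints to finish) occurred.
    for sprints, count in sorted(Counter(results).items()):
        print(f"{sprints} sprints: {count / trials:.1%}")

    # A confidence level read straight from the sorted outcomes.
    results.sort()
    print(f"85% confident of finishing within {results[int(trials * 0.85)]} sprints")

The histogram makes the probability of each discrete outcome explicit, which is exactly the visibility the high-low projection lines in my original post do not provide.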

However, much like Henrik warns against using "statistical hoodoo" in his video, I would respond to the above points by saying that the intent of this model is actually NOT to be a pure (or perfect) forecasting model, but rather a tool which:

  • Introduces the idea of navigating and managing uncertainty to traditionally minded teams and organisations, where deterministic thinking and “being right” are the order of the day

  • Is simple enough to use that people will actually use it on a frequent basis

  • Promotes the tracking of actual throughput data and cycle times rather than (or in addition to):
    • Story point data, which is more abstract and deterministic in nature, or
    • Nothing at all

  • Can support a product owner and their team in making early, deliberate scope-management (or other corrective) decisions to steer toward success when things look like they are going off-course, or are at risk of doing so

  • Is used collaboratively (between "business people and developers", as the Agile Manifesto puts it) rather than as a way to hand someone else a uni-directional estimate, with little or no knowledge of what they are going to do with that estimate, and little chance to update the estimate as we move through the project

Usually teams choose (or are forced to use) a far frailer model, such as a burn-up chart with a pleasing-looking trend line showing one possible outcome, or nothing at all other than “gut feel” and guesswork (or even false information, in situations where there is not enough mutual trust for people to be transparent and honest with each other, which is a sadly common scenario).

I like the way my model shows very clearly the impact of moving the vertical time line, or the horizontal scope line, on our ability (and confidence) to deliver x amount of scope by y date. While the model offers no percentage probability, and the resulting numbers may therefore not be statistically “correct”, the principles are certainly sound.

The model provokes conversations about predictability, but also (more importantly, in my view) about the trade-off between agility and trying to predict increasingly distant deliverables.

If you want our team to be more agile, but you also want us to tell you when we will deliver a feature which is number 52 on our backlog (and to use that information to make a potentially irreversible decision), then I am here to tell you that those two requests are in conflict.

All models are imperfect; the point of them is to be useful, to provoke conversations and to help us explore better ways. This model is better than what most teams and organisations are using right now (well, the ones I work with anyway!), at least in the context of complex software/product development, in environments where people want estimates (or want to use them) to support, or help them make, decisions despite rapidly changing circumstances.

People often forget that, in complex environments, even if "requirements" and their desired delivery sequence stay relatively static, that stability has far less influence on our ability to be predictable than the rapidly changing circumstances around us do, both internally (our people, teams, priorities, workloads, expectations, etc.) and externally (our customers and the wider marketplace). As an aside, if we do believe our requirements and delivery sequence are static, why are we even trying to use "agile" estimation techniques in the first place?

Regardless of the "agile" or otherwise nature of our project, we need to continually check in and update our view of "progress" in order to navigate successfully through the turbulence around us. Using models and heuristics which are imperfect by their very nature may be preferable in such circumstances, because they prompt us to keep checking how useful they are in helping us navigate, rather than tempting us to build unrealistic expectations of a perceived "better" model which then unexpectedly pulls the rug from under our feet (at which point we will inevitably blame the model and throw it out, despite a model not being able to do anything wrong!).

My throughput model is also (in my view) simpler to understand than Monte Carlo forecasting. People (at least initially) reject ideas which are new to them if they see them as over-complicated, so in my quest as an independent agile consultant to help people solve their problems, I have to start somewhere. That said, I should point out that my model is not supposed to be a replacement for Monte Carlo or any other model. I would never advise the use of just ONE model. When I talk to companies about their struggles with estimation and forecasting, I typically introduce Monte Carlo forecasting to them as well.

I encourage you to try (and use) more than one delivery progress and forecasting model. Compare results from the different models. Start having frequent conversations about risk, uncertainty, predictability and agility — and their relationship with each other — and start uncovering better ways of balancing estimating and forecasting with the desire for agility in your context.
