YOU DON'T NEED STORY POINTS


21 May

This post will show you how a cross-functional development team (EDIT — including analysts, designers, programmers, testers — anyone who participates in the building of the product) can estimate the delivery of an agile project without the need for story points.

Many folks are looking for an alternative to story points because they are not getting the promised predictability benefits, are confused about how to apply them, or their story point estimates are being misused/abused by management, leading to dysfunctional behaviour such as gaming or targets set far beyond the team’s capacity.

Note 1 — By “agile project” I mean (in this instance) a project which is being planned and delivered in an incremental way, i.e. there are a number of things that need to be done, and when those things are done then the release or project itself is done. You have a relatively stable team, with dedicated capacity to the project, working in short cycles of ~4 weeks or less.

Note 2 — This is not a #NoEstimates post. This post describes an alternative to story points for estimating projects, not an abandonment of estimation.

Here’s how you do it:

Give every work item above the “customer commitment line” a T-shirt size

This should be done at the beginning of the project and then every time throughout the project when a new item appears above the line, or there is an obvious update required for an existing item.

S/M/L should be sufficient, but you can have XS and XL as well if you like. I find it useful to attach a timescale to the T-shirt sizes, to give them meaning and make it less confusing. I typically use:

S = Days (or less)

M = Weeks

L = Months

Note 1 — These sizes are just for rough scale, not precision. It doesn’t matter that some items will not, in hindsight, fall within the bucket you allocated for them, nor that the buckets are quite wide in range (we’ll deal with that later). The actual delivery time will be obscured by the item’s position in the backlog — and thus the delay between it being estimated and actually delivered to “done” — along with the fact that the estimates the business needs are typically for the delivery of batches of items, not one particular item.

Note 2 — EDIT — I have been asked by some folks “who does the estimates?”. This whole article describes how a development team estimates a project, so there’s your answer :) The development team estimates each work item as if they had only that item to implement (as a team), and had 100% focus on it.

This situation unfolding for every single item is unrealistic, but it doesn’t matter. Remember we’re looking for relative sizes, not absolute sizes. Assuming a normal distribution of disruptions and other overhead across all development work, let’s simplify the estimation process by not considering these things when assessing a particular work item. Systemic overhead can be made transparent by tracking other data such as unplanned work and cycle times. Best not to try and estimate that, use the data instead.

Note 3 — The “customer commitment line” is my term for the line in the backlog above which promises/expectations have been set with a customer (or business stakeholder) for delivering those things.

Explicitly slice/split items Just-In-Time (JIT)

This can be done in a refinement/grooming activity or a sprint planning meeting.

Explicitly slice any items you are looking to do next (i.e. the items at the top of the backlog) into smaller items, and only bring size S items into development.

Note — There are many great learning resources on how to slice stories effectively. I run a regular training workshop in Melbourne on this activity. Here’s an article I wrote on the topic.

If you are struggling to vertically slice any items, you can break them down into workflow steps or tasks (aka “decomposition”) instead.

Count and record the number of S items you deliver each week/sprint

Record this in a simple spreadsheet — item count per week or sprint (aka throughput).

Also record the “split rate”, i.e. if you sliced any M or L items in planning, how many S items did those M and L items result in?
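A plain spreadsheet is all you need for this, but the bookkeeping can also be sketched in a few lines of code. This is an illustrative sketch, not part of the article’s method; the function and variable names are mine:

```python
# Track per-sprint throughput (count of S items delivered) and the
# split rates observed when M or L items are sliced into S items.

throughput = []                    # one entry per week/sprint
split_rates = {"M": [], "L": []}   # S items produced per sliced M/L item

def record_sprint(s_items_done):
    """Record how many S items were delivered this sprint."""
    throughput.append(s_items_done)

def record_split(size, s_items_produced):
    """Record how many S items a sliced M or L item produced."""
    split_rates[size].append(s_items_produced)

def average(xs):
    return sum(xs) / len(xs) if xs else 0.0

# Example: two sprints of delivery, one M and one L item sliced.
record_sprint(6)
record_sprint(8)
record_split("M", 5)
record_split("L", 12)

print(average(throughput))        # average delivery rate: 7.0
print(average(split_rates["M"]))  # average M split rate: 5.0
```

The only inputs are counts, which is the point: no per-item point values to negotiate, just things delivered and things produced by slicing.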

That’s it!

You now have a fully estimated backlog of work (for everything that needs an estimate, which is all the items above the customer commitment line), you know your team’s average delivery rate (total throughput divided by number of weeks or sprints), and the volatility/variance in that delivery rate.

Your throughput is measured in S items. To make forecasts against the backlog, which likely contains M and L items (which, as I’ve described, will need to be sliced before they are delivered), you can use the average of the split rates you have been recording:

(L * L average split rate) + (M * M average split rate) + S

where L is the number of L items on the backlog, M the number of M items and S the number of S items.

For example, if there are 20 L items (which get sliced into 12 S items on average), 35 M items (which get sliced into 5 S items on average), and 11 S items, then your total backlog size is:

(20 * 12) + (35 * 5) + 11

= 240 + 175 + 11

= 426 items
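The formula is trivial to express in code. A minimal sketch, using the example numbers above (the function name is mine, not from the article):

```python
def backlog_size(l_count, l_split, m_count, m_split, s_count):
    """Total backlog expressed in S items, using average split rates."""
    return l_count * l_split + m_count * m_split + s_count

# 20 L items splitting into 12 S each, 35 M items splitting into 5 S
# each, plus 11 existing S items.
total = backlog_size(20, 12, 35, 5, 11)
print(total)  # 426
```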

Most importantly, you have embedded the crucial practice of story slicing as an explicit activity in your refinement/planning process. This activity helps you work in a more agile way: you will be able to deliver value sooner, define and test deliverables more easily and unambiguously, respond to change more readily and, as a byproduct, become more predictable.

Note 1 — If you need to provide an estimate at the beginning of the project and you do not know your team’s “split rate” (e.g. because you’re a new team), you will need to pick a few M and L items up front and slice them. You now have an estimated average split rate.

EDIT — In such a situation, aside from split rate history you will also have no delivery history, and so will have to estimate that up front too. This highlights the importance of updating up-front forecasts with empirical data once the team has actually spent some time delivering items to done, and of attaching this requisite as a disclaimer to the initial estimate, particularly if it will guide a significant decision or result in concrete contractual promises to a customer. The same situation arises with story points, so tactics for dealing with it are outside the scope of this article.

Note 2 — The volatility in your delivery rate can be measured using one or two standard deviations of the delivery counts you’ve been recording. Adding this number to the average gives you an optimistic delivery rate, subtracting it gives you a pessimistic one, and everything between those boundaries is a probabilistic forecast of your delivery rate.
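As a sketch of that calculation, using Python’s standard library and an illustrative throughput history (the numbers are made up for the example):

```python
import statistics

# Illustrative per-sprint throughput history (S items delivered).
throughput = [6, 8, 5, 9, 7, 6]

mean = statistics.mean(throughput)
stdev = statistics.stdev(throughput)  # sample standard deviation

optimistic = mean + stdev    # upper bound on delivery rate
pessimistic = mean - stdev   # lower bound on delivery rate

print(f"average {mean:.1f}, range {pessimistic:.1f} to {optimistic:.1f}")
```

Dividing the remaining backlog size by the pessimistic and optimistic rates gives you a range of delivery dates rather than a single, falsely precise one.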

This allows forecasts to be made on how many items might be complete in a particular timeframe, or on what date a particular number of items might be complete, or a combination of the two (aka “are we on track”). You can also plug your numbers into a Monte Carlo simulation tool, such as this one.
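If you prefer to roll your own rather than use an existing tool, the core of such a simulation fits in a few lines. This is a minimal sketch in the spirit of Monte Carlo forecasting, with illustrative numbers; it is not the linked tool’s implementation:

```python
import random

random.seed(42)  # fixed seed so this sketch is reproducible

throughput_history = [6, 8, 5, 9, 7, 6]  # S items per sprint (illustrative)
backlog = 60                             # remaining S items to deliver

def sprints_to_finish(history, remaining):
    """One simulated run: draw a past sprint's throughput at random
    until the backlog is exhausted, counting sprints taken."""
    sprints = 0
    while remaining > 0:
        remaining -= random.choice(history)
        sprints += 1
    return sprints

runs = sorted(sprints_to_finish(throughput_history, backlog)
              for _ in range(10_000))
p50 = runs[len(runs) // 2]        # median outcome
p85 = runs[int(len(runs) * 0.85)] # 85th percentile outcome

print(f"50% of runs finish within {p50} sprints, 85% within {p85}")
```

Reading off percentiles like this is what turns the raw throughput data into a probabilistic answer to “are we on track”.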

Note 3 — If your backlog really is the size in the example above, with all of those items above the customer commitment line, it would be wise to think about how you can reduce that number. The more items you want to forecast, and the further out in time, the greater the uncertainty and risk, so any estimation methodology is going to fall short.


Note 4 — Hat tip to Troy Magennis for the term “split rate”, and his overall contribution to the world of more sane estimation and forecasting.

Note 5 — Here is a spreadsheet created by Daamon Parker for capturing throughput data and measuring flow metrics such as cycle time. It’s free to use. Please contact Daamon if you have any questions about it.


Thanks for reading! If you are looking for help with your software or product delivery, I provide agile coaching, public training (both theory and practical) up to executive management level, and more. As well as public events, I can also run training internally in your organisation for a massively reduced cost, so please ✍ get in touch.