Measuring performance (of e.g. products, services, operations and employee activity) has become increasingly popular in the arena of software and product development, and in knowledge work more generally.
At the end of the day, you measure particular things because they are important to you in some way. Your weight and average 10km running time might be important to you if you are trying to get or stay fit. The windscreen washer fluid level in your car is important if you want to be able to keep your windscreen clean.
If you are measuring something which doesn't actually help you achieve your goal, you might abandon that metric (either explicitly or through natural attrition) in favour of another, in the hope that it turns out to be more effective. You don't measure things for the sake of measuring them: there is an underlying reason or goal behind the desire to measure, and meeting that goal is what matters most. Clearly the metrics we use are a means to an end, not an end in themselves.
With that in mind, here are 5 principles to consider when using metrics, if you want them to be effective at measuring and improving the things you care about:
1. Understand and focus on the intent of the metric, over the metric itself
A metric increasing or decreasing in line with an improvement initiative (e.g. an organisational OKR) does not necessarily signal an improvement (at least an effective, systemic one).
For example, Customer Referral Rate (CRR) for a product might increase due to customers being offered a large monetary reward to refer others, rather than it being a result of them loving the product. So what is the intent of measuring CRR? A valid intent would be to understand whether customers love the product (in this case so much that they want their friends and families to use it, even without any reward for doing so). Don't make the metric itself the focus.
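For concreteness, CRR is commonly computed as the share of customers who made at least one referral in a given period. A minimal sketch (the function name and figures are illustrative, not from the post):

```python
def customer_referral_rate(referring_customers: int, total_customers: int) -> float:
    """Fraction of customers who referred at least one other person in the period."""
    if total_customers == 0:
        return 0.0
    return referring_customers / total_customers

# e.g. 120 of 1,500 customers referred someone this quarter
print(customer_referral_rate(120, 1500))  # 0.08
```

Note that the number itself says nothing about *why* customers referred, which is exactly the intent problem described above.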
2. Don't rely on one single metric
This principle follows on nicely from the last. There are likely other ways you can learn if your customers love your product, and maybe CRR isn't lining up nicely with the intent (perhaps because marketing have put in place a referral incentive scheme, or you don't currently have an easy way of measuring CRR satisfactorily).
Rather than keeping all your metric eggs in one basket, it might be useful to have a few ways which combine to validate whether the underlying assumption behind the intent is true or not. So explore what other metrics you might use for the same intent.
3. Always have a counter-metric to every metric
We might validly improve CRR (say, by support staff spending more time and effort on improving the customer experience), but in doing so our support staff may become unhappy because they are working longer hours, or because they are no longer meeting previously defined SLAs (and thus performance targets). If we care about our support staff's wellbeing and success, we do not want them to suffer personally or professionally.
So when you want to improve a metric, also ask: what metrics do we want to ensure are not negatively affected in the process?
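One lightweight way to encode this idea is to evaluate an improvement metric together with its counter-metrics, and only call a change a win if none of them has fallen below an agreed floor. A sketch with invented names and thresholds:

```python
def improvement_holds(primary_delta: float,
                      counter_metrics: dict[str, float],
                      floors: dict[str, float]) -> bool:
    """True if the primary metric improved AND no counter-metric breached its floor."""
    if primary_delta <= 0:
        return False
    return all(counter_metrics[name] >= floor for name, floor in floors.items())

# CRR went up by 2 points, but support-staff satisfaction (1-5 scale)
# dropped below our agreed floor of 3.5 - so this is not a real win.
print(improvement_holds(0.02,
                        {"support_satisfaction": 3.1},
                        {"support_satisfaction": 3.5}))  # False
```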
4. Understand the difference between types of metrics
Not all metrics are created equal. There are input (process) and output metrics, outcome metrics, leading and lagging metrics, and other distinctions. Learn the differences so you can apply fit-for-purpose metrics at the appropriate levels.
5. Don't make a metric a target
Goodhart's Law tells us that when a measure becomes a target, it ceases to be a good measure. The reason is largely related to principle 1: we are making the metric itself the focus rather than the intent. This can lead to behaviour which treats moving the metric in line with the target as the most important thing, which is ineffective at best and actively destructive at worst.
Using the CRR example again, if increasing CRR to a particular level is the target, and my bonus or even promotion relies upon it, I might find ways to improve CRR which are at odds with the intent (such as introducing a significant monetary incentive for customers to refer). Not only am I now ignoring other critically important indicators of customer behaviour, I am potentially sabotaging the business. This might seem like an extreme example, but there are many instances of companies being significantly harmed or even destroyed as a direct result of people's behaviour in pursuit of targets.
That's all for now. I hope these 5 principles help you make more effective use of metrics in your team, organisation or personal life.
EDIT — A gentleman called James Steele added the following great comment to this post:
Thanks for this, Neil. I'll try and add some value to what you've already said ⬇️
It's been helpful for me and my clients to categorize metrics into 4 categories to help determine if 1) a product/service is #fitforpurpose and 2) a change experiment has improved the situation or not.
📈 1) Fitness Criteria/KPIs: For a particular market segment, what purpose(s) do customers have for selecting my product or service? For each fitness criterion, what is the min/max threshold within which I would be delighting them?
🎯 2) Improvement Drivers: These have a target. Customers don't care about them. They are used to drive improvements for fitness criteria that are outside acceptable thresholds. Once we hit the target it will often become a Health Indicator for us to monitor.
🏥 3) Health Indicators: Things we monitor to gauge the health of our product or service. They have a healthy range. If we fall outside of it we should consider looking into it - perhaps we need an Improvement Driver to get it back into a healthy range.
🪞 4) Vanity Metrics: Things that serve no real purpose other than to make us feel good about ourselves, or that exist to satisfy certain cultural needs. We often mistake Vanity Metrics for Fitness Criteria.
Hope this helps
This led me to the following addendum:
Thanks James, another nice way of categorising metrics. And it leads me to make an important caveat to what I said about not making a metric a target (given that this principle might seem to some to contradict your point 2).
If we use, say, an OKR to try and improve something operationally (let's say Change Failure Rate), the KR (key result) becomes a "target" (e.g. decrease CFR to <5%). There is nothing wrong with quantifying where you would like to get to with a particular metric. The problem is when the target becomes the purpose, rather than the intent (e.g. improving the speed and reliability of our deployments) being the purpose. This usually happens because someone will get rewarded for hitting that target. This can lead to gaming and destructive behaviour, which of course is a major problem.
Instead, use KRs (and metrics in general) to guide performance and improvement (and its desired extent) rather than be the focal point. Metrics are information to guide our decision making, not things to put on a pedestal.