Based on a recent client conversation, I wanted to summarize my take on a common blocker to OKRs: how to measure success in practical, hands-on, down-to-earth ways.
The case is completely fictionalized: made-up OKRs, industry, and all.
Say you’ve just embarked on your OKR journey with your entire company. Say you’ve totally bought into the concept of outcomes vs output, and you’ve even understood that “Finish delivery of client project” is NOT a great outcome, but simply a measurable milestone.
You’ve read how to avoid your OKR program failing.
You’re excited about embedding OKRs into your meetings and putting alignment and clarity of outcomes center stage in your conversations. Your Business Reviews will now be focused on your OKRs.
Your meetings with the VP of Product and other leaders will be focused on the OKRs they’re supporting. Because, if we’re honest, if you’re not executing against it, even the best strategy is pointless.
In short: You’re excited about diving headfirst into OKRs.
Now, you proceed with the OKR-setting (the defining of your Objectives and Key Results in a collaborative workshop).
Setting the objectives comes easily. You’re discussing your North Star with your leadership team and feel an exhilarating energy fill the room as you agree on an inspirational statement that articulates the WHY and the WHAT your company will be focusing on this year.
As a next step, you define Key Results. Good Key Results. Clearly defined, outcome-focused measurements that articulate success: not what needs to be done, but how you’ll know you’ve been successful.
The first Key Result is easy: “Double expansion revenue from $20m to $40m.” Your strategy is focused on customer happiness, after all, and happy customers want more of what works for them. You add a second Key Result (focused on customer retention), and a third one to measure actual customer happiness (your CS team has the perfect metric for that).
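As an aside, progress on a metric-based Key Result like this is typically calculated linearly between the starting value and the target. A minimal sketch of that arithmetic (the function name and the sample “current” value are illustrative, not from the case):

```python
def kr_progress(start: float, target: float, current: float) -> float:
    """Linear progress of a metric-based Key Result, clamped to [0, 1]."""
    progress = (current - start) / (target - start)
    return max(0.0, min(1.0, progress))

# Example: expansion revenue KR from $20m to $40m, currently at $25m.
print(f"{kr_progress(20, 40, 25):.0%}")  # prints "25%"
```

Clamping keeps an overshoot or a dip below the baseline from producing confusing progress numbers like 120% or -10%.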
The second focus of your strategy is steering the boat through rocky waters while keeping everyone in the boat safe. Meaning, you want to focus internally on employee well-being and happiness. The first Key Result is naturally (and literally) focused on ‘Employee happiness’, measured via the bi-annual company-wide survey. Here you stop.
You stop and think, because you realize that you won’t be able to effectively measure the success of your employee-focused initiatives with a metric that’s only available twice a year. That means you could only course-correct twice a year. How do you fix that?
The challenge is that the metric you’re tracking as a KR is valid, yet its measurement frequency (what some people call your ‘goals rate’) is too low. This limits your ability to stay agile through regular pulse checks, track progress, and react early.
A second challenge: when you’re measuring these types of metrics, employees are “looking at a 0 measurement” for weeks or months on end, with no new data coming in and no ability to act, because the metric simply isn’t available.
This is a common challenge:
Key metrics are often only infrequently measurable, leaving us in the dark, lacking insights, and lacking the ability to act.
We call those “lagging indicator key results”.
Lagging indicators: metrics that are only available after the fact. While they’re still valid and should be kept, I recommend adding “leading indicators” as well.
Leading indicators are related metrics that we can often access more frequently. By measuring them, we get feedback on our outcome achievement early, putting us in a position to spot insights and react accordingly. Another way to complement lagging indicators is “proxy metrics”: metrics not of the fact itself, but of a related fact that we can measure. Proxy metrics can be a great way to measure impact (but come with their own set of challenges, of course). Back to our leading indicators.
Leading indicators can be impact measures of early activities. (Read: not measures of the activities themselves, but measures of the impact of those activities.) One common example of a great leading indicator in B2B sales is pipeline and lead flow. B2B sales cycles can take forever, so measuring early signals through lead flow or pipeline metrics can generate amazing insights.
Now, what about employee happiness? For employee happiness, I would try to find leading indicators such as impact measures of the initiatives we’re driving. For example, if we’re looking to increase employee happiness, we may be driving an initiative around employee benefits (health and wellness). In that case, we could track how many people opt in to participate in our program, or a similar usage metric.
Again, just to be super clear: you’re not measuring the activity of offering health and wellness benefits. That’s an activity (an activity-based key result) and doesn’t tell you how well you’re doing, just like milestone-tracked, output-based key results don’t. Instead, you’re measuring how many of your employees actually use your program, based on the assumption that if people are using and claiming your benefits, those benefits are effective and may help keep them happy. And if, after a while, it becomes obvious that your leading indicator doesn’t appropriately approximate the success of your lagging indicator, you must question both your approach and your choice of leading indicator.
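To make the leading-indicator idea concrete, here’s a rough sketch of what a weekly pulse check on the wellness program could look like. Everything here is a made-up assumption for illustration: the opt-in counts, the headcount, and the 30% target are not from the case.

```python
# Hypothetical weekly opt-in counts for the wellness program (leading indicator).
weekly_opt_ins = [12, 18, 25, 31]
headcount = 400  # assumed company size

def opt_in_rate(opt_ins: list[int], headcount: int) -> float:
    """Cumulative share of employees who have opted in so far."""
    return sum(opt_ins) / headcount

rate = opt_in_rate(weekly_opt_ins, headcount)

# A weekly pulse check lets you react long before the bi-annual survey lands.
if rate < 0.30:  # assumed target threshold
    print(f"Opt-in rate {rate:.1%}: below target, adjust the program now")
else:
    print(f"Opt-in rate {rate:.1%}: on track")
```

The point isn’t the code itself but the cadence: the lagging survey metric stays as the KR, while a frequently available usage signal like this tells you each week whether the initiative behind it is landing.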
What do you think? Head over to www.wavenine.com to learn more!