Why Your OKRs Are Lying to You: The Measurement Problem in Strategic Execution
OKRs were designed to create alignment between strategy and execution. In practice, they often do the opposite, creating an illusion of alignment that masks deeper disconnection. We examine how this happens and what to do about it.
The alignment illusion
OKRs (Objectives and Key Results) have become the default measurement system for strategic execution. The promise is compelling: cascading objectives from leadership to the front line, with measurable key results that demonstrate progress. In theory, you can trace any team’s work back to a strategic objective. In theory, alignment is visible and measurable.
In practice, what we observe is something different. OKRs create the appearance of alignment without the substance. Teams set objectives that use the same language as the strategy. Key results are measurable and achievable. The quarterly review shows green across the board. And yet the strategy isn’t progressing.
We examined OKR systems in six organisations over two years. The pattern was consistent enough to describe as a structural problem, not an implementation failure.
Three ways OKRs deceive
1. The vocabulary trap
Teams learn to write OKRs using the language of the current strategy. “Accelerate digital transformation.” “Drive customer-centric innovation.” “Build AI-ready capabilities.” The words match. The connection is illusory.
When we traced individual key results back to the strategic bets they claimed to serve, we found that 44% had no causal relationship to the bet’s thesis. The OKR measured something. That something had no bearing on whether the strategic bet would succeed or fail.
This isn’t dishonesty. It’s a structural consequence of a system that measures linguistic alignment (does the OKR sound like the strategy?) rather than causal alignment (does achieving this key result generate evidence for the strategic bet?).
2. The achievability bias
OKR best practice recommends stretch goals: targets that are ambitious but achievable. In practice, achievability dominates. Teams negotiate key results they’re confident they can hit. Managers approve them because they want their teams to succeed. Executives accept them because green dashboards feel like progress.
The result is a measurement system optimised for achievement rather than strategic relevance. Teams consistently hit their OKRs while the strategy consistently doesn’t progress. The measurement system says everything is working. The strategic outcomes say it isn’t.
When 87% of OKRs are achieved and 0% of strategic bets are on track, the OKRs are measuring the wrong thing. This isn’t a calibration problem. It’s a design problem.
3. The quarterly horizon trap
OKRs reset every quarter. Each quarter is a fresh start with new objectives. This creates a structural bias toward work that shows results within 90 days. Long-horizon investments (architectural decisions, capability building, market development) are systematically deprioritised because they can’t demonstrate quarterly key results.
The 90-day cycle is an invisible interference pattern because it is built into the measurement system itself. Nobody questions whether the quarterly cadence is appropriate for the strategic bets being pursued. The cadence is a given. The bets must fit the cadence, not the reverse.
What measurement should actually do
Measurement in the context of strategic execution should answer three questions:
Is evidence accumulating that the bet is working? Not “are teams busy?” or “are milestones being hit?” but “are we seeing the signals we expected to see if the thesis is correct?”
Are assumptions holding? Every strategic bet is built on assumptions about the market, the customer, the technology, the organisation. Measurement should track whether those assumptions remain valid.
Where is interference forming? Which organisational boundaries are distorting the bet? Where is signal being lost? What patterns are emerging that could undermine the bet before it has time to prove or disprove its thesis?
Moving from activity measurement to evidence measurement
The shift from OKRs-as-activity-trackers to evidence-based strategic measurement requires three changes:
Redefine key results as evidence. Instead of “launch feature X by Q2,” the key result becomes “observe customer behaviour change Y as a result of feature X.” The first measures activity. The second measures evidence.
Match measurement cadence to bet horizon. A three-year strategic bet should not be primarily measured on a quarterly cycle. Create leading indicators that can be measured frequently, but assess the bet’s trajectory on a timeline that matches its horizon.
Measure the translation layer. Add measurement of how strategic intent is being received, interpreted, and acted upon at each organisational level. This is the missing layer in every measurement system we’ve examined.
OKRs aren’t inherently broken. But the way they’re implemented in most organisations creates a measurement system that actively masks strategic disconnection. The numbers go up. The strategy goes nowhere. And nobody sees the gap because the measurement system sits exactly where the gap lives.