The Data Strategy Paradox: Why More Data Makes Worse Decisions

Mal Wanstall · 18 June 2024 · 14 min read

Organisations that invest most heavily in data capabilities often make decisions no better, and sometimes worse, than those that don't. We traced this paradox across six enterprises and found a structural explanation nobody was looking for.

The counterintuitive finding

We examined decision-making quality across six enterprises, comparing those with mature data capabilities (dedicated teams, modern stacks, significant investment) against those operating on relatively basic infrastructure. The assumption was obvious: more data capability should equal better decisions.

It didn’t. Three of the six “data-mature” organisations made demonstrably worse strategic decisions over a two-year period: slower to respond to market shifts, more likely to pursue initiatives disconnected from strategic intent, and more susceptible to false confidence in metrics that measured activity rather than outcomes.

The paradox has a structural explanation, and it starts with a question nobody thinks to ask: what happens to data after it’s produced?

The accumulation trap

Data-mature organisations tend to accumulate rather than curate. Every team builds dashboards. Every function has its own analytics capability. Every initiative generates metrics. The result is an explosion of available data with no corresponding increase in the organisation’s capacity to synthesise it into coherent strategic signal.

We counted the dashboards in one financial services firm: 847 active dashboards across 12 business units. When we asked executives which dashboards informed their quarterly strategic review, the answer was consistently seven or eight, the same seven or eight they’d been using before the analytics team tripled in size.

The remaining 840 dashboards weren’t useless. They served local operational needs. But their aggregate effect was to create an environment where any position could be supported with data, any concern could be countered with a metric, and any inconvenient signal could be buried under competing numbers.

More data doesn’t produce better decisions. It produces more defensible decisions, which is a very different thing.

Three mechanisms of the paradox

1. Signal dilution

When everything is measured, nothing stands out. Critical signals (the early indicators that a strategic bet is failing, that a market is shifting, that a customer segment is eroding) get lost in the noise of comprehensive measurement. The organisations with fewer metrics actually had better signal detection, because the signals they tracked were chosen deliberately rather than accumulated by default.
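
To see the mechanism in miniature, here is an illustrative simulation in Python. The metric counts echo the dashboard example above, but the data is entirely invented, not drawn from the study: one metric with a genuine decline, hidden among 839 that merely wander.

```python
# Illustrative simulation of signal dilution; all numbers are synthetic.
import numpy as np

rng = np.random.default_rng(42)
n_noise, n_quarters = 839, 8

# 839 stable metrics that drift randomly from quarter to quarter.
noise = rng.normal(0, 1, (n_noise, n_quarters)).cumsum(axis=1)
# One metric with a real, steady decline on the same scale as the noise.
signal = -0.5 * np.arange(n_quarters) + rng.normal(0, 1, n_quarters)

# Rank every metric by the magnitude of its fitted trend.
all_metrics = np.vstack([noise, signal])
slopes = np.polyfit(np.arange(n_quarters), all_metrics.T, 1)[0]
rank = int((np.abs(slopes) > abs(slopes[-1])).sum()) + 1

print(f"the genuine decline ranks #{rank} of 840 by apparent trend")
# With hundreds of metrics, dozens of pure-noise series out-trend the real one.
```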

2. Confidence inflation

Sophisticated analytics create the feeling of rigour without guaranteeing it. A machine learning model that predicts customer churn with 87% accuracy sounds authoritative. But if the model was trained on historical data that reflects an outdated customer definition, the precision is false. The organisation acts on the prediction with confidence it hasn’t earned.
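
A hypothetical sketch of how this plays out (the dataset, the label definitions, and every figure below are invented for illustration, not drawn from the study): a churn model scored against yesterday’s label definition reports near-perfect accuracy, while agreeing far less with the definition the business uses today.

```python
# Hypothetical illustration of confidence inflation: a churn model evaluated
# against a stale label definition. All data and definitions are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5_000
tenure = rng.integers(1, 60, n)   # months as a customer
logins = rng.poisson(8, n)        # logins in the last quarter

# Historical definition: churn meant short tenure (illustrative).
old_label = (tenure < 12).astype(int)
# Current definition: the business now defines churn by disengagement.
new_label = (logins < 4).astype(int)

X = np.column_stack([tenure, logins])
model = LogisticRegression(max_iter=1000).fit(X, old_label)
preds = model.predict(X)

print(f"accuracy vs stale definition:   {accuracy_score(old_label, preds):.0%}")
print(f"accuracy vs current definition: {accuracy_score(new_label, preds):.0%}")
print(f"'nobody churns' baseline:       {accuracy_score(new_label, np.zeros(n, dtype=int)):.0%}")
# The headline number is real; the thing it measures is no longer the thing
# the organisation is deciding about.
```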

We found this pattern in four of the six enterprises: quantitative sophistication masking qualitative errors in the assumptions underneath the numbers.

3. Political weaponisation

In data-rich environments, data becomes a political resource. Teams select the metrics that support their position. Conflicting data is presented as a “different lens” rather than a contradiction. The existence of multiple credible data sources makes it harder, not easier, to reach alignment, because every faction has evidence.

One enterprise had three different revenue forecasts from three different teams, each using different methodologies, different data sources, and different assumptions. All three were technically defensible. The executive team spent more time adjudicating between forecasts than acting on any of them.
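
A toy version of this adjudication problem (the history and the methodologies are invented for illustration): the same 24 months of revenue, three defensible methods, three materially different forecasts.

```python
# Three defensible forecasting methodologies applied to one revenue history.
# The history and methods are invented; none come from the study's enterprises.
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(24)
# 24 months of revenue: modest growth, seasonality, noise (units arbitrary).
revenue = 10 + 0.15 * months + 1.2 * np.sin(months * np.pi / 6) + rng.normal(0, 0.3, 24)

# Team A: annualised run-rate from the trailing quarter.
run_rate = revenue[-3:].mean() * 12

# Team B: fit a linear trend and extrapolate the next 12 months.
slope, intercept = np.polyfit(months, revenue, 1)
trend = (slope * np.arange(24, 36) + intercept).sum()

# Team C: apply last year's growth rate to last year's total.
yoy = revenue[12:].sum() / revenue[:12].sum()
growth = revenue[12:].sum() * yoy

print(f"run-rate forecast:   {run_rate:.1f}")
print(f"trend forecast:      {trend:.1f}")
print(f"YoY-growth forecast: {growth:.1f}")
# Each method is internally consistent; the spread between them is the
# adjudication problem the executive team inherited.
```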

What actually works

The organisations that avoided the paradox shared a common trait: they treated data strategy as a decision-support problem, not a data-accumulation problem. They asked “what decisions do we need to make, and what data would change those decisions?” rather than “what data can we collect and what can we do with it?”

This sounds obvious. In practice, it’s rare. The incentive structure of most data teams (measured on pipeline volume, dashboard count, model accuracy) rewards accumulation. The shift to decision-support requires a different orientation entirely, one that starts with the strategic questions and works backward to the data.
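
One illustrative way to operationalise that orientation (the structure and the example entry below are hypothetical, not a prescription from the study) is to refuse any metric that cannot name the decision it informs and the reading that would change it:

```python
# A hypothetical decision-first metric registry: no metric without a named
# decision, a threshold that would change it, and an owner who acts on it.
from dataclasses import dataclass

@dataclass
class DecisionSignal:
    decision: str   # the strategic decision this signal informs
    metric: str     # what is measured
    threshold: str  # the reading that would change the decision
    owner: str      # who acts when the threshold is crossed

registry = [
    DecisionSignal(
        decision="Continue investing in the SME segment",
        metric="SME net revenue retention, trailing two quarters",
        threshold="Below 95% triggers a segment strategy review",
        owner="Chief Commercial Officer",
    ),
]

# Anything that cannot be expressed as a row here is operational telemetry,
# not strategic signal, and stays out of the quarterly review.
for s in registry:
    print(f"{s.metric} -> {s.decision} ({s.owner})")
```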

The paradox resolves when you stop treating data as an asset to be maximised and start treating it as a signal to be curated.