Why Optimisation Stalls: Three Uncomfortable Truths (and How to Fix Them)

Written by Alison Sainsbury | 28 January 2026

This article was co-authored with Christopher Nash, Partner & Director of DX Optimisation at RelevantEdge. With over 25 years in digital experience and analytics, and deep expertise in data-driven personalisation, Chris adds a practitioner’s perspective grounded in real enterprise delivery.

Between us, Chris Nash and I have spent decades working with enterprise digital, analytics, and optimisation programs. We have seen successes and failures, and we have watched every generation of marketing tooling arrive with promises of clarity, confidence, and faster decision-making.

And yet, the same pattern of falling short keeps repeating.

Most optimisation programs don’t stall because teams lack tools. They stall because teams lack shared clarity on what success actually looks like, and how to understand and use the analytics they already have.

Dashboards are built. Reports are circulated. AI features are enabled. Yet decisions are slow, confidence in data is fragile, and optimisation quietly loses momentum.

🔺 This is not a tooling failure.
🔺 It is a leadership failure.

As organisations push harder on personalisation, experimentation, and AI-enabled optimisation, the cost of unclear measurement only increases. Without a strong foundation, even the most advanced platforms struggle to deliver value.

Optimisation doesn’t fail because of platforms

Most enterprise teams already have some level of:

  • Web and experience analytics
  • Experimentation and personalisation capability
  • Business intelligence and reporting tools.

What they often lack is agreement on:

  • Which outcomes matter and what decisions should be supported
    Optimisation can feel like a giant cookie jar of possibilities, and everyone reaches for a different flavour.
  • Which signals are important
    And nearly as importantly, which ones are noise.
  • Who is responsible for acting on the insights
    And whether they have the authority, time, and support to do so.

When success is undefined or contested, optimisation becomes activity without direction. Teams stay busy, but momentum stalls.

This gap between capability and value is something Gartner has repeatedly highlighted in its research into data and analytics leadership. In its 2024 CDAO Agenda Survey, Gartner found that while 74% of leaders believe their data and analytics function meets stakeholder expectations, only 49% have established business-outcome-driven metrics that allow those stakeholders to see or track value.

The issue is not access to analytics.

It is the absence of shared agreement on what the metrics mean, what success looks like, and how analytics should inform real decisions.

 

The Uncomfortable Truths

Truth 1: Dashboards alone rarely lead to decisions

Dashboards are often treated as the end product of measurement.

In practice, they are only valuable if they support a specific decision that someone is accountable for making, and if that decision maker can interpret the data with confidence.

Common failure patterns include:

  • Lack of decision-first KPI frameworks
  • Metrics reported without context
  • KPIs aligned to platform features rather than business outcomes
  • Reports reviewed on a cadence but not used to drive action
  • Leaders expected to “go find” insights themselves
  • Measurement treated as an afterthought rather than designed into the UX from the start of website projects

When no one can confidently answer, “So what does this mean and what should we do about it?”, optimisation slows to a crawl.

This problem is not new. But it is becoming more visible as teams add more tools, more dashboards, and more AI-generated insight without fixing the fundamentals.

Which leads us to the second truth.

Truth 2: Technology trends are making the problem worse, not better

Composable platforms do not fix unstructured analytics. They replicate it.

Where teams once argued over a single set of web metrics, they now juggle multiple platforms reporting similar numbers in different ways. Which is the reliable signal? Which metric should be used for which decision?

AI does not remove the need for measurement discipline.
AI magnifies the need for discipline.

Both Gartner and RelevantEdge make the same point from different angles: AI systems are only as effective as the signals they are trained on. Without clear goals, consistent event definitions, and regular feedback loops, AI has nothing meaningful to optimise against.

RelevantEdge frames AI as an enabler, not a replacement for thinking:

“In a composable analytics architecture, AI acts as a modular intelligence layer that enriches behavioural data with meaning, automates interpretation, and feeds reusable insights into BI and decision systems, without locking organisations into a single analytics or AI stack.”

– Chris Nash, RelevantEdge

Composable analytics solutions are on the rise. One key reason is the ability to combine behavioural data (views, clicks, conversions) with content signal data (for example, the topic, tone, or format of a page) surfaced using AI content analysis. Increasingly, AI provides insight into “why” users engage and convert. Fed into decision-first KPIs, these signals support richer insights, better decisions, and clearer actions.
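
To make that concrete, here is a minimal sketch (in Python, with invented table shapes, column names, and signal values) of how behavioural events and AI-derived content signals might be joined into one decision-ready view. It illustrates the idea only; it is not a prescribed architecture or a RelevantEdge implementation.

```python
# Illustrative only: joining behavioural events with AI-derived content signals.
# Table shapes, column names, and signal values are hypothetical assumptions.
import pandas as pd

# Behavioural data: what users did (views, clicks, conversions).
events = pd.DataFrame({
    "page_url":    ["/pricing", "/blog/guide", "/blog/news", "/pricing"],
    "sessions":    [1200, 800, 650, 300],
    "conversions": [96, 24, 13, 27],
})

# Content signals: what the content is, as surfaced by AI content analysis
# (for example topic, tone, or format).
content_signals = pd.DataFrame({
    "page_url": ["/pricing", "/blog/guide", "/blog/news"],
    "topic":    ["pricing", "how-to", "announcement"],
    "tone":     ["direct", "instructional", "neutral"],
})

# Join behaviour to meaning, then ask a decision-first question:
# which kinds of content convert, not just which pages get traffic?
combined = events.merge(content_signals, on="page_url", how="left")
by_topic = combined.groupby("topic").agg(
    sessions=("sessions", "sum"),
    conversions=("conversions", "sum"),
)
by_topic["conversion_rate"] = by_topic["conversions"] / by_topic["sessions"]
print(by_topic.sort_values("conversion_rate", ascending=False))
```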

Optimisation programs are the training ground that teaches both humans and machines what “good” looks like. There is no replacement for key stakeholders aligning on the definition of success in a collaborative workshop or process.

 

Truth 3: The business does not trust your numbers

Mistrust in data rarely comes from a single failure. It builds gradually through:

  • Inconsistent event definitions
  • Changing goals without updating reporting
  • Black-box dashboards owned by no one
  • Multiple systems claiming to be the source of truth

Once trust erodes, teams stop acting on insight and revert to instinct or hierarchy. Optimisation becomes political rather than empirical.

Gartner has described this shift as a move from a single source of truth to a “deluge of distrust”, warning that trust in data is now a critical enabler for both analytics and AI initiatives:

“As the distrust of information and data intensifies, D&A leaders’ ability to build trust in data is becoming increasingly important for their success.”

– Gartner, Top Trends in Data & Analytics, 2025

Without trust, optimisation programs lose credibility, funding, and executive support.

When measurement is working properly, it enables:

  • A shared language across marketing, digital, and technology
  • Confident prioritisation of effort versus impact
  • Faster learning loops through testing and iteration
  • Leadership confidence to invest, scale, and automate.

This is why optimisation is not a reporting problem.
It is a leadership and planning discipline.

What to do about it

A strong optimisation foundation does not require perfect data or a massive analytics team. It does require intention.

Here are six actions to rebuild trust in your digital analytics and reporting:

Adopt a decision-first KPI framework
A decision-first framework defines KPIs by the business decisions they must inform, not by available data. It starts with explicit decisions, identifies behavioural signals, and designs decision-grade KPIs with context and ownership—ensuring metrics drive action, accountability, and measurable business impact rather than passive reporting.
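
As a sketch only, that can be as simple as writing each KPI down as a small, explicit record that ties it to the decision it informs, its agreed calculation, its owner, and the threshold that triggers action. The Python structure, field names, and example values below are hypothetical assumptions, not a formal framework.

```python
# A minimal sketch of a decision-first KPI record. Field names, example
# values, and thresholds are hypothetical; adapt them to your own decisions.
from dataclasses import dataclass

@dataclass
class DecisionFirstKPI:
    decision: str          # the business decision this KPI must inform
    kpi: str               # the metric, named in business language
    calculation: str       # the agreed calculation, written down once
    owner: str             # the person accountable for acting on it
    review_cadence: str    # where it sits in the operating rhythm
    act_below: float       # the agreed threshold that triggers action

checkout_kpi = DecisionFirstKPI(
    decision="Do we prioritise checkout friction fixes next quarter?",
    kpi="Checkout conversion rate",
    calculation="completed orders / checkout starts, per week",
    owner="Head of Digital Experience",
    review_cadence="Weekly trading meeting",
    act_below=0.25,
)

def needs_action(kpi: DecisionFirstKPI, observed: float) -> bool:
    """Return True when the observed value falls below the agreed threshold."""
    return observed < kpi.act_below

# Example: 21% checkout conversion this week means the owner takes it to the decision.
print(needs_action(checkout_kpi, 0.21))  # True
```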

Design a clear, flexible measurement and review cadence
Establish a cadence for reviewing performance, but expect it to change. During major initiatives, such as a rebuild or platform transition, deeper and more frequent reporting is needed to inform decisions. In BAU, focus shifts to overall health metrics alongside campaign and optimisation performance. Effective measurement adapts to context rather than remaining static.

Agree on what success means, and how it is measured
Align on outcomes, metric definitions, and calculation methods* before scaling reporting or AI-driven optimisation. Shared understanding of what the numbers mean and trust in their veracity are more important than the volume of metrics available.

Take reporting to where your people already are
Consolidate digital signals into tools leaders already use, such as BI platforms or shared operational dashboards. Do not expect teams to log into multiple platforms to piece together the story. Insight should appear where decisions are already being made.

Give reporting a visible place in your operating rhythm
Build measurement into existing processes, not side channels. Include a regular reporting slot in team and leadership meetings so performance is reviewed together. This signals that insight matters, builds shared understanding, and normalises learning from the numbers.

Invest in capability, not just tooling
Many organisations cannot afford a dedicated analyst. Most can afford a workshop to define what data matters, a model to deliver that data into a centralised dashboard, and targeted training to build confidence across teams. Capability multiplies the value of every tool you already own.

* Note: yes, this includes agreeing on how metrics are calculated. What constitutes a conversion? How is percentage change calculated? These basics are often misunderstood, and the downstream impact is bigger than most teams realise.
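
To illustrate how easily these basics diverge, the short sketch below (in Python, with invented figures) shows the difference between a percentage-point change and a relative percentage change in conversion rate. Two teams can report the same week very differently depending on which one they mean.

```python
# Invented figures, for illustration only: the same movement in conversion
# rate reads very differently depending on how "percentage change" is defined.
last_period_rate = 0.020   # 2.0% conversion rate
this_period_rate = 0.025   # 2.5% conversion rate

# Absolute change, in percentage points.
point_change = (this_period_rate - last_period_rate) * 100
print(f"{point_change:.1f} percentage points")        # 0.5 percentage points

# Relative percentage change: (new - old) / old.
relative_change = (this_period_rate - last_period_rate) / last_period_rate * 100
print(f"{relative_change:.1f}% relative increase")    # 25.0% relative increase

# "Conversion is up 25%" and "conversion is up 0.5%" can describe the same
# week; agreeing which one you mean is part of defining success.
```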

Conclusion

Optimisation does not fail because teams lack ambition. It fails because they lack clarity, alignment, and confidence in what success looks like.

Effective optimisation requires:

  • Clear ownership of outcomes and metrics
  • Agreed definitions of success
  • Regular review and prioritisation
  • Visible links between insight and action

Without these foundations, dashboards multiply, AI promises grow, and confidence continues to erode.

Fix that, and decisions accelerate.
Trust in data returns.
AI becomes an enabler rather than a risk.

That is when optimisation finally delivers on its promise.