By Dan Jaye, CTO 

A new study from MIT’s Project NANDA found something shocking: companies have poured $30-40 billion into AI, yet 95% of them aren’t seeing any real return.

The researchers call this the GenAI Divide. On one side are the 5% of companies turning pilots into production systems and unlocking millions in savings and new revenue. On the other side are the 95% stuck in endless pilots and demos, with no measurable business impact. 

We’ve suspected for a while that 95% of AI initiatives were silent failures. The research just confirmed it. The cause isn’t a lack of effort or investment – it’s irrational optimism. Teams assume a shiny tool will be enough, but without learning and consistency, it isn’t. The big question is: why?

The Actual Problem: AI That Doesn’t Learn

The answer is simple: most AI doesn’t learn. Current GenAI architectures are diverse, the tools don’t interoperate well (or at all), and the workflows are disconnected, which injects entropy (randomness) into systems. Because large language models (LLMs) are already non-deterministic – meaning they can produce a different answer each time they’re asked – this inconsistency compounds the randomness in the output.
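For readers who want intuition for that non-determinism, here is a toy sketch (illustrative only – this is not Aqfer’s platform or any specific LLM API): language models choose each word by sampling from a probability distribution over candidate tokens, so the same prompt can produce different outputs on every call, while greedy (temperature-zero) decoding is repeatable.

```python
import math
import random

def sample_token(logits, temperature):
    """Pick one token from a dict of {token: logit score}."""
    if temperature == 0:
        # Greedy decoding: always return the highest-scoring token.
        return max(logits, key=logits.get)
    # Softmax with temperature, then draw a random token.
    scaled = {t: score / temperature for t, score in logits.items()}
    peak = max(scaled.values())
    exps = {t: math.exp(v - peak) for t, v in scaled.items()}
    total = sum(exps.values())
    draw = random.random() * total
    for token, weight in exps.items():
        draw -= weight
        if draw <= 0:
            return token
    return token  # numerical-edge fallback

# Three near-equally-likely next tokens (hypothetical scores).
logits = {"yes": 1.0, "no": 1.0, "maybe": 0.5}

greedy = {sample_token(logits, 0) for _ in range(50)}    # one answer, every time
sampled = {sample_token(logits, 1.0) for _ in range(50)}  # several different answers
```

Run the sketch and `greedy` collapses to a single token while `sampled` contains several – the same “question,” different answers. That variability is a feature for creative tasks, but when it meets disconnected tools and inconsistent workflows, it compounds into the unreliability described above.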

In theory, generative AI tools should get better as they’re used. In practice, most don’t. They fail to retain memory, adapt to new contexts, or evolve with workflows – so they remain static “point-in-time” tools.

At Aqfer, we’ve seen it firsthand. There’s a lot of inconsistency in how companies are building AI. The tools don’t fit together, the workflows are disconnected, and the results are effectively random. Anything reliable you try to build on that foundation collapses. The result is fragile pilots that look good in demos but fail when scaled into live business operations. Even slight differences in how a prompt is phrased, or which data source is connected, can change the output dramatically. Instead of building systems that learn and stabilize, enterprises end up with tools that generate more chaos than clarity.

The real barrier is not model quality but the lack of workflow consistency and learning capability.

What the Successful 5% Do Differently

The 5% of companies that are winning with AI share a few traits:

  1. Strategic Alignment With Business Objectives: Successful projects tie directly to revenue, cost, or customer gains. Leaders target high-value problems where AI can embed in critical workflows, not experiments.

  2. Data Readiness and Infrastructure Maturity: Winners built clean, governed data pipelines early. With first-party ownership, standardized taxonomies, and scalable environments, they enable reliable, trusted AI outputs.

  3. Cross-Functional Collaboration: Efforts aren’t left to “AI labs.” Business leaders, engineers, product teams, and compliance teams work in squads, moving pilots quickly from concept to production.

  4. Measurable Pilot Design: The 5% avoid “science projects.” Pilots have KPIs tied to P&L – CAC, retention, margin – so early wins prove value and justify scaling.

  5. Executive Sponsorship and Governance: C-level backing speeds funding and adoption. Strong governance around compliance and risk keeps projects on track.

How Aqfer Helps Companies Cross the Divide

At Aqfer, this is exactly the problem we solve: helping companies close the learning gap that keeps most AI pilots from scaling. Our platform eliminates entropy at the data layer so AI tools can learn and deliver reliable results. 

  • Reliable Data Flows – We make sure a company’s own data is organized and trustworthy, so AI isn’t learning from messy or incomplete information.

  • Always Current – Our platform delivers fresh and up-to-date context to AI models, preventing them from making decisions based on stale or fragmented data.

  • Consistency Across Workflows – We provide proper structure so AI can “remember” and build on past work, instead of starting over every time.

As we often remind clients: without consistency, randomness creeps in. And randomness doesn’t scale. By giving AI the right data and the right context, Aqfer helps companies move from experiments to measurable ROI.

Why Timing Matters

The MIT report makes another critical point: companies are locking in AI vendors right now. Over the next 18 months, many will make choices that will be very difficult and costly to unwind.

If you get it wrong, you don’t just waste money. You risk falling permanently behind competitors who get it right. As of now, 95% of companies are stuck, while 5% are creating value. The difference isn’t luck. It’s about closing the learning gap and choosing systems that actually improve over time.

That’s why we’re excited about our growing Data Enablement for AI efforts at Aqfer. We help companies make AI work – not in theory, but in practice.

 
