Aqfer Insights
By Dan Jaye, CTO
A new study from MIT’s Project NANDA found something shocking: companies have poured $30-40 billion into AI, yet 95% of them aren’t seeing any real return.
The researchers call this the GenAI Divide. On one side are the 5% of companies turning pilots into production systems and unlocking millions in savings and new revenue. On the other side are the 95% stuck in endless pilots and demos, with no measurable business impact.
We’ve suspected for a while that 95% of AI initiatives were silent failures. The research just confirmed it. It’s not a lack of effort or investment – it’s irrational optimism. Teams believe a shiny tool will be enough, but without learning and without consistency, it isn’t. The big question is, why?
The answer is simple: most AI doesn’t learn. Current GenAI architectures are diverse, the tools don’t interoperate well (or at all), and the workflows are disconnected, which injects entropy (randomness) into systems. Because large language models (LLMs) are already non-deterministic – meaning they can produce different answers to the same prompt – this inconsistency compounds the randomness in the output.
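To make that non-determinism concrete, here is a toy Python sketch – not a real model, and not Aqfer’s platform; the vocabulary and probabilities are invented – showing how per-token sampling produces different outputs for the same input, and how fixing the random seed restores consistency:

```python
import random

# Toy stand-in for an LLM: each "next token" is sampled from a
# probability distribution, so the same prompt can yield different
# completions on different runs. Vocabulary and weights are made up.
NEXT_TOKEN_PROBS = {"yes": 0.40, "no": 0.35, "maybe": 0.25}

def generate(rng, n_tokens=5):
    """Sample a short 'completion' token by token from the toy distribution."""
    tokens = list(NEXT_TOKEN_PROBS)
    weights = list(NEXT_TOKEN_PROBS.values())
    return [rng.choices(tokens, weights=weights, k=1)[0] for _ in range(n_tokens)]

# Two unseeded runs of the "same prompt" will usually diverge:
run_a = generate(random.Random())
run_b = generate(random.Random())

# Fixing the seed restores determinism – one simple way a workflow
# can regain consistency at a single step.
seeded_a = generate(random.Random(42))
seeded_b = generate(random.Random(42))
assert seeded_a == seeded_b
```

The point of the sketch is the multiplication effect: chain several such steps through disconnected tools, each with its own uncontrolled randomness, and the end-to-end output becomes far less repeatable than any single step.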
In theory, generative AI tools should get better as they’re used. In practice, most don’t. They fail to retain memory, adapt to new contexts, or evolve with workflows – so they remain static “point-in-time” tools.
At Aqfer, we’ve seen this firsthand. Companies are building AI with tools that don’t fit together and workflows that are disconnected, so the results come out effectively random. Try to build something reliable on that foundation and it collapses: fragile pilots that look good in demos but fail when scaled into live business operations. Even slight differences in how a prompt is phrased, or which data source is connected, can change the output dramatically. Instead of building systems that learn and stabilize, enterprises end up with tools that generate more chaos than clarity.
The real barrier is not model quality but the lack of workflow consistency and learning capability.
The 5% of companies that are winning with AI share a common trait: they deploy systems that retain context, learn from use, and improve over time instead of remaining static point-in-time tools.
At Aqfer, this is exactly the problem we solve: helping companies close the learning gap that keeps most AI pilots from scaling. Our platform eliminates entropy at the data layer so AI tools can learn and deliver reliable results.
As we often remind clients: without consistency, randomness creeps in. And randomness doesn’t scale. By giving AI the right data and the right context, Aqfer helps companies move from experiments to measurable ROI.
The MIT report makes another critical point: companies are locking in AI vendors right now. Over the next 18 months, many will make choices that will be very difficult and costly to unwind.
If you get it wrong, you don’t just waste money – you risk falling permanently behind competitors who get it right. Today, 95% of companies are stuck while 5% are creating value, and the difference isn’t luck. It’s about closing the learning gap and choosing systems that actually improve over time.
That’s why we’re excited about our growing Data Enablement for AI efforts at Aqfer. We help companies make AI work – not in theory, but in practice.