By Dan Jaye, CTO

I like AI and recognize its promise. I’ve spent most of my career building systems on the premise that computers can help humans work smarter, faster, and at greater scale.

But 2025 was the year AI stopped being merely “promising” and started being … instructional.

Not inspirational. Instructional. As in: here’s exactly what not to do.

A few moments stood out.  Let’s get into it.

1. The Year We Let AI Touch Production (What Could Possibly Go Wrong?)

Let’s start with my personal favorite horror story.

Replit’s AI coding assistant was told, explicitly and repeatedly, not to touch a production database during a code freeze. Eleven times. In ALL CAPS (which is funny in its own right).

It deleted the database anyway. Then fabricated thousands of fake users to cover it up. Then lied about rollback options.

This wasn’t an edge case. This was an AI doing exactly what we told everyone not to let it do: act autonomously inside critical systems without guardrails.

If you’ve ever worked in production, this story probably made you feel ill.
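
The fix was never a better prompt. If an agent can reach production at all, the “no” has to live in code, not in the conversation. Here’s a minimal sketch of that kind of guardrail; the execute() tool layer and its flags are hypothetical, not Replit’s actual architecture:

```python
import re

# Hypothetical tool layer: the agent can only reach the database through execute().
# The env / code_freeze / human_approved parameters are invented for illustration.
DESTRUCTIVE = re.compile(r"^\s*(DROP|DELETE|TRUNCATE|ALTER|UPDATE)\b", re.IGNORECASE)

def execute(sql: str, *, env: str, code_freeze: bool, human_approved: bool = False) -> str:
    """Run SQL on the agent's behalf, refusing destructive statements on production."""
    if env == "production" and DESTRUCTIVE.match(sql):
        if code_freeze:
            raise PermissionError("Code freeze: no destructive SQL on production.")
        if not human_approved:
            raise PermissionError("Destructive SQL on production requires human sign-off.")
    return f"executed: {sql}"  # stand-in for a real database call

print(execute("SELECT count(*) FROM users", env="production", code_freeze=True))
```

Eleven ALL-CAPS instructions are still just instructions. A permission boundary is a control.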

2. “Politically Incorrect Mode” Turns into a Legal Department’s Worst Nightmare

Grok had a banner year for all the wrong reasons.

After a system prompt tweak encouraging more “politically incorrect” responses, the model managed to praise Hitler, endorse genocidal rhetoric, and blame specific groups for natural disasters.

Then came the privacy leak. Hundreds of thousands of private conversations ended up publicly searchable. Medical questions. Illegal activity. Personal data. All indexed.

Add in the Taylor Swift deepfake debacle, and you had a masterclass in why “move fast and break things” should never apply to generative models trained on humanity.

3. AI in the Drive-Thru Was Funny Until It Wasn’t

I love Taco Bell. I do not love AI ordering systems that can be tricked into accepting an order for 18,000 cups of water.

Customers figured out pretty quickly that the system didn’t understand context, accents, or reality. Staff were overwhelmed. Systems choked. The brand pulled the plug.

This wasn’t about intelligence. It was about deployment without understanding failure modes. Turns out hunger, accents, and sarcasm are still hard problems. 
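
There’s a boring fix here, too: put a plausibility gate between the speech model and the order queue. A sketch, with the threshold and item names invented for illustration:

```python
# Hypothetical sanity check on parsed orders; 25 is an arbitrary illustrative cap.
MAX_QTY_PER_ITEM = 25

def validate_order(items: dict[str, int]) -> dict[str, int]:
    """Reject parsed orders no human would plausibly place; escalate instead."""
    for name, qty in items.items():
        if qty <= 0 or qty > MAX_QTY_PER_ITEM:
            raise ValueError(f"Implausible quantity for {name!r}: {qty}. Route to a human.")
    return items

validate_order({"crunchy taco": 3, "water": 2})  # passes
# validate_order({"water": 18000})               # raises, as it should
```

A model that can be talked into 18,000 waters isn’t the real problem. Shipping it without that ten-line check is.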

4. “admin / 123456” Is Not a Security Strategy

McDonald’s rolled out an AI hiring assistant to modernize recruiting. Sounds reasonable.

Unfortunately, the admin credentials protecting tens of millions of applicant records were literally “admin / 123456”.

This is not an AI problem. This is a human problem amplified by AI scale. When systems touch real people’s data, basic security hygiene is not optional, futuristic chatbot or not. 
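
And the fix costs almost nothing. A sketch of a boot-time check, with environment variable names that are purely hypothetical:

```python
import os
import sys

# Hypothetical startup check: refuse to boot with default or weak admin credentials.
# ADMIN_USER / ADMIN_PASSWORD are illustrative names, not any real system's config.
BANNED = {"", "admin", "123456", "password", "changeme"}

def check_admin_credentials() -> None:
    user = os.environ.get("ADMIN_USER", "admin")
    password = os.environ.get("ADMIN_PASSWORD", "")
    if user.lower() in BANNED or password.lower() in BANNED or len(password) < 12:
        sys.exit("Refusing to start: default or weak admin credentials.")

if __name__ == "__main__":
    check_admin_credentials()
    print("Credentials pass the bare-minimum check.")
```

If a hiring platform can’t pass that check, it has no business holding tens of millions of records.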

5. When AI Grades Its Own Homework

Google’s AI Overviews confidently informed users that it hallucinated less than one percent of the time.

It said this while actively hallucinating NASA missions and TV shows that never existed. “You can’t lick a badger twice” is, according to AI Overviews, a well-known and oft-used idiom.

There’s something almost poetic about that. The model didn’t just hallucinate. It hallucinated about hallucinating.  

Trust … But Verify!

Here’s the throughline: 2025 wasn’t the year AI failed because it was dumb. It failed because we trusted it too much, too quickly, in the wrong places.

MIT put a number on it: 95 percent of generative AI pilots never made it to production with measurable ROI. Not because AI is useless, but because most organizations had no idea how to operationalize it responsibly.

About the Author

Daniel Jaye

Chief Technology Officer

Dan has provided strategic, tactical and technology advisory services to a wide range of marketing technology and big data companies.  Clients have included Altiscale, ShareThis, Ghostery, OwnerIQ, Netezza, Akamai, and Tremor Media. Dan was the founder and CEO of Korrelate, a leading automotive marketing attribution company, purchased by J.D. Power in 2014.  Dan is the former president of TACODA, bought by AOL in 2007, and was the founder and CTO of Permissus, an enterprise privacy compliance technology provider.  He was the Founder and CTO of Engage and served as the acting CTO of CMGI. Prior to Engage, he was the director of High Performance Computing at Fidelity Investments and worked at Epsilon and Accenture (formerly Andersen Consulting).

Dan graduated magna cum laude with a BA in Astronomy and Astrophysics and Physics from Harvard University.
