Articles

8 Historic Mistakes Repeated in the Age of AI

Historic dysfunctions and antipatterns are repeating themselves. We're here to help prevent them.

Sander Dur

April 23, 2026
6 minutes

I've been part of the initiation, implementation, and evaluation phases of AI adoption at quite a few companies now. And there are some shenanigans going on that I've seen more often than I'd like to admit. The same 8 historic mistakes keep repeating themselves in the age of AI.

Despite differences in industries, strategies, and levels of maturity, the same dysfunctions keep recurring. It’s almost uncanny. Teams invest in tools, run pilots, announce bold ambitions… and then stall, regress, or abandon efforts that once looked promising.

The issue is rarely what people think. It’s how organizations approach the change.

If you zoom out, most failed or underwhelming AI initiatives can be traced back to a handful of recurring antipatterns. Think of them less as mistakes and more as organizational habits, default behaviors that feel productive but actually derail progress. Many of these are also discussed in our book Solving for Value: A Journey of Ambition and Stupidity.

Here are 8 Historic Mistakes.

Little Leadership Understanding

AI is often treated as “important,” but not deeply understood at the leadership level. That gap matters more than people think.

When leadership doesn’t grasp the implications of AI (what it can and cannot do, where it creates leverage, how it changes decision-making), conversations stay shallow. Strategy becomes vague. Priorities become reactive. Investments become inconsistent. You’ll hear phrases like “we need to do something with AI” without a clear why, where, or how.

And teams feel it. They’re left navigating ambiguity, often overcompensating with experiments that lack direction. AI adoption doesn’t require leaders to become engineers. But it does require them to develop enough literacy to ask better questions, challenge assumptions, and guide meaningful trade-offs.

Mindless Implementation

This is the “we found a tool, so let’s use it” pattern. A new model, platform, or vendor appears, and suddenly there’s pressure to implement it. Not because it solves a real problem, but because we can. It results in solutions looking for problems (I'm sure there's a dating show in there somewhere).

Teams end up integrating AI into workflows where it adds little value, or worse, creates friction. Outputs are technically impressive but practically irrelevant. It’s easy to mistake activity for progress here. More dashboards, pilots, and demos are executed. But when you ask, “What actually improved?” the answers get fuzzy. There is no clear change in organizational structure or outcome.

Shotgun Tactics

When in doubt, try everything. That’s the logic behind shotgun tactics: spreading efforts across dozens of use cases, teams, and experiments, hoping something sticks. There's no discipline to truly make any one thing work. Resources get fragmented. Learnings stay siloed. Nothing compounds.

Instead of building depth in a few high-impact areas, organizations end up with a long list of half-baked initiatives. Each one too small to matter, too disconnected to scale. Change comes with focus, adaptation, and persistence. A handful of well-chosen opportunities, explored deeply, will outperform a hundred shallow experiments every time.

Harry Potter’s Magic Wand

There’s always a moment where *insert buzzword here* is expected to magically fix everything.

Legacy systems? AI will solve it. Poor processes? AI will optimize it. Misaligned teams? AI will align them. It’s the “just add AI” mindset. But AI doesn’t replace thinking. It amplifies what’s already there. Much like buzzwords such as Agile and Scrum back in the day.

If your underlying processes are broken, AI will scale the brokenness. If your data is messy, AI will produce confident nonsense faster. Treating AI like a magic wand creates unrealistic expectations and inevitable disappointment. The more useful framing is this: AI is leverage. And leverage only works when there’s something solid to amplify.


Legal? What’s That?

Compliance, privacy, ethics: these tend to show up late. Sometimes too late.

In the rush to experiment, organizations often sidestep legal and governance considerations. Not out of malice, but out of momentum. “We’ll figure it out later.” Except later becomes an impediment. Or worse, a risk. Suddenly, promising initiatives can’t go live because of data concerns. Or teams are forced to rework entire solutions to meet regulatory requirements such as the European AI Act.

This isn’t an argument to slow down innovation. It’s an argument to bring legal and risk perspectives in earlier. The organizations that move fastest aren’t the ones ignoring constraints, but the ones that learn within them fastest.

All Aboard the Hype Train

AI hype is powerful. It creates urgency, excitement… and distortion. Everything starts to feel like a breakthrough. Every new capability becomes a must-have. Expectations explode. And then reality slaps everyone in the face. The gap between hype and reality creates frustration. Stakeholders lose patience. Teams feel pressure to overpromise.

The irony is that AI is transformative. Just not in the way hype suggests. Real value comes from consistent, grounded application rather than the latest trend.

Change Is for Thee, Not for Me

This one is subtle, but deadly. AI is introduced as something that will change how others work. Customer support will change. Operations will change. Analysts will change. But leadership? Decision-making? Incentives? Those remain untouched.

This creates a disconnect at best. AI fundamentally shifts how information flows, how decisions are made, and where value is created. If leadership behaviors and organizational structures don’t evolve alongside it, friction is inevitable.

People are asked to work differently within systems that haven’t changed. And eventually, they revert to old habits.

AI adoption isn’t just a tooling change, but an organizational one. And that change has to include everyone.

Lack of Purpose

When I ask my kids to do something, the first thing I hear is a screeching "WHYYY?!" If people don't feel a sense of purpose, change drags on.

Why are we using AI? What are we trying to achieve? What does success look like?

When those questions go unanswered, everything becomes reactive. Teams become Feature Factories, leaders focus on narratives, and real progress is virtually nonexistent.

Purpose acts as a filter. It helps prioritize, align, and say no. Without it, organizations drift. With it, AI becomes a means to an end, not an end in itself. And that is what we want.

Summary: 8 Historic Mistakes Can Be Prevented

None of these dysfunctions are rare or new. In fact, they’re almost the default.

They emerge when organizations approach a hyped technology as a technology initiative rather than a strategic one, without much mindfulness. The good news? They’re all fixable. It starts with awareness.

Recognizing these patterns in your own organization is the first step. Not to assign blame, but to create better conversations. Because successful AI adoption isn’t about avoiding mistakes entirely. It’s about noticing them early and adjusting before they become entrenched.

Luckily, we have a solution to help you out of this situation.

Contact

Let’s discuss how we can support your journey.