The Long View


Stop Waiting for the Perfect AI Strategy

I've spent 25 years in technology media. I've watched companies respond to the Internet, to mobile, to cloud, to social. I know what the winners looked like early on, and I know what the companies that fell behind had in common, too: They had great PowerPoints.

Here's the concerning thing I'm seeing right now with AI: Too many enterprise leaders are treating this moment like a traditional technology procurement cycle. They're forming committees, hiring consultants to build AI readiness assessments and waiting for the governance framework to be finalized before anyone touches a tool. And while they're doing that, their competitors are pulling ahead.

When ChatGPT launched in late 2022, we didn't wait at Converge360. This very site, Pure AI, already existed as a machine learning publication. Within weeks, we were transforming it to cover generative AI. Within months, we were adding AI tracks to our conferences and building a training curriculum. Not because we had a perfect plan, but because we knew that the organizations that would understand AI best were the ones learning by doing.

About This Blog

Daniel LaBianca is the president of Converge360, which owns PureAI.com. His blog, "The Long View," is an ongoing series exploring how enterprise leaders can move from AI curiosity to real-world impact.

That instinct has been validated over and over. The teams winning with AI right now aren't the ones that spent six months on a strategy. They're the ones who got their people using tools, watched what happened and kept adjusting.

Small Bets, Real Learning
There's a version of AI adoption that sounds responsible but is actually avoidance dressed up in professional language. ("We're being thoughtful." "We're managing risk." "We're waiting until the technology matures.") But waiting for stability is waiting for something that isn't coming. This technology is moving faster than any enterprise change management process was designed to handle.

What I advocate instead is something almost embarrassingly simple: Pick something small, try it and pay attention to what you learn. Let a team use an AI writing assistant for a month. Pilot an AI-powered customer support tool in one region. Have your developers experiment with a code assistant for 30 days and report back honestly. These aren't risky bets; they're low-cost, high-signal experiments that build the organizational muscle you'll need for the bigger moves ahead.

The goal of the first experiment isn't ROI, but fluency. You're teaching your team how to think about AI, how to prompt well, how to spot where it helps and where it hallucinates. That's something you can only learn by using the tools on real work, not in a training course.

The Other Failure Mode: Moving Without Direction
I want to be fair here, because there's an equal and opposite problem I see just as often, and it's one that rarely gets talked about with the same urgency.

Some organizations don't have an AI paralysis problem; they have an AI sprawl problem. Individual teams are spinning up their own tools, signing their own contracts and running their own experiments with no leadership visibility, no shared learning and no coherent sense of where the company is actually trying to go with AI. The result isn't competitive advantage, but a mess of redundant subscriptions, inconsistent outputs and security exposure that IT didn't know about until it became a problem.

This is the shadow IT story, replayed for the AI era.

The answer isn't to lock everything down; it's for leadership to get in the game: Set a direction. Identify two or three priority areas where AI can move the business forward. Give teams a framework for what kinds of experiments are encouraged and what guardrails exist around data and security. Then let them run.

The goal is coherent momentum. There's a meaningful difference between a hundred people pulling in roughly the same direction and a hundred people pulling in a hundred different directions. Both are "moving," but only one of them is getting somewhere.

The Real Risk Is Falling Behind
I understand the hesitation; AI raises real questions about accuracy, bias, data privacy and workforce impact. These aren't trivial, and they deserve serious attention. The thing is, the organizations that will be best equipped to manage those risks long-term are the ones that already have hands-on experience with the technology. You can't govern what you don't understand.

The executives I'm most worried about are the ones who have convinced themselves that caution is strategy. It isn't. Caution is a reasonable input to strategy, but if the output of your AI deliberations is inaction, you haven't been prudent -- you've made a choice, and a costly one at that.

I'm not telling you to launch reckless experiments with no oversight. I'm telling you to build a rhythm: experiment, learn, adjust, repeat. That rhythm -- not the technology itself -- is the competitive advantage. The AI landscape is going to keep changing. The organizations that have built an iteration muscle will adapt, while the ones still waiting for the perfect strategy will be starting from scratch again with each new wave.

Where To Start
If you're an IT leader or a business executive reading this and wondering where to actually begin, my honest advice is to start with your team's pain. What takes too long? What's tedious and repetitive? What requires synthesis of large amounts of information? Those are your first experiments.

Don't start with AI strategy. Start with a problem worth solving, and then ask whether AI can help solve it faster, better or cheaper. That question will take you further than any framework.

We're going to explore exactly that kind of thinking in this column -- regularly, practically and with the conviction that the organizations that move thoughtfully but decisively are the ones that will look back on this moment as the beginning of something significant.

Posted by Daniel LaBianca on 04/20/2026

