Start with Clarity, Not Enthusiasm
Before you pitch AI to anyone, be brutally honest about what problem you're actually solving. Not 'we want to use AI' but 'this process costs us $200k annually and takes three weeks, and AI could cut it to three days.' That specificity matters. Talk to the people who do the work every day. Your operations team, your customer service reps, your finance analysts. They'll tell you what actually breaks, what workarounds exist, and whether AI is even the right answer. Sometimes it isn't. Maybe you need better process documentation first. Maybe a template library solves 80% of the problem for 5% of the cost. If that's true, say it. Your credibility depends on it. When you've nailed down the real problem and confirmed AI is a reasonable solution, document it in plain language. One page. No jargon. This becomes your north star for all internal conversations.
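The specificity argument above can be made concrete with a back-of-envelope calculation. This is an illustrative sketch only: the $200k annual cost and the three-weeks-to-three-days timeline are the article's example figures, while the AI running cost and the assumption that process cost scales with process time are hypothetical placeholders.

```python
# Back-of-envelope check for the kind of specific claim the pitch needs.
# The $200k figure and 3-weeks-to-3-days timeline come from the example
# above; the AI running cost and "cost scales with time" assumption are
# hypothetical placeholders, not figures from the article.

def annual_savings(current_cost: float, projected_cost: float,
                   ai_running_cost: float) -> float:
    """Net annual savings after the AI's own running costs."""
    return current_cost - (projected_cost + ai_running_cost)

current = 200_000                  # today's annual process cost
projected = current * (3 / 15)     # 3 days instead of ~15 working days
ai_cost = 50_000                   # hypothetical licensing + ops cost

print(annual_savings(current, projected, ai_cost))  # 110000.0
```

Even a rough number like this turns "we want to use AI" into a claim a CFO can interrogate, which is the point.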
Map Your Stakeholders and Their Concerns
Different people care about different things. Your CFO wants ROI and risk mitigation. Your legal team wants compliance and liability boundaries. Your frontline staff wants assurance they won't be replaced. Your IT team wants to know about security and infrastructure. Each group needs to hear why this matters to them specifically. Don't try to convince everyone with the same message. That's how you get surface agreement from people who are actually terrified. Instead, do small listening sessions with each group. Ask them what keeps them awake at night about this kind of initiative. Then, when you present, address those specific concerns. For frontline teams, be transparent about roles. If the AI handles routine tasks, what do your people do instead? Usually the answer is higher-value work, but you have to say that clearly and mean it. If jobs are actually at risk, own that too. People respect honesty more than false reassurance.
Run a Real Pilot, Not a Demo
Here's where most organizations stumble. They build a beautiful proof of concept in a lab, get excited, and try to roll it out. Then they hit real-world friction they never anticipated, and suddenly internal support evaporates. Instead, run a proper pilot with actual users, actual data, and actual constraints. Pick a small team or department that's motivated and has the time to give feedback. Give them three months. Let them break it. Let them tell you what's confusing or slow or doesn't work the way they expected. Then fix those things. When you come back to the broader organization, you're not asking them to believe in a concept. You're showing them results from people like them. The people on that team become your best advocates. They've used it. They know the rough edges and the real benefits. Their buy-in is worth far more than a polished presentation.
Show the Work, Explain the Decisions
AI feels like a black box to most people, which breeds skepticism and resistance. Combat that by being radically transparent about how you're building and testing. Explain your audit process. Show them the data you looked at. Tell them what quality issues you found and how you addressed them. Explain the metrics you're using to measure success and why those metrics matter. If there are limitations or failure modes you know about, name them. 'This model works great on recent data but may struggle with edge cases from five years ago because the business operated differently then.' This transparency does two things. First, it reassures skeptics that you're being rigorous, not reckless. Second, it gives your advocates specific language to use when they're defending the project to colleagues. They understand it deeply enough to explain it.
Plan the Rollout with Your Users, Not For Them
By the time you're ready to expand beyond the pilot, you should have a coalition of supporters across different departments. Use that coalition to co-design the rollout plan. How are we phasing this in? What training do people actually need versus what's just nice-to-have? Who's the point person if something goes wrong? How do we measure whether this is working? What does success look like six months from now? When people help design the rollout, they're invested in making it work. They also catch problems you might have missed. Your frontline people know operational realities you don't. Set clear expectations about what happens after launch. You're not done improving once it's live. You're collecting feedback, fixing bugs, and tuning performance. Make that visible. Monthly updates about what you've changed and why you changed it build confidence that this is a real commitment, not a one-time installation.
Building internal support for AI takes time, honesty, and genuine collaboration with the people who will use it. It's not as fast or flashy as jumping straight to implementation, but it's the difference between a project that transforms your organization and one that quietly fails. Start with your audit, run a real pilot with real people, and let the results speak. Your team will meet you halfway if they trust that you're being thoughtful and transparent about what this actually means.