The 30% Problem Nobody Talks About
2-minute read
The first post established that AI can be confidently wrong. This post is about what that means in practice, and what to do about it.
A CEO presented his board with a market-sizing analysis. He had used an AI tool to build it. The board approved the strategy. Two weeks later, his CFO found the error.
The AI had been confidently, completely wrong.
This is not an edge case. A former client who leads a technology firm put it plainly: AI is still about 30% confidently wrong. It produces a perfectly composed, certain answer based on facts that may not be 100% correct. And the better the writing, the harder the error is to catch.¹
“The leaders who will do well with AI are not the ones who understand the technology best. They are the ones who ask the best questions about when to trust it.”
The Right First Question
Most leadership conversations about AI begin with, “Should we be using this?” That is the wrong starting point.
The right first question is, “Where does it matter if this is wrong?”
In low-stakes, high-volume tasks that are easy to check, such as drafting, summarizing, and first-pass analysis, the error rate is manageable. The work gets done faster, and humans can catch mistakes before they cost anything.
In high-stakes decisions, including strategic choices, hiring, financial projections, and anything in a regulated industry such as medicine or law, the 30% problem becomes a liability. It is not that AI should be kept out of the room, but it needs a much shorter leash and far more rigorous review.
Developing the Judgment
Knowing when to trust AI and when to override it does not come from reading about AI. This judgment comes from using the tools yourself, developing pattern recognition for when they fail, understanding the stakes of different decisions, and staying connected to ground truth through your customers, frontline employees, and market signals.
Before your next AI conversation, whether with your board, your team, or a vendor, get clear on two things: what problem you are solving, and what happens when the answer is wrong. The technology is impressive. The clarity about its risks and when to use it has to come from you.
About the Author
Dr. Melissa Fristrom
Founder, Core Allies, LLC
Melissa Fristrom is the founder of Core Allies, LLC, an executive coaching and advisory firm that works with C-suite leaders navigating inflection points. She advises senior leaders on strategy, organizational change, and the human side of technology adoption. Before founding Core Allies, she held leadership roles from frontline positions up to CEO. She is based in Boston.
The artwork in this post is from fristrom.art. Melissa works in encaustic, pigmented wax layered to explore how color carries emotion, perception, and meaning. It is a practice that runs parallel to the questions this series is asking about leadership: looking more carefully at what is actually in front of you, rather than what you expect to see.
If any of this landed, whether as useful or uncomfortable, that's worth paying attention to. I work with leaders and their teams on exactly these questions. I'd love to connect.
mfristrom@coreallies.com · (617) 444-9809 · coreallies.com