The AI Already Inside Your Organization That Nobody Approved
2-minute read
Before you finalize your AI strategy, you need to acknowledge that your people are almost certainly already using AI tools you did not sanction.
Research from UpGuard found that more than 80% of workers are using unapproved AI tools at work, not because they are careless, but because the tools are useful and the wait for official approval feels long.1 This is called shadow AI, and it is one of the most underestimated risks in the current environment.
Why It Matters
Every time an employee pastes client information into a public AI tool, they may be feeding proprietary data into a system with no confidentiality protections. Every time they use an unsanctioned tool to draft a client-facing document, there is no review process, no quality standard, no accountability chain if something goes wrong.
There is also a misinformation risk that most organizations have not yet priced in. AI-generated content can spread false information or produce outputs that misrepresent your organization, and when those outputs are created by employees using unsanctioned tools, there is no process to catch them before they reach clients, regulators, or the public.
The risk is not that your people are using AI. The risk is that you do not know which AI they are using, or what they are feeding into it.
This Is Not a Discipline Problem
The instinct is to respond with a policy: employees are not to use unapproved AI tools, full stop. That policy will not work, because it will not be followed. The people most likely to use shadow AI are often your highest performers, the ones who find inefficiencies and fix them without waiting to be told.
The more effective response is to understand what they are using and why, then build a path from shadow to sanctioned. What tools have your people found genuinely useful? What workflows are they trying to improve? Those answers are a map to where AI can create real value in your organization.
What to Do Now
Find out what is already in use. Have honest conversations with your managers about what tools their teams are using day to day. Establish clear guidelines on what data can and cannot go into public AI systems. And create a legitimate channel for people to surface AI tools they find valuable, so the organization can evaluate them rather than pretending the behavior is not happening.
Your AI strategy should start with what your people have already discovered. It is the most honest signal you have about where the real value is.
1. UpGuard. “State of Shadow AI Report.” November 2025. Based on surveys of 1,500 security leaders and employees across the U.S., U.K., Canada, Australia, New Zealand, Singapore, and Malaysia. cybersecuritydive.com/news/shadow-ai-employee-trust-upguard/805280/
About the Author
Dr. Melissa Fristrom
Founder, Core Allies, LLC
Melissa Fristrom is the founder of Core Allies, LLC, an executive coaching and advisory firm that works with C-suite leaders navigating inflection points. She advises senior leaders on strategy, organizational change, and the human side of technology adoption. Before founding Core Allies, she held leadership roles ranging from frontline positions up to CEO. She is based in Boston.
The artwork in this post is from fristrom.art. Melissa works in encaustic, pigmented wax layered to explore how color carries emotion, perception, and meaning. It is a practice that runs parallel to the questions this series is asking about leadership: looking more carefully at what is actually in front of you, rather than what you expect to see.
If any of this landed, whether as useful or as uncomfortable, that's worth paying attention to. I work with leaders and their teams on exactly these questions. I'd love to connect.
mfristrom@coreallies.com · (617) 444-9809 · coreallies.com