A conversation I had last week reminded me why practical cybersecurity beats fear-mongering every time.
One of my team was working with a professional services client when the conversation turned to AI governance. The client's position, sparked by an industry presentation they'd attended, was emphatic: "It's impossible to stop staff taking files and uploading them to AI tools."
I've heard this narrative gaining traction across boardrooms, lunchrooms and even war rooms. It's almost become accepted wisdom.
And it's completely and utterly wrong!
Here's what actually happened: The client had attended a conference where a speaker painted a picture of inevitable data leakage. Employees working from home, using personal devices, copying sensitive client files into ChatGPT to "work faster." The implication was clear: resistance is futile.
Our response was straightforward: It's absolutely possible to control what data gets uploaded to AI. You just need to configure the technology you're likely already paying for, document clear policies, and follow through.
The client was skeptical. "But they can access these tools from anywhere, on any device."
Exactly. And you can control access anywhere, on any device – if you set it up properly.
Before we dive into solutions, let's acknowledge why this "impossible to control" narrative has taken hold: staff work from home, use personal devices, and can reach consumer AI tools from any browser, anywhere. It feels unmanageable.
Let me be absolutely clear: This is not a new problem. It's a new channel for an old problem.
I put this to the client: "If you accept that you can't stop staff uploading sensitive files to AI, what does that say about everything else they can do with your data?"
Think about it. If we genuinely can't prevent someone from copying a client's confidential legal brief into ChatGPT, that means we also can't prevent them from emailing it to a personal account, uploading it to personal cloud storage, or copying it onto a USB drive.
The technology to prevent unauthorised data movement has existed for over a decade (yes, it pre-dates COVID!). Data Loss Prevention (DLP) systems, endpoint protection, email filtering, web proxies, mobile device management – these aren't new concepts.
The question isn't whether the technology exists. The question is whether you've configured it properly and extended those controls to AI platforms.
Let me share the five-step process we've implemented for this client and dozens like them:
Most organisations have no visibility into which AI tools their staff are using. You can't govern what you can't see.
Within 48 hours, we'd used their existing network monitoring and Microsoft Defender telemetry to identify which AI tools staff were actually accessing.
This wasn't sophisticated hacking. This was using logs from systems they already owned.
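To make the discovery step concrete, here is a minimal sketch of the kind of log review involved. It assumes a CSV export of web proxy or firewall logs with "user" and "destination" columns; the column names, file name, and domain list are illustrative, not the client's actual configuration.

```python
import csv
from collections import Counter

# Illustrative list of consumer AI domains to look for; extend to suit your environment.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def find_ai_usage(log_path: str) -> Counter:
    """Count requests to known AI domains, keyed by (user, destination).

    Assumes a CSV export with 'user' and 'destination' columns; adjust the
    column names to match whatever your proxy, firewall, or Defender export
    actually produces.
    """
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            dest = (row.get("destination") or "").lower()
            if any(dest == d or dest.endswith("." + d) for d in AI_DOMAINS):
                usage[(row.get("user", "unknown"), dest)] += 1
    return usage

if __name__ == "__main__":
    for (user, dest), count in find_ai_usage("proxy_log.csv").most_common(20):
        print(f"{user:<30} {dest:<35} {count}")
```

Twenty lines of log parsing, pointed at data you already collect, is usually enough to turn "we have no idea" into a named list of users and tools.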
We worked with the client to categorise their data into clear tiers.
This took three workshops, not three months.
Most organisations already have data classification frameworks – they just haven't explicitly mapped them to AI usage.
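Once the tiers exist, the mapping to AI usage can be as simple as a lookup table. The sketch below uses generic tier names (public, internal, confidential, restricted) purely for illustration; they are not the client's actual labels, and your own scheme will differ.

```python
# Hypothetical classification tiers mapped to AI usage rules.
# Tier names and rules are generic examples, not the client's actual scheme.
AI_USAGE_RULES = {
    "public":       {"approved_ai_tools": True,  "any_ai_tool": True},
    "internal":     {"approved_ai_tools": True,  "any_ai_tool": False},
    "confidential": {"approved_ai_tools": False, "any_ai_tool": False},
    "restricted":   {"approved_ai_tools": False, "any_ai_tool": False},
}

def may_upload(classification: str, tool_is_approved: bool) -> bool:
    """Return True if data at this classification may be sent to the given AI tool."""
    rules = AI_USAGE_RULES.get(classification.lower())
    if rules is None:
        # Unclassified data gets the most restrictive treatment by default.
        return False
    return rules["any_ai_tool"] or (tool_is_approved and rules["approved_ai_tools"])

print(may_upload("internal", tool_is_approved=True))      # True
print(may_upload("confidential", tool_is_approved=True))  # False
```

The point isn't the code; it's that the decision logic fits on one screen once the classification work is done.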
This client was already licensed for Microsoft 365 E5, which includes Purview. They were paying for comprehensive DLP capabilities but using less than 30% of them.
We configured Purview's DLP policies to detect and block sensitive data being uploaded to unapproved AI platforms, using the licences they already held.
The total additional licensing cost? Zero dollars.
The implementation time? Three weeks, including testing.
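Purview DLP is configured through the compliance portal rather than in code, but the rule logic is easy to picture. The sketch below shows the kind of pattern-based check a DLP rule performs on outbound content; the patterns and the "MATTER-" reference format are hypothetical examples, and real policies lean on Purview's built-in sensitive information types and classifiers rather than hand-written regexes.

```python
import re

# Example detection patterns of the sort a DLP rule applies.
# The client-matter reference format is hypothetical.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "tax_file_number": re.compile(r"\b\d{3} ?\d{3} ?\d{3}\b"),
    "client_matter_ref": re.compile(r"\bMATTER-\d{5,}\b"),
}

def dlp_findings(text: str) -> dict:
    """Return the sensitive patterns found in a piece of outbound text."""
    return {name: pat.findall(text) for name, pat in PATTERNS.items() if pat.search(text)}

sample = "Please summarise MATTER-204187; the card ends 4111 1111 1111 1111."
findings = dlp_findings(sample)
print("Blocked" if findings else "Allowed", list(findings))
```

That check runs at the point of upload, whatever the destination happens to be, which is exactly why the same control covers email, cloud storage and AI tools alike.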
Technology without policy is just speed bumps. We drafted a two-page "AI Acceptable Use Policy" that covered which tools were approved, what data could and couldn't go into them, and what to do when in doubt.
This wasn't a 40-page legal document. It was practical guidance that answered the questions staff actually have: "Can I use this? How do I know? What happens if I get it wrong?"
We ran 60-minute workshops for all staff, built around real examples.
The feedback was overwhelmingly positive. Staff wanted to use AI safely. They just needed to know how!
This isn't really about AI. It's about your organisation's fundamental approach to data governance.
Data security in 2026 is about integrated controls, not point solutions. AI is just the latest channel where data can leak. Email, file sharing, messaging, USB devices, print, mobile – every organisation has multiple potential leakage paths.
The organisations that successfully govern AI data are the ones that already had working data classification, DLP tooling that was actually configured, and policies staff understood.
They didn't start from scratch for AI. They extended existing controls to a new channel.
Conversely, if you've never successfully controlled what gets emailed externally or uploaded to personal cloud storage, AI governance will be a struggle. Not because AI is special, but because you haven't solved the foundational problem.
The uncomfortable truth: If you genuinely believe controlling AI uploads is impossible, your data governance has bigger problems than ChatGPT.
Here's what I told the client, and what I'll tell you:
Organisations that fail to govern AI data aren't victims of impossible technology. They're victims of inaction.
The tools exist. The frameworks exist. The expertise exists (or can be acquired).
What's often missing is the will to treat AI governance as seriously as email governance, cloud storage governance, or any other data channel.
Dale Jenkins
Founder & CTO, Microsolve
30+ years helping businesses turn IT challenges into competitive advantages
This article is based on real client engagements across professional services, financial services, and healthcare sectors throughout 2025–2026.
Technical details have been generalised to protect client confidentiality, but the outcomes, timelines, and implementation approaches are factual. If your organisation is wrestling with AI governance, data security, or digital transformation challenges, let's talk about practical paths forward.