What's Actually Possible (And Proven)
Let me share the five-step process we implemented for this client and dozens like them:
1. Discovery: Know What You're Dealing With
Most organisations have no visibility into which AI tools their staff are using. You can't govern what you can't see.
Within 48 hours, using their existing network monitoring and Microsoft Defender tooling, we had identified:
- 23 different AI platforms being accessed by staff
- 8 browser extensions with AI capabilities
- 4 desktop applications with embedded AI
- Over 67% of staff regularly using at least one AI tool
This wasn't sophisticated hacking. This was using logs from systems they already owned.
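If you want a feel for how little tooling this kind of discovery actually needs, here's a minimal sketch in Python. It assumes a CSV export of web proxy or firewall logs with "user" and "destination" columns, and a hand-maintained watch list of AI domains; the column names, file name, and domain list are illustrative placeholders, not the client's actual configuration.

```python
import csv
from collections import defaultdict

# Hand-maintained watch list of AI platform domains (illustrative, not exhaustive).
AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
}

def summarise_ai_usage(log_path: str) -> None:
    """Count distinct users per AI platform from a proxy/firewall log export.

    Assumes a CSV with 'user' and 'destination' columns, as most
    web-filtering or firewall log exports can provide.
    """
    users_by_platform = defaultdict(set)
    all_users = set()

    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            all_users.add(row["user"])
            platform = AI_DOMAINS.get(row["destination"])
            if platform:
                users_by_platform[platform].add(row["user"])

    ai_users = set().union(*users_by_platform.values()) if users_by_platform else set()
    for platform, users in sorted(users_by_platform.items()):
        print(f"{platform}: {len(users)} distinct users")
    if all_users:
        share = 100 * len(ai_users) / len(all_users)
        print(f"Staff using at least one AI tool: {share:.0f}%")

summarise_ai_usage("proxy_export.csv")  # hypothetical export path
```

The point isn't the script itself; it's that the raw data is already sitting in logs you own.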
2. Classification: Define Your Red Lines
We worked with the client to categorise their data into clear tiers:
- Public – Marketing materials, published content (AI use: unrestricted)
- Internal – General business documents, internal memos (AI use: approved tools only)
- Confidential – Client files, financial data, strategic plans (AI use: highly restricted, audited tools only)
- Restricted – Personal data, legal privilege, trade secrets (AI use: blocked entirely for external tools)
This took three workshops, not three months.
Most organisations already have data classification frameworks – they just haven't explicitly mapped them to AI usage.
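To make the mapping concrete, here's one way to express the tiers and their AI rules as a simple lookup table. The tier names and rules mirror the list above; the enum and function are an illustrative sketch, not the client's actual policy artefact.

```python
from enum import Enum

class DataTier(Enum):
    PUBLIC = "Public"
    INTERNAL = "Internal"
    CONFIDENTIAL = "Confidential"
    RESTRICTED = "Restricted"

# Each tier maps to the AI usage rule agreed in the workshops.
AI_USAGE_RULES = {
    DataTier.PUBLIC: "Unrestricted",
    DataTier.INTERNAL: "Approved tools only",
    DataTier.CONFIDENTIAL: "Highly restricted; audited tools only",
    DataTier.RESTRICTED: "Blocked entirely for external tools",
}

def ai_rule_for(tier: DataTier) -> str:
    """Return the AI usage rule for a given data classification tier."""
    return AI_USAGE_RULES[tier]

print(ai_rule_for(DataTier.CONFIDENTIAL))  # Highly restricted; audited tools only
```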
3. Technical Controls: Turn on What You Already Own
This client was already licensed for Microsoft 365 E5, which includes Purview. They were paying for comprehensive DLP capabilities but using less than 30% of them.
We configured:
- Endpoint DLP to detect and block sensitive files being uploaded to unauthorised AI sites. If someone tries to attach a document marked "Confidential" to ChatGPT, they get blocked at the browser level before the file leaves their machine.
- Web filtering to categorise AI platforms into "approved," "monitored," and "blocked" lists. Microsoft Copilot (integrated with their tenant) was approved. ChatGPT free tier was blocked. Claude (with enterprise contract) was approved with logging.
- Clipboard monitoring to prevent copy-paste of sensitive content. You'd be surprised how many data leaks happen through simple copy-paste of text into AI prompts. Modern DLP can inspect clipboard content in real time.
- Session recording for high-risk roles. For staff working with the most sensitive client data, we enabled session recording on any AI platform interaction. Not to spy, but to have an audit trail if something goes wrong.
- Application control using Microsoft Defender Application Control and Intune policies to prevent installation of unauthorised AI-powered applications and browser extensions.
The total additional licensing cost? Zero dollars.
The implementation time? Three weeks, including testing.
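None of this required custom code; it's configuration inside Purview, Defender, and Intune. But the decision logic behind the controls is simple enough to sketch. The snippet below is a conceptual model of how an upload gets evaluated (platform category first, then the file's sensitivity label), not how Purview is actually configured, and the category lists and label names are placeholders rather than the client's real settings.

```python
# Illustrative model of the upload policy: platform category plus data tier decide the outcome.
APPROVED = {"copilot.microsoft.com", "claude.ai"}   # tenant-integrated or enterprise contract
BLOCKED = {"chat.openai.com"}                       # consumer/free-tier tools

def evaluate_upload(destination: str, label: str) -> str:
    """Decide what the endpoint DLP policy would do with this upload attempt."""
    if label == "Restricted":
        return "BLOCK: Restricted data is blocked for all external AI tools"
    if destination in BLOCKED:
        return "BLOCK: platform is on the blocked list"
    if destination not in APPROVED:
        return "MONITOR: unrecognised AI platform, allowed but logged for review"
    if label == "Confidential":
        return "ALLOW WITH AUDIT: approved tool, session logged"
    return "ALLOW: approved tool, low-sensitivity data"

print(evaluate_upload("chat.openai.com", "Confidential"))  # BLOCK
print(evaluate_upload("claude.ai", "Confidential"))        # ALLOW WITH AUDIT
```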
4. Policy: Make Expectations Crystal Clear
Technology without policy is just speed bumps. We drafted a two-page "AI Acceptable Use Policy" that covered:
- Which AI tools are approved for which data types
- A simple decision tree: "Would you be comfortable emailing this file externally without an NDA? If not, don't put it into AI."
- Consequences for violations (progressive discipline aligned with existing IT security policies)
- How to request access to new AI tools (a proper evaluation process, not a blanket "no")
This wasn't a 40-page legal document. It was practical guidance that answered the questions staff actually have: "Can I use this? How do I know? What happens if I get it wrong?"
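The decision tree reads just as naturally as a few lines of code. This is only a restatement of the questions above; the function and its wording are illustrative, not part of the actual policy document.

```python
def may_use_ai(tool_is_approved: bool, comfortable_emailing_without_nda: bool) -> str:
    """The two questions staff need to answer before putting anything into an AI tool."""
    if not tool_is_approved:
        return "No: request an evaluation of the tool first"
    if not comfortable_emailing_without_nda:
        return "No: treat it like any other external disclosure"
    return "Yes: go ahead, using the approved tool"

print(may_use_ai(tool_is_approved=True, comfortable_emailing_without_nda=False))
```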
5. Training: From Compliance Exercise to Cultural Norm
We ran 60-minute workshops for all staff, with real examples:
- "Here's what happens when you paste a client email into ChatGPT – the warning you'll see, why it's blocked, and what approved alternatives you have."
- "Here's how to use Copilot safely for drafting documents – it's integrated with our tenant, doesn't train on your data, and respects our permissions."
The feedback was overwhelmingly positive. Staff wanted to use AI safely. They just needed to know how!
The Bigger Picture: What This Means for Your Business
This isn't really about AI. It's about your organisation's fundamental approach to data governance.
Data security in 2026 is about integrated controls, not point solutions. AI is just the latest channel where data can leak. Email, file sharing, messaging, USB devices, print, mobile – every organisation has multiple potential leakage paths.
The organisations that successfully govern AI data are the ones that already had:
- Clear data classification
- Mature DLP programs
- Strong security culture
- Documented policies with teeth
They didn't start from scratch for AI. They extended existing controls to a new channel.
Conversely, if you've never successfully controlled what gets emailed externally or uploaded to personal cloud storage, AI governance will be a struggle. Not because AI is special, but because you haven't solved the foundational problem.
The uncomfortable truth: If you genuinely believe controlling AI uploads is impossible, your data governance has bigger problems than ChatGPT.
The Bottom Line
Here's what I told the client, and what I'll tell you:
Organisations that fail to govern AI data aren't victims of impossible technology. They're victims of inaction.
The tools exist. The frameworks exist. The expertise exists (or can be acquired).
What's often missing is the will to treat AI governance as seriously as email governance, cloud storage governance, or any other data channel.
Dale Jenkins
Founder & CTO, Microsolve
30+ years helping businesses turn IT challenges into competitive advantages
About This Article
This article is based on real client engagements across professional services, financial services, and healthcare sectors throughout 2025–2026.
Technical details have been generalised to protect client confidentiality, but the outcomes, timelines, and implementation approaches are factual. If your organisation is wrestling with AI governance, data security, or digital transformation challenges, let's talk about practical paths forward.