The Open AI Dilemma: Why Convenience Can Be a Cybersecurity Trap

By Nathan Jamieson, CISO, The Iomart Group 

Published Date: 13/08/2025

Category: Cyber Security

Read Time: 3 minutes

Artificial intelligence is fast becoming as embedded in the modern workplace as email or instant messaging. From drafting reports to writing code, tools like ChatGPT and GitHub Copilot are helping teams work faster, smarter, and more creatively than ever before. 

But as adoption rises, so too does a more sinister trend: the quiet, unsanctioned use of open-source and public AI tools in environments where data protection, regulatory compliance, and security should be non-negotiable. It’s a trend that’s gaining traction – and one that organisations can’t afford to ignore. 

Shadow AI is here – and it's growing 

Recent studies show that 75% of employees are using generative AI at work – but only 26% of companies have an official policy in place to govern it (Salesforce, 2024). This disconnect is giving rise to what we call “shadow AI”: the unmanaged use of AI tools outside formal IT control. 

But this isn’t about malicious intent – it’s about convenience. 

Employees want to move faster and reduce friction – especially in industries like law, finance, and healthcare where information overload and time pressures are common. Without secure alternatives, even well-meaning staff may copy and paste sensitive data into open interfaces. The risk? That your data is now in someone else’s hands – or worse, in someone else’s training data. 

The real-world risks of open AI tools 

This is no longer a theoretical risk. In 2023, Samsung suffered a high-profile leak of sensitive source code after employees pasted confidential information into ChatGPT. And in 2024, leaked documents revealed that some publicly available AI models had inadvertently been trained on proprietary or sensitive datasets taken from online forums, websites, and source code (MIT Technology Review, 2024). 

Some of the main concerns we see across our client base include: 

  • Data leakage – Once data is entered into an open AI tool, it may be retained, reused, or exposed to other users.
  • Intellectual property loss – Sensitive code or internal IP can be absorbed into training data, with no way to retrieve or delete it.
  • Compliance breaches – Regulations like GDPR may be violated if protected data is shared with tools outside your control.
  • Operational blind spots – Shadow AI contributes to shadow IT, making it harder for security teams to maintain oversight or enforce policies. 

The legal sector: A case study in contrast 

Take the legal sector as one example. The Iomart Group works with several legal clients contending with AI adoption – and the divide is often generational. Younger lawyers are eager to experiment with AI, while more senior practitioners remain cautious. But without clear guidance, guardrails, or approved solutions, this energy risks fuelling shadow AI rather than safe innovation. 

This reflects a broader challenge across industries: balancing opportunity with risk. Blocking AI outright rarely works, but enabling it securely? That’s where the real value lies.  

From exposure to empowerment: Four steps forward 

To protect your organisation without stifling innovation, we recommend the following: 

1. Educate without fear 

Make the risks real and relatable. Help employees understand what’s at stake – and why it matters to them and their clients. 

2. Provide secure alternatives 

If your teams can’t easily use approved, internal AI tools, they’ll look elsewhere. Make secure, sanctioned alternatives the path of least resistance. 

3. Implement smart guardrails 

Not blanket bans – but clear, risk-based policies. Define what can and can’t be entered into public AI tools, and establish regular audits. A simple illustration of how such a screen might work follows these four steps. 

4. Invest in AI literacy 

This isn’t just about security. It’s about helping your workforce develop a deeper understanding of what AI can do, how it works, and how to use it securely and effectively. 
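
To make step 3 concrete, here is a minimal, illustrative sketch in Python of the kind of pre-submission screen a security team might place in front of public AI tools. The deny-list patterns, the project codename convention, and the screen_prompt helper are hypothetical placeholders invented for this example; a real guardrail would be built around your own data classification policy rather than this exact code.

import re

# Hypothetical deny-list patterns, included purely for illustration.
# A real deployment would tune these to the organisation's own data types and sectors.
DENY_PATTERNS = {
    "UK National Insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "Payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "Private key material": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Internal project codename": re.compile(r"\bPROJECT-[A-Z]{3,}\b"),  # placeholder naming convention
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any deny-list patterns found in the prompt text."""
    return [name for name, pattern in DENY_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Summarise this contract for the client with NI number AB123456C."
    findings = screen_prompt(prompt)
    if findings:
        print("Blocked before submission:", ", ".join(findings))
    else:
        print("Prompt passed the basic screen.")

In practice a screen like this would sit inside a browser plugin, web proxy, or data loss prevention layer rather than a standalone script, and it complements, rather than replaces, the education and audits described above.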

It’s a cultural shift, not just a technical one 

Regulating AI is not just a matter of policy – it’s a matter of culture. If you want people to adopt secure AI platforms, they need to feel confident, empowered, and supported. 

That means involving teams early, designing policies with users in mind, and ensuring that secure tools are just as convenient as the public ones they replace. It also means acknowledging that every organisation is different – and your approach to AI governance should reflect your people, your sector, and your risk profile. 

Looking Ahead: From Risk to Readiness 

This is the first in a new series exploring how organisations can harness the potential of AI without compromising on security, ethics, or control. 

Over the coming weeks, we’ll delve into key topics like AI’s role in driving workplace productivity, the ethical implications of rapid adoption, and how to build the right guardrails without stifling innovation. 

Ready to take control of AI in your organisation? 

Speak to our team about building a safer, smarter AI strategy tailored to your environment. 

Get in Touch