AI in Cybersecurity: Powerful Ally or Unruly Guest?
AI is everywhere right now. Whether you’re a security leader or just trying to get through your inbox without another “AI-powered” sales pitch, it’s hard to escape the hype. But while the business world races to adopt generative AI (genAI), cybersecurity teams are left grappling with some big questions: how do we use it safely, and how do we defend against it?
That’s exactly what Adarma set out to explore in a recent roundtable event in London, hosted in partnership with Splunk. A dozen security leaders joined to share their experiences, frustrations and hopes about the role AI is starting to play in their organisations.
Everyone’s Using AI – Ready or Not
One of the biggest takeaways? AI adoption is charging ahead much faster than most organisations are prepared for. Whether it’s creative teams using image generation tools or employees quietly bringing AI meeting assistants into calls, AI is seeping into everyday business life – often without any oversight.
As Adarma’s Head of Security Response, Philip Ridley, put it:
“Every week there’s a new tool or process, yet people aren’t armed with the right information.”
It’s a familiar story. One attendee mentioned that their company had several AI projects up and running, but the data governance work that was meant to come first had been shelved. It’s like building a house before laying the foundations – not a great plan.
The Silent Spread of Shadow AI
“Shadow AI” – unsanctioned tools used by well-meaning employees – was a recurring theme. Meeting assistants that quietly transcribe sensitive conversations, or private tools doing who-knows-what with company data, are becoming a real headache.
One leader shared:
“We’re seeing AI tools invited to meetings, taking notes on conversations that maybe the user wasn’t meant to be part of. Then that transcript is just… out there.”
This lack of control is putting security teams on the back foot – they’re being asked to secure environments they can’t even fully see.
AI as a Cybersecurity Assistant – Not a Replacement
There was also plenty of discussion about where AI fits into the security stack. Could it replace tier-one analysts? Should it?
Splunk’s Craig Saunderson was clear:
“AI is an analytic tool. It’s not about replacing people – it’s about assisting them. Automation might reduce workload, but it’s about trust. You’ve got to understand the use case first.”
Adarma echoed this point. Ridley noted that the real value lies in speed and scale:
“If an analyst can get an answer in one minute that used to take an hour, that’s powerful. It means we can help customers faster.”
But the idea that AI could – or should – replace people didn’t sit well with anyone. Expertise, especially at higher tiers, is still critical.
Measuring AI Success? Easier Said Than Done
While many organisations are experimenting with private AI tools and internal large language models (LLMs), almost no one felt confident measuring whether they were actually delivering value.
One participant summed it up nicely:
“You’re not going to save money with AI, but you’ll still have to use it – because your competitors will.”
The bigger challenge? Educating teams to use the tools well, and helping boards understand the risks, especially around data and intellectual property. Several leaders mentioned boards were nervous – and rightly so – about putting sensitive information into SaaS tools they don’t fully control.
Attackers Are Using AI Too
Of course, it’s not just defenders using AI. Attackers are loving it too. The threat landscape hasn’t changed fundamentally in 20 years, but AI is supercharging the speed and scale of attacks.
Saunderson warned:
“We’re looking at mean time to detect and detain of 10 to 15 minutes. AI helps you figure out where to look – fast.”
That’s critical when the time between breach and impact is getting shorter by the day. Defensive teams need every edge they can get.
Let’s Not Forget the Basics
With all the noise around AI, it’s easy to lose sight of the fundamentals. But as Adarma’s Ridley reminded the group:
“Security is still security. AI might collapse the timescales, but the fundamentals haven’t changed. You still need to get the basics right.”
That means good governance, proper controls, and a clear understanding of what you’re trying to achieve. AI can help – but only if the groundwork is solid.
Final Thoughts
AI in cybersecurity isn’t black and white. It’s neither a magic solution nor a ticking time bomb. It’s a tool – and like any tool, its success depends on how well you use it.
The takeaway from the roundtable? Take a breath. Build the foundations. Understand your risks. And don’t forget: no matter how smart your tech gets, it still needs smart people behind it.
Want to know how Adarma can help you safely adopt and defend against AI? Get in touch at hello@adarma.com.