AI’s Rise Hasn’t Rewritten the Cybersecurity Rulebook: Why the Fundamentals Still Matter

At a recent RANT roundtable in Edinburgh, hosted by Adarma and Splunk, security leaders came together to explore the role of artificial intelligence (AI) in modern cyber defence.
Rather than pointing to radical change, the discussion revealed a more grounded reality: beyond improving speed and accuracy, AI isn’t rewriting the cybersecurity rulebook; it’s refining it.
Instead of introducing an entirely new generation of threats, AI is primarily enhancing existing ones, making them more sophisticated and, in most cases, more effective. While the threat landscape is becoming more adaptive, the pace of attacks is increasing, and the stakes feel higher, the nature of the threats themselves remains largely unchanged.
As the French writer Alphonse Karr put it back in 1849, “Plus ça change, plus c’est la même chose” – the more things change, the more they stay the same. More than a century and a half later, that sentiment still holds true.
Freedom Doesn’t Come for Free
AI, particularly generative AI (genAI), has undoubtedly given attackers a new toolkit. But it hasn’t changed their goals or the underlying vulnerabilities they exploit.
“They can send a more convincing phishing email, but it’s still a phishing email. The threat is still the threat.” – Philip Ridley, Head of Security Response, Adarma
That sentiment echoed around the table. One senior security leader asked: “If there’s an AI-generated phishing email, where’s the data to say it’s more effective than a non-AI one? What is it we’re trying to defend against that’s different?”
In fact, the improvements in tone and grammar may have had an unexpected outcome: rather than overwhelming defenders, polished AI-generated content may be exactly what defensive engines have learned to catch.
“The competition to have better-written phishing emails has had the result that a lot of the most effective attacks are the very simple ones.” – Jamie McCallion, Partner Technical Manager, Splunk
She noted that attackers still rely on the “urgency-immediacy-authority triangle,” tactics that have worked for years and continue to succeed. Many of today’s successful attacks draw more on the lessons of previous failures than on any clear benefit from genAI.
“A few years ago we were seeing exhaustion, with hundreds of links in emails. Defensive engines were developed against those. It’s the simple ones that are successful now, so maybe AI wasn’t responsible for those.”
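To make that point concrete, here is a deliberately naive sketch of the kind of defensive heuristic being described: it scores an email on link volume and urgency/authority wording. The keyword list, thresholds and weights are illustrative assumptions, not how any real engine works.

```python
import re

# Illustrative keywords for the "urgency-immediacy-authority" triangle;
# a real engine would draw on far richer signals than a hand-picked list.
URGENCY_AUTHORITY_KEYWORDS = {
    "urgent", "immediately", "action required", "final notice",
    "verify your account", "ceo", "wire transfer",
}

LINK_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

def phishing_score(subject: str, body: str) -> float:
    """Crude 0-1 suspicion score from link volume and urgency/authority wording.
    The thresholds and weights below are assumptions for illustration only."""
    text = f"{subject}\n{body}".lower()

    # Link-heavy campaigns (the "hundreds of links" era) max this term out fast.
    link_score = min(len(LINK_PATTERN.findall(text)) / 10.0, 1.0)

    # Urgency/authority phrasing contributes the remainder of the score.
    hits = sum(1 for kw in URGENCY_AUTHORITY_KEYWORDS if kw in text)
    keyword_score = min(hits / 3.0, 1.0)

    return 0.6 * link_score + 0.4 * keyword_score

if __name__ == "__main__":
    # A short, plainly worded lure with a single link barely registers;
    # this is the kind of "simple" attack the roundtable says now succeeds.
    print(phishing_score("Quick favour", "Are you at your desk? http://example.com/pay"))
```

The instructive part is the gap such a heuristic leaves: a one-link, plainly worded message scores low, which is exactly why the simplest lures now slip past engines tuned for the link-heavy campaigns of a few years ago.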
One CISO joked: “Maybe threat actors will revert to being non-native English speakers and start deliberately inserting typos.”
Workers’ Playtime
Much of the discussion focused not on what AI enables, but on how quickly organisations are expected to adopt it, often without due diligence.
Security leaders raised concerns about governance, oversight and the tension between innovation and compliance.
“We’re quite a small company and we found all our developers getting these AI tools, downloading them and using them,” said one participant.
“We thought, hang on, let’s rein this in a bit, let’s see if we want to make one of them company-wide. But the problem was, they store the data in the US. That gives rise to all sorts of complications. In the due diligence we get done on us, a question we always get asked is, do we store data outside the EEA? And we always answer no.”
Another recalled the fallout from restricting access to unapproved AI tools:
“At my previous company, things came to a head when we put a new security gateway in while people were playing with ChatGPT as this exciting new thing. I had heads of department saying, ‘You’ve cost me five members of staff by blocking that!’ That was kind of the mentality. But it’s just another SaaS tool. I don’t care whether it’s an AI tool or Google Drive, if it hasn’t been approved, it doesn’t belong in the workflow.”
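The “approved or not” rule in that quote maps naturally onto an allowlist. As a minimal sketch, assuming a hypothetical register of approved SaaS domains (the domains and function name below are invented for illustration, not any particular vendor’s gateway), such a check might look like this:

```python
from urllib.parse import urlparse

# Hypothetical register of sanctioned SaaS domains; in practice this would be
# driven by the organisation's approved-software process, not hard-coded.
APPROVED_SAAS_DOMAINS = {"drive.google.com", "login.microsoftonline.com"}

def is_request_allowed(url: str) -> bool:
    """Permit traffic only to explicitly approved SaaS domains.
    An AI chatbot gets no special treatment: unapproved means blocked."""
    host = (urlparse(url).hostname or "").lower()
    return any(
        host == domain or host.endswith("." + domain)
        for domain in APPROVED_SAAS_DOMAINS
    )

# An unapproved AI tool and an unapproved file-share are blocked identically.
assert not is_request_allowed("https://chat.example-ai.com/session")
assert is_request_allowed("https://drive.google.com/file/d/abc123")
```

The design point is that the check is tool-agnostic: “AI” never appears in the logic, reflecting the view that an unapproved chatbot is just another unapproved SaaS tool.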
Back to Basics
There was confusion over whether, and to what degree, cybersecurity products and services are successfully deploying AI capabilities – and even some uncertainty over whether the distinctions between artificial intelligence, machine learning and deep learning are well understood, let alone clearly communicated. Amid all of this, a quieter consensus emerged.
For all the excitement across society and business about genAI’s potential, nothing has happened so far that justifies ignoring or circumventing long-established security basics.
Most of the security headaches genAI is giving CISOs and other senior leaders call for solutions that have already worked for previous innovations and technologies. For most organisations, this means investing the necessary time and money to train employees on how to get the best out of these tools, and how to integrate them into operations without taking unnecessary or unintended risks.
“With all of this stuff, it’s about teaching our employees how to use it,” said one CISO.
“We’ll pay £300 for everyone to have a Copilot licence but not tell people how to use it,” another added.
“You want to know what the best questions to ask are, but most people don’t know, and don’t ask,” one attendee reflected.
The tools may be modern, but the challenges – user training, behavioural risk and strategic oversight – are anything but new.
“I’ve not heard many use cases where we’re getting value from AI yet,” said Alex Lindl, Head of Information Security at Aegon and RANT’s guest co-host.
“Yes, AI is relevant, and it’s highlighted problems, but those problems were already there. We shouldn’t let the board go off on a tangent about AI without getting the security basics right.”
Ridley agreed: “We’ve had a really interesting discussion about the state of the nation when it comes to AI, but to me, again, it’s just another tool.
“Those conversations in our business about risk and governance still apply. Just because it has an AI label doesn’t make it different. The attack vectors are still the same. It’s all the stuff we’ve been trying to defend against for years.”
And while AI’s role will undoubtedly continue to evolve, most agreed that we’re still in the early stages.
“We’re very early in this conversation. It’ll be interesting to have this chat again in a year. We’ll probably be talking about all the AI agents in our system getting things right,” Lindl concluded.
Interested in learning more from Adarma and Splunk? Find out more here: Adarma / Splunk