RANT / SECURONIX Post-Event Content

In Partnership With Securonix

The age of accessible AI is undoubtedly upon us – opening up a new security frontier, with the potential to completely reorient how both cyber adversaries and frontline teams operate.

In its wake, the foremost argument in security’s artificial intelligence debate has shifted from the existential to the adversarial, namely: ‘How can our AI outpace that of the cyber attacker?’

Yet from a defensive standpoint – and in these times of extraordinary technological fluidity – what constitutes good security is as much about understanding how AI is being used by adversaries as it is about the race for adoption.

In turn, SOC teams should focus on how AI can be best utilised to protect organisations; it is AI’s capacity to fine-tune defences and bolster intelligent decision-making that can truly help in keeping that crucial step ahead of the adversary.

But as the tech world scrambles over the potential of generative AI – seeking to capitalise on the success of ChatGPT or the image generator DALL-E – it’s important to remember that the use of artificial intelligence in cybersecurity is nothing new.

Machine learning (ML) technology is already mature, and AI is now maturing to the point where it is adaptive enough to detect even the most granular of anomalies.

When deployed correctly, AI’s ability to combine and enrich attack data – using contextual algorithms with ever-increasing levels of nuance – means it can find abnormalities within an organisation’s network and flag them far more quickly than a human (or signature-based system) ever could.
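As a purely illustrative sketch of the kind of anomaly detection described here, the example below baselines ‘normal’ per-host network behaviour and flags a host that suddenly pushes far more data to far more destinations, mostly out of hours. The feature names, values and library choice (scikit-learn’s IsolationForest) are assumptions made for the example, not a description of any particular product.

```python
# Purely illustrative: baselining per-host network behaviour and flagging outliers.
# Feature names and values are invented for the sketch, not taken from any product.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-host features derived from network logs:
# [bytes_out_per_hour, distinct_destinations, failed_logins, off_hours_ratio]
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[5e6, 12, 1, 0.05], scale=[1e6, 3, 1, 0.02], size=(500, 4))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# A new observation: a host sending far more data to far more destinations, mostly off-hours.
suspect = np.array([[4.8e7, 90, 0, 0.9]])
print(model.predict(suspect))        # -1 means anomaly, 1 means normal
print(model.score_samples(suspect))  # lower scores are more anomalous
```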

AI and ML speed up analysis and help to reduce a lot of the noise that inhibits better decision-making, particularly when combined with more elegant and prescriptive advances in threat detection – such as the emerging field of ‘Detection as Code’.

And when treated as additive to frontline human expertise, generative AI can also enable the transfer of knowledge – a benefit that is frequently overlooked in the rush to ‘block’ AI chatbots.

Yet recent advancements in AI have not only ushered in a raft of knotty governance, safeguarding and societal concerns; they have also ramped up fears over both the AI capabilities of nation-state hackers and how the technology could aid far less-skilled attackers.

And given the general economics of cyber attacks – in which it is frequently cheaper and easier to launch attacks than to build effective defences – it’s no wonder that CISOs are worried about an asymmetrical fight between adversaries with unrestricted use of AI tech and defenders constrained by regulations.

AI – A Tool for the Adversary and the Revival of the Script Kiddie

There’s no doubt that AI offers an abundance of powerful new attack vectors. Advances in AI and ML have already enabled botnets to scale at exponential speed and with far greater network capacity, while we are moving into the age of AI-injected data poisoning, ‘smart’ malware and biometric spoofing.

One current example of generative AI fuelling attacks is its role in the revival of ‘script kiddies’ – low-skilled cybercriminals who can, nevertheless, be very dangerous to organisational defences.

As Richard Cassidy, VP of Global Technology at Security Intelligence specialists Securonix, explains: “A lot of Russian adversarial groups don’t speak good English, so they’re using tools like ChatGPT, Bing AI and ChatSonic to write perfectly formed wording for their spear phishing campaigns.

“This is the scary part – you don’t actually need to know what you’re doing anymore; you just need to ask ChatGPT to do it. And that’s who people need to worry about – it’s script kiddies who actually haven’t got a clue what they’re doing. They’re just attacking you because they know they can and it’s easy to do.”

AI – An Advanced Tool for Security Teams and Why You Shouldn’t Block ChatGPT

But a rush to block the likes of ChatGPT would ignore its huge potential as a tool for the defender.

“I’ve been trying to tell CISOs – don’t block ChatGPT, use it,” explains Cassidy.

“There’s a skills shortage in this industry and, for example, if you’re looking at a Base64 encoded message, you may want to know: ‘What does that really mean, in terms of the packet?’”

“So go to ChatGPT, paste it in and let ChatGPT tell you what it is. Allow it to the right user groups, but use it as a tool to empower your SOC analyst – and now that easy way for your adversaries to get answers to complex questions has become an easy way for your defenders.”
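For illustration only, the snippet below shows the decoding step behind that kind of question – turning a Base64 blob back into something readable. The encoded payload here is made up for the example; in practice the analyst (or the chatbot) is interpreting whatever obfuscated command was actually observed.

```python
# Illustrative only: decoding a Base64 blob locally before (or instead of) asking
# a chatbot what it contains. The encoded payload below is a made-up example.
import base64

encoded = "cG93ZXJzaGVsbCAtZW5jIC1ub3AgLXcgaGlkZGVu"  # hypothetical string lifted from a packet
decoded = base64.b64decode(encoded).decode("utf-8", errors="replace")
print(decoded)  # -> "powershell -enc -nop -w hidden" – a classic obfuscation pattern
```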

One of the most innovative use cases for AI in today’s security landscape is a ‘Detection as Code’ approach to threat detection.

By treating detections as well-written code, SOC teams can build much higher quality alerts. Not only does this eliminate a huge amount of noise, but it creates a structured approach to analysing log data that utilises AI to fine-tune detections for specific environments and different types of data – and, over time, to identify patterns in attacker behaviour.
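As a minimal sketch of what a detection expressed as code might look like – the rule name, fields and thresholds below are illustrative assumptions, not any vendor’s implementation – a detection becomes a small, testable unit whose parameters can be tuned per environment, whether by an analyst or by an ML layer:

```python
# Minimal sketch of a detection expressed as code: versionable, testable, and
# parameterised so its thresholds can be tuned per environment. Names are illustrative.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FailedLoginBurst:
    """Alert when one account's failed logins exceed a threshold within a time window."""
    threshold: int = 10       # environment-specific; an ML layer could tune this up or down
    window_seconds: int = 300

    def evaluate(self, events) -> bool:
        """events: iterable of dicts with 'user', 'timestamp' (epoch seconds) and 'outcome'."""
        failures = defaultdict(list)
        for event in events:
            if event["outcome"] == "failure":
                failures[event["user"]].append(event["timestamp"])
        for times in failures.values():
            times.sort()
            for start in times:
                if sum(1 for t in times if start <= t < start + self.window_seconds) >= self.threshold:
                    return True
        return False

# The test lives alongside the rule, like any other piece of code.
rule = FailedLoginBurst(threshold=3, window_seconds=60)
burst = [{"user": "svc-backup", "timestamp": 1000 + i, "outcome": "failure"} for i in range(3)]
assert rule.evaluate(burst) is True
```

Because the rule and its test sit in version control together, tuning a threshold for a specific environment becomes a reviewable code change rather than an opaque console tweak.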

“It’s about what the code can do to be self-learning and self-protecting,” explains Cassidy.

“This is where the industry is going; it’s allowing AI to tune up and down your detection rules – based upon your security policy. We don’t need the SOC to do it any more, we need the SOC to validate the outcomes, which is what they’re there for, in my opinion.”

AI – Finding the ‘Unknown Unknowns’

AI is maturing to the stage where it will soon be adaptive enough to track how quickly adversaries pivot around a SOC’s detections. And it comes into its own in detecting the ‘unknown unknowns’.

“If you’re a CISO, or Director of Information, you care about a couple of things: what am I protecting, who am I protecting it from and where does it sit,” explains Cassidy.

“If you know those things, then you can probably cover about 85% of your security needs. You don’t need ChatGPT or any big AI – but there is that 10%, maybe 5% (if you’re lucky) that is Black Ops stuff. And I don’t think organisations are really creating a function of the SOC that is saying: ‘Okay, I think we’ve covered that part – but what about the ‘unknown unknowns’. What are we doing to detect those anomalies – those outliers?’

“So can AI enhance that testing? Of course it can. But as an organisation you also need to bring in consultants with capabilities that are able to use these adversarial techniques in a controlled way. Without that, forget what you’re buying – you’re going to have attack vectors you don’t even know about.”

“And it doesn’t matter what AI you have if you’re not asking it the right questions, and you’re not applying it in the right way. So I think all AI is the same – but it’s HOW you apply it that gives you the business and security outcomes you need. And that’s wholly reliant on your MSSP – or your product – or more importantly, the people in your business that make decisions.”

Questions of Governance

Lastly, it’s inevitable that the age of accessible AI brings with it a legion of compliance and safeguarding concerns – from IP protection and copyright law to ethics.

Sociological questions arise over bias in the data sets that AI is trained on, and over whether generative AI will damage people’s ability to learn and think for themselves – alongside fears that over-governance will simply drive nefarious activity further underground.

As with any technology born of the very human drives of advancement and betterment – creating something truly greater than the sum of its parts – AI brings with it a myriad of complexities. It’s no wonder, given AI’s particular capacity to self-learn, that many CISOs view it as a seemingly unstoppable force.

Yes, it is a powerful tool for adversaries. But it is an equally powerful one for security teams – if they take the time to understand how it can be used effectively in the bigger fight: defending organisations from the frontlines.