Right Here, Right Now: Is Cybersecurity Focusing On The Most Important AI Threats?
A RANT Forum is always a free-form and unpredictable kind of security conversation, and so it proved when the great and the good of the London-based cyber community gathered to discuss whether or not the right risks are being considered as businesses and their SOCs assess the mushrooming implementation of generative AI tools. A short answer? Probably. But, as so often, the devil is in the detail.
“I’d never say any risk is wrong to focus on, it all depends on the organisation,” said Sapna Patel, head of cyber at the King’s Trust, and one of RANT’s two invited panellists for the event. “In my organisation, I’m not as worried about shadow AI as I was a year ago, but I’m worried about embedded AI that pops up without us knowing about it.” Security leaders need to balance competing demands, she argued – from those within the workplace who want to ramp up use of these tools, and those who remain at best sceptical, at worst actively distrustful of the technology.
“There’s three broad areas of risk associated with the use of AI,” said Joe Mulhall, Technical Information Security Officer with the Financial Conduct Authority. “Data leakage is a huge one – and while it was certainly bigger last year, it’s not gone away yet. Then there’s the security of the large language models built within our organisations, which could introduce risks of model theft and data poisoning as well as data leakage. And the other area of risk we’re all looking at is productivity risk. How do we leverage this to keep on top of all the other risks?”
“I don’t think we’re focusing on the wrong AI risks, because there’s so many of them,” said Ben van Enckevort, chief technology officer and co-founder of Metomic, the night’s sponsors. “There are today risks and tomorrow risks – and today there are three ways I’ve seen individuals adopt AI. First, people are going to integrate something like ChatGPT, or employees are using it – and then we get worried about employees uploading sensitive data. That was a year ago: now they’re saying, ‘OK, we’re not really that worried about shadow AI – we’re going to plug our data into various systems because we want our employees to do the best job.’ This is where the biggest risk is, because they may introduce risk accidentally.”
More troubling, though, van Enckevort argued, are the challenges that become amplified when automation enters the picture and AI agents are authorised by companies to take actions without human intervention.
“People want to automate things; companies want automatic employees,” he added. “At that point, the visibility on what’s sensitive data, where it is, and what people are doing with it, is the biggest risk. And going on to the tomorrow risks: tomorrow, it’s about when agents are acting on instructions that may not come from your employees.”
Build It Up, Tear It Down
Immediately, as so often at a RANT Forum, the first speaker from the floor took issue with the parameters of the discussion as just outlined by the speakers.
“Never mind today and tomorrow risks – let’s talk about 10-year risks,” they challenged. “The rise of agentic AI means we get rid of all the entry-level jobs we use to train our juniors, who, over time, become our seniors.” The eventual hollowing out of the experience base of the enterprise, they suggested, was inevitable. “It’s all utterly predictable,” they said.
Not so fast, the panel cautioned.
“Automation is kind of what we in the cyber security industry do – or what we protect,” Mulhall replied. “I don’t see fewer people in IT, I see more. We’ll be doing different jobs. Do people need to come in and do the jobs we already do? I’m not sure.”
“I share the concern, and I agree, though I don’t know the answer,” van Enckevort said. “Looking at patterns over time, the jobs we have to do will completely change. Things juniors are being trained to do now will be right at the edge of what we once needed them to do. The only thing I’m confident about is that in 10 years the jobs will be different.”
“I think there’ll be a lot of regret,” Patel acknowledged. “And after we regret, we will see the need to humanise everything” – to bring things back to how they are now, she added.
“It’s an interesting thing to explore,” Mulhall added. “Thinking of security – not any other job – I look at history for inspiration. Cities have traded for thousands of years. There’s always been security problems. We’ve always worked to solve them. We’ll still need security professionals to solve them.”
Going Out Of My Head
Talk turned to knock-on risks – risks associated with AI use, but that do not arise directly from it. For example, the risk of a company being sued over actions taken by AI agents without any direct instruction from a member of staff. The first response, from Mulhall, garnered a spontaneous round of applause from much of the room: he said he didn’t view this as a cybersecurity problem. Yet there was rueful acceptance that, whether it was properly a challenge for security teams or not, it was likely to end up in their in-trays.
Future regulation of AI – in the European Union and the UK in particular – was also an issue that had been on people’s minds. In answer to a question about preparing for those rules, Patel said: “I’m keeping an inventory and just making sure I’m keeping ready.”
Clearly, how AI risk will be assessed is going to keep on changing, as both new products and new regulations emerge. Preparing a business for all of that may be challenging, but it should not be impossible.
Patel said: “I treat it in the same way I treat everything right now – dissect it, look at it more, then decide.”
Van Enckevort suggested that there may be some useful examples to be found in recent data-protection history. “What kickstarted companies taking GDPR [the EU’s General Data Protection Regulation] seriously was real risk. I think we’re driving headlong into some large lawsuits, and that will start focusing attention.”
Mitigating these kinds of risk is possible through enhanced, focused and forensic attention to precisely what you’re using in your organisation, and by minimising exposure by shutting the door on redundant or unnecessary tools, he argued.
“Personally, we drill very, very strongly into the employee base, to understand what’s appropriate,” he continued, “and if you generate something you don’t use, that’s something that’s inappropriate. But that won’t last long. What I see coming is greater focus on what’s appropriate to use. This is something we see in our customer base. I think the answer will change over a couple of years, but I expect this will still be an issue.”
Yet where these risks arise from using a tool supplied by a third party, the responsibility will likely depend on the precise contractual terms the customer company has agreed with the AI vendor. The more companies rely on wide-ranging AI technologies, the less likely those contracts are to be worded in a way that is particularly helpful. When one wag wondered whether anyone in the room had a data-processing agreement with ChatGPT, the chorus of “No”s was close to unanimous. Whether these kinds of risks will be considered jobs for the security team or not, they will represent threats to the business, and someone will have to bear them in mind.
Slash Dot Dash
Perhaps surprisingly, data protection issues – and the governance, regulation and compliance questions they raise for businesses – went on to become a major talking point. One attendee told a story of being asked to input their passport and other details into the automated chat feature of a large airline’s website. They declined, and when they asked the airline how those data were being protected, the airline said they didn’t know.
“Should there be measures in place to make sure that when you put data into a form on an automated chat that it’s protected?” they asked. “And who should be responsible? If I park my car at my own risk, I know it: do we input our data into automated web chat forms on the same basis?”
Patel, her role as the voice of reason for the evening already well established, pointed out that “how they use such information should be in the privacy policy of the website. You should know that up front.”
“Unfortunately, this is very common,” van Enckevort said. “I moved into an apartment recently, with a company I’d used 10 years ago. They needed me to prove my identity. I said, ‘Haven’t you already identified me?’ And they said, ‘Oh yes, here’s your passport from 10 years ago’.”
Mulhall was optimistic that common sense may, eventually, prevail – “I think what will happen is that people will stop asking for data that they really don’t need,” he said – but did not venture a guess as to when that might happen.
Praise You
The conversation began to coalesce around what might be termed the pragmatic solutions or approaches that businesses and their security teams can meaningfully and usefully start to adopt. And the first step to assessing what is viable as a security posture (as opposed to what would be great to have but will be unaffordable, impractical or otherwise unachievable) may lie in deciding to what extent the challenges posed by this new technology are, in and of themselves, novel or unprecedented.
“I’ve been in security for quite a while,” one attendee admitted, to sniggers from some friends and colleagues. “We’ve always got something new to deal with. It’s always a major issue, and we need to fix it, so our focus goes on it. And then eventually all these things move into the organisation and they become part of normal cybersecurity hygiene – part of the default minimum requirements. What do we think, at the moment, is AI security hygiene? To me, it’s all over the place. We attack everything as a priority. At some point, I guarantee, it will settle down, and there’ll be AI security hygiene requirements that you do as part of your normal work. What will those be, by default?”
It was a good question, and occasioned some brow-furrowing and visible deep thought. And the first response, which arrived surprisingly quickly, felt both achievable and inspired.
“We need to apply GDPR wherever we use AI, and apply it as deeply as possible,” Patel said.
“Classification as standard – classification for all your data,” van Enckevort then suggested. “Classification is the thing that stands out to me, where you have to understand what’s inside each piece of data. Classification comes first.” Such classification would, he added, need to be done automatically.
“I agree about GDPR. But if you put information into an LLM, you can’t delete it – so GDPR is an issue,” one attendee argued. “Automation of classification systems – that hasn’t worked so far. And AI systems are non-deterministic, so you can’t test them in the way you normally do.”
“It doesn’t have to be perfect to deliver value,” Mulhall said. “We have to remember that. If we as an industry get the majority of security right, we’re still far better than we are now.”