
People Get Ready: Like It Or Not, The Human Perimeter Is A Cybersecurity Challenge

In partnership with Exabeam and Cybanetix

The fact that there’s always a human dimension to every cybersecurity incident is a given. But is every data breach, necessarily, a cybersecurity incident? This became the recurring theme of a typically lively RANT roundtable discussion in Manchester, hosted by security intelligence partners Exabeam and Cybanetix, which was convened to assess how to improve the human aspect of cybersecurity operations.

“One emergent threat we’re seeing is AI being weaponised, especially around deepfakes,” Findlay Whitelaw, Exabeam’s security researcher and strategist, said. Whitelaw recounted how, as an exercise, a friend managed to construct a convincing deepfake video of her from only 30 seconds of footage shot on a phone. She then recalled the notorious example of the UK company in which an employee was socially engineered to transfer £25 million out of the business following a videoconference with the chief financial officer – which turned out to be a deepfake.

“The CFO was putting pressure on the accidental insider to transfer the money, but what they did that was really clever was to deepfake two other board members who were sitting in on this Teams conference call,” she said. “They were giving reinforcement through nodding of heads. The accidental insider had no idea – they trusted every element of it, and they made the payment. That’s what we’re going to be seeing more of: it’s not just the technology that’s being exploited – the human element is being exploited.”

While there was agreement around the table that such episodes are clearly very worrying, there were several high-level security staffers in the room who took issue with the extent to which such highly targeted attacks could be easily repeated. And one CISO wondered whether, in fact, this was even a security issue at all.

“That’s not a cybersecurity failure – that’s a financial-controls failure,” they argued. “There’s no way anyone should be in a situation where a senior leader in the business comes to them and puts them in that situation. I’m putting the blame for this one on the CFO! It’s nothing to do with cyber, yet very often the cyber team are brought in to deal with any failure like this.”

In turn, they argued, this raises another human dimension in the cybersecurity debate: the pressure that is being placed on security staff by failures that may be enabled by technology but result from ineffective policies or patchy implementation in other parts of the business.

“It’s why so many people here are worn thin,” this security leader suggested. “They’re dumped into everything that happens like this, even though it’s nothing to do with cybersecurity.”


(Don’t Worry) If There’s A Hell Below, We’re All Going To Go

Alsa Tibbit, an advisor and researcher on AI cybersecurity with Sheffield Hallam University – and RANT’s guest co-host for the discussion – raised other ways in which AI and deepfakes are being used to manipulate human behaviour. AI agents, she argued, will “create a huge revolution in cybersecurity because they don’t have a hierarchy of needs” that apply to, and affect the behaviours of, every human being.

“Why do we think the human is the weakest link in cybersecurity? Because the human has that hierarchy of needs,” she said. “Food, family, health – so many things. These elements can all at some point be shaky. Due to stress you might click a link, then feel guilty, so you don’t tell anyone about the mistake. AI agents just have one aim.”

This ability to focus on the task in hand is what makes generative AI such a potentially powerful tool, but also what makes its deployment in the workplace such a challenge from a security perspective. Tibbit also pointed to genAI’s capacity to amplify other problems faced not just by businesses but by entire societies, such as the spread of misinformation – whether deliberate or accidental. Again, though, such challenges, because they are technology-enabled, are being turned into cybersecurity staff’s problems – even though, the leaders in the room agreed, they are not actually cyber challenges.

“This is the perennial problem with cybersecurity people: we take on everyone else’s burden,” one CISO lamented. “Misinformation is not a cybersecurity problem.”

“Phishing, absolutely, is a security problem,” another security leader acknowledged, adding that genAI was putting “phishing on steroids”. However, they pointed out, genAI has not created a new challenge here, just amplified an existing one.


Hard Times

Another problem being felt by security teams, even if it is not strictly their responsibility to solve, is how businesses are encouraging their staff to feel comfortable with genAI tools. In general, the approach seems to be to humanise the technology by giving it a name, a role or a title that would normally belong to a human member of staff.

“Now we’ve got call centres using AI, giving the AI assistant a name: ‘This is Gemma from wherever, how are you today?’,” another CISO said. “The more we start doing that, the more normal people think it is. We’re humanising interaction with AI.”

“And we’re becoming over-familiar with it,” Whitelaw agreed. “AI is not going to go away. People like it. It’ll bring its own benefits. But where we are in security, companies aren’t writing AI into their policies and standards. Colleagues and employees are not being trained on the implications of the damaging impacts. There is a gulf between the individual and the corporate stance.”

Bridging that gulf will take not just a major education effort; it may well require a wholesale reshaping of the average employee’s attitude. Attendees were quick to highlight the kind of mindset shift that is needed: encouraging employees to care not just about security but about the future of the company, so that security is seen as a shared task every staffer needs to feel fully invested in.

“Until the person cares about what they’re protecting they’ll carry on” accidentally putting data at risk, one CISO argued.

“It’s not just about caring – it’s about knowledge,” another suggested.

“And impact,” pointed out Dharm Vashi, Cybanetix’s sales lead.

“We all understand the impact of clicking on something, but is it sustainable?” Whitelaw asked. “In different roles I’ve had, the consequences were never consistent.” Neither, therefore, she suggested, were the interpretations staff would draw over what the company’s standards and policies actually mean. In such an environment, the best training programme in the world will struggle to gain much purchase.


Power To The People

The conversation began to explore ideas that might start to move the needle on effecting that kind of change. Throughout, there was recognition that these, in and of themselves, are not security issues; but among the many qualities required of cybersecurity professionals, pragmatism ranks near the top. So however reasonable it may or may not be, everyone in the room seemed to accept that it would likely be them who were left to pick up the pieces from failures elsewhere in the business.

“Going back to human needs – they’re not thinking about security: they’re thinking about how to feed their family, and how to be more efficient,” another security leader said. “So the training isn’t going to stick in people’s minds. Once they’re outside that environment, it goes out of their heads.”

“We’ve been trying to say that security is everyone’s responsibility, not just mine or my department’s,” another CISO said. “But you’ve got to keep doing it. Annual training isn’t enough. Most people don’t want to do it, or they do it begrudgingly, and they don’t take anything away from it. We have training and then we have short, sharp nudges; reminders; newsletters. We try to balance it so we’re not overloading people but we’re also not just doing it one time and they forget. But it’s getting across that it’s their responsibility, too, not just ours.”

Others around the table were keen to know how far that message was resonating with the company’s workforce. Cybanetix takes the view that the responsibility should definitely be shared. The firm’s head of presales, Martin Luff, was optimistic about the ability of emerging technologies to help, but not without joined-up thinking within organisations.

“Whether it’s finance, cyber… actually, it’s everybody’s problem. It’s got to be a joint effort,” he said. “There are things you can do in terms of technology, some guardrails you can put around the use of AI, but there’s no perfect answer yet.”


We’ve Only Just Begun

Of course, it has long been understood – however ruefully – that there is no better corporate learning experience than a major data breach. When the business struggles to get back on its feet after a serious incident, or when that incident results not just in increased workloads but perhaps even redundancies – that is generally when the penny drops among the rank-and-file staffers for whom security had felt like someone else’s job. The ideal, attendees agreed, would be to reach that post-attack mindset without having had to go through the incident in the first place. Whether that is achievable is another matter.

“Do we need the horse to bolt before we close the stable door?” one attendee with experience of trying to inculcate better security cultures in multiple businesses mused. “You go into organisations and you’re trying to preach to the ones who aren’t converted. They work on old systems that aren’t secure, they share credentials, and they say, ‘It’s always been like this and we’ve never had an incident.’ They just don’t know.”

“It remains complex,” Whitelaw said. “Humans are diverse, unpredictable, multifaceted. While I agree not everything we’ve discussed is cyber directly, everything we do that uses these tools and digitised platforms is, by default – rightly or wrongly – going to be seen as an element of security. Cyber, legal, HR, business units – it’s an everybody issue. But I don’t think we’re there yet. And I’ve not seen any company get it right.”

Interested in learning more from experts at Exabeam and Cybanetix? Find out more here: Exabeam & Cybanetix

Connect with Exabeam’s Findlay Whitelaw.