
Protean Threat: Why Is Human Risk Such a Hard Problem To Tackle?
Conventional wisdom has it that we begin a story at the beginning. But sometimes, it can be useful to start at the end. One such occasion seemed to present itself at the close of a particularly fast-moving RANT roundtable held in Manchester, where an opinionated and voluble group of CISOs, BISOs and senior cybersecurity practitioners were discussing insider threats with staffers from human risk management platform provider Mimecast. In his closing remarks, the company’s insider-risk specialist Malcolm Murphy managed to silence the room by asking whether they listened to the BBC Radio Four show In Our Time. Evidently, it’s not a big hit with cybersecurity folks. So Murphy offered a short explanation of why he had decided to invoke the long-running history and philosophy programme.
“When [Lord Melvyn] Bragg used to host it,” Murphy explained, “at the end, he would ask: ‘Is there anything you wished we’d explored further?’ And tonight, we could have spent the whole hour just talking about leavers.”
Murphy was referring to a path that had been signposted during the evening’s discussion, when one of the security specialists attending remarked that a company they had been working with had assessed that a full 80 per cent of its risk profile came from insider threats – and reckoned that the majority of that chunk came from people leaving the organisation and taking proprietary data with them to their next employer. But such was the pace of the conversation, and so varied were the thoughts and ideas brought up by the senior cyber staff in the room, that this was a pathway the roundtable’s guests simply never headed down.
“The fact that we didn’t get to spend any time on that at all highlights how diverse this subject is,” Murphy said. “Just look at what we couldn’t agree on! We can’t agree if the regulations are clear in all circumstances; we can’t agree if the responsibility should sit with the users or with the IT security teams. This is why these problems are so hard to solve. But what I do know, because we see it all the time, is that this is where the frontier of risk is moving to for most organisations.”
A Burden Snared
“The practicalities of data protection,” RANT’s guest host for the evening, Jon Mattey, CISO of the Forge Holiday Group, had said about an hour earlier, “are something that resonates. We deal with data-privacy teams, who say: ‘Here’s what you have to do,’ and we have to go away and deliver it. But can we actually, practically solve these problems? And if we can, what is involved in doing that? Is it culture? Technology? Behaviour?”
“All of them!” piped up one CISO immediately, if not entirely helpfully.
“That’s the easy answer!” Mattey gently chided.
Another wag chimed in by suggesting that, surely, buying a product would provide a quick and easy solution. Nobody dared to respond with the dread cliche about there being no silver bullets, but that was the general gist of any serious response to this admittedly jocular suggestion.
These are knotty, complex issues, so the existence of a single, simple solution is inherently unlikely. “You can’t get all the way unless you look at the whole thing,” one leader summed up, sagely.
“You need to build a culture,” another CISO said, before adding: “People, processes and technology – if you get the first two right, the technology will solve itself.”
And part of getting that culture right, attendees seemed to suggest, lies in having corporate structures that are flexible enough to allow employees to use their common sense, while wrapping enough safeguards around those staff members to stop them doing bad things by accident. Because if the malicious insider is potentially the business’s worst enemy, the staffer who leaks data or opens a back door to an external threat by accident can be just as dangerous – and there are many more of them.
Keys To the Castle
Establishing a culture that prioritises security yet still empowers staff to use their initiative to help the business succeed competitively became the foundational element of the ensuing discussion. It was not always the main focus, but it seemed to sit underneath whatever topic was getting the majority of the attention, and each of those areas felt as though it relied on a strong and flexible culture being in place before anyone could begin resolving its complications and contradictions. And at the heart of some of the most contentious discussions directly concerned with corporate culture was the vexed question of policy – or, as one attendee put it, “the P word.”
“It’s a dirty word in my organisation,” they said. “Policies are just something written down. In these conversations about culture, you’re going back to targeting people for doing the wrong thing. Really, you need processes in place to stop those things happening. If someone’s had a bad day – maybe they’ve had a fight with their partner, or the baby’s been up all night crying – they’ll lose focus and make a mistake. And if your controls aren’t in place to stop that happening, no policy will help you.”
It was a strong point, but not universally accepted.
“People find a way,” another security leader countered. “No matter what the controls, they’ll find a way around them. I was working for an organisation where an executive wanted to get a piece of information out: they had every right to, so they were shown how they could change the label to allow them to do that. Three weeks later, people were passing that information around the organisation, about how to bypass the data-loss prevention filters.”
The conversation kept returning to that three-pronged requirement set mentioned earlier – people, processes, technology – and to how all of it needed to be wrapped up with the conversations around culture, so that the whole thing made coherent sense to everyone in the organisation.
“I’ve made a new mantra recently – and I do mean recently – with my dev team,” one security veteran said. “You can’t replace practice with policy. You can’t say, ‘I’ve got a policy, so it’s someone else’s problem.’ I can give loads of examples from my experience of bad leaks where someone copied over-generous permissions. We still see it today.”
“That’s where process comes in – for how permissions are given,” another leader said. “But it can’t be your only control. Culture is one fundamental, but it can’t be the only one. You need a culture where your policies can be written honestly, and then you have the technology to support the policies, and complete your security environment.”
The Experimenter
All fine in theory, of course. But what might this all look like in practice? And is it even possible? One CISO was willing to share their direct experience.
“We’ve gone through a transformation, with an emphasis on changing the culture, and it’s come from the top down,” they began. “That’s where those discussions need to be held.”
This, it seems, is the key to getting culture right – and therefore to giving policy, process, people and technology at least a chance of falling into helpful alignment. And it’s possible, this leader argued, even in organisations that begin the journey from a starting point where flexibility is given a higher priority than security.
“At one point we had a lot of freedom when you’re on your own machines – a lot of trust, and quite convoluted policies which read like a legal document,” they continued. “We’ve changed that. But we’ve added an exemption process, with a mechanism for if you need to break a policy. There’s a way to ask permission; it gets put on the risk register, and gets owned by someone who’s invested in that decision. We try to say that the security team will advise on the risk without saying ‘No’. I think that helps.”
“We’ve also put in technology controls around what people can do on their machines, which show them the things they’ve been doing without realising they were breaking policies,” they added. “Sometimes that’s resulted in them asking for more budget, because it’s business-critical to do that insecure process.”
A comprehensive case study, then. Not easy, not quick, but it seems to be working, and it appears to have succeeded in allowing the security team to be seen as enabling and supporting individual innovation, rather than just being The Department For Saying No. The critical element, though, is that the transformation has been driven by the organisation’s senior leaders. “For us, the biggest thing is the involvement of the people at the top,” the CISO concluded.
Psy-Ops Dispatch
Another “P-word” found its way into the discussion, though the first person to mention it chose to do so with great care. And they did so only after the conversation – inevitably, if perhaps slightly belatedly (it was about 25 minutes before the two letters made their first appearance) – began to look at how so-called AI systems might be making many of these problems worse. With these tools so widely used by people in their personal lives, the temptation to use them at work is growing daily – and if businesses or their security teams hope to minimise risk by blocking unauthorised AI in the workplace, all they often end up doing is pushing employees into working around the controls.
“I hope there’s no psychologists in the room,” a CISO began – and they actually looked like they were checking – before adding, “but this is a psychology problem. We all want the quickest route to the best outcome, and that drives behaviour. So does culture, but behaviours and cultures are interlinked. We’re humans, and we want to do the right thing, but we want to do it quickly. How can we enable people to be as good as they can be and to work as fast as they can? We’re naturally inquisitive, and people will look at AI, and think it’s an answer to that. What can we do to help?”
It was a good question, but perhaps an even more complicated one than it appeared – and, since it was being asked of present company, any answers offered might not, ultimately, be the most helpful.
“This is a room of people who work in cybersecurity,” Murphy pointed out. “We all think about risk-versus-reward. Either intentionally or informally, we will always do that risk assessment. But we are not the norm. Everyone else? They won’t even see it, let alone mitigate it. Nor should they.”
“When you talk about data-protection regulations, how many people in your organisation would even know what they are?” asked another security leader, as those around the table began to warm to the theme.
“I don’t think those regulations are clear enough,” another attendee argued. They had had some direct experience of working with the Information Commissioner’s Office, supposedly the authority on such matters in the UK, and had found that, even there, nuance was all, and clarity often proved hard to achieve. “Any question you ask of the ICO about GDPR,” they added, referring to the European Union’s General Data Protection Regulation, “the answer is, ‘It depends’.”
“But is it not clear when you’re talking about an employee putting information into a large language model?” another CISO asked.
“No, it isn’t,” came the immediate response from a contemporary. They pointed out that the nuances would stem not just from the lawful basis and the legitimate interest of the data processor, but also from whether it was client data or company proprietary information, whether the LLM in question was a corporate-only version or the public one, and where the servers were hosted.
Oh, Lordy, troubles so hard.
Destroyed Fortress Reappears
The AI part of the discussion was spirited, feisty and – inevitably – inconclusive. The issues differ in their specifics but are common to all companies, in all sectors. Some situations stand out, where an unusual approach may have been taken, but even in those cases the challenges remain vexing.
“We were talking about getting buy-in from the top,” one CISO noted. “Well, with us, our CTO was on board with it, so overnight, AI was completely blocked. The thought process was, ‘If it’s important to a user, they’ll come to us and ask where their AI’s gone.’ For a while there was a huge scramble, with people saying they had access and were using it for X, Y and Z. That was amazing for us – we saw those use cases come through. It was a very bold move, and it’s still being worked on. We know we can’t do it forever. But it’s one way to find out how AI is being used in your organisation.”
The methodology came as a surprise, but other businesses have had experience of discovering how staff are putting AI to use. Not all of it has been particularly encouraging.
“I’ve seen people come in with use cases where you could literally have done it just by using a formula in Excel,” one CISO said, wearily.
“They jump the logical steps and go straight to AI,” another agreed. “It’s baffling sometimes.”
“It’s getting like whack-a-mole now,” another beleaguered security leader lamented. “We brought in a third-party app for the legal department, and all of a sudden there were loads of requests from outside legal to access it. Why? Turned out you could access Grok through it.”
“We had a developer who installed the open version of Claude on his laptop,” one attendee began.
“Awesome!” another replied. “Where’s his P45?”
“It’s an impressive tool,” a third attendee said, “but there was a story just this morning – one of my analysts sent it at 7am while he was walking his dog – about Claude hacking the Mexican government. People don’t realise what they’re allowing these things to do.”
“Everyone has a different view of what AI means to the business, and what it should mean,” Mattey said in his concluding remarks. “But coming back to those fundamentals seems to be the only thing that’s been agreed on.”
“The more we can have these discussions, the more we can figure out,” Murphy agreed. “There also seemed to be unanimity about collaborating with users to help them make the right choice at the decision point – that message came through loud and clear. I like being in a room with collaborative people!”
Learn how Mimecast addresses the complex challenge of insider risk with its dedicated Insider Risk Management solution – blending people, processes, and technology to help organisations reduce human-driven threats and protect critical data. Find out more here!
