
There Is Definitely No Logic To Human Behaviour: Mimecast Panel Dissects HRM Concept

Insights from a RANT Roundtable in partnership with Mimecast


During the second decade of the 21st century, the military intelligence community – always a group of people fond of an acronym or three – came up with a new concept they called ABI, or Activity Based Intelligence. Essentially a new discipline made possible by the arrival of vastly expanded computing power, ABI flipped the lens on traditional surveillance and monitoring philosophies. Rather than asking analysts to focus on potential threats and find out as much as they could about them, its proponents argued that, with the huge capability cloud-based computing systems now offered, they could instead log every piece of information, use automated analysis to establish a baseline of what constitutes normal behaviour in the environment they were interested in, and then have their surveillance systems flag any anomalies.

Instead of trying to find the needle in the haystack – keeping track of the terrorist leader who changed vehicles and transport modes during a criss-crossing path through two cities to attend a meeting with his boss; the pile of soil that might indicate where an insurgent had planted a roadside bomb – ABI practitioners would know what was normally going on in the regions they were interested in, and be able to detect the signs pointing them to the precise locations where they ought to focus their attention. The way traffic suddenly stopped using a particular stretch of road and instead drove around the block, perhaps because word had got around that a roadside bomb had been placed; the cafe in the out-of-the-way part of town where, analysis of cell tower data showed, the cellphones belonging to the two leaders had both just appeared. By “knowing” what normal looked like, the ABI tools would alert the analysts to these anomalies, rather than the analysts having to keep looking for them across wide areas.
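To make the pattern concrete, here is a minimal sketch of the baseline-and-flag approach ABI describes: learn what “normal” looks like from history, then surface only the deviations. All data, entity names and the threshold below are illustrative assumptions, not a description of any real intelligence system.

```python
import statistics

# Illustrative sketch of the ABI baseline-and-flag pattern: learn what
# "normal" looks like from history, then surface only the anomalies.
# The data, entity names and threshold are assumptions for illustration.

def build_baseline(history: list[float]) -> tuple[float, float]:
    """Summarise past observations as a mean and standard deviation."""
    return statistics.mean(history), statistics.stdev(history)

def flag_anomalies(observations: dict[str, float],
                   baseline: tuple[float, float],
                   threshold: float = 3.0) -> list[str]:
    """Return the entities whose current activity sits more than
    `threshold` standard deviations away from the learned baseline."""
    mean, stdev = baseline
    return [entity for entity, value in observations.items()
            if abs(value - mean) > threshold * stdev]

# e.g. vehicles per hour on a stretch of road, learned over past weeks
baseline = build_baseline([118, 124, 131, 120, 127, 122, 125])

# Today's counts per road segment: segment C has suddenly gone quiet.
today = {"segment_A": 123, "segment_B": 129, "segment_C": 12}
print(flag_anomalies(today, baseline))  # -> ['segment_C']
```

The analyst never has to watch segment C in particular; the road that has gone quiet presents itself.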

None of the attendees at a RANT roundtable discussion on managing the risks posed by insiders, nor representatives from the event’s hosts, Mimecast, mentioned ABI by name. But the kinds of security concepts advocated and dissected during the wide-ranging conversation definitely seemed to fall into the ABI orbit.

Be Ready to Get Confused

Traditionally, the “insider threat” has been seen as a malicious individual – either an employee who started out loyal but has grown jaded or been “turned” by an adversary, or someone who premeditatedly inserts themselves into the enterprise with the intent to exfiltrate trade secrets. Increasingly, though, data breaches caused by individuals are the result of employees who are duped by sophisticated phishing attacks, who make innocent mistakes, or who are trying so hard to comply with hard-to-follow security protocols that they inadvertently expose confidential information. According to Mimecast, these people make up 8% of the workforce in large organisations; and 87% of the organisations the company has asked say they have

“dismissed people for negligently or deliberately sending out data – and the same 87% haven’t got time to investigate this,”

Mikey Molfessis, the company’s sales engineering manager, said.

Mitigating this type of threat requires not just better monitoring of data flowing into and out of the organisation; it also impels organisations to better understand their workforce and their motivations, so as to prevent these incidents from happening in the first place. Altering employee attitudes and workplace culture will help – but you won’t know what needs to change without a detailed picture of what’s going wrong right now. Which is why Mimecast’s concept, which it calls Human Risk Management, offers so much potential, Molfessis argued.

“Gone are the days where you could take a blanket-type security approach for all your users,” he said. “We need to see behavioural change in an organisation. If we see a user has never done certain things and is now doing them, we need to know. It’s about understanding your organisation and your people, and then taking the appropriate action.”

There Is No Map… 

The interest around the room was considerable, but – considering the backgrounds and experience of the assembled CISOs, BISOs and other senior security and compliance leaders present – there was understandable caution.

“In criminology, one of the main ways of finding a criminal is defining a profile,”

one security executive pointed out.

“If you’re profiling people in the organisation as being most likely to cause a breach, isn’t there a risk of targeting them? And they’ll get worried they’ll be under the microscope from IT?”

“That is a danger,” Molfessis acknowledged. “But the way we need to look at it is not as profiling. It’s based on your behaviour – and your behaviour will change. This happened with a colleague who sits next to me,” he continued. “He downloaded 25GB of data from the server where we store documents, presentations and so on. His manager was alerted, and asked what was going on. And he explained that he’d got a new laptop and was loading it up.”
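The 25GB anecdote maps naturally onto per-user baselining: judge each action against that individual’s own history rather than a blanket rule. The sketch below is an invented illustration of that idea, not a description of Mimecast’s product; the class, user name and multiplier are assumptions for the example.

```python
from collections import defaultdict

# Illustrative per-user baselining, in the spirit of the 25GB anecdote:
# an action is flagged when it is far outside that user's own history,
# not when it breaks a one-size-fits-all rule. All figures are invented.

class DownloadMonitor:
    def __init__(self, multiplier: float = 10.0):
        self.history = defaultdict(list)   # user -> past download sizes (GB)
        self.multiplier = multiplier       # "unusual" = N x personal average

    def record(self, user: str, size_gb: float) -> bool:
        """Record a download; return True if a manager should be alerted."""
        past = self.history[user]
        unusual = bool(past) and size_gb > self.multiplier * (sum(past) / len(past))
        past.append(size_gb)
        return unusual

monitor = DownloadMonitor()
for size in [0.2, 0.5, 0.1, 0.3]:          # typical day-to-day activity
    monitor.record("colleague", size)

# New laptop day: 25 GB in one go, vastly outside the personal baseline.
if monitor.record("colleague", 25.0):
    print("Alert manager: unusual bulk download - ask what is going on")
```

The point of the anecdote survives in the code: the system does not accuse, it prompts a human conversation, and the innocent explanation closes the loop.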

“If they’re doing something different, you need to know,” another CISO suggested. “It’s not really profiling – it’s layering those factors. It’s not responding to one thing, but looking at the complete picture means we need to talk to you. So those people that cause problems – that big group who do it by accident – you’re not saying to them, ‘You’re a problem,’ you’re putting your arms around them and trying to help.”

Institutional approaches to these problems will of course differ – and some of those differences will in and of themselves help to reinforce messaging and change user behaviours, some attendees suggested.

“We say ‘profiling’ and that brings in criminal connotations – there’s probably a better word for it,” one CISO said. “In the SOC we’ve changed our language so we’re less threatening. You need to look at privilege; you need to look at patterns. But instead of having these groups under what we called ‘heightened monitoring’, we’ve changed that to ‘heightened response’. Instead of saying they’re going to be monitored, we tell them we’re going to respond. There’s something in that: it drives a response rather than driving a deep dive into someone’s behaviour.”

“I agree,” another risk manager said. “If someone’s got mental-health issues, back in the day that would have been a red flag – but, really, they just need some help.”

…And a Compass Wouldn’t Help At All 

Talk turned to whether businesses had managed to successfully work out who their riskiest users were. The consensus that quickly emerged was that while behaviours and intent were clearly big factors in any such assessment, those on their own would not provide much insight into comparative levels of risk until they were cross-referenced with the levels of access and privilege an individual had, and what kinds and quantities of data their role gave them access to. So the combined risk represented by, say, a malicious user with low levels of access and working under strict system controls, and a user who may be considered vulnerable but who worked in the payroll department, would be very different.
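One way to read that consensus is as a simple composition: behavioural likelihood says little on its own until it is multiplied by the blast radius the user’s access implies. The sketch below is a hypothetical scoring scheme invented to illustrate the point; the function, weights and 0–1 scales are assumptions, not a published model.

```python
# Hypothetical composite scoring to illustrate the roundtable's point:
# behavioural risk only becomes meaningful once it is weighted by the
# access and data a role confers. Weights and scales are invented.

def insider_risk(behaviour: float, access: float, data_sensitivity: float,
                 controls: float) -> float:
    """Combine likelihood (behaviour) with impact (access x data),
    discounted by how tightly the user's environment is controlled.
    All inputs are on a 0-1 scale; higher means riskier."""
    impact = access * data_sensitivity
    return behaviour * impact * (1.0 - controls)

# A malicious user with little access, under strict system controls...
malicious_low_access = insider_risk(behaviour=0.9, access=0.1,
                                    data_sensitivity=0.3, controls=0.8)

# ...versus a vulnerable but well-meaning user in the payroll department.
vulnerable_payroll = insider_risk(behaviour=0.4, access=0.8,
                                  data_sensitivity=0.9, controls=0.2)

print(f"{malicious_low_access:.3f} vs {vulnerable_payroll:.3f}")
# -> 0.005 vs 0.230: weaker intent paired with broad access dominates
```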

This kind of nuanced understanding is vital if the organisation is to get on top of the problem, but it requires levels of insight into users, behaviours and system configurations that may not be easy to correlate and will rarely be found already joined up in one coherent picture. Yet it’s only then that sensible, workable, useful security measures can be enacted. And those measures need to be capable of tackling the problem properly – not just of appearing to do so.

“The data is only as good as the people using it,” one CISO said. “We had an instance where auditors fought to have a control in place where someone who was leaving the company couldn’t print everything off and carry it out of the building under their arm. So we put that control in. But the control was, once a month, a manager would get a report. By that time the user had left.”

“If the data’s important you have to make sure that the control fits the threat,” another leader pointed out. “That example is a classic of people designing controls who don’t have a clue what they’re doing.”

“When something like this happens, the indicator we have is that someone has taken an action,” Molfessis explained of the Mimecast approach. “You need to see what access they had.”

Even this can be politically difficult, though – especially when the staffer exhibiting the problematic behavioural patterns is a senior leader of the entire organisation.

“We did an exercise of phishing attacks, and the worst performer was a board member,” one CISO recalled. “They were particularly susceptible to sending out information based on a phone call, thinking it was OK. You’d never believe that person would do that. It’s a very difficult situation. Are board members going to be willing to do the quarterly training we have now? It was meant to be instant dismissal, but board members are protected.”

They’re Terribly, Terribly, Terribly Moody 

But is an ABI-style, collect-everything approach a proportionate response to the problems the modern business faces? Not every attendee was convinced that it is.

“For me, you need to know what you want to monitor,” one security executive said, arguing that some direction of surveillance activity was needed. “There’s oceans of data, and people are trying to make sense of the ocean. Instead, you need to work out what you’re interested in and just look at that. There’s no point fishing in the ocean.”

Wider questions of proportionality, and of allocation of effort, were also raised by other security leaders. There will, one argued, be a cost associated with any increase in monitoring intensity, particularly if the focus of that increased activity is on members of the workforce deemed to be high-risk.

“How much are we spending to protect against what is, in reality, going to be a very small attack, versus other things that are far bigger problems?” they asked.

In that latter category they cited those vulnerable users who make mistakes that cost the company dearly. Targeting what they called “the unintentional risk” would have, they said,

“a far better ROI than going after the one or two people who are intending to do harm.”

But drawing a hard line between the malicious insider and the accident-prone staffer who makes an innocent mistake may itself be a mistake, others suggested.

“You’ve got bad employees; people who make a mistake; or people who are in the employ of an adversary,” one said. “And then there’s the threat that comes from an employee who’s trying to do a really good job. This is one I’ve been seeing a lot more of recently. They don’t work in cybersecurity, they’re doing the right things, and they think they’re helping the business by moving money to their own account and paying a bill to help someone get around a blockage that appears to be holding the business back. I’m seeing that as a growing trend.”

To Get Involved In the Exchange of Human Emotions Is Ever So Satisfying 

Ultimately, these questions are ones that businesses will have to resolve for themselves. And in each case, finding a balance will require the organisation to adapt in ways it may find challenging or unusual. At their heart, these issues will be less about technologies and procurement programmes than about workplace culture and internal policy.

“My father’s fallen ill, and I’ve not told work – but it’s changed my pattern at work,” one attendee said. “If we’re talking about behaviour changes triggering responses, does that mean I’m being monitored?”

And even if the company has access to this kind of information – as sensitive and personal as it is – how can this be turned into useful internal risk-reduction intelligence? Another security leader suggested this would require nothing less than a wholesale re-evaluation of corporate mindset.

“That’s why I use the term ‘colleague protection’,” this leader said. “If I have to help a thousand people to find the one insider threat, I think that’s a better business case for the organisation than other ways. But, knowing someone has a sickness in the family, has missed a payment, isn’t feeling great – how do you turn those into data points you can add up and say it’s time to put your arms around them? I don’t have the answer. When do you see enough of these things happening, and who do you involve? An email from the security office is going to add to the problem – but a nod to a department head to say, ‘Now would be a really good time to do a session on romance scams’ might be a big help.”

“I came here to listen and learn – and, my gosh, I have,” said Mimecast’s enterprise sales director, Alastair Dickson, in his summing up. “The key bit I’ll take away is it’s going to be an ongoing challenge between the professional and the personal, and how you can best protect the organisation and its people. There’ll always be a blend between technology and process. How you get to a happier, healthier, more productive workforce is difficult – but if you can do that, you probably reduce the problem, too.”

For more information on how Mimecast addresses human risk while protecting the collaboration tools organisations rely on every day, visit the Mimecast website.