
Automatic for the people: Targeting human risk with data

Insights from a Virtual RANT Roundtable in partnership with Mimecast


In cybersecurity, real risk always lags behind the hype. While security teams are being warned to defend against semi-sentient computers, the most prevalent risk remains the subconscious click on a link.

This is not to say attempts to compromise humans haven’t advanced. Protecting this attack surface has come a long way from keyword filtering and rapping offenders over the knuckles. Yet compromising people remains one of the most successful options for any threat actor seeking ingress.

Security leaders from around the UK public sector gathered recently to discuss this problem from a strategic standpoint.

Changing risk

As ever, one of the main issues identified was the difficulty in keeping abreast of a continually morphing, fragmenting threat.

It was highlighted that attack chains are becoming more complex. Targets are no longer taken straight to a simple credential-harvesting site, for example, but are instead redirected through big-brand cloud hosting and CAPTCHA checks to make the experience feel more authentic.

Mainstream adoption of AI is also playing a part in redefining human risk, enabling more convincing phishing emails and even leading staff to load intellectual property into LLMs without considering where the data goes.

Mitigating this fast-changing threat requires greater dynamism. Ironically, however, those present agreed this first requires asking a very basic question: Who should be protected, and to what degree?

Simple questions, complex answers

It was agreed that the simplicity of the question belies the complexity of the answer.

Understanding some fundamental factors goes a long way toward providing clarity: categorizing users by seniority and by their level of access to critical systems, for example. However, the chaotic, ever-changing nature of any organization often quickly renders this approach overly simplistic.

For example, people move jobs. They may have been vetted and given the right permissions when they joined, but changing departments, taking on new roles, or working across projects means their access creeps – often without controls being updated. A single long-standing employee might have access to multiple “crown jewels” without anyone realizing it. That makes them a target.

It also means that every time a security team believes access boundaries have been defined, they need to be redrawn. Assumptions quickly become obsolete.

Another contributing factor, it was agreed, is that human risk is highly contextual: it changes with the technical environment specific to each organization. A low-maturity posture or a flat network, for example, might preclude the zero-trust architecture needed to stop privilege escalation, while teams in an organization undergoing digital transformation might try to circumvent controls. Any number of details can add human risk.

It’s not just technical specifics that contribute to the difficulty in answering the question of who should be protected and how. Cultural differences also add complexity. Operating across a multitude of countries and departments, for instance, makes an aggregate understanding of the problem hard.

It was concluded that this is a problem space that is not only always evolving but doing so in ways very specific to each network: granular risks, against multiple human exposures, in fast-changing environments.

Mitigations

Security awareness training and anti-phishing controls are, and always have been, part of the solution. Yet those in the discussion agreed that these need to evolve to stay relevant.

As one put it:

“If an email is targeted enough and someone clicks on it, just responding by blasting out training is a bad idea. People get bored and have a lot on. Too much, and they switch off.”

The most mature organizations use data to take an individual view of human risk and target mitigations accordingly. Is there a high-volume campaign against companies in our sector that requires mass workforce training? Does a user proceed to a “second-order behavior” after clicking on the link and need more focused training? Is someone always trying to go to the same URLs and therefore should have a 1-to-1 with their manager?
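A purely illustrative sketch of that triage logic is below, assuming hypothetical behavioral signals; the field names, thresholds, and intervention labels are invented for the example and do not describe any specific product.

```python
from dataclasses import dataclass


@dataclass
class UserSignals:
    """Hypothetical per-user telemetry, e.g. from email and web gateway logs."""
    clicked_phish_link: bool         # clicked a link in a confirmed phish
    second_order_behavior: bool      # went further after the click (entered credentials, downloaded a file)
    repeat_blocked_url_hits: int     # attempts to reach the same blocked URLs


def choose_intervention(user: UserSignals, sector_campaign_active: bool) -> str:
    """Map behavioral data to a proportionate, targeted intervention."""
    if user.second_order_behavior:
        return "focused 1:1 training"          # risky follow-through deserves individual attention
    if user.repeat_blocked_url_hits >= 3:      # threshold is illustrative only
        return "manager conversation"          # persistent attempts to reach the same URLs
    if sector_campaign_active:
        return "broad awareness briefing"      # high-volume campaign against the sector
    if user.clicked_phish_link:
        return "short just-in-time reminder"   # a single click gets a light-touch nudge
    return "no action"                         # avoid blasting out training by default


# Example: a user who clicked and then entered credentials
print(choose_intervention(UserSignals(True, True, 0), sector_campaign_active=False))
# -> focused 1:1 training
```

The design choice in this sketch is the ordering: signals about an individual’s follow-through behavior drive focused, personal interventions, while broad campaigns against the sector trigger broad responses, so nobody is over-trained by default.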

Enriching security interventions with data can help with targeted resolutions – but a positive culture can also build deeper security awareness at a foundational level.

It was accepted that this means moving toward a zero-blame approach. As one attendee put it:

“Not using security as a stick to whack offenders but rather encouraging champions.”

It was agreed that a smart way of achieving this is to acknowledge positive behaviors when demonstrated. For example, if someone has reported a genuine phish, email them back and let them know. If they have stopped a potentially high-level attack, find a platform for highlighting them as a positive example in front of their peers.

Ultimately, no matter how much automation picks up daily enterprise tasks, humans will always be a critical part of organizational decision-making loops. Those present agreed that this means a granular, highly contextual, and hard-to-predict risk is set to remain on the balance sheet. With data-driven targeted interventions and positive cultural nudges, security teams can begin to manage it more effectively.

For more information on how Mimecast addresses human risk while protecting the collaboration tools organisations rely on every day, visit the Mimecast website.