The Digital Workplace Dilemma: AI-Powered Collaboration That’s Secure and Compliant
When the pandemic swept the world in 2020, IT had to change rapidly, and new tools were deployed to enable daily work and collaboration. A few years on, those tools have become part of the corporate IT fabric – so the question now is how to secure them, when visibility is often an issue, users need to trust any monitoring, and data must not be handled recklessly.
In a recent roundtable discussion on “The Digital Workplace Dilemma: AI-Powered Collaboration That’s Secure and Compliant”, the focus was on the actions of employees using collaboration tools, as these are often the hardest to detect and the trickiest to stop – particularly when an insider misuses their access before being discovered.
How can you spot and stop this kind of behaviour before it causes harm? Is there a way to anticipate risky activity? Leveraging tools like sentiment detection not only helps reduce the likelihood of sensitive data loss, but also supports data protection and compliance with regulatory requirements.
Opening the discussion, Mimecast’s Alastair Dickson said that companies often know the percentage of users who could be an ‘insider threat’, but the majority of data loss incidents are accidental, and this presents an opportunity to educate users on what is acceptable.
Mimecast’s Mikey Molfessis said that his conversations on collaboration tools show that “organisations are only now realising what mess-ups they made during COVID”, and getting everything back into some form of normality leaves a lot of teams overwhelmed.
“How to manage Slack, email, Teams and other places where data is stored, and when to get rid of it and guarantee that has been done, is now part of the conversation,” he said, calling this a change in the way we work, in how to understand risky users, and in how to pre-emptively know whether they will steal data.
Tool Hygiene
Moving into the discussion, one participant said they had faced a situation where a CEO insisted on using a SaaS technology, including ChatGPT, “and needed full access to the AI suite.” This led to the discovery that an accessibility tool, which automatically recorded meetings with AI, was being used.
“So when a meeting starts at 1am and the AI jumps in, panic starts!” The participant went on to acknowledge that as much as cybersecurity can be about data hygiene, it also needs to be about tool hygiene and knowing where everything is.
Another participant said that users are aware of what they are using and its impact, but “unless you’ve properly locked down the estate” you don’t think about how it can morph over time.
On the use of AI, another participant said the better option was to create their own GenAI tool: by keeping the AI internal and not exposing anything it is used for, the embrace was slower but it offered a better solution.
Another participant said they worked for a company that had acquired AI startups, and the security leader was left to determine how to bring these companies into the corporate fold. “It was a challenge to allow freedom to some extent, and keep moving forwards and keep it under control,” they said, so they also developed their own AI model and used that to “play safely.”
For AI tools, specifically GenAI, one participant said they added controls on “anything that looks like PII” and limited access to anything sensitive. The chair summarised that allowing employees to “do their thing and find their way” was one solution.
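As a rough illustration of that kind of control, the sketch below masks PII-like strings before a prompt reaches an external GenAI service. The patterns, the redact_pii helper and the example prompt are hypothetical, not taken from the roundtable; a production control would sit in a proxy or DLP layer and use far more robust classification.

```python
import re

# Hypothetical illustration: simple regex patterns for common PII types.
# A real deployment would use a proper PII/DLP classifier, not regexes alone.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44\s?7\d{3}|07\d{3})\s?\d{3}\s?\d{3}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "national_insurance": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
}

def redact_pii(prompt: str) -> tuple[str, bool]:
    """Return the prompt with PII-like strings masked, plus whether anything was found."""
    found = False
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            found = True
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, found

# Usage: redact (or block) before the text ever leaves the organisation.
clean, had_pii = redact_pii("Please summarise the complaint from jane.doe@example.com")
if had_pii:
    print("PII detected and masked:", clean)
```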
One participant said the issues with using commercial GenAI tools were the cost of use and that employees were asking the same questions over and over, which led to the decision to build their own tool. Once their own large language model (LLM) was in place, employees fed those same questions into the internal tool and the overall operational cost dropped.
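One reason repeated questions stop driving up cost once a tool is brought in-house is that identical prompts can be answered from a cache instead of being re-run through the model. The snippet below is a minimal sketch of that idea, under the assumption of a hypothetical call_internal_llm endpoint standing in for whatever inference the organisation actually runs.

```python
import hashlib

# Answers to previously seen prompts are kept locally, so only new questions
# incur an inference cost.
_cache: dict[str, str] = {}

def call_internal_llm(prompt: str) -> str:
    # Placeholder for the real (and expensive) model call.
    return f"answer to: {prompt}"

def ask(prompt: str) -> str:
    key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
    if key not in _cache:                 # only pay for genuinely new questions
        _cache[key] = call_internal_llm(prompt)
    return _cache[key]

# The second identical question costs nothing extra.
ask("What is our expenses policy?")
ask("What is our expenses policy?")
print(f"{len(_cache)} unique question(s) actually sent to the model")
```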
Identify Risks
On to the discussion of those collaboration tools: how does AI fit into the mix? One participant said it can help identify risks “outside of the data itself” – employees discussing dissatisfaction on the likes of Slack and Teams can be better detected with AI, as can someone trying to exfiltrate data.
Comments were made about the Orwellian nature of this level of moderation, especially with AI, but one participant said sentiment analysis offered a solution here. Where chatbots and collaboration tools are in use, AI-enabled analysis can look at the frequency of what is typed, and keystroke patterns can show frustration, so conclusions can be drawn.
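To make the frequency idea concrete, here is a purely illustrative sketch that flags users whose messages in a given window are repeatedly negative. The score_sentiment word list and the flag_risky_users threshold are assumptions made for the example; a real system would use a trained sentiment model and far richer signals than keyword counts.

```python
from collections import defaultdict

# Stand-in for a real sentiment model: count a few obviously negative words
# so the example stays self-contained.
NEGATIVE_WORDS = {"hate", "unfair", "quit", "fed up", "useless"}

def score_sentiment(message: str) -> float:
    text = message.lower()
    hits = sum(1 for w in NEGATIVE_WORDS if w in text)
    return -1.0 if hits else 0.0

def flag_risky_users(messages: list[tuple[str, str]], threshold: int = 3) -> set[str]:
    """messages: (user, text) pairs from a chat export; flag users with
    `threshold` or more negative messages in the window being analysed."""
    negatives = defaultdict(int)
    for user, text in messages:
        if score_sentiment(text) < 0:
            negatives[user] += 1
    return {user for user, count in negatives.items() if count >= threshold}

window = [
    ("alice", "I hate this rota"),
    ("alice", "so unfair again"),
    ("alice", "I might just quit"),
    ("bob", "lunch anyone?"),
]
print(flag_risky_users(window))  # {'alice'}
```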
As part of this discussion, data loss prevention (DLP) was raised, and this use of AI may be the better solution: Mimecast’s Molfessis said that he knew of someone who had worked on a DLP project for ten years, describing it as a “static way of finding data.”
He also commented that if someone had said in an employee satisfaction survey that they were not happy in their role, and was repeating those comments in an online chat, AI could better detect that person as a potential insider threat.
Let off Steam
A participant said the “idea of running sentiment analysis is terrifying” and reiterated the ‘Orwellian’ point of view, as there is an element of an employee simply letting off steam: “all people vent their feelings, we’re all human.” They called these collaboration tools “amazing”, as they offer the ability to remain in constant communication with colleagues and to share files without being in the same location.
However, from an IT security perspective, there are times when an employee wants to be heard and attributed, and times when they do not, and the moral obligation on the security leader “moves from infosec and feels over lordy” if you’re watching every word.
The discussion concluded with a participant saying that you “don’t want to be the first to adopt tech, and also don’t want to be the last”, as ultimately the users are there to be protected.
Mimecast’s Molfessis said that “sentiment is key”, as AI can be used to see when corporate data is moved between devices or to personal accounts, but IT leaders also need to know what works best for them.
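As a simple illustration of spotting that kind of movement, the sketch below flags an outbound message when a corporate sender shares an attachment with a recipient on a personal mail domain. The domain lists, event fields and is_risky_transfer helper are assumptions made for the example, not a description of Mimecast’s product.

```python
# Hypothetical rule: corporate sender + personal-domain recipient + attachment.
PERSONAL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com", "icloud.com"}
CORPORATE_DOMAIN = "example.com"

def is_risky_transfer(event: dict) -> bool:
    sender = event.get("sender", "")
    recipient = event.get("recipient", "")
    return (
        sender.endswith("@" + CORPORATE_DOMAIN)
        and recipient.split("@")[-1].lower() in PERSONAL_DOMAINS
        and event.get("has_attachment", False)
    )

event = {
    "sender": "j.smith@example.com",
    "recipient": "j.smith.home@gmail.com",
    "has_attachment": True,
}
if is_risky_transfer(event):
    print("Flag for review: possible corporate data moved to a personal account")
```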
Dickson said that there has to be an understanding of individual sentiment, rather than risking losing employees over monitoring policies. He said he found the discussion’s focus on the user’s frustrations “refreshing”, and that the next stage is how to help protect users, as most don’t want to impact their organisation negatively – 90 percent want to do a good job, and have a moan on a Monday.
“They are good people and focus on how to help them do a better job and move on from being a disabler and keep the brand safe.”
For more information on how Mimecast addresses human risk while protecting the collaboration tools organisations rely on every day, visit the website here.