
From Prompts To Poetry: How Should Security Teams Respond To The Rising AI Tide?
As the first RANT roundtable of 2026 rolled towards its conclusion, moderator Rob Black asked each of the CISOs, BISOs and senior security leaders to offer an off-the-cuff thought on their current key concerns and challenges. This followed a high-protein hour of conversation about the adoption of generative AI across pretty much every business of any size, in every conceivable sector or industry. Answers included the rising price of silicon and its impact on the cost of data storage; agentic AI and the proliferation of non-human identities; and attackers' use of genAI tools to improve their success rates.
But then one security leader raised the important topic of the next generation soon to be entering the workforce. In light of all the developments with genAI and the way its adoption is transforming the world of work, they wondered, what should we be advising young people to concentrate on?
There was a brief moment of silence, while the dozen or so leaders around the room, and representatives of the event’s hosts, Mimecast, gave that some serious thought. Then the answers came thick and fast.
“Carpentry,” said one CISO.
“Farming,” suggested another.
“Subsistence. How to live off the land,” a third offered.
“Poetry,” another mused.
While they all gave the lie to any suggestion that security people are overly serious and lack a sense of humour, the alacrity with which those around the table jumped on this train of thought did seem to speak to a legitimate concern. GenAI may not be all its boosters claim, and it certainly will not be the answer to every prayer. But it has captured the public imagination, and its use is mushrooming far faster than guardrails can be put in place to prevent its misuse. And that misuse – which is happening and expanding in lockstep with the rate of adoption – poses serious risks of significant harm, which could, ultimately, lead to the collapse of entire businesses, if not whole industries. In the meantime, automated processes taking actions independently of humans could conceivably usher in some kind of techno-apocalypse which sends us all back into hunter-gatherer mode.
Cheery stuff!
You Must Learn
The evening had started off a little more optimistically, even if those early discussion points derived from the challenges security leaders are facing. Mimecast’s director of new solutions, James Hathaway, welcomed attendees by saying he was keen to hear about the areas where they felt genAI could make them more efficient, but also where his company might be able to deploy these tools to aid defensive operations. After some initial discussion of enhanced phishing campaigns and the difficulty of securing supply chains, talk turned to preventative measures.
“I worked somewhere previously where we assumed everyone’s laptop was infected anyway, and focused much more on their ability to hand over credentials,” one CISO recalled. “So if someone did give away their credentials on WhatsApp, can we prevent that being exploited?” Rather than trying to prevent the leak of data occurring, they argued, there may be more chance of preventing that leak having negative consequences.
One way in which security vendors could leverage genAI tools to help with this would involve obtaining a reliable picture of a business’s network and the activity on it. Over time, tools can be trained to recognise what constitutes standard behaviour and activity, and then to flag anything that appears unusual. But this requires a high degree of trust from the customer – and although this may appear to be a technology question, the answer will be cultural or legal.
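In code terms, the baseline-then-flag approach described above might look something like the following minimal sketch. This is purely illustrative – the class, thresholds and per-host metric are assumptions for the sake of the example, not any vendor's actual implementation:

```python
from collections import defaultdict
from statistics import mean, stdev


class BaselineMonitor:
    """Learn per-host activity baselines, then flag unusual observations."""

    def __init__(self, threshold_sigmas=3.0):
        # host -> list of observed values (e.g. bytes egressed per hour)
        self.history = defaultdict(list)
        self.threshold = threshold_sigmas

    def observe(self, host, value):
        """Record one observation during the 'what does normal look like?' phase."""
        self.history[host].append(value)

    def is_anomalous(self, host, value):
        """Flag a value that deviates from the learned baseline by > N sigmas."""
        samples = self.history[host]
        if len(samples) < 2:
            return False  # not enough history to judge
        mu, sigma = mean(samples), stdev(samples)
        if sigma == 0:
            return value != mu
        return abs(value - mu) / sigma > self.threshold
```

In practice, such a tool might train on a week or two of egress volumes per endpoint and then alert on sudden spikes – which is exactly why vendors need customers to share "what normal looks like" before the flagging can work.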
“This is one of the things we struggle with,” said Mimecast’s field CTO, Khetan Gajjar. “You want AI to understand what normal is, so you can pick up what isn’t. But how many of you are willing to share what normal looks like? This is not a tech problem. One of the workarounds we have is we’ve built connectors. You don’t have to give it to us, but if you already trust INTTRA or Opta, then we can get it from there.”
Word From Our Sponsor
It was only seconds after Black moved the discussion on to “shadow AI” – all the large language model tools that get used by staff, but which are not part of the business’s own technology stack – and employee usage of genAI tools that the first whole-room guffaw of the evening took place. All he had to do was ask where each attendee’s company was on its genAI policy journey. Hathaway added a codicil to the question: “On top of that – how concerned are you that corporate data gets fed into AI and then is out there?” he asked.
“Hugely concerned,” one CISO admitted. “It’s challenging. At the same time, we want to enable the business and be pragmatic. One big challenge is, what are the use cases? We’re avoiding things that could have bias, because we don’t want to become a case-study for our regulators.”
Different companies have adopted different approaches, running the gamut from allowing employees to use whatever tools they feel will help and worrying about the consequences later, through to locking down genAI tools completely and instructing staff not to circumvent network blocks by using tools like ChatGPT on their personal devices. Several companies have established committees to assess the ways forward, but for every firm that views this as a pragmatic and nuanced approach to a complicated issue, others will feel that it adds a cumbersome additional layer onto an already sclerotic management system, with the risk that any policies that result will arrive too late to be of use.
“It reminds me of when the cloud first came out,” the hugely concerned CISO added. “Suddenly you’ve got 30 decision-makers. Our developers want to develop a model, but you have to put it into a governance process that takes weeks to get a decision made on one. So they go ahead and play around on their own. We’re driving people to go and do things with AI themselves.”
“We’re trying to do it more on a principle-based basis,” another security leader said. “We want to encourage innovation, but we don’t have the technology to go around it. You have to encourage usage, but also encourage people to think about what they’re doing.”
There was some enthusiasm in the room for pop-up warnings – the system telling a user that an action they are about to take would result in data being sent outside the corporate network. Another security leader explained how their company was adopting a three-pronged approach, with warning messages part of a broader plan.
“One point was awareness,” they said. “Data isn’t going to jump on its own – someone will click a button. Do they understand what they’re doing? The second was to turn a lot of our people into security partners. We gave them a helpline number and said, if you’re not sure about something, come and ask. And the third was the message – and a disclaimer – if they try to complete the action. While you’re building up the long-term governing body, or a technical solution, this will help – educate them and trust them.”
Gajjar pointed out that, although the technology and the pace of its adoption appears to be new, a lot of the issues it raises still fit into existing processes and priorities.
“From most perspectives, it’s just another way data’s being used,” he said. “So a lot of the principles around how we use it, who we share it with, should be a guiding light.”
Ya Know The Rules
Black got another round of laughs with a question that surely wasn’t intended to raise any. He wondered whether the security teams represented around the room had good visibility of genAI use inside their different businesses.
“Did anyone ever answer that with a ‘yes’?” one CISO asked, after they and a few others had collected themselves following their bout of cackling.
“We have technical tools to block usage, but I’ve no idea how effective they are,” one leader said, quickly homing in on the heart of the challenge. “Different things are being spun up every five minutes. If all your internet traffic goes through a trusted proxy, do you know if they’ve classified every AI provider out there?”
“We did API impressions, to see what people were doing, which was useful,” another security team leader said. “But then there’s all the other AIs that pop up everywhere else. We use Mirrorboard – it’s got its own AI. Excel’s got AI.”
“We have an enterprise ChatGPT, and we block everything else – but I wouldn’t rely on that,” a third leader said. “You’re effectively relying on someone else to recognise the unknown.”
Another challenge comes when other entities that the business connects with use genAI tools, entirely innocently, but without necessarily notifying their partner businesses of it. The growing use of meeting assistants is a particular bugbear for many security leaders.
“For example, let’s say I have a meeting with Third Party A, and they use an AI to take notes,” one CISO postulated. “There should be a policy where we say, ‘We won’t take the meeting if you do that.’ But even if we had one, I’m pretty sure that, somewhere in the organisation, someone would take that meeting – and company information ends up in those AIs.”
Who Protects Us From You?
Another round of hilarity – though muted this time, as if everyone was running a 10-second gif inside their heads of tumbleweed blowing across a desert – was occasioned when Black asked this question:
“How many of you have been empowered by the board to do something about AI, and where are the budgets coming from?”
“We’re being forced to use AI – it’s part of our performance review now that we do use it,” one leader said, after shaking themselves out of the shock that the question prompted. “New budgets? Ha!”
“There’s a fair amount of hysteria, and we need to show that we’re doing something,” another CISO said. “There’s less scrutiny of the material benefits.”
This issue appeared to be widely shared, even if not everyone put it into the same form of words. Businesses’ leaderships are so convinced of the benefits of AI adoption, and so concerned about the possibility that slow uptake might cede advantage to a competitor, that it’s not just the security of the tools that is being overlooked – so too is identifying the ways in which the company will actually derive those benefits from using genAI. Another gif sprang to mind: Jeff Goldblum in Jurassic Park, lamenting that things get done because they’re possible, not because anybody has really stopped to work out whether they’re a good idea or not.
“Within about 18 months, we’d set up a defined programme where we did analysis [of AI adoption]: there was a set of criteria, very tightly managed, and the ROI was tracked,” another CISO recalled. Those days have gone, though. “Now it’s a free-for-all,” they continued. “Everyone’s doing whatever they want. It’s much more accessible, and people have been enabled to do their own business process engineering…”
The thought tailed off; nobody in the room needed to be told what that could lead to.
“But it worked well for about 18 months,” they said, looking desperately for an upside.
“For us it was about six,” another CISO with a similar set of experiences said. “If we’re going to use any business data in any AI system, we have to run it through a responsible AI committee, just to make sure they’re not doing something crazy. I’m not on that committee – no-one in my team is. But in IT, we have a performance requirement to use AI. I don’t know how that works. We work with a lot of sensitive information.”
By All Means Necessary
And so to the final round of thoughts. What would security leaders like to see automated? And what are the key concerns and challenges more broadly? The overall tone was of grim gallows humour, with occasional blurts of muted optimism rising to the surface.
“I’m too miserable to even answer that question,” was the first response.
“Everything!” was the second.
“People,” was the third.
There was some enthusiasm for setting automated systems loose on filling in due-diligence questionnaires, though even there, the most beneficial long-term solution would probably be to get sector organisations to come together and standardise what are at present entirely individual, but very repetitive, questionnaires and processes.
And then we were off into how to prepare for a return to a pre-industrial world.
“We’re all feeling it,” Hathaway acknowledged in his closing remarks. “We’re being drawn to use AI more and more and more, because it can have benefits. But we’ve all got to keep one eye on the threats it can bring. What tonight has done is to cement that theory. There’s great things that can happen, but we’re not being able to address the concerns to take advantage. We’ve not been able to control AI properly to be able to use it properly.”
“If it helps to give a sense of perspective, this is the type of discussion that all of your peers are having,” Gajjar said. “You are not alone!”
