<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>RANT Community</title>
	<atom:link href="https://rantcommunity.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://rantcommunity.com/</link>
	<description>Better Cyber Security Through Shared Opinions</description>
	<lastBuildDate>Tue, 31 Mar 2026 10:56:37 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://rantcommunity.com/wp-content/uploads/2023/09/cropped-favicon-32x32.png</url>
	<title>RANT Community</title>
	<link>https://rantcommunity.com/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Collaboration Not Conflict: Are CISOs Getting Third-Party Risk Management Wrong?</title>
		<link>https://rantcommunity.com/resources/collaboration-not-conflict-are-cisos-getting-third-party-risk-management-wrong/</link>
		
		<dc:creator><![CDATA[Benjamin Randell]]></dc:creator>
		<pubDate>Tue, 31 Mar 2026 10:56:37 +0000</pubDate>
				<category><![CDATA[Resources]]></category>
		<guid isPermaLink="false">https://rantcommunity.com/?p=2870</guid>

					<description><![CDATA[<p>If there were any lingering doubts about the importance of third-party risk management (TPRM), the events of the past year</p>
<p>The post <a href="https://rantcommunity.com/resources/collaboration-not-conflict-are-cisos-getting-third-party-risk-management-wrong/">Collaboration Not Conflict: Are CISOs Getting Third-Party Risk Management Wrong?</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>If there were any lingering doubts about the importance of third-party risk management (TPRM), the events of the past year must surely have banished them. From SaaS vendors to high street retailers, a string of high-profile data breaches and ransomware intrusions has offered plenty for CISOs and their boards to think about. When an outsourced IT helpdesk is blamed for a breach costing upwards of £100m, supply chain risk management becomes everyone’s business.</p>
<p>But are boards genuinely paying attention? And are today’s questionnaire-based supplier checks still fit for purpose? To find out more, GRC specialist Diligent gathered a group of typically outspoken cybersecurity leaders for a recent RANT roundtable in London.</p>
<h3><strong>AI Talking to AI</strong></h3>
<p>Diligent’s Jelle Groenendaal, Co-founder of the firm’s 3rdRisk business, kicked off proceedings with some words of comfort for attendees round the table: “I feel your pain”. Groenendaal was a Cyber Resilience Manager at Deloitte when he realised that legacy approaches to TPRM were woefully outdated.</p>
<blockquote><p>“You’re asking highly educated people to chase emails and send out questionnaires, then collate all that information in a central spreadsheet. It just wasn’t working,” he explained.</p></blockquote>
<p>Most of the CISOs around the table were in agreement. One argued that, too often, TPRM is a “one and done” process. “How do we get more agile? How do we recognise that our risk changes frequently over the course of the year?” he said. “I fundamentally think something is broken.”</p>
<p>Text-heavy, point-in-time questionnaires can soon overwhelm teams. The danger, one CISO argued, is that you skim the details and assume that everything’s OK. Another suggested that if they receive a document featuring more than 50 questions, they’ll simply deal with it via AI, rather than engage personally.</p>
<blockquote><p>“People are automating their responses to a questionnaire that has been created by automation,” said another concerned CISO. “If there’s so much being written, it’s not going to be read.”</p></blockquote>
<p>In any case, large suppliers like the hyperscalers are likely to ignore detailed questionnaires, and simply point the organisation to their “trust centre” page, another attendee claimed. That highlights a challenge that many said they face: devising an effective way to manage risk across a broad range of suppliers and partners, with different risk profiles.</p>
<blockquote><p>“There’s a level of real discrepancy,” argued host Matt Ford, Third Party Risk Manager at Howden. “How to get something that’s manageable for all, that represents the genuine size of business risk you have associated with these partners.”</p></blockquote>
<p>For many, there’s no beating the human touch. Several attendees noted the benefits of simply picking up the phone for a CISO-to-CISO chat with their suppliers. But that’s not going to be possible for every single supplier. This is where smarter, more targeted questionnaires could help.</p>
<h3><strong>A false sense of security?</strong></h3>
<p>Security leaders around the table were conflicted about the value of standards, frameworks and certifications, like Cyber Essentials and ISO 27001. One labelled the latter “both useful and a complete sham”. The challenge is that it is often used as a tick-box exercise, with companies failing to dig deeper into the Statement of Applicability, which reveals exactly which Annex A controls suppliers have put in place.</p>
<blockquote><p>“Even the latest version of ISO 27001 doesn’t mean you’re secure,” said one CISO. “It just means you’ve written something down.”</p></blockquote>
<p>Context is therefore critical to effective TPRM, attendees agreed. Exactly what kind of service a supplier provides, the size and complexity of their IT infrastructure, and even their financial stability or otherwise are all important risk factors. But there are many more.</p>
<p>In defence of standards and certifications, one security leader argued that no single compliance attestation should be treated in isolation. The trick is to build a more coherent picture of risk by combining multiple sources of information. Another suggested that, at the very least, compliance with Cyber Essentials could create a “pathway” for a supplier to improve their security posture over time.</p>
<h3><strong>It’s a partnership, stupid</strong></h3>
<p>For many, this gets to the heart of what TPRM should really be about. Rather than “hammer” smaller suppliers with lengthy questionnaires and rigorous requirements, organisations should see the relationship as a mutually beneficial partnership, several attendees argued.</p>
<blockquote><p>“It’s not about ticking those boxes. You’re looking for threats, risks and ways to ensure they don’t materialise,” said Ford. “We’re here to support and help them to develop. We may want them to get SOC 2 or ISO 27001, but that’s a journey. Seriously consider that your suppliers are part of your own journey to success just as much as you’re going to be for them.”</p></blockquote>
<p>Echoing these views, another shared that they ditched one supplier following a breach, but this came with a huge cost, as they were forced to find another mid-project. “In that situation it’s a lose-lose,” she said.</p>
<p>Traditional questionnaires were criticised around the table for being too “crude” and encouraging a “pass/fail” culture which leaves both parties worse off.</p>
<blockquote><p>“There’s an obligation here for senior leaders that TPRM shouldn’t be a tick box. It should be something UK PLC is working on to enhance the security of everyone,” said one senior risk leader. “Security should never be a competitive advantage. It should be a leveller. We all want to be as secure as possible. So stop sending out 200-page questionnaires, and work to help your suppliers understand what they need to do to get to a level that benefits us, them and everyone.”</p></blockquote>
<h3><strong>The board may already be bored of this</strong></h3>
<p>The challenge facing CISOs, another attendee opined, is convincing the board that TPRM is still value for money. Like immunisation, if it’s working, there will be no visible result to crow about. “We need to turn to something usable and make sure it resonates with boards, so they don’t see this as dead money that will be gone after a certain point,” argued one security leader.</p>
<blockquote><p>“That is why we acquired 3rdRisk,” responded Tom Ryan, Diligent GRC Sales Director. “Because what we’re trying to help CISOs and CROs give to their board is the context … So you can say to the board ‘you wanted to achieve these objectives, well, this is the level of risk we’re currently carrying.’”</p></blockquote>
<p>Ultimately, “there’s a time and a place” for questionnaires, said Howden’s Ford. They can help to make the board care about TPRM, as long as they’re treated as part of a “multi-source” strategy.</p>
<blockquote><p>“The one thing we care about is breach probability,” he concluded. “How easy will they be to take down? How much are they going to take down from us? And how bad are their nth party suppliers?”</p></blockquote>
<p>When couched in those terms, maybe supply chain risk management isn’t so complicated after all.</p>
<p>Ready to transform cyber risk oversight with integrated GRC capabilities? <a href="https://www.diligent.com/request-a-demo">Schedule a demo</a> to see how Diligent&#8217;s platform delivers comprehensive cyber risk intelligence to boards.</p>
<p>The post <a href="https://rantcommunity.com/resources/collaboration-not-conflict-are-cisos-getting-third-party-risk-management-wrong/">Collaboration Not Conflict: Are CISOs Getting Third-Party Risk Management Wrong?</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Protean Threat: Why Is Human Risk Such a Hard Problem To Tackle?</title>
		<link>https://rantcommunity.com/resources/protean-threat-why-is-human-risk-such-a-hard-problem-to-tackle/</link>
		
		<dc:creator><![CDATA[Benjamin Randell]]></dc:creator>
		<pubDate>Mon, 09 Mar 2026 11:53:44 +0000</pubDate>
				<category><![CDATA[Resources]]></category>
		<guid isPermaLink="false">https://rantcommunity.com/?p=2780</guid>

					<description><![CDATA[<p>Conventional wisdom has it that we begin a story at the beginning. But sometimes, it can be useful to start</p>
<p>The post <a href="https://rantcommunity.com/resources/protean-threat-why-is-human-risk-such-a-hard-problem-to-tackle/">Protean Threat: Why Is Human Risk Such a Hard Problem To Tackle?</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Conventional wisdom has it that we begin a story at the beginning. But sometimes, it can be useful to start at the end. One such occasion seemed to present itself at the close of a particularly fast-moving RANT roundtable held in Manchester, where an opinionated and voluble group of CISOs, BISOs and senior cybersecurity practitioners were discussing insider threats with staffers from human risk management platform provider Mimecast. In his closing remarks, the company&#8217;s insider-risk specialist Malcolm Murphy managed to silence the room by asking whether they listened to the BBC Radio Four show <em>In Our Time</em>. Evidently, it&#8217;s not a big hit with cybersecurity folks. So Murphy offered a short explanation of why he had decided to evoke the long-running history and philosophy programme.</p>
<blockquote><p>   &#8220;When [Lord Melvyn] Bragg used to host it,&#8221; Murphy explained, &#8220;at the end, he would ask: &#8216;Is there anything you wished we&#8217;d explored further?&#8217; And tonight, we could have spent the whole hour just talking about leavers.&#8221;</p></blockquote>
<p>Murphy was referring to a path that had been signposted during the evening&#8217;s discussion, when one of the security specialists attending had remarked that a company they had been doing some work with had assessed that a vast 80 per cent of its risk profile came from insider threats, and reckoned that the majority of that chunk came from people leaving the organisation and taking proprietary data with them to their next employer. But such was the pace of the conversation, so varied were the thoughts and ideas brought up by the senior cyber staff in the room, that this ended up being a pathway that the roundtable&#8217;s guests simply didn&#8217;t head down.</p>
<p>&#8220;The fact that we didn&#8217;t get to spend any time on that at all highlights how diverse this subject is,&#8221; Murphy said. &#8220;Just look at what we couldn&#8217;t agree on! We can&#8217;t agree if the regulations are clear in all circumstances; we can&#8217;t agree if the responsibility should sit with the users or with the IT security teams. This is why these problems are so hard to solve. But what I do know, because we see it all the time, is that this is where the frontier of risk is moving to for most organisations.&#8221;</p>
<h3>A Burden Snared</h3>
<blockquote><p>&#8220;The practicalities of data protection,&#8221; RANT&#8217;s guest host for the evening, Jon Mattey, CISO of the Forge Holiday Group, had said about an hour earlier, &#8220;are something that resonates. We deal with data-privacy teams, who say: &#8216;Here&#8217;s what you have to do,&#8217; and we have to go away and deliver it. But can we actually, practically solve these problems? And if we can, what is involved in doing that? Is it culture? Technology? Behaviour?&#8221;</p>
<p>&#8220;All of them!&#8221; piped up one CISO immediately, if not entirely helpfully.</p>
<p>&#8220;That&#8217;s the easy answer!&#8221; Mattey gently chided.</p></blockquote>
<p>Another wag chimed in by suggesting that, surely, buying a product would provide a quick and easy solution. Nobody dared to respond with the dread cliche about there being no silver bullets, but that was the general gist of any serious response to this admittedly jocular suggestion.</p>
<p>These are knotty, complex issues, so the existence of a single, simple solution is inherently unlikely. &#8220;You can&#8217;t get all the way unless you look at the whole thing,&#8221; one leader summed up, sagely.</p>
<blockquote><p>   &#8220;You need to build a culture,&#8221; another CISO said, before adding: &#8220;People, processes and technology &#8211; if you get the first two right, the technology will solve itself.&#8221;</p></blockquote>
<p>And part of getting that culture right, attendees seemed to suggest, lies in having corporate structures flexible enough to allow employees to use their common sense, while wrapping enough protections around those staff members to stop them doing bad things by accident. Because if the malicious insider is potentially the business&#8217;s worst enemy, the staffer who leaks data or opens a back door to an external threat by accident can be just as dangerous &#8211; and there are many more of them.</p>
<h3>Keys To the Castle</h3>
<p>Establishing a culture that prioritises security, yet still empowers staff to use their initiative to help the business compete, became the foundational element of the ensuing discussion. It may not always have been the main focus, but it seemed to sit underneath whatever topic was getting the majority of the attention, and each of those areas felt like it relied on a strong, flexible culture being in place before anyone could think about resolving its complications and contradictions. And at the heart of the most contentious discussions directly concerned with corporate culture was the vexed question of policy &#8211; or, as one attendee put it, &#8220;the P word.&#8221;</p>
<blockquote><p>   &#8220;It&#8217;s a dirty word in my organisation,&#8221; they said. &#8220;Policies are just something written down. In these conversations about culture, you&#8217;re going back to targeting people for doing the wrong thing. Really, you need processes in place to stop those things happening. If someone&#8217;s had a bad day &#8211; maybe they&#8217;ve had a fight with their partner, or the baby&#8217;s been up all night crying &#8211; they&#8217;ll lose focus and make a mistake. And if your controls aren&#8217;t in place to stop that happening, no policy will help you.&#8221;</p></blockquote>
<p>It was a strong point, but not universally accepted.</p>
<blockquote><p>   &#8220;People find a way,&#8221; another security leader countered. &#8220;No matter what the controls, they&#8217;ll find a way around them. I was working for an organisation where an executive wanted to get a piece of information out: they had every right to, so they were shown how they could change the label to allow them to do that. Three weeks later, people were passing that information around the organisation, about how to bypass the data-loss prevention filters.&#8221;</p></blockquote>
<p>The conversation kept returning to that three-pronged requirement set mentioned earlier: people, processes, technology. And how all that needed to be wrapped up with the conversations around culture, so the whole thing makes coherent sense to everyone in the organisation.</p>
<blockquote><p>   &#8220;I&#8217;ve made a new mantra recently &#8211; and I do mean recently &#8211; with my dev team,&#8221; one security veteran said. &#8220;You can&#8217;t replace practice with policy. You can&#8217;t say, &#8216;I&#8217;ve got a policy, so it&#8217;s someone else&#8217;s problem.&#8217; I can give loads of examples from my experience of bad leaks where someone copied some over-given permissions. We still see it today.&#8221;</p>
<p>&#8220;That&#8217;s where process comes in &#8211; for how permissions are given,&#8221; another leader said. &#8220;But it can&#8217;t be your only control. Culture is one fundamental, but it can&#8217;t be the only one. You need a culture where your policies can be written honestly, and then you have the technology to support the policies, and complete your security environment.&#8221;</p></blockquote>
<h3>The Experimenter</h3>
<p>All fine in theory, of course. But what might this all look like in practice? And is it even possible? One CISO was willing to share their direct experience.</p>
<blockquote><p>   &#8220;We&#8217;ve gone through a transformation, with an emphasis on changing the culture, and it&#8217;s come from the top down,&#8221; they began. &#8220;That&#8217;s where those discussions need to be held.&#8221;</p></blockquote>
<p>This, it seems, is the key to getting culture right &#8211; and therefore at least allowing for policy, process, people and technology to all fall into helpful alignment. And it&#8217;s possible, this leader argued, even in organisations that begin the journey from a starting point where flexibility is given a higher priority than security.</p>
<blockquote><p>   &#8220;At one point we had a lot of freedom when you&#8217;re on your own machines &#8211; a lot of trust, and quite convoluted policies which read like a legal document,&#8221; they continued. &#8220;We&#8217;ve changed that. But we&#8217;ve added an exemption process, with a mechanism for if you need to break a policy. There&#8217;s a way to ask permission; it gets put on the risk register, and gets owned by someone who&#8217;s invested in that decision. We try to say that the security team will advise on the risk without saying &#8216;No&#8217;. I think that helps.&#8221;</p>
<p>&#8220;We&#8217;ve also put in technology controls, around what people can do on their machines, which exposes to them the things they&#8217;ve been doing without realising they&#8217;re breaking policies,&#8221; they added. &#8220;Sometimes that&#8217;s resulted in them asking for more budget, because it&#8217;s business-critical to do that insecure process.&#8221;</p></blockquote>
<p>A comprehensive case study, then. Not easy, not quick, but it seems to be working, and it appears to have succeeded in allowing the security team to be seen as enabling and supporting individual innovation, rather than just being The Department For Saying No. The critical element, though, is that the transformation has been driven by the organisation&#8217;s senior leaders. &#8220;For us, the biggest thing is the involvement of the people at the top,&#8221; the CISO concluded.</p>
<h3>Psy-Ops Dispatch</h3>
<p>Another &#8220;P-word&#8221; found its way into the discussion, though the first person to mention it chose to do so with great care. And they did so only after the conversation &#8211; inevitably, if perhaps slightly belatedly (it was about 25 minutes before the two letters made their first appearance) &#8211; began to look at how so-called AI systems might be making many of these problems worse. With these tools so widely used by people in their personal lives, the temptation to use them in the workplace is increasing daily &#8211; and if businesses or their security teams hope to minimise risk by blocking unauthorised AI in the workplace, all they often end up doing is pushing employees into working around the controls.</p>
<blockquote><p>   &#8220;I hope there&#8217;s no psychologists in the room,&#8221; a CISO began &#8211; and they actually looked like they were checking &#8211; before adding, &#8220;but this is a psychology problem. We all want the quickest route to the best outcome, and that drives behaviour. So does culture, but behaviours and cultures are interlinked. We&#8217;re humans, and we want to do the right thing, but we want to do it quickly. How can we enable people to be as good as they can be, and to work as fast as they can? We&#8217;re naturally inquisitive, and people will look at AI, and think it&#8217;s an answer to that. What can we do to help?&#8221;</p></blockquote>
<p>It was a good question, but it was perhaps even more complicated than it appeared &#8211; and, by asking it of present company, any answers offered may not, ultimately, be the most helpful.</p>
<blockquote><p>   &#8220;This is a room of people who work in cybersecurity,&#8221; Murphy pointed out. &#8220;We all think about risk-versus-reward. Either intentionally or informally, we will always do that risk assessment. But we are not the norm. Everyone else? They won&#8217;t even see it, let alone mitigate it. Nor should they.&#8221;</p>
<p>&#8220;When you talk about data-protection regulations, how many people in your organisation would even know what they are?&#8221; asked another security leader, as those around the table began to warm to the theme.</p>
<p>&#8220;I don&#8217;t think those regulations are clear enough,&#8221; another attendee argued. They had had some direct experience of working with the Information Commissioner&#8217;s Office, supposedly the authority on such matters in the UK, and had found that, even there, nuance was all, and clarity often proved hard to achieve. &#8220;Any question you ask of the ICO about GDPR,&#8221; they added, referring to the European Union&#8217;s General Data Protection Regulation, &#8220;the answer is, &#8216;It depends&#8217;.&#8221;</p>
<p>&#8220;But is it not clear when you&#8217;re talking about an employee putting information into a large language model?&#8221; another CISO asked.</p></blockquote>
<p>&#8220;No, it isn&#8217;t,&#8221; came the immediate response from a contemporary. They pointed out that there would be nuances not just resulting from what the lawful basis and the legitimate interest of the data processor were, but different ones based around whether it was client data, company proprietary information, whether the LLM in question was a corporate-only version or the public version, and where the servers were hosted.</p>
<p>Oh, Lordy, troubles so hard.</p>
<h3>Destroyed Fortress Reappears</h3>
<p>The AI part of the discussion was spirited, feisty and &#8211; inevitably &#8211; inconclusive. The issues differ in their specifics but are common to all companies, in all sectors. Some situations stand out, where an unusual approach may have been taken, but even in those cases the challenges remain vexing.</p>
<blockquote><p>   &#8220;We were talking about getting buy-in from the top,&#8221; one CISO noted. &#8220;Well, with us, our CTO was on board with it, so overnight, AI was completely blocked. The thought process was, &#8216;If it&#8217;s important to a user, they&#8217;ll come to us and ask where their AI&#8217;s gone.&#8217; For a while there was a huge scramble, with people saying they had access and were using it for X, Y and Z. That was amazing for us &#8211; we saw those use cases come through. It was a very bold move, and it&#8217;s still being worked on. We know we can&#8217;t do it forever. But it&#8217;s one way to find out how AI is being used in your organisation.&#8221;</p></blockquote>
<p>The methodology came as a surprise, but other businesses have had experience of discovering how staff are putting AI to use. Not all of it has been particularly encouraging.</p>
<blockquote><p>   &#8220;I&#8217;ve seen people come in with use cases where you could literally have done it just by using a formula in Excel,&#8221; one CISO said, wearily.</p>
<p>&#8220;They jump the logical steps and go straight to AI,&#8221; another agreed. &#8220;It&#8217;s baffling sometimes.&#8221;</p>
<p>&#8220;It&#8217;s getting like whack-a-mole now,&#8221; another beleaguered security leader lamented. &#8220;We brought in a third-party app for the legal department, and all of a sudden there were loads of requests from outside legal to access it. Why? Turned out you could access Grok through it.&#8221;</p>
<p>&#8220;We had a developer who installed the open version of Claude on his laptop,&#8221; one attendee began.</p>
<p>&#8220;Awesome!&#8221; another replied. &#8220;Where&#8217;s his P45?&#8221;</p>
<p>&#8220;It&#8217;s an impressive tool,&#8221; a third attendee said, &#8220;but there was a story just this morning &#8211; one of my analysts sent it at 7am while he was walking his dog &#8211; about Claude hacking the Mexican government. People don&#8217;t realise what they&#8217;re allowing these things to do.&#8221;</p>
<p>&#8220;Everyone has a different view of what AI means to the business, and what it should mean,&#8221; Mattey said in his concluding remarks. &#8220;But coming back to those fundamentals seems to be the only thing that&#8217;s been agreed on.&#8221;</p>
<p>&#8220;The more we can have these discussions, the more we can figure out,&#8221; Murphy agreed. &#8220;There also seems to be unanimity that collaboration with users, to help them make the right choice at the decision point, was loud and clear. I like being in a room with collaborative people!&#8221;</p></blockquote>
<p>Learn how <b>Mimecast</b> addresses the complex challenge of insider risk with its dedicated <i>Insider Risk Management</i> solution &#8211; blending people, processes, and technology to help organisations reduce human-driven threats and protect critical data. <a href="https://www.mimecast.com/solutions/insider-risk-management">Find out more here!</a></p>
<p>The post <a href="https://rantcommunity.com/resources/protean-threat-why-is-human-risk-such-a-hard-problem-to-tackle/">Protean Threat: Why Is Human Risk Such a Hard Problem To Tackle?</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Direct Route: How To Stop The Path Of Least Resistance Becoming The Highway To Hell</title>
		<link>https://rantcommunity.com/resources/direct-route-how-to-stop-the-path-of-least-resistance-becoming-the-highway-to-hell/</link>
		
		<dc:creator><![CDATA[Galena]]></dc:creator>
		<pubDate>Thu, 19 Feb 2026 10:39:53 +0000</pubDate>
				<category><![CDATA[Resources]]></category>
		<guid isPermaLink="false">https://rantcommunity.com/?p=2753</guid>

					<description><![CDATA[<p>Sometimes, you just need someone who&#8217;s willing to cut to the chase. And that was exactly what Lee Quinn, group</p>
<p>The post <a href="https://rantcommunity.com/resources/direct-route-how-to-stop-the-path-of-least-resistance-becoming-the-highway-to-hell/">Direct Route: How To Stop The Path Of Least Resistance Becoming The Highway To Hell</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Sometimes, you just need someone who&#8217;s willing to cut to the chase. And that was exactly what Lee Quinn, group head of cyber security for the Coventry Building Society, did in his opening remarks at a RANT roundtable in Manchester, hosted by SpecterOps.</p>
<blockquote><p>   &#8220;Controls are built in boxes, but attackers don&#8217;t think in boxes &#8211; they think in routes,&#8221; Quinn said. &#8220;Everybody here works for companies which have invested in all the usual defensive tech. It all looks solid. But attackers aren&#8217;t looking to smash through those. They&#8217;re looking for the small weaknesses &#8211; the service account that&#8217;s no longer used but is still active, things like that &#8211; and they&#8217;ll use those to move around our defences.&#8221;</p></blockquote>
<p>Quinn cited the example of the 2024 ransomware attack on the American health technology firm Change Healthcare, where the lack of multi-factor authentication on a remote access server led to an identity compromise which the attackers were then able to exploit, escalating the account&#8217;s privileges and moving laterally through the corporation&#8217;s network. The impact, as he noted, was huge.</p>
<blockquote><p>   &#8220;Paralysis across the whole of U.S. healthcare; loads of patient data locked down; billions in knock-on costs,&#8221; he said.</p></blockquote>
<p>From that first foothold, the attackers were able to move to the heart of the company&#8217;s digital infrastructure. The end results could have cost lives; they certainly cost huge amounts of money.</p>
<p>Clearly, a company of that size and scale, handling data as sensitive as medical records, and working in a highly regulated, literally life-and-death sector, would not have skimped on its digital defences. But the point Quinn made was that all the expensive solutions to specific cybersecurity challenges can only solve their own discrete problem set. You still need those tools, and if an adversary attacks them head-on, they&#8217;ll probably prove strong enough to repel the assault. But precisely because they&#8217;re there, most attacks will be designed to go around them. And that requires businesses to think differently about security.</p>
<blockquote><p>   &#8220;An attack path isn&#8217;t a tool &#8211; it&#8217;s a mindset shift,&#8221; he said. Instead of asking where to build the walls and where to dig the moat, a business needs to think like an attacker and work out how its defences can be bypassed, he argued. The conversation, Quinn said, needs to move from &#8220;how do I shore up my defences?&#8221; to &#8220;How would I go from here [the perimeter] to domain admin if I was attacking?&#8221;</p></blockquote>
<h4>Give The Dog A Bone</h4>
<p>SpecterOps provide businesses with a tool called BloodHound that can help with this mindset change by, essentially, mapping all the routes an attacker could potentially take around their defences. Once the problem is made visible &#8211; in the form of a diagram that even non-technical people can easily understand &#8211; changing that mindset starts to become possible.</p>
<p>There are two main iterations of BloodHound &#8211; a free version, Community; and the paid, all-bells-and-whistles Enterprise. The purpose of having a free version is not just to tempt businesses to become paying customers: SpecterOps believes in the principle of a rising tide floating all boats, and its founders feel strongly that if they can help any user to understand how attackers could breach their systems and reveal ways in which they could prevent that happening, then everyone will benefit.</p>
<p>One challenge, though, is that the tool &#8211; in both versions &#8211; can reveal far more routes from the tiniest cracks in the perimeter to the vital data the business relies on than most business leaders would believe could possibly exist. For the most part, this is because most businesses rely on Active Directory (AD) to manage their identities, and all their access protocols are based on those identities.</p>
<p>AD is an ageing technology, well past the stage where &#8220;creaky&#8221; or &#8220;clunky&#8221; really do it justice &#8211; but its centrality to so much of what the business does means no big organisation is willing to risk seeing what might happen if it tried to replace it. Also: its existence predates many organisations&#8217; cybersecurity departments, so its corporate ownership resides elsewhere. Consequently, its security is rarely something security teams own, despite its importance to the entire enterprise&#8217;s digital estate. In effect, AD security isn&#8217;t just something that attackers can leverage as they waltz around and between expensive security technologies &#8211; it&#8217;s something that network defenders have to work around too, since they&#8217;re not empowered by the business to fix it.</p>
<blockquote><p>   &#8220;When you look at your AD, it&#8217;ll be a shitshow,&#8221; Colin Makin, SpecterOps&#8217; sales director for Europe, said, deploying a piece of highly specialised technical terminology. &#8220;We did a proof-of-concept for a core national infrastructure provider: how many routes did it show? Two billion.&#8221;</p></blockquote>
<p>This provides a fundamental challenge for the company: how do you persuade people to acquire a technology that appears to be showing them how badly they&#8217;re doing? Worse, one that appears to imply that all the spending they&#8217;ve made on security tools, while it may not have been wasted exactly, still hasn&#8217;t managed to provide much in the way of solid protection. &#8220;People don&#8217;t want to buy a tool that shows them how dirty their laundry is,&#8221; is the way he put it.</p>
<p>Again, the challenge here is one of encouraging the decision-makers in the business to accept that it&#8217;s time they started to re-examine their mindsets. Quinn argued that expecting a security tool to fix a problem isn&#8217;t really the right way to think about it.</p>
<blockquote><p>   &#8220;Tools like BloodHound make the attack paths easy to see, but it doesn&#8217;t fix things for you,&#8221; he said. &#8220;It forces clarity &#8211; but it&#8217;s up to you to respond.&#8221;</p></blockquote>
<h4>Problem Child</h4>
<p>Businesses which have deployed BloodHound &#8211; whether in the Enterprise or the Community version &#8211; have started down the road towards those necessary mindset shifts. But it is not always smooth going.</p>
<blockquote><p>   &#8220;We&#8217;ve done it in our corporate environment, but in our customer environment it&#8217;s difficult,&#8221; one CISO, whose company supplies software to the public sector, explained. &#8220;We connect lots of information for public bodies. They all share data and software. Those attack paths are the most dangerous for me. It could start in one region and quickly take out the whole country. We model all the attack paths for our software &#8211; but our customers? No. And it&#8217;s them that worry me.&#8221;</p>
<p>&#8220;I understand our environment really well,&#8221; another senior security manager said, &#8220;but trying to get that mind shift of getting rid of the accounts that you don&#8217;t need any more&#8230;? People who work in the organisation should only have credentials that are necessary. But many organisations are very naive to what&#8217;s left lying around in their own environment.&#8221;</p></blockquote>
<p>Just as worrying are those users who have been given greater privileges than their role actually requires. There are many reasons why this happens, but none of them appear to justify the risks created.</p>
<blockquote><p>   &#8220;A lot of people, when they ask for privileges, don&#8217;t know what they&#8217;re asking for,&#8221; Quinn said. &#8220;They say they need a domain admin identity, but all they need to do is go from A to B.&#8221;</p>
<p>&#8220;It&#8217;s like using a Ferrari to go to the shops,&#8221; Makin agreed.</p>
<p>&#8220;Or sometimes,&#8221; offered another CISO, who had clearly seen the same thing happen more than a few times, &#8220;it&#8217;s more, &#8216;He&#8217;s got a Ferrari, why can&#8217;t I have one too?'&#8221;</p></blockquote>
<p>And organisations are exacerbating this problem by, as one leader put it, placing &#8220;compatibility layer on top of compatibility layer because nobody wants to switch off their AD. When are we going to architect these rules?&#8221; they said, pitching the comment partway between an exasperated plea and a frustrated <em>cri de coeur</em>.</p>
<p>Leavers, joiners and movers in the organisation are another fertile breeding ground for overprivileged accounts, duplicate identities, or worse.</p>
<blockquote><p>   &#8220;I started off in my company being a UNIX system administrator,&#8221; one veteran security leader recalled. &#8220;My job has changed, and has ranged from being really low level on some production bits, to now, where, in the nicest possible way, I shouldn&#8217;t be trusted to be on those servers. I believe all my old accounts have been disabled. They certainly should have been. But are some of them still lingering? I don&#8217;t know.&#8221;</p></blockquote>
<p>Other attendees contributed other troubling real-life scenarios, from the firm with four people who have the same name, to the person at the table who shared a name with a colleague in their company, the only way to distinguish them on a list being that one of them holds a doctorate.</p>
<blockquote><p>   &#8220;The number of times privileges get mixed up between people is ridiculous,&#8221; one of these leaders said. &#8220;People add privileges to one account, but they don&#8217;t apply them to the right one. Or they don&#8217;t know which to apply them to, so they apply them to all of them.&#8221;</p></blockquote>
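The over-provisioning pattern the attendees describe can be illustrated with a toy least-privilege audit (all account and privilege names below are invented for illustration; this is not any vendor's tooling):

```python
# Compare what each identity was granted against what it actually used,
# and flag the excess as candidates for removal.
granted = {
    "j.smith":    {"read_hr", "domain_admin"},   # the "Ferrari to the shops" case
    "j.smith_dr": {"read_hr"},                   # the same-named colleague
    "svc_backup": {"backup_operator", "domain_admin"},
}
used = {
    "j.smith":    {"read_hr"},
    "j.smith_dr": {"read_hr"},
    "svc_backup": {"backup_operator"},
}

for account, rights in sorted(granted.items()):
    excess = rights - used.get(account, set())
    if excess:
        print(f"{account}: unused privileges {sorted(excess)}")
```

Trivial as it is, the sketch captures the point made around the table: the risky grants are the ones nobody is exercising, and nobody is looking for.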
<h4>Dirty Deeds Done Dirt Cheap</h4>
<p>The differences between the two BloodHound products provoked questions, with some attendees wondering how those might manifest themselves. One CISO who had moved from a rival service and taken up BloodHound&#8217;s Community edition had noticed a huge increase in the number of attack paths revealed. Is the step from Community to Enterprise likely to cause as big a further increase, they asked?</p>
<blockquote><p>   &#8220;Yes and no,&#8221; said Mark Wilson, a SpecterOps senior sales engineer, before offering an explanation. &#8220;When you use Community you&#8217;re trying to work out a route, but you can&#8217;t see the whole map. With Enterprise, we analyse the whole map. If a company has two million paths, you&#8217;re never going to remediate them all. We map out the terrain, and work out what are the points of convergence that an attacker must traverse in order to get control.&#8221;</p></blockquote>
<p>This methodology will help businesses to turn awareness into effective action. As Wilson notes, no SOC will have the time or the resources to individually disrupt millions of attack paths &#8211; but if three-quarters of those routes have to pass through a handful of nodes on the network, then work can be focused on those, and ways of disrupting adversaries as they move through them can be deployed.</p>
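The &#8220;points of convergence&#8221; idea Wilson describes can be sketched with a toy graph (the node names are hypothetical, and this is a simplification, not BloodHound&#8217;s actual analysis): enumerate the paths from each foothold to the target, then count how often each intermediate hop appears.

```python
from collections import Counter

# Toy attack-path graph: edges point from footholds toward "DA" (domain admin).
edges = {
    "phish_vm":  ["helpdesk", "file_srv"],
    "vpn_box":   ["helpdesk"],
    "kiosk":     ["file_srv"],
    "helpdesk":  ["jump_srv"],
    "file_srv":  ["jump_srv"],
    "jump_srv":  ["DA"],
    "DA":        [],
}

def all_paths(node, target, path=()):
    """Enumerate every simple (cycle-free) path from `node` to `target`."""
    path = path + (node,)
    if node == target:
        yield path
        return
    for nxt in edges.get(node, []):
        if nxt not in path:
            yield from all_paths(nxt, target, path)

paths = [p for start in ("phish_vm", "vpn_box", "kiosk")
           for p in all_paths(start, "DA")]

# Count how often each intermediate hop appears across all paths:
# the hops on (nearly) every path are the choke points worth defending first.
hits = Counter(n for p in paths for n in p[1:-1])
total = len(paths)
for node, count in hits.most_common():
    print(f"{node}: on {count}/{total} paths")
```

In this toy graph every route funnels through one jump server, so hardening that single node disrupts all four paths at once &#8211; which is the argument for focusing remediation on convergence points rather than on millions of individual paths.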
<p>Moreover, BloodHound Enterprise doesn&#8217;t just look at the company and customer networks. As well as the IT, it can map paths across the OT environment.</p>
<blockquote><p>   &#8220;We can add in backup servers, and take it further,&#8221; Wilson said. &#8220;We look at all the assets that are critical.&#8221;</p>
<p>&#8220;Hopefully OT is separate from IT,&#8221; Makin added. &#8220;But if a user can pivot between them, an adversary can too.&#8221;</p></blockquote>
<p>Once a map exists, those embedded mindsets can, perhaps, start to shift. If a security staffer tries to tell someone in HR, or in sales, or on the board, about a security challenge, eyes tend to glaze over. Show them a map of the environment, which describes the problem in a way that they can see at a glance, and there is at least the chance for understanding to dawn, and that deeply engrained habits and ways of thinking might begin to change.</p>
<blockquote><p>   &#8220;This is a good contextual narrative to move into,&#8221; Kay Daskalakis, a SpecterOps sales engineer, said. &#8220;You can say to a colleague in HR: &#8216;Look &#8211; you can go from here to the server, from the server to the principal, and take over the company.&#8217; That&#8217;s interesting to all these people. It&#8217;s not a niche problem.&#8221;</p></blockquote>
<p>And those lessons tend to stick.</p>
<blockquote><p>   &#8220;Once you see it, you can&#8217;t unsee it,&#8221; Makin said.</p></blockquote>
<h4>Down Payment Blues</h4>
<p>Ultimately, it all comes down to money. Around the room were CISOs, BISOs and senior security practitioners whose experience spanned numerous different companies and several different sectors. Getting together to talk through these challenges and share experiences and ideas of best practice is vital. But boards usually view security as a sunk cost, and talking them into putting extra resources into a new service is always difficult. It will be even more difficult if the service you are asking them to acquire will appear to show that much of what they have already paid for has had a more limited effect than expected.</p>
<blockquote><p>   &#8220;As a community, we&#8217;re here tonight to try to work out the best approach,&#8221; one attendee said. &#8220;We get the sales spin; there&#8217;s lots of tools. We&#8217;ve all invested so much money, and I wonder if it&#8217;s been spent wisely. Is this the best bang for our buck? The issue is: are we really, truly understanding where the risk is? This [an attack-path mapping capability] sounds like it could do some real good &#8211; but I don&#8217;t just want people to spend money on it. Do we really understand our risk?&#8221;</p>
<p>&#8220;Personally, I don&#8217;t think we do &#8211; often, we just install the tool,&#8221; Quinn replied.</p>
<p>&#8220;We invest so much money, but do we truly fix the problem?&#8221; the first leader asked. &#8220;I genuinely want everyone in this room to do that. I think everything comes back to risk: assessing it, getting that right, so you can focus your resources and your money.&#8221;</p>
<p>&#8220;This is a fantastic perspective,&#8221; Daskalakis said. &#8220;Why? Because it ties in 100 per cent with where we are and where we should be. You&#8217;ve put it right: there&#8217;s a load of money spent on detection, but when you look at the attack timeline, that&#8217;s during the attack &#8211; it&#8217;s not before. We need to sink some of that money into prevention. That means clarifying what is a risk, limiting the unknowns, knowing what asset risk looks like.&#8221;</p></blockquote>
<p>But it&#8217;s that mindset shift that needs to happen before anything is going to start to change. As Makin noted, cybersecurity budgets usually only get raised after something bad has happened. Companies need to be looking at spending in ways that will prevent incidents, rather than spending after those incidents have caused damage.</p>
<blockquote><p>   &#8220;You can&#8217;t just rely on what you&#8217;ve historically got on your system to understand what your attack paths are,&#8221; Quinn said. &#8220;When people can move from initial foothold to dominance, relying on EDR and MFA is not going to work.&#8221;</p>
<p>&#8220;We see billions of attack paths &#8211; an attacker only needs one,&#8221; Wilson said. &#8220;My only advice is to get visibility over it.&#8221;</p>
<p>&#8220;There are qualitative factors you need to take into account, but there are quantitative factors you need to assess risk, too,&#8221; Daskalakis added. &#8220;BloodHound Enterprise helps you do that. It&#8217;s not about telling you what the risk is, but it does tell you how many identities have risks associated with them &#8211; and you can create a map to see the risk before it happens.&#8221;</p></blockquote>
<p><a href="https://specterops.io/">Find out more about SpecterOps here!</a></p>
<p>The post <a href="https://rantcommunity.com/resources/direct-route-how-to-stop-the-path-of-least-resistance-becoming-the-highway-to-hell/">Direct Route: How To Stop The Path Of Least Resistance Becoming The Highway To Hell</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>From Prompts To Poetry: How Should Security Teams Respond To The Rising AI Tide?</title>
		<link>https://rantcommunity.com/resources/from-prompts-to-poetry-how-should-security-teams-respond-to-the-rising-ai-tide/</link>
		
		<dc:creator><![CDATA[Galena]]></dc:creator>
		<pubDate>Wed, 18 Feb 2026 10:00:32 +0000</pubDate>
				<category><![CDATA[Resources]]></category>
		<guid isPermaLink="false">https://rantcommunity.com/?p=2726</guid>

					<description><![CDATA[<p>As the first RANT roundtable of 2026 rolled towards its conclusion, moderator Rob Black asked each of the CISOs, BISOs</p>
<p>The post <a href="https://rantcommunity.com/resources/from-prompts-to-poetry-how-should-security-teams-respond-to-the-rising-ai-tide/">From Prompts To Poetry: How Should Security Teams Respond To The Rising AI Tide?</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></description>
<content:encoded><![CDATA[<p>As the first RANT roundtable of 2026 rolled towards its conclusion, moderator Rob Black asked each of the CISOs, BISOs and senior security leaders to offer an off-the-top-of-the-head thought about their current key concerns and challenges. This followed a high-protein hour of conversation about the adoption of generative so-called Artificial Intelligence across pretty much every business of any size, in every conceivable sector or industry. Answers included the rising price of silicon and its impacts on the costs of data storage; agentic AI, and the proliferation of non-human identities; and the use of genAI tools by attackers to help them improve their success rate.</p>
<p>But then one security leader raised the important topic of the next generation soon to be entering the workforce. In light of all the developments with genAI and the way its adoption is transforming the world of work, they wondered, what should we be advising young people to concentrate on?</p>
<p>There was a brief moment of silence, while the dozen or so leaders around the room, and representatives of the event&#8217;s hosts, Mimecast, gave that some serious thought. Then the answers came thick and fast.</p>
<blockquote><p> &#8220;Carpentry,&#8221; said one CISO.</p>
<p>&#8220;Farming,&#8221; suggested another.</p>
<p>&#8220;Subsistence. How to live off the land,&#8221; a third offered.</p>
<p>&#8220;Poetry,&#8221; another mused.</p></blockquote>
<p>While they all gave the lie to any suggestion that security people are overly serious and lack a sense of humour, the speed and alacrity with which those around the table jumped on this train of thought did seem to speak to a legitimate concern. GenAI may not be all its boosters claim, and it certainly will not be the answer to every prayer. But it has captured the public imagination and its use is mushrooming far faster than guardrails can be put in place to prevent its misuse. And that misuse &#8211; which is happening and expanding in lock step with the rate of adoption &#8211; poses serious risks of significant harms, which could, ultimately, lead to the collapse of entire businesses, if not whole industries. And in the meantime, automated processes taking actions independently of humans could conceivably usher in some kind of techno-apocalypse which sends us all back into hunter-gatherer mode.</p>
<p>Cheery stuff!</p>
<h4>You Must Learn</h4>
<p>The evening had started off a little more optimistically, even if those early discussion points derived from challenges security leaders are having. Mimecast&#8217;s director of new solutions, James Hathaway, welcomed attendees by saying he was keen to hear about the areas they felt genAI could make them more efficient, but also where his company might be able to deploy these tools to aid defensive operations. After some initial discussions about enhanced phishing campaigns and the difficulties of ensuring security of supply chains, talk turned to preventative measures.</p>
<blockquote><p>   &#8220;I worked somewhere previously where we assumed everyone&#8217;s laptop was infected anyway, and focused much more on their ability to hand over credentials,&#8221; one CISO recalled. &#8220;So if someone did give away their credentials on WhatsApp, can we prevent that being exploited?&#8221; Rather than trying to prevent the leak of data occurring, they argued, there may be more chance of preventing that leak having negative consequences.</p></blockquote>
<p>One way in which security vendors could leverage genAI tools to help with this would involve obtaining a reliable picture of a business&#8217;s network and the activity on it. Over time, tools can be trained, through recognising what constitutes standard behaviour and activity, to flag up anything that appears unusual. But this requires a high degree of trust from the customer &#8211; and although this may appear to be a technology question, the answer will be cultural or legal.</p>
<blockquote><p>   &#8220;This is one of the things we struggle with,&#8221; said Mimecast&#8217;s field CTO, Khetan Gajjar. &#8220;You want AI to understand what normal is, so you can pick up what isn&#8217;t. But how many of you are willing to share what normal looks like? This is not a tech problem. One of the workarounds we have is we&#8217;ve built connectors. You don&#8217;t have to give it to us, but if you already trust INTTRA or Opta, then we can get it from there.&#8221;</p></blockquote>
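The baseline-then-flag approach being discussed can be sketched in miniature (the counts and the three-sigma threshold below are invented for illustration; real products build far richer behavioural models):

```python
import statistics

# Hypothetical daily outbound-email counts for one user, used as the baseline
# for "what normal looks like".
baseline = [42, 38, 45, 40, 44, 39, 41, 43, 37, 46, 40, 42]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_unusual(observed, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from the baseline mean."""
    return abs(observed - mean) / stdev > threshold

print(is_unusual(41))   # an ordinary day
print(is_unusual(400))  # an exfiltration-sized spike
```

The point the discussion turns on is visible even here: the check is only as good as the baseline, and the baseline only exists if the customer is willing to share what &#8220;normal&#8221; looks like.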
<h4>Word From Our Sponsor</h4>
<p>It was only seconds after Black moved the discussion on to &#8220;shadow AI&#8221; &#8211; all the large language model tools that get used by staff, but which are not part of the business&#8217;s own technology stack &#8211; and employee usage of genAI tools that the first whole-room guffaw of the evening took place. All he had to do was ask where each attendee&#8217;s company was at on their genAI policy journey. Hathaway added a codicil to the question: &#8220;On top of that &#8211; how concerned are you that corporate data gets fed into AI and then is out there?&#8221; he asked.</p>
<blockquote><p>   &#8220;Hugely concerned,&#8221; one CISO admitted. &#8220;It&#8217;s challenging. At the same time, we want to enable the business and be pragmatic. One big challenge is, what are the use cases? We&#8217;re avoiding things that could have bias, because we don&#8217;t want to become a case-study for our regulators.&#8221;</p></blockquote>
<p>Different companies have adopted different approaches, running the gamut from allowing employees to use whatever tools they feel will help and worrying about the consequences later, through to locking down every genAI tool completely and instructing staff not to circumvent network blocks by using tools like ChatGPT on their personal devices. Several companies have established committees to assess the ways forward, but for every firm that views this as a pragmatic and nuanced approach to a complicated issue, others will feel that it adds a cumbersome additional layer onto an already sclerotic management system, with the risk that any policies that result will arrive too late to be of use.</p>
<blockquote><p>   &#8220;It reminds me of when the cloud first came out,&#8221; the hugely concerned CISO added. &#8220;Suddenly you&#8217;ve got 30 decision-makers. Our developers want to develop a model, but you have to put it into a governance process that takes weeks to get a decision made on one. So they go ahead and play around on their own. We&#8217;re driving people to go and do things with AI themselves.&#8221;</p>
<p>&#8220;We&#8217;re trying to do it more on a principle-based basis,&#8221; another security leader said. &#8220;We want to encourage innovation, but we don&#8217;t have the technology to go around it. You have to encourage usage, but also encourage people to think about what they&#8217;re doing.&#8221;</p></blockquote>
<p>There was some enthusiasm in the room for pop-up warnings &#8211; the system telling a user that an action they are about to take would result in data being sent outside the corporate network. Another security leader explained how their company was adopting a three-pronged approach, with warning messages part of a broader plan.</p>
<blockquote><p>   &#8220;One point was awareness,&#8221; they said. &#8220;Data isn&#8217;t going to jump on its own &#8211; someone will click a button. Do they understand what they&#8217;re doing? The second was to turn a lot of our people into security partners. We gave them a helpline number and said, if you&#8217;re not sure about something, come and ask. And the third was the message &#8211; and a disclaimer &#8211; if they try to complete the action. While you&#8217;re building up the long-term governing body, or a technical solution, this will help &#8211; educate them and trust them.&#8221;</p></blockquote>
<p>Gajjar pointed out that, although the technology and the pace of its adoption appears to be new, a lot of the issues it raises still fit into existing processes and priorities.</p>
<blockquote><p>   &#8220;From most perspectives, it&#8217;s just another way data&#8217;s being used,&#8221; he said. &#8220;So a lot of the principles around how we use it, who we share it with, should be a guiding light.&#8221;</p></blockquote>
<h4>Ya Know The Rules</h4>
<p>Black got another round of laughs with a question that surely wasn&#8217;t intended to raise any. He wondered whether the security teams represented around the room had good visibility of genAI use inside their different businesses.</p>
<p>&#8220;Did anyone ever answer that with a &#8216;yes&#8217;?&#8221; one CISO asked, after they and a few others had collected themselves following their bout of cackling.</p>
<blockquote><p>   &#8220;We have technical tools to block usage, but I&#8217;ve no idea how effective they are,&#8221; one leader said, quickly homing in on the heart of the challenge. &#8220;Different things are being spun up every five minutes. If all your internet traffic goes through a trusted proxy, do you know if they&#8217;ve classified every AI provider out there?&#8221;</p>
<p>&#8220;We did API impressions, to see what people were doing, which was useful,&#8221; another security team leader said. &#8220;But then there&#8217;s all the other AIs that pop up everywhere else. We use Mirrorboard &#8211; it&#8217;s got its own AI. Excel&#8217;s got AI.&#8221;</p>
<p>&#8220;We have an enterprise ChatGPT, and we block everything else &#8211; but I wouldn&#8217;t rely on that,&#8221; a third leader said. &#8220;You&#8217;re effectively relying on someone else to recognise the unknown.&#8221;</p></blockquote>
<p>Another challenge comes when other entities that the business connects with use genAI tools, entirely innocently, but without necessarily notifying their partner businesses of it. The growing use of meeting assistants is a particular bugbear for many security leaders.</p>
<blockquote><p>   &#8220;For example, let&#8217;s say I have a meeting with Third Party A, and they use an AI to take notes,&#8221; one CISO postulated. &#8220;There should be a policy where we say, &#8216;We won&#8217;t take the meeting if you do that.&#8217; But even if we had one, I&#8217;m pretty sure that, somewhere in the organisation, someone would take that meeting &#8211; and company information ends up in those AIs.&#8221;</p></blockquote>
<h4>Who Protects Us From You?</h4>
<p>Another round of hilarity &#8211; though muted this time, as if everyone was running a 10-second gif inside their heads of tumbleweed blowing across a desert &#8211; was occasioned when Black asked this question:</p>
<blockquote><p>   &#8220;How many of you have been empowered by the board to do something about AI, and where are the budgets coming from?&#8221;</p>
<p>&#8220;We&#8217;re being forced to use AI &#8211; it&#8217;s part of our performance review now that we do use it,&#8221; one leader said, after shaking themselves out of the shock that the question prompted. &#8220;New budgets? Ha!&#8221;</p>
<p>&#8220;There&#8217;s a fair amount of hysteria, and we need to show that we&#8217;re doing something,&#8221; another CISO said. &#8220;There&#8217;s less scrutiny of the material benefits.&#8221;</p></blockquote>
<p>This issue appeared to be widely shared, even if not everyone put it into the same form of words. Businesses&#8217; leaderships are so convinced of the benefits of AI adoption, and so concerned about the possibility that slow uptake might cede advantage to a competitor, that it&#8217;s not just the security of the tools that is being overlooked &#8211; so too is identifying the ways in which the company will actually derive those benefits from using genAI. Another gif sprang to mind: Jeff Goldblum in <em>Jurassic Park</em>, lamenting that things get done because they&#8217;re possible, not because anybody has really stopped to work out whether they&#8217;re a good idea or not.</p>
<blockquote><p> &#8220;Within about 18 months, we&#8217;d set up a defined programme where we did analysis [of AI adoption]: there was a set of criteria, very tightly managed, and the ROI was tracked,&#8221; another CISO recalled. Those days have gone, though. &#8220;Now it&#8217;s a free-for-all,&#8221; they continued. &#8220;Everyone&#8217;s doing whatever they want. It&#8217;s much more accessible, and people have been enabled to do their own business process engineering&#8230;&#8221;</p></blockquote>
<p>The thought tailed off; nobody in the room needed to be told what that could lead to.</p>
<blockquote><p> &#8220;But it worked well for about 18 months,&#8221; they said, looking desperately for an upside.</p>
<p>&#8220;For us it was about six,&#8221; another CISO with a similar set of experiences said. &#8220;If we&#8217;re going to use any business data in any AI system, we have to run it through a responsible AI committee, just to make sure they&#8217;re not doing something crazy. I&#8217;m not on that committee &#8211; no-one in my team is. But in IT, we have a performance requirement to use AI. I don&#8217;t know how that works. We work with a lot of sensitive information.&#8221;</p></blockquote>
<h4>By All Means Necessary</h4>
<p>And so to the final round of thoughts. What would security leaders like to see automated? And what are the key concerns and challenges more broadly? The overall tone was of grim gallows humour, some blurts of muted optimism occasionally rising to the surface.</p>
<blockquote><p> &#8220;I&#8217;m too miserable to even answer that question,&#8221; was the first response.</p>
<p>&#8220;Everything!&#8221; was the second.</p>
<p>&#8220;People,&#8221; was the third.</p></blockquote>
<p>There was some enthusiasm for setting automated systems loose on filling in due-diligence questionnaires, though even there, the most beneficial long-term solution would probably be to get sector organisations to come together and standardise what are at present entirely individual, but very repetitive, questionnaires and processes.</p>
<p>And then we were off into how to prepare for a return to a pre-industrial world.</p>
<blockquote><p> &#8220;We&#8217;re all feeling it,&#8221; Hathaway acknowledged in his closing remarks. &#8220;We&#8217;re being drawn to use AI more and more and more, because it can have benefits. But we&#8217;ve all got to keep one eye on the threats it can bring. What tonight has done is to cement that theory. There&#8217;s great things that can happen, but we&#8217;re not being able to address the concerns to take advantage. We&#8217;ve not been able to control AI properly to be able to use it properly.&#8221;</p>
<p>&#8220;If it helps to give a sense of perspective, this is the type of discussion that all of your peers are having,&#8221; Gajjar said. &#8220;You are not alone!&#8221;</p></blockquote>
<p>The post <a href="https://rantcommunity.com/resources/from-prompts-to-poetry-how-should-security-teams-respond-to-the-rising-ai-tide/">From Prompts To Poetry: How Should Security Teams Respond To The Rising AI Tide?</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>On The Long And Winding Road Towards Better Data Security, Is &#8220;Assume Breach&#8221; A Dead End?</title>
		<link>https://rantcommunity.com/resources/on-the-long-and-winding-road-towards-better-data-security-is-assume-breach-a-dead-end/</link>
		
		<dc:creator><![CDATA[Galena]]></dc:creator>
		<pubDate>Thu, 29 Jan 2026 14:12:42 +0000</pubDate>
				<category><![CDATA[Resources]]></category>
		<guid isPermaLink="false">https://rantcommunity.com/?p=2702</guid>

					<description><![CDATA[<p>Conversations between practitioners about cybersecurity challenges, tools and strategies have a tendency toward unpredictability. True, there are certain themes, ideas</p>
<p>The post <a href="https://rantcommunity.com/resources/on-the-long-and-winding-road-towards-better-data-security-is-assume-breach-a-dead-end/">On The Long And Winding Road Towards Better Data Security, Is &#8220;Assume Breach&#8221; A Dead End?</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Conversations between practitioners about cybersecurity challenges, tools and strategies have a tendency toward unpredictability. True, there are certain themes, ideas or topics that will inevitably come up &#8211; items on the bingo card that the more cynical participants in these conversations can tick off with a practicedly weary sigh &#8211; but it is by no means a given that the talk will travel along pre-determined pathways, or even remain between some of the vague or aspirational guardrails that may have been set in place beforehand.</p>
<p>A RANT roundtable held just before the Christmas break proved to be one such. Sponsored by investigation and response automation specialists Binalyze, and with attendees briefed to expect a chat about whether the long-established &#8220;assume breach&#8221; concept was still worth the candle, the discussion began in one detailed corner of the topic and progressed outward via some apparently random leaps, as the underlying concepts continued to move and flow forward regardless &#8211; a bit like a frog jumping from lily pad to lily pad on the surface of a slowly moving stream.</p>
<p>Partly, perhaps, timing was a factor: nearing the end of another difficult year, with all the elements of looking back and taking stock that inevitably colour our thinking, a December dinner may encourage a more musing atmosphere than might be the case at other points on the calendar. Partly, too, the overarching topic contains many complicatedly interconnected moving parts, each worthy of detailed discussion, so some dotting around from point to point was perhaps inevitable.</p>
<p>But what felt particularly critical &#8211; at least to the hack with the laptop in the corner, tasked with taking notes and later trying to condense them into this report &#8211; was the huge degree to which the specific contexts of the individual business affect the highest-level strategic considerations. Clearly, every organisation is unique: but even when challenges are widely shared, and there is broad agreement that a particular approach will benefit more or less everyone, the most urgent and important aspects this throws up for an individual security leader can often be entirely different, depending on the specifics of their business.</p>
<p>So, after a wide-ranging and engaging set of opening remarks from Binalyze&#8217;s senior vice president of growth, Steve Jackson, what followed was not a series of different specific perspectives on a shared central problem, but a series of individual questions about specific problems experienced by different organisations. This presents certain difficulties for your summariser, since the discussion &#8211; as with all RANT events &#8211; was held under the Chatham House Rule, and even revealing the sector an organisation operates in could give certain readers enough information to identify them. But some common areas of concern did emerge.</p>
<h4>Think For Yourself</h4>
<p>One shared challenge appeared to be how best to conduct cyber due diligence during mergers and acquisitions. This came up after a couple of attendees had lamented how the organisational structures within their businesses limited their room for manoeuvre when deploying security controls pre-emptively, and when thinking about the respond-and-recover phase.</p>
<blockquote><p>   &#8220;For us, the challenge is that we tend to think about critical systems rather than critical processes,&#8221; one security leader said. &#8220;That level of maturity, for us, may be behind other businesses.&#8221;</p>
<p>&#8220;We operate at different data classifications, and that gives people an excuse not to tell you about something,&#8221; another CISO added. &#8220;It&#8217;s always quite siloed. It becomes quite hard to join the dots.&#8221;</p></blockquote>
<p>Although usually conducted prior to an acquisition or a merger, cyber due diligence may well help with problems such as these even in a business which isn&#8217;t in the middle of such a process. Jackson, who pointed out that Binalyze&#8217;s platform is being used by customers to assist with pre-acquisition forensic due diligence, was keen to hear how this work was being done in different organisations.</p>
<blockquote><p>   &#8220;It&#8217;s quite light touch, and varies depending on the sector: everyone does it slightly differently,&#8221; one security leader, whose experience was drawn from different businesses in different industries, explained. &#8220;My first step is, let&#8217;s have a standard framework, and get those costs and challenges [identified] up front. When you&#8217;re buying smaller companies, they want you to leave them alone, not corporatise them. [But then you may] have to buy bolt-on solutions,&#8221; which can happen without proper oversight, they warned.</p></blockquote>
<p>That&#8217;s assuming such a process is allowed, of course. Using forensic tooling during an acquisition could be problematic, another senior security leader argued. &#8220;There&#8217;s competitive disadvantage to sharing some information prior to an acquisition,&#8221; they said. &#8220;The idea of deploying forensic tools is alien.&#8221;</p>
<p>Jackson described how Binalyze can square some of these circles for customers. The firm supplies an agent &#8220;that sits on the network access,&#8221; he explained. &#8220;The company [can then] deploy that into the target environment, and once it&#8217;s there, we provide forensic visibility &#8211; basically run a compromise assessment.&#8221;</p>
<p>But even if it can be done for or by one firm, others around the table said, that doesn&#8217;t mean it will be possible for everyone.</p>
<blockquote><p>   &#8220;That use case will not work in our enterprise,&#8221; one security leader said. &#8220;There is a level of competition law that would prevent us from doing that.&#8221;</p></blockquote>
<p>Jackson also added that another way in which Binalyze is being used in due-diligence projects is in cyber insurance, where the due diligence is performed prior to the policy being issued. Still, other CISOs suggested that if they were to attempt to do this on the systems of a company their firm was looking to acquire, it would not go down well. This part of the discussion was drawn to a close by one participant, who noted that &#8220;nobody does M&amp;A because of security: they do it because of the business.&#8221;</p>
<h4>Fixing A Hole</h4>
<p>The conversation headed down a different but adjacent avenue when RANT&#8217;s co-host for the evening, Matt Summers, divisional CISO for Philip Morris International, asked about regulatory requirements. Acknowledging that there may be reasons why detailed access to systems ahead of an acquisition may be problematic, he suggested that in certain contexts, such access may well be mandatory. Indeed, access requirements may be considerably more detailed than simply getting a decent picture of the network, with red-teaming &#8211; simulated attacks, designed to test defences with realistic threats &#8211; sometimes being mandatory.</p>
<blockquote><p>   &#8220;From a regulatory perspective, [some processes] require red teams,&#8221; he noted. &#8220;In finance, for example, the threat intelligence phase and the red-team phase, they&#8217;re required to do it.&#8221; This sparked some spirited exchanges around physical security maturity, though these were inconclusive.</p></blockquote>
<p>Various participants noted the slow pace of gathering data during due diligence, red-team campaigns and through security tools, never mind understanding it. Summers stressed the importance of providing visibility of areas that SOC analysts would not normally have oversight of. What those teams can do with that information emerged as a problem for many around the table, but, Binalyze argues, their tool can not only help to triage this information and make sense of the flow, but also enable analysts, who presently may be confined to lower-level tasks, to carry out work usually devolved to more senior team members. In effect, the company says, this will help both to advance the maturity of the organisation, and ensure that junior staff are given compelling enough work, and opportunities to advance, which ought to help the business with the ongoing challenge of staff retention.</p>
<blockquote><p>   &#8220;We give [SOC teams] access to more guided forensic data,&#8221; Jackson explained. &#8220;We don&#8217;t just give them data without context &#8211; we prioritise it. If you give that to your Level One and Level Two analysts, it shifts them up and to the right a bit. The outcomes are generally good. Of course, there&#8217;s some initial training we have to provide, but it&#8217;s not particularly extensive.&#8221;</p></blockquote>
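<p>As a rough illustration of the kind of triage being described here &#8211; not Binalyze&#8217;s actual method, whose internals were not discussed &#8211; a minimal sketch might score each forensic finding by verdict and asset criticality, so a junior analyst works the queue from the most consequential item down. All field names and weights below are illustrative assumptions.</p>

```python
# Hypothetical sketch: rank forensic findings so L1/L2 analysts see the
# highest-impact items first. Verdict labels, weights and field names are
# invented for illustration, not taken from any real product.
SEVERITY = {"info": 1, "suspicious": 3, "malicious": 5}

def prioritise(findings):
    """Return findings sorted so the highest-risk items come first."""
    return sorted(
        findings,
        key=lambda f: SEVERITY[f["verdict"]] * f["asset_criticality"],
        reverse=True,
    )

queue = prioritise([
    {"host": "ws-042", "verdict": "suspicious", "asset_criticality": 2},
    {"host": "dc-01",  "verdict": "malicious",  "asset_criticality": 5},
    {"host": "ws-099", "verdict": "info",       "asset_criticality": 1},
])
print(queue[0]["host"])  # the domain controller tops the queue
```

<p>The point is not the arithmetic but the ordering: attaching context to raw data decides what a Level One analyst sees first, which is the &#8220;up and to the right&#8221; shift Jackson describes.</p>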
<h4>You Never Give Me Your Money</h4>
<p>The clash between regulatory requirements and meaningful business outcomes was a persistent undercurrent through much of the conversation. In response to a question from Summers about whether any attendees took data from red-team activity as a basis for testing their respond-and-recover controls, one CISO summed up their view pithily.</p>
<blockquote><p>   &#8220;Knowing I can rebuild a server is great &#8211; but if you don&#8217;t tie it back to business outcomes, it&#8217;s crap,&#8221; they said. &#8220;You need to understand the value chain that&#8217;s supported. It&#8217;s a waste of time, effort and money, unless it&#8217;s to comply with a regulation.&#8221;</p></blockquote>
<p>The vital importance of having a detailed and nuanced understanding of the specifics of what is important to each individual business &#8211; and the likelihood that, in most cases, this understanding is lacking &#8211; was raised in this context by another security leader.</p>
<blockquote><p>   &#8220;If you ask a business, &#8216;What are the Top Ten systems that&#8217;ll bring you to your knees?&#8217;, they wouldn&#8217;t be able to tell you,&#8221; they argued. &#8220;How can you protect the business if you don&#8217;t know what it relies on?&#8221;</p></blockquote>
<p>Business priorities and detailed understanding of security may not be one and the same thing, another CISO warned.</p>
<blockquote><p>   &#8220;Top of mind for our board is: &#8216;What happened to Jaguar Land Rover &#8211; could it happen to us?&#8217;,&#8221; they acknowledged.</p>
<p>&#8220;There&#8217;s a good point about disparity between investments across the different business processes,&#8221; Summers agreed. &#8220;Do you think we&#8217;re not investing enough in understanding the business context around controls, and perhaps over-investing in the wrong respond-and-recover tools?&#8221;</p></blockquote>
<p>This question prompted a response which seemed to unite and crystallise a number of the different strands of the discussion.</p>
<blockquote><p>   &#8220;I think, generally, the cyber world is disproportionately funded,&#8221; one CISO replied. &#8220;Most of our loss events are due to bad luck &#8211; people pressing the wrong button and doing the wrong thing. We lose far more from people screwing up than from people trying to screw us. There are tools that tell me there&#8217;s stuff that&#8217;s misconfigured, and there&#8217;s lots to patch &#8211; but I&#8217;ve been hearing that for 20 years.&#8221;</p>
<p>&#8220;Why don&#8217;t you fix them, then?&#8221; asked another leader, pointedly, to laughter around the room.</p>
<p>&#8220;Because the likely loss will be less than the cost to do it,&#8221; the CISO replied. &#8220;It&#8217;s literally not worth it.&#8221;</p></blockquote>
<h4>Tomorrow Never Knows</h4>
<p>With the hour of discussion nearing its end, Summers brought the conversation back to the original objective. It has been some years since cybersecurity moved, conceptually, from being about building robust defensive walls to keep threats out, and on to assuming that threat actors would gain access and therefore prioritising limiting their ability to extract anything of value. But since adopting that position, has anything meaningfully changed? Threats still get through; ransomware and other attacks still achieve their aims; businesses are still investing heavily in security tools, yet suffer breaches, losses and service interruptions. Is it time to let this shibboleth slide away into the background?</p>
<blockquote><p>   &#8220;I don&#8217;t assume my car&#8217;s stolen or my house has been burgled,&#8221; he noted. &#8220;Is saying &#8216;assume compromise&#8217; just something we say to make ourselves look smart?&#8221;</p>
<p>&#8220;I&#8217;m going to take this back to regulatory compliance,&#8221; one security leader replied. &#8220;Times for mandatory reporting are reducing, yet most of our suppliers are not finding out [that they have had an incident] until very late. How can you prove you weren&#8217;t aware until six hours beforehand?&#8221;</p>
<p>&#8220;Boards are paranoid about this,&#8221; another leader agreed. &#8220;How can you prove you didn&#8217;t know? And will this make you liable for a fine?&#8221;</p>
<p>&#8220;Who&#8217;s going to ask you to prove that?&#8221; another CISO pushed back. &#8220;You can only report something when you know it.&#8221;</p>
<p>&#8220;Exactly,&#8221; another leader replied. &#8220;But how do you prove that you didn&#8217;t know?&#8221;</p>
<p>&#8220;The kind of tooling we&#8217;re talking about can help,&#8221; Summers said. &#8220;Because it can give forensic data, it has chain of custody, and you can produce the report faster. And if you can get to the root cause, [stopping the incident from spreading] becomes easier. You can remediate it once you know what the root cause is.&#8221;</p></blockquote>
<p>Summing up, Jackson agreed that &#8220;the assume-breach mantra is a little tired these days,&#8221; and proposed a refinement of it as perhaps being more appropriate to today&#8217;s realities.</p>
<blockquote><p>   &#8220;We should assume we&#8217;re under scrutiny, and that there&#8217;s risk there,&#8221; he said. &#8220;But it&#8217;s clear from what we&#8217;ve heard that there&#8217;s a lot of complexity.&#8221;</p></blockquote>
<p>Binalyze is a cybersecurity company delivering AIR &#8211; Automated Investigation and Response. Binalyze builds on the threat intelligence and alerts from your security stack &#8211; using AIR to dive deeper into systems, uncover root cause, and deliver the visibility and forensically sound context your tools alone can’t provide. This empowers security teams to investigate both proactive and reactive threats faster, respond with certainty, and stay ahead of attackers with precision. <a href="https://www.binalyze.com/" target="_blank" rel="noopener">Find out more here!</a></p>
<p>The post <a href="https://rantcommunity.com/resources/on-the-long-and-winding-road-towards-better-data-security-is-assume-breach-a-dead-end/">On The Long And Winding Road Towards Better Data Security, Is &#8220;Assume Breach&#8221; A Dead End?</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Fear And Loathing In The Active Directory: Can Mapping Attack Paths Point You In The Right Direction?</title>
		<link>https://rantcommunity.com/resources/fear-and-loathing-in-the-active-directory-can-mapping-attack-paths-point-you-in-the-right-direction/</link>
		
		<dc:creator><![CDATA[Galena]]></dc:creator>
		<pubDate>Tue, 27 Jan 2026 10:44:27 +0000</pubDate>
				<category><![CDATA[Resources]]></category>
		<guid isPermaLink="false">https://rantcommunity.com/?p=2671</guid>

					<description><![CDATA[<p>We were somewhere around 10 minutes into a RANT roundtable in an exclusive dining establishment high above the City of</p>
<p>The post <a href="https://rantcommunity.com/resources/fear-and-loathing-in-the-active-directory-can-mapping-attack-paths-point-you-in-the-right-direction/">Fear And Loathing In The Active Directory: Can Mapping Attack Paths Point You In The Right Direction?</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>We were somewhere around 10 minutes into a RANT roundtable in an exclusive dining establishment high above the City of London when the unavoidable cybersecurity conversation topic du jour began to take hold. Even though the event had been convened by the vendor SpecterOps, whose BloodHound platform is designed to enable security teams to visualise and understand the pathways adversaries might be able to use to compromise the active directory (AD) and move around and across networks, it was inevitable that talk would turn, at some point, to so-called &#8220;AI&#8221;. The only surprise was, perhaps, that it didn&#8217;t come up sooner.</p>
<p>&#8220;Can we not use heuristic tools like AI to predict future attack paths?&#8221; one CISO wondered, a wry grin perhaps visible as they asked a question which, if not straying into territory we could quite categorise as trolling, was certainly designed to cajole and provoke some forthright responses. For all that large language models and generative AI applications are causing conniptions across the security-leadership landscape, business leaders seem to remain gung-ho on deploying them inside businesses &#8211; so folks like this senior network defender are understandably keen to come up with ways to use them to help, rather than hinder, the security mission.</p>
<p>Unfortunately, the replies were not particularly encouraging.</p>
<blockquote><p>&#8220;You&#8217;d need to go and proactively identify [the attack paths], which was what a threat actor would do: they&#8217;d find out how to escalate their privilege,&#8221; said Stephen Tate, global head of incident response at the London Stock Exchange Group, who guest-hosted the event on RANT&#8217;s behalf. &#8220;There is a case, I guess, to use AI, but personally I don&#8217;t think you can use AI to map out what the attack path is if the attack hasn&#8217;t already happened. You can use it to go through the logs and understand what&#8217;s gone on already &#8211; but I don&#8217;t think you can use it for the proactive part.&#8221;</p>
<p>&#8220;You can possibly use it to predict the next points in an attack path,&#8221; another CISO suggested, arguing that the technology as it stands at present might prove capable of projecting possible pathways a single step forward. &#8220;But the problem is, you&#8217;ll end up with too many permutations. It can be useful for a lot of things, but it&#8217;s not a silver bullet.&#8221;</p>
<p>&#8220;Nowhere near,&#8221; another leader agreed. &#8220;It can speed up trend analysis, but if you don&#8217;t know how it&#8217;s come up with that analysis it can send you off into a whole different forest.&#8221;</p>
<p>&#8220;You can ask it the same question twice and it gives you a different answer each time,&#8221; someone else moaned.</p>
<p>&#8220;And if you challenge it,&#8221; another leader agreed, &#8220;it goes, &#8216;Oh, yeah! You&#8217;re right!&#8217;.&#8221;</p>
<p>&#8220;We&#8217;re keen for our analysts to use it,&#8221; Tate said. &#8220;Logs are very complex when you&#8217;re doing analysis, so it&#8217;s useful to say, &#8216;Describe what this event means.&#8217; The bit it&#8217;s not good at yet, in my opinion, is being able to tell me why I should care, and why a threat actor might want to do something.&#8221;</p></blockquote>
<h4>Kingdom Of Fear</h4>
<p>While discussion of generative AI in this context might appear gratuitous or tangential, there is clearly some need for assistance when security teams are using a tool that does as much as SpecterOps&#8217; BloodHound to reveal hitherto unknown routes attackers could use to compromise businesses&#8217; most sensitive and closely held data. The company&#8217;s director for Europe, the Middle East and Africa, Tony Sheldrake, said that, when the tool is first deployed in a company&#8217;s systems, the effect on those inside the business is &#8220;shock and awe. We sometimes find millions of attack pathways.&#8221;</p>
<p>Another reason why bringing AI into the conversation makes sense is because that&#8217;s the direction some of the big players in the network-defence space have gone. In its E5 licences, for instance, Microsoft seeks to leverage its Copilot LLM for AD security. This is the sort of work which BloodHound was specifically developed to help with, and helping to automate security of Entra ID &#8211; formerly known as Azure AD &#8211; is among the benefits touted of upgrading to the more expensive licences.</p>
<blockquote><p>&#8220;AD has grown and grown,&#8221; said one CISO, who described themselves as &#8220;a big fan&#8221; of BloodHound, having used it in several previous roles. &#8220;Part of the problem is, even if you have huge PowerShell capabilities, with the existing tools it&#8217;s impossible to see what you&#8217;re exposed to. Even with the E5 licence it&#8217;s not there. Security Copilot doesn&#8217;t do this.&#8221;</p>
<p>&#8220;I agree,&#8221; another CISO said. &#8220;We&#8217;re a big Microsoft house, and every time I&#8217;m looking at [Entra ID security] it&#8217;s all within this bubble. It can only see what Microsoft can see. It&#8217;s really valuable to get something external looking at it. I&#8217;m always having this challenge with my board! They say Microsoft can do this, but they can&#8217;t.&#8221;</p></blockquote>
<p>&#8220;Interestingly,&#8221; said Sheldrake&#8217;s technical colleague, Colin Makin, &#8220;Microsoft is a customer.&#8221; So, he noted, are OpenAI, and the data-analysis platform Palantir. Even companies as deep as those firms are in managing data, understanding connections, and securing businesses and governments, are turning to SpecterOps to help them with the parts of their mission they can&#8217;t carry out themselves.</p>
<h4>The Proud Highway</h4>
<p>SpecterOps are open about the organisational challenges that deploying their technology may pose. And, at least for security leaders, the advantages in being made aware of the myriad ways an attacker could access sensitive data far outweigh the problems that appear to be raised when the business is suddenly confronted with the knowledge that there are many more of them than were previously thought possible. But the journey from denial to acceptance is one that the business will have to map for itself.</p>
<blockquote><p>&#8220;A lot of what I do is running proof-of-values, and every time I run one it comes as an absolute shock for anyone who&#8217;s had AD for any length of time,&#8221; said Makin. &#8220;I just ran one recently, in an organisation of around 8,000 people, and we surfaced 25 million attack paths.&#8221;</p>
<p>&#8220;The problem I have with that,&#8221; one CISO said, &#8220;is that I end up with 25 million things to fix.&#8221;</p>
<p>&#8220;We totally recognise that, and we flip it on its head,&#8221; Makin said. &#8220;You&#8217;re never going to fix 25 million paths. That&#8217;s why the concept of cyber hygiene is really flawed. I&#8217;m not saying you shouldn&#8217;t do it, but if you do, you don&#8217;t have a measurable reduction of risk. Our whole enterprise solution works on an attack graph. We work out which is the final path that takes an attacker to your critical assets.&#8221;</p>
<p>&#8220;I often say, it&#8217;s like going from Manhattan to Brooklyn,&#8221; suggested Mark Wilson, SpecterOps&#8217; sales engineer. &#8220;There&#8217;s millions of routes to go from one to the other. But if you take away the bridges, most of them are pretty irrelevant.</p>
<p>&#8220;We focus on chokepoints,&#8221; he added. &#8220;After you surface loads of things and realise you&#8217;ve got the biggest to-do list in the world, we have a playbook &#8211; a remediation plan &#8211; which helps you to close those routes down.&#8221;</p></blockquote>
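<p>The bridges analogy can be sketched in a few lines of plain Python: enumerate the simple paths through a toy attack graph, count how often each edge appears, and the busiest edges are the chokepoints whose removal kills the most paths at once. The graph below is invented for illustration &#8211; it is not BloodHound output or its actual algorithm.</p>

```python
from collections import defaultdict

# A toy attack graph: nodes are identities/hosts, edges mean "can move to".
# All names are illustrative, not drawn from any real environment.
edges = {
    "workstation": ["helpdesk_user", "svc_backup"],
    "helpdesk_user": ["server_admin"],
    "svc_backup": ["server_admin", "file_server"],
    "server_admin": ["domain_admin"],
    "file_server": ["domain_admin"],
}

def all_paths(graph, start, goal, path=None):
    """Enumerate every simple (cycle-free) path from start to goal."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # avoid revisiting a node
            paths.extend(all_paths(graph, nxt, goal, path))
    return paths

def chokepoints(graph, start, goal):
    """Count how many attack paths traverse each edge; the highest-count
    edges are the 'bridges' whose removal blocks the most paths."""
    counts = defaultdict(int)
    for p in all_paths(graph, start, goal):
        for a, b in zip(p, p[1:]):
            counts[(a, b)] += 1
    return sorted(counts.items(), key=lambda kv: -kv[1])

ranked = chokepoints(edges, "workstation", "domain_admin")
print(ranked[0])  # an edge shared by multiple paths to domain admin
```

<p>At scale the path enumeration would need a proper graph engine rather than recursion, but the shape of the remediation logic is the same: fix the few edges most paths share, not all 25 million paths.</p>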
<p>Sometimes, though, the attack paths revealed by BloodHound aren&#8217;t obscure routes an attacker could use to compromise the business and which should therefore be blocked. Sometimes they&#8217;re part of why the business works well.</p>
<blockquote><p>&#8220;It [BloodHound] can surface so many things it can be overwhelming,&#8221; Sheldrake acknowledged, &#8220;but it also surfaces things that are necessary. If certain paths didn&#8217;t exist, then the business wouldn&#8217;t function.&#8221;</p></blockquote>
<p>The fact that there may be many pathways that have to be kept clear could, therefore, simply be a function of the nature, scale and requirements of a successful enterprise. While some of the leaders around the table wondered whether this implied that the solution might lie less in tooling or technology than in recruiting, training and retaining sufficient specialist staff, Sheldrake argued that both are necessary.</p>
<blockquote><p>&#8220;It&#8217;s twofold,&#8221; he said. &#8220;Having people who can interpret [BloodHound&#8217;s output] and who can understand what they can improve, [is important]. But also, for the SOC and for incident responders, it&#8217;s about having an awareness of those attack paths &#8211; so that, in an incident scenario, a defender can say: &#8216;OK, there&#8217;s an alert on this asset &#8211; what&#8217;s possible for an attacker to do from there, and what should we be concerned about?&#8217; There are things [the business] should be able to concentrate on, in terms of strategy.&#8221;</p></blockquote>
<h4>The Great Shark Hunt</h4>
<p>But to use an attack-path analysis effectively, there&#8217;s one other important piece that has to be in place. The business needs to be clear on its priorities, and &#8211; while this might seem like a separate issue, they tend to be linked &#8211; it needs to have clear, cogent and useful policies in place to support not just the security teams, and the rank-and-file users, but the decision-makers, too.</p>
<blockquote><p>&#8220;You can keep your existing policies and process, and use BloodHound to check&#8221; whether everything is working well, one leader suggested. &#8220;Part of the problem,&#8221; they added, &#8220;is that policies often don&#8217;t let people see what&#8217;s going on. If you&#8217;re a domain admin you can&#8217;t find [the necessary information] so you can&#8217;t ask the question. And the people approving the process have no idea what it means.&#8221;</p>
<p>&#8220;There are so many players, and they can break policies without you knowing,&#8221; another said.</p>
<p>&#8220;In my organisation,&#8221; a third offered, &#8220;we&#8217;ve just done the vulnerability-management stuff, using discovery tools. We have two tools doing that, and they&#8217;re telling us different things. And then there&#8217;s the separate thing about attack paths, from an identity perspective.&#8221;</p>
<p>&#8220;You need someone to make a clear decision about what&#8217;s a priority,&#8221; one CISO said. &#8220;Too often I don&#8217;t see that. They&#8217;ll say, &#8216;They&#8217;re all a priority&#8217;. That&#8217;s not a decision. Well, it is, but it&#8217;s a crap decision.&#8221;</p></blockquote>
<p>After another digression into genAI territory, a conclusion of sorts was reached. Not that it was anybody&#8217;s idea of a perfect solution.</p>
<blockquote><p>&#8220;Everybody has to go back to basics,&#8221; was the way one CISO summed it up.</p>
<p>&#8220;I don&#8217;t think anyone&#8217;s saying BloodHound is going to fix all your problems,&#8221; Tate said. &#8220;But, by using it in tandem with all your other controls, you can reduce your attack surface and make it harder for the adversaries. And that gives your SOC team and defenders more time. There&#8217;s a really important point around needing the right people and the right skills, so you can apply context, interpret and contextualise effectively. Without being able to do that, you can see all your problems, but you can&#8217;t solve them. I have a bit of a love-hate relationship with people who say this, but it&#8217;s ultimately about making your defences harder to get through than your neighbours&#8217;.&#8221;</p></blockquote>
<p><a href="https://specterops.io/" target="_blank" rel="noopener">Find out more about SpecterOps and what sets them apart!</a></p>
<p>The post <a href="https://rantcommunity.com/resources/fear-and-loathing-in-the-active-directory-can-mapping-attack-paths-point-you-in-the-right-direction/">Fear And Loathing In The Active Directory: Can Mapping Attack Paths Point You In The Right Direction?</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>From frameworks to fundamentals: rethinking risk in 2026</title>
		<link>https://rantcommunity.com/resources/from-frameworks-to-fundamentals-rethinking-risk-in-2026/</link>
		
		<dc:creator><![CDATA[Galena]]></dc:creator>
		<pubDate>Mon, 05 Jan 2026 13:41:46 +0000</pubDate>
				<category><![CDATA[Resources]]></category>
		<guid isPermaLink="false">https://rantcommunity.com/?p=2622</guid>

					<description><![CDATA[<p>Regulatory frameworks like NIS2, DORA and GDPR have raised the stakes for compliance. Yet treating compliance as the end goal</p>
<p>The post <a href="https://rantcommunity.com/resources/from-frameworks-to-fundamentals-rethinking-risk-in-2026/">From frameworks to fundamentals: rethinking risk in 2026</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Regulatory frameworks like <a href="https://www.diligent.com/resources/guides/prepare-for-nis2">NIS2,</a> <a href="https://www.diligent.com/en-gb/resources/blog/digital-operational-resilience-act-dora">DORA</a> and GDPR have <a href="https://www.diligent.com/resources/blog/reporting">raised the stakes for compliance</a>. Yet treating compliance as the end goal can leave organisations exposed to risks that threaten operations and long-term resilience. At a recent RANT roundtable, security leaders debated whether compliance is overshadowing risk management, and how organisations can strike a better balance that supports business objectives.</p>
<h3>Compliance is a baseline, not the destination</h3>
<p>Compliance provides assurance. It <a href="https://www.diligent.com/resources/guides/enterprise-risk-management-framework">does not guarantee resilience.</a> Too often, organisations focus on passing audits rather than addressing real threats that could disrupt operations.</p>
<p>As one participant put it, <a href="https://www.diligent.com/resources/blog/enterprise-risk-compliance-optimize-relationship"><em>“Compliance is second line assurance.”</em></a> Another noted that audits can become a tick box exercise when external teams do not fully understand how the business operates. This exposes a broader tension.<strong> Are risk decisions being made in line with business objectives, or are frameworks driving the agenda on their own?</strong></p>
<p>The consensus was clear. Compliance sets minimum standards. <a href="https://www.diligent.com/resources/blog/strategies-corporate-risk-management">Effective risk management protects continuity, supports growth</a> and aligns decisions with what matters most to the organisation.</p>
<h3>The reality of tooling: clarity before complexity</h3>
<p>Many teams still rely on spreadsheets to track risks and evidence compliance. For small environments this can be workable. Scale introduces fragility.</p>
<p><a href="https://www.diligent.com/resources/videos/spreadsheets-to-smarter-risk-management"><em>“All it takes is for someone to delete the Excel,”</em></a> one attendee observed. Others highlighted a disconnect between vendor promises and practical needs. <em>“The dream is sold as one tool,”</em> said a participant, while another added, <em>“Policies can map and do assessments and have action plans, yet we still end up in Excel.”</em></p>
<p>GRC platforms can help prioritise risks, document activity and maintain a reliable source of truth. But success depends on clarity of purpose. A single tool rarely solves every challenge. The roundtable highlighted how overambitious rollouts often fail because expectations do not match reality. While some teams expect full maturity within 12 to 16 weeks, experienced practitioners cautioned that meaningful implementation often takes two to three years.</p>
<p>The advice was consistent. <a href="https://www.diligent.com/resources/videos/spreadsheets-to-smarter-risk-management">Start small.</a> Define what you want the tool to deliver, whether that is visibility, accountability or stronger alignment with business objectives. Build momentum through measurable, incremental wins.</p>
<h3>Sector context shapes priorities</h3>
<p>Risk and compliance priorities differ by sector. Some organisations emphasise continuity and operational resilience. Others focus on enabling faster delivery and supporting rapid scaling. The roundtable chair noted that a CISO’s priorities often reflect the organisation’s tolerance for disruption and the pace of its growth strategy.</p>
<p>Participants also highlighted challenges working with auditors who lack full context on the organisation’s business model. When audit expectations diverge from operational realities, security teams can be pulled away from addressing high impact risks. Bridging that gap requires better internal alignment and a shared understanding of what the organisation values most.</p>
<h3>Leadership and language matter</h3>
<p>Technology alone cannot close the gap between compliance and effective risk management. <a href="https://www.diligent.com/resources/guides/cyber-leadership-playbook">Engagement from senior leadership is essential.</a></p>
<p><em>“The business has got to want to be engaged,”</em> one attendee said. Another noted that leaders do not want to be told, <em>“You are doing it wrong.”</em> They want clarity on trade-offs, not roadblocks.</p>
<p>The discussion also surfaced a language problem. Different teams often use different terminology to describe issues, risks and controls. Without a shared vocabulary, assessments do not translate into clear decisions.</p>
<p>When risk is framed in business terms, engagement improves. Leaders want to understand the commercial impact of inaction. They respond to clear evidence of which risks could halt operations, delay high priority initiatives or damage customer trust, and what pragmatic steps will strengthen resilience without slowing delivery.</p>
<h3>A practical playbook for progress</h3>
<p>The roundtable surfaced a set of practical steps that any organisation can apply, regardless of size or sector.</p>
<ol>
<li><strong>Start with one priority area</strong><br />
Select a process or unit with clear impact on the business. Map risks and controls, establish a simple reporting rhythm and build from there.</li>
<li><strong>Define outcomes before choosing tools</strong><br />
Decide what success looks like: visibility of risks, better alignment to objectives, faster evidence collection or clearer accountability. Choose tools that serve those outcomes.</li>
<li><strong>Standardise language</strong><br />
Create shared definitions for issues, risks and controls. Align scoring so that assessments convert into clear decisions.</li>
<li><strong>Set realistic timelines</strong><br />
Expect incremental progress. Use phased implementation rather than a big bang approach. Review adoption and impact quarterly.</li>
<li><strong>Prioritise risks that move the business</strong><br />
Focus on exposures that could disrupt operations, delay initiatives or erode trust. Avoid trying to address everything at once.</li>
<li><strong>Build leadership engagement early</strong><br />
Frame risk in terms of commercial impact. Present scenarios, trade-offs and measurable improvements to secure ongoing support.</li>
<li><strong>Measure and share results</strong><br />
Track time saved, reductions in repeat findings, improvements in closure rates and changes in exposure levels. Sharing progress reinforces momentum.</li>
</ol>
<h3>What good looks like</h3>
<p>Participants who reported success described a disciplined focus on outcomes. They resisted making platforms do everything. They agreed success measures upfront. They concentrated on creating a reliable, shared source of truth for risks, controls and evidence. They used data to prioritise action and demonstrate improvement over time.</p>
<p>One attendee summarised the reality well: <em>“Risk is obvious if you have not done the basics.”</em> The message is not to chase complexity. It is to get the fundamentals right, show progress and keep risk aligned with what the business needs most.</p>
<h3>The bottom line</h3>
<p>Compliance frameworks will continue to evolve. Resilience depends on understanding and managing the risks that matter most. Integrating compliance into a broader risk strategy allows organisations to protect operations, maintain trust and move forward with confidence.</p>
<p>The discussion made one thing clear. Compliance is essential, but it is not the strategy. Focus on fundamentals, build incrementally and keep risk aligned with business objectives. That is where resilience starts.</p>
<p>Ready to transform cyber risk oversight with integrated GRC capabilities? <a href="https://www.diligent.com/request-a-demo">Schedule a demo</a> to see how Diligent&#8217;s platform delivers comprehensive cyber risk intelligence to boards.</p>
<p>The post <a href="https://rantcommunity.com/resources/from-frameworks-to-fundamentals-rethinking-risk-in-2026/">From frameworks to fundamentals: rethinking risk in 2026</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Right Here, Right Now: Is Cybersecurity Focusing On The Most Important AI Threats?</title>
		<link>https://rantcommunity.com/resources/right-here-right-now-is-cybersecurity-focusing-on-the-most-important-ai-threats/</link>
		
		<dc:creator><![CDATA[Galena]]></dc:creator>
		<pubDate>Mon, 08 Dec 2025 12:28:29 +0000</pubDate>
				<category><![CDATA[Resources]]></category>
		<guid isPermaLink="false">https://rantcommunity.com/?p=2593</guid>

					<description><![CDATA[<p>A RANT Forum is always a free-form and unpredictable kind of security conversation, and so it proved when the great</p>
<p>The post <a href="https://rantcommunity.com/resources/right-here-right-now-is-cybersecurity-focusing-on-the-most-important-ai-threats/">Right Here, Right Now: Is Cybersecurity Focusing On The Most Important AI Threats?</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>A RANT Forum is always a free-form and unpredictable kind of security conversation, and so it proved when the great and the good of the London-based cyber community gathered to discuss whether or not the right risks are being considered as businesses and their SOCs assess the mushrooming implementation of generative AI tools. A short answer? Probably. But, as so often, the devil is in the detail.</p>
<blockquote><p>&#8220;I&#8217;d never say any risk is wrong to focus on, it all depends on the organisation,&#8221; said Sapna Patel, head of cyber at the King&#8217;s Trust, and one of RANT&#8217;s two invited panellists for the event. &#8220;In my organisation, I&#8217;m not as worried about shadow AI as I was a year ago, but I&#8217;m worried about embedded AI that pops up without us knowing about it.&#8221; Security leaders need to balance competing demands, she argued &#8211; from those in the workplace who want to ramp up use of these tools, and those who remain at best sceptical, at worst actively distrustful of the technology.</p>
<p>&#8220;There&#8217;s three broad areas of risk associated with the use of AI,&#8221; said Joe Mulhall, Technical Information Security Officer with the Financial Conduct Authority. &#8220;Data leakage is a huge one &#8211; and while it was certainly bigger last year it&#8217;s not gone away yet. Then there&#8217;s the security of the large language models built within our organisations, which could introduce risks of model theft and data poisoning as well as data leakage. And the other area of risk we&#8217;re all looking at is productivity risk. How do we leverage this to keep on top of all the other risks?&#8221;</p>
<p>&#8220;I don&#8217;t think we&#8217;re focusing on the wrong AI risks, because there&#8217;s so many of them,&#8221; said Ben van Enckevort, chief technology officer and co-founder of Metomic, the night&#8217;s sponsor. &#8220;There are today risks and tomorrow risks &#8211; and today there are three ways I&#8217;ve seen individuals adopt AI. First, people are going to integrate something like ChatGPT, or employees are using it &#8211; and then we get worried about employees uploading sensitive data. That was a year ago: now they&#8217;re saying, &#8216;OK, we&#8217;re not really that worried about shadow AI &#8211; we&#8217;re going to plug our data into various systems because we want our employees to do the best job.&#8217; This is where the biggest risk is, because they may introduce risk accidentally.&#8221;</p></blockquote>
<p>More troubling, though, van Enckevort argued, are the challenges that become amplified when automation is applied to the picture, and AI agents are authorised by companies to take actions without human intervention.</p>
<blockquote><p>&#8220;People want to automate things; companies want automatic employees,&#8221; he added. &#8220;At that point, the visibility on what&#8217;s sensitive data, where it is, and what people are doing with it, is the biggest risk. And going on to the tomorrow risks: tomorrow, it&#8217;s about when agents are acting on instructions that may not come from your employees.&#8221;</p></blockquote>
<h4>Build It Up, Tear It Down</h4>
<p>Immediately, as so often at a RANT Forum, the first speaker from the floor took issue with the parameters of the discussion as just outlined by the speakers.</p>
<blockquote><p>&#8220;Never mind today and tomorrow risks &#8211; let&#8217;s talk about 10-year risks,&#8221; they challenged. &#8220;The rise of agentic AI means we get rid of all the entry-level jobs we use to train our juniors, who, over time, become our seniors.&#8221; The eventual hollowing out of the experience base of the enterprise, they suggested, was inevitable. &#8220;It&#8217;s all utterly predictable,&#8221; they said.</p></blockquote>
<p>Not so fast, the panel cautioned.</p>
<blockquote><p>&#8220;Automation is kind of what we in the cyber security industry do &#8211; or what we protect,&#8221; Mulhall replied. &#8220;I don&#8217;t see less people in IT, I see more. We&#8217;ll be doing different jobs. Do people need to come in and do the jobs we already do? I&#8217;m not sure.&#8221;</p>
<p>&#8220;I share the concern, and I agree, though I don&#8217;t know the answer,&#8221; van Enckevort said. &#8220;Looking at patterns over time, the jobs we have to do will completely change. Things juniors are being trained to do now will be right at the edge of what we once needed them to do. The only thing I&#8217;m confident about is that in 10 years the jobs will be different.&#8221;</p>
<p>&#8220;I think there&#8217;ll be a lot of regret,&#8221; Patel acknowledged. &#8220;And after we regret, we will see the need to humanise everything&#8221; &#8211; to bring things back to the way things are now, she added.</p>
<p>&#8220;It&#8217;s an interesting thing to explore,&#8221; Mulhall added. &#8220;Thinking of security &#8211; not any other job &#8211; I look at history for inspiration. Cities have traded for thousands of years. There&#8217;s always been security problems. We&#8217;ve always worked to solve them. We&#8217;ll still need security professionals to solve them.&#8221;</p></blockquote>
<h4>Going Out Of My Head</h4>
<p>Talk turned to knock-on risks &#8211; risks associated with AI use, but that do not arise directly from it. One example: the risk of a company being sued over actions taken by AI agents without any direct instruction to do so from a member of staff. The first response, from Mulhall, garnered a spontaneous round of applause from the majority in the room when he said that he didn&#8217;t view this as a cybersecurity problem. Yet there was rueful acceptance that, whether it was properly a challenge for security teams or not, it was likely to end up in their in-trays.</p>
<p>Future regulation of AI &#8211; in the European Union and the UK in particular &#8211; was also an issue that had been on people&#8217;s minds. In answer to a question about preparing for those regulations, Patel said, &#8220;I&#8217;m keeping an inventory and just making sure I&#8217;m keeping ready&#8221;.</p>
<p>Clearly, how AI risk will be assessed is going to keep on changing, as both new products and new regulations emerge. Preparing a business for all of that may be challenging, but it should not be impossible.</p>
<blockquote><p>Patel said: &#8220;I treat it in the same way I treat everything right now &#8211; dissect it, look at it more, then decide.&#8221;</p>
<p>Van Enckevort suggested that there may be some useful examples to be found in recent data-protection history. &#8220;What kickstarted companies taking GDPR [the EU&#8217;s General Data Protection Regulation] seriously was real risk. I think we&#8217;re driving head-long into some large lawsuits, and that will start focusing attention.&#8221;</p></blockquote>
<p>Mitigating these kinds of risk is possible through enhanced, focused and forensic attention on precisely what you&#8217;re using in your organisation, and by shutting the door on redundant or unnecessary tools to minimise exposure, he argued.</p>
<blockquote><p>&#8220;Personally, we drill very, very strongly into the employee base, to understand what&#8217;s appropriate,&#8221; he continued, &#8220;and if you generate something you don&#8217;t use, that&#8217;s something that&#8217;s inappropriate. But that won&#8217;t last long. What I see coming is greater focus on what&#8217;s appropriate to use. This is something we see in our customer base. I think the answer will change over a couple of years, but I expect this will still be an issue.&#8221;</p></blockquote>
<p>Yet where these risks arise from using a tool supplied by a third party, the responsibility will likely depend on the precise contractual terms the customer company has agreed with the AI vendor. And the more companies are reliant on wide-ranging AI technologies, the less likely those contracts are to be worded in a way that is particularly helpful. When one wag wondered whether anyone in the room had a data-processing agreement with ChatGPT, the chorus of &#8220;No&#8221;s was close to unanimous. Whether these kinds of risks will be considered jobs for the security team or not, they will represent threats to the business, and someone will have to bear them in mind.</p>
<h4>Slash Dot Dash</h4>
<p>Perhaps surprisingly, data protection issues &#8211; and the governance, regulation and compliance questions they raise for businesses &#8211; went on to become a major talking point. One attendee told a story of being asked to input their passport and other details into the automated chat feature of a large airline&#8217;s website. They declined, and when they asked the airline how those data were being protected, the airline said they didn&#8217;t know.</p>
<blockquote><p>&#8220;Should there be measures in place to make sure that when you put data into a form on an automated chat that it&#8217;s protected?&#8221; they asked. &#8220;And who should be responsible? If I park my car at my own risk, I know it: do we input our data into automated web chat forms on the same basis?&#8221;</p>
<p>Patel, her role as the voice of reason for the evening already well established, pointed out that &#8220;how they use such information should be on the privacy policy of the website. You should know that up front.&#8221;</p>
<p>&#8220;Unfortunately, this is very common,&#8221; van Enckevort said. &#8220;I moved into an apartment recently, with a company I&#8217;d used 10 years ago. They needed me to prove my identity. I said, &#8216;Haven&#8217;t you already identified me?&#8217; And they said, &#8216;Oh yes, here&#8217;s your passport from 10 years ago&#8217;.&#8221;</p>
<p>Mulhall was optimistic that common sense may, eventually, prevail &#8211; &#8220;I think what will happen is that people will stop asking for data that they really don&#8217;t need,&#8221; he said &#8211; but did not venture a guess as to when that may happen.</p></blockquote>
<h4>Praise You</h4>
<p>The conversation began to coalesce around what might be termed the pragmatic solutions or approaches that businesses and their security teams can meaningfully and usefully start to adopt. And the first step to assessing what is viable as a security posture (as opposed to what would be great to have but will be unaffordable, impractical or otherwise unachievable) may lie in deciding to what extent the challenges posed by this new technology are, in and of themselves, novel or unprecedented.</p>
<blockquote><p>&#8220;I&#8217;ve been in security for quite a while,&#8221; one attendee admitted, to sniggers from among some friends and colleagues. &#8220;We&#8217;ve always got something new to deal with. It&#8217;s always a major issue, and we need to fix it, so our focus goes on it. And then eventually all these things move into the organisation and they become part of normal cybersecurity hygiene &#8211; part of the default minimum requirements. What do we think, at the moment, is AI security hygiene? To me, it&#8217;s all over the place. We attack everything as a priority. At some point, I guarantee, it will settle down, and there&#8217;ll be AI security hygiene requirements that you do as part of your normal work. What will those be, by default?&#8221;</p></blockquote>
<p>It was a good question, and occasioned some brow-furrowing and visible deep thought. And the first response, which arrived surprisingly quickly, felt both achievable and inspired.</p>
<blockquote><p>&#8220;We need to apply GDPR wherever we use AI, and apply it as deeply as possible,&#8221; Patel said.</p>
<p>&#8220;Classification as standard &#8211; classification for all your data,&#8221; van Enckevort then suggested. &#8220;Classification is the thing that stands out to me, where you have to understand what&#8217;s inside each piece of data. Classification comes first.&#8221; Such classification would, he added, need to be done automatically.</p>
<p>&#8220;I agree about GDPR. But if you put information into an LLM, you can&#8217;t delete it &#8211; so GDPR is an issue,&#8221; one attendee argued. &#8220;Automation of classification systems &#8211; that hasn&#8217;t worked so far. And AI systems are non-deterministic, so you can&#8217;t test them in the way you normally do.&#8221;</p>
<p style="text-align: left;">&#8220;It doesn&#8217;t have to be perfect to deliver value,&#8221; Mulhall said. &#8220;We have to remember that. If we as an industry get the majority of security right, we&#8217;re still far better than we are now.&#8221;</p>
</blockquote>
<p>The post <a href="https://rantcommunity.com/resources/right-here-right-now-is-cybersecurity-focusing-on-the-most-important-ai-threats/">Right Here, Right Now: Is Cybersecurity Focusing On The Most Important AI Threats?</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Change is coming, but can networking and security get their act together?</title>
		<link>https://rantcommunity.com/resources/change-is-coming-but-can-networking-and-security-get-their-act-together/</link>
		
		<dc:creator><![CDATA[Galena]]></dc:creator>
		<pubDate>Mon, 01 Dec 2025 12:42:18 +0000</pubDate>
				<category><![CDATA[Resources]]></category>
		<guid isPermaLink="false">https://rantcommunity.com/?p=2581</guid>

					<description><![CDATA[<p>For an industry built on innovation, it can be surprising how many barriers there are to IT progress. The mix</p>
<p>The post <a href="https://rantcommunity.com/resources/change-is-coming-but-can-networking-and-security-get-their-act-together/">Change is coming, but can networking and security get their act together?</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>For an industry built on innovation, it can be surprising how many barriers there are to IT progress. The mix of products and services has led to enormous technical debt that can complicate even the simplest decision. As enterprises double down on cloud and AI, the challenges only become bigger, particularly as  security teams face new threats across an expanded attack surface</p>
<p>To find a way through this impasse, <strong>Cato Networks</strong> recently convened a group of 15 like-minded security leaders with skin in the game.</p>
<p>Kicking off an evening of robust discussion as only RANT knows how, Cato Networks VP Northern Europe &amp; MEA, Kanwar Loyal, articulated the challenge facing NetOps and SecOps teams.</p>
<blockquote><p>“Only 10-15 years ago, everything was point products, on-premises and best of breed. We created complexity that’s actually now causing a problem,” he said. “We consume technology very differently now. We’ve seen an inflexion around traditional networking and security. Enterprises want to consume network and security services exactly the way they’re consuming other SaaS-based applications.”</p></blockquote>
<p>The first task for CISOs hoping to embark on this journey is to break down the traditional silos that exist between security and networking.</p>
<h4>It’s good to talk</h4>
<p>Our speaker for the evening, Grant Thornton UK Head of IT Security, John Dunne, highlighted the scale of the challenge. A show of hands revealed that four security leaders around the table had discussed the need for a holistic security approach with their network team in the previous fortnight.</p>
<blockquote><p>&#8220;My network team recognise the size of the challenge,&#8221; he admitted. &#8220;But it is a continual process to bring all of the disparate streams on board so they understand what we are trying to achieve.&#8221;</p></blockquote>
<p>Silos can be one of the reasons departments fail to work harmoniously together. One security leader suggested why the walls between the two functions are so hard to break down: networking teams traditionally focus on availability, while their security counterparts are more concerned with confidentiality and integrity. “When they say ‘no’, it’s not because they think you’re horrible, it’s because they’re worried you will break something,” he argued.</p>
<p>Another factor is that networking is becoming increasingly marginalised in some organisations as more infrastructure moves to the cloud. One attendee explained that if he wants to talk about cloud services, he speaks to the DevOps team. “The cloud isn’t a network-supported entity,” he added.</p>
<p>So how can security leaders evolve the relationship to a more collaborative one? One CISO shared that he built trust with his networking team because he had previously worked as an engineer installing routers and switches. “It’s about demonstrating that trust and understanding what an engineer goes through,” he said.</p>
<p>Another claimed “friction doesn’t exist” in his organisation because he has had “grown up conversations” about who is responsible for what, and all functions understand there will inevitably be overlaps. Ultimately, they know that “if we don’t work together we’re going to fall apart”, he explained.</p>
<h4>Skills, silos and budgets</h4>
<p>Technology can also play a role in spanning the network-security divide. For Cato Networks, Secure Access Service Edge (SASE) platforms can be the ideal bridge between the two, by converging networking and security functions in a unified cloud-delivered service. So why aren’t more organisations embracing it? Budget constraints were cited by many around the table, as was the perceived threat that SASE may pose to traditional teams.</p>
<p>Skills gaps were also mentioned by several CISOs as a challenge, especially as organisations build out their cloud infrastructure.</p>
<p>One opined that even cloud engineers “still need to understand” the networking basics. Another, who is the security lead at a cloud-centric organisation, admitted “that’s where we come unstuck”. She shared that she has product security engineers, cloud security engineers and IT engineers, but no network engineers, because “we don’t have a network, we have a mesh of interconnected cloud services and SaaS.” She urged her peers around the table: “Don’t let go of all your network security engineers.”</p>
<p>Basic networking skills are especially vital in a modern SASE environment given the shared responsibility model for managing cloud security.</p>
<blockquote><p>“It’s a nice way of saying, ‘we’ll do a bit of it then over to you guys’,” explained one CISO. “The problem is finding people that have the experience &#8211; finding enough people that know about these things. Because with the shared responsibility model, if you mess up in the way you’ve configured your cloud, it’s your fault.”</p></blockquote>
<h4>Starting the journey</h4>
<p>Whatever the challenges, change is coming, according to Cato Networks’ Loyal. “There’s always been change,” he argued. “Today’s innovation becomes tomorrow’s commodity. All of us are being driven by change, but we’re being stuck with traditional constraints.”</p>
<p>Traditional mindsets can actively harm an organisation, one attendee warned. “The castle and moat thing is no more,” he said. “But there is still a bit of clinging on to that. It’s a bit dangerous … because all the sensitive stuff is in Workday and Salesforce, not inside [the corporate network].”</p>
<p>This new world of distributed workforces and multi/hybrid cloud environments is one that Zero Trust and SASE were built for. Both require multi-year transformation journeys that put some IT leaders off. But Cato Networks’ Loyal was keen to point out that SASE can be split into more manageable, modular projects depending on which use cases are most urgent.</p>
<p>Even then, plans are often scuppered by unrealistic expectations about time to value, and a desire for a uniform approach that will cover even “niche and edge cases”, attendees argued.</p>
<p>To leap these obstacles, security leaders will need to learn how to speak the language of networking as well as the business, in order to win hearts and minds. One CISO spoke of the need to manage expectations with the board. “Take small steps to win people’s confidence, and the rest will fall into place,” he added.</p>
<p>Another signalled the importance of covering third-party risk in any SASE/Zero Trust discussions, as these are often the edge cases that can make or break a project.</p>
<h4>Time for change</h4>
<p>Whatever happens, sticking with the status quo is not an option.</p>
<blockquote><p>“Everyone round this table wants to deliver connectivity, security, control and visibility to every edge and identity,” concluded Loyal. “It needs to be seamless. But how do I deliver a service for the business that accelerates the way the business wants to go?”</p></blockquote>
<p>SASE may well be the destination. But CISOs will need to think carefully about how they start their journey.</p>
<p>The post <a href="https://rantcommunity.com/resources/change-is-coming-but-can-networking-and-security-get-their-act-together/">Change is coming, but can networking and security get their act together?</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>People Get Ready: Like It Or Not, The Human Perimeter Is A Cybersecurity Challenge</title>
		<link>https://rantcommunity.com/resources/people-get-ready-like-it-or-not-the-human-perimeter-is-a-cybersecurity-challenge/</link>
		
		<dc:creator><![CDATA[Benjamin Randell]]></dc:creator>
		<pubDate>Tue, 22 Jul 2025 09:52:12 +0000</pubDate>
				<category><![CDATA[Resources]]></category>
		<guid isPermaLink="false">https://rantcommunity.com/?p=2381</guid>

					<description><![CDATA[<p>The fact that there&#8217;s always a human dimension to every cybersecurity incident is a given. But is every data breach,</p>
<p>The post <a href="https://rantcommunity.com/resources/people-get-ready-like-it-or-not-the-human-perimeter-is-a-cybersecurity-challenge/">People Get Ready: Like It Or Not, The Human Perimeter Is A Cybersecurity Challenge</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The fact that there&#8217;s always a human dimension to every cybersecurity incident is a given. But is every data breach, necessarily, a cybersecurity incident? This became the recurring theme of a typically lively RANT roundtable discussion in Manchester, hosted by security intelligence partners Exabeam and Cybanetix, which was convened to assess how to improve the human aspect of cybersecurity operations.</p>
<p>&#8220;One emergent threat we&#8217;re seeing is AI being weaponised, especially around deepfakes,&#8221; Findlay Whitelaw, Exabeam&#8217;s security researcher and strategist, said. Whitelaw recounted how, as an exercise, a friend managed to construct a convincing deepfake video of her from only 30 seconds of footage shot on a phone. She then recalled the notorious example of the UK company in which an employee was socially engineered to transfer £25 million out of the business following a videoconference with the chief financial officer &#8211; which turned out to be a deepfake.</p>
<blockquote><p>&#8220;The CFO was putting pressure on the accidental insider to transfer the money, but what they did that was really clever was to deepfake two other board members who were sitting in on this Teams conference call,&#8221; she said. &#8220;They were giving reinforcement through nodding of heads. The accidental insider had no idea &#8211; they trusted every element of it, and they made the payment. That&#8217;s what we&#8217;re going to be seeing more of: it&#8217;s not just the technology that&#8217;s being exploited &#8211; the human element is being exploited.&#8221;</p></blockquote>
<p>While there was agreement around the table that such episodes are clearly very worrying, there were several high-level security staffers in the room who took issue with the extent to which such highly targeted attacks could be easily repeated. And one CISO wondered whether, in fact, this was even a security issue at all.</p>
<blockquote><p>&#8220;That&#8217;s not a cybersecurity failure &#8211; that&#8217;s a financial-controls failure,&#8221; they argued. &#8220;There&#8217;s no way anyone should be in a situation where a senior leader in the business comes to them and puts them in that situation. I&#8217;m putting the blame for this one on the CFO! It&#8217;s nothing to do with cyber, yet very often the cyber team are brought in to deal with any failure like this.&#8221;</p></blockquote>
<p>In turn, they argued, this raises another human dimension in the cybersecurity debate: the pressure that is being placed on security staff by failures that may be enabled by technology but result from ineffective policies or patchy implementation in other parts of the business.</p>
<blockquote><p>&#8220;It&#8217;s why so many people here are worn thin,&#8221; this security leader suggested. &#8220;They&#8217;re dumped into everything that happens like this, even though it&#8217;s nothing to do with cybersecurity.&#8221;</p></blockquote>
<h3>(Don&#8217;t Worry) If There&#8217;s A Hell Below, We&#8217;re All Going To Go</h3>
<p>Alsa Tibbit, an advisor and researcher on AI cybersecurity with Sheffield Hallam University &#8211; and RANT&#8217;s guest co-host for the discussion &#8211; raised other ways in which AI and deepfakes are being used to manipulate human behaviour. AI agents, she argued, will &#8220;create a huge revolution in cybersecurity because they don&#8217;t have a hierarchy of needs&#8221; that apply to, and affect the behaviours of, every human being.</p>
<blockquote><p>&#8220;Why do we think the human is the weakest link in cybersecurity? Because the human has that hierarchy of needs,&#8221; she said. &#8220;Food, family, health &#8211; so many things. These elements can all at some point be shaky. Due to stress you might click a link, then feel guilty, so you don&#8217;t tell anyone about the mistake. AI agents just have one aim.&#8221;</p></blockquote>
<p>This ability to focus on the task in hand is what makes generative AI such a potentially powerful tool, but also what makes its deployment in the workplace such a challenge from a security perspective. Tibbit mentioned, also, genAI&#8217;s capacity to amplify other problems faced not just by businesses but by entire societies, such as the spread of misinformation &#8211; whether deliberately or by accident. Again, though, such challenges, because they are technology-enabled, are being turned into cybersecurity staff&#8217;s problems &#8211; even though, the leaders in the room agreed, they are not actually cyber challenges.</p>
<blockquote><p>&#8220;This is the perennial problem with cybersecurity people: we take on everyone else&#8217;s burden,&#8221; one CISO lamented. &#8220;Misinformation is not a cybersecurity problem.&#8221;</p>
<p>&#8220;Phishing, absolutely, is a security problem,&#8221; another security leader acknowledged, adding that genAI was putting &#8220;phishing on steroids&#8221;. However, they pointed out, genAI has not created a new challenge here, just amplified an existing one.</p></blockquote>
<h3>Hard Times</h3>
<p>Another problem that is being felt by security teams, even if it is not strictly speaking their responsibility to solve, is around how businesses are encouraging their staff to feel comfortable with genAI tools. In general, the approach seems to be to humanise the technology by giving it a name, a role or a title that would normally belong to a human member of staff.</p>
<blockquote><p>&#8220;Now we&#8217;ve got call centres using AI, giving the AI assistant a name: &#8216;This is Gemma from wherever, how are you today?&#8217;,&#8221; another CISO said. &#8220;The more we start doing that, the more normal people think it is. We&#8217;re humanising interaction with AI.&#8221;</p>
<p>&#8220;And we&#8217;re becoming over-familiar with it,&#8221; Whitelaw agreed. &#8220;AI is not going to go away. People like it. It&#8217;ll bring its own benefits. But where we are in security, companies aren&#8217;t writing AI into their policies and standards. Colleagues and employees are not being trained on the implications of the damaging impacts. There is a gulf between the individual and the corporate stance.&#8221;</p></blockquote>
<p>Bridging that chasm will take not just a major education effort; it may well require a wholesale reshaping of the average employee&#8217;s attitude. Attendees were quick to highlight the kind of mindset shift that is needed: encouraging employees not just to care about security, but to care about the future of the company, so that security is promoted as a shared task in which every staffer needs to feel fully invested.</p>
<blockquote><p>&#8220;Until the person cares about what they&#8217;re protecting they&#8217;ll carry on&#8221; accidentally putting data at risk, one CISO argued.</p>
<p>&#8220;It&#8217;s not just about caring &#8211; it&#8217;s about knowledge,&#8221; another suggested.</p>
<p>&#8220;And impact,&#8221; pointed out Dharm Vashi, Cybanetix&#8217;s sales lead.</p>
<p>&#8220;We all understand the impact of clicking on something, but is it sustainable?&#8221; Whitelaw asked. &#8220;In different roles I&#8217;ve had, the consequences were never consistent.&#8221; Neither, therefore, she suggested, were the interpretations staff would draw over what the company&#8217;s standards and policies actually mean. In such an environment, the best training programme in the world will struggle to gain much purchase.</p></blockquote>
<p>&nbsp;</p>
<h3>Power To The People</h3>
<p>The conversation began to explore ideas that might start to move the needle on effecting that kind of change. Throughout, there was recognition that these, in and of themselves, are not security issues; but among the many qualities required of cybersecurity professionals, pragmatism is paramount. So regardless of how reasonable it may or may not be, everyone in the room seemed to accept that it would likely be them who were left to pick up the pieces from failures elsewhere in the business.</p>
<blockquote><p>&#8220;Going back to human needs &#8211; they&#8217;re not thinking about security: they&#8217;re thinking about how to feed their family, and how to be more efficient,&#8221; another security leader said. &#8220;So the training isn&#8217;t going to stick in people&#8217;s minds. Once they&#8217;re outside that environment, it goes out of their heads.&#8221;</p>
<p>&#8220;We&#8217;ve been trying to say that security is everyone&#8217;s responsibility, not just mine or my department&#8217;s,&#8221; another CISO said. &#8220;But you&#8217;ve got to keep doing it. Annual training isn&#8217;t enough. Most people don&#8217;t want to do it, or they do it begrudgingly, and they don&#8217;t take anything away from it. We have training and then we have short, sharp nudges; reminders; newsletters. We try to balance it so we&#8217;re not overloading people but we&#8217;re also not just doing it one time and they forget. But it&#8217;s getting across that it&#8217;s their responsibility, too, not just ours.&#8221;</p></blockquote>
<p>Others around the table expressed interest in the extent to which that message was resonating within the company&#8217;s workforce. Cybanetix takes the view that the responsibility should definitely be shared. The firm&#8217;s head of presales, Martin Luff, was optimistic about the ability of emerging technologies to help, but not without joined-up thinking within organisations.</p>
<blockquote><p>&#8220;Whether it&#8217;s finance, cyber&#8230; actually, it&#8217;s everybody&#8217;s problem. It&#8217;s got to be a joint effort,&#8221; he said. &#8220;There are things you can do in terms of technology, some guardrails you can put around the use of AI, but there&#8217;s no perfect answer yet.&#8221;</p></blockquote>
<p>&nbsp;</p>
<h3>We&#8217;ve Only Just Begun</h3>
<p>Of course, it has long been understood &#8211; however ruefully &#8211; that there is no better corporate learning experience than a major data breach. When the business struggles to get back on its feet after a serious incident, or when that incident results not just in increased workloads but perhaps even redundancies &#8211; that is generally when the penny drops among the rank-and-file staffers for whom security had felt like someone else&#8217;s job. The ideal situation, attendees agreed, would be to reach that post-attack mindset without having to go through the incident in the first place. Whether that is achievable is another matter.</p>
<blockquote><p>&#8220;Do we need the horse to bolt before we close the stable door?&#8221; one attendee with experience of trying to inculcate better security cultures in multiple businesses mused. &#8220;You go into organisations and you&#8217;re trying to preach to the ones who aren&#8217;t converted. They work on old systems that aren&#8217;t secure, they share credentials, and they say, &#8216;It&#8217;s always been like this and we&#8217;ve never had an incident.&#8217; They just don&#8217;t know.&#8221;</p>
<p>&#8220;It remains complex,&#8221; Whitelaw said. &#8220;Humans are diverse, unpredictable, multifaceted. While I agree not everything we&#8217;ve discussed is cyber directly, everything we do that uses these tools and digitised platforms is, by default &#8211; rightly or wrongly &#8211; going to be seen as an element of security. Cyber, legal, HR, business units &#8211; it&#8217;s an everybody issue. But I don&#8217;t think we&#8217;re there yet. And I&#8217;ve not seen any company get it right.&#8221;</p></blockquote>
<p>Interested in learning more from experts at Exabeam and Cybanetix? Find out more here: <a href="https://www.exabeam.com/">Exabeam</a> &amp; <a href="https://cybanetix.com/">Cybanetix</a></p>
<p>Connect with Exabeam&#8217;s <a href="https://www.linkedin.com/in/findlay-whitelaw/">Findlay Whitelaw</a>.</p>
<p>&nbsp;</p>
<p>The post <a href="https://rantcommunity.com/resources/people-get-ready-like-it-or-not-the-human-perimeter-is-a-cybersecurity-challenge/">People Get Ready: Like It Or Not, The Human Perimeter Is A Cybersecurity Challenge</a> appeared first on <a href="https://rantcommunity.com">RANT Community</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
