Insider Threat Simulation: A Red Team Perspective

Most organisations focus their cybersecurity efforts on external threats: they invest in firewalls, intrusion detection, and endpoint protection. Insiders, however, are already on the network; they are trusted, and they know where to find the corporate data stores. Preparing to manage that sort of threat is very different.

That’s where red team insider threat simulations come into play. These exercises mimic the actions of a malicious or compromised employee to test how resilient an organisation truly is when the attacker is already inside.

Insider threats are hard to detect. Unlike external attackers, insiders already have access to systems, credentials, and sometimes even elevated privileges; they don’t need to try to bypass external controls, they don’t need to conduct noisy reconnaissance, and they often don’t need to rely on malicious software.

When we test these sorts of scenarios, our simulations help answer crucial questions:

  • Can security tools detect abnormal internal behaviour?
  • Are data access policies and least privilege enforced?
  • How quickly can the SOC respond to an insider attempting data exfiltration?
  • Do employees know how to report suspicious behaviour from colleagues?

When we design these scenarios, we often need to consider the type of insider we are playing:

Compromised Employee Scenario: This simulation assumes a legitimate user’s credentials have been stolen (via phishing or password reuse). The red team uses these credentials to move laterally, escalate privileges, and access sensitive systems just as a real attacker would, while trying to avoid triggering alerts.

Rogue Insider with Intent: In this simulation, the red team acts as a disgruntled employee with legitimate access. The goal is to test how much damage a single individual can do from within without raising red flags.

Privileged Abuse Scenario: Red teams mimic an administrator abusing their elevated access. This tests both technical controls and oversight mechanisms.

Social Engineering Internally: Sometimes the threat isn’t technical at all. Red teams may simulate internal social engineering — convincing employees to reveal credentials or grant inappropriate access.

Building on these scenarios, much of their value lies in what they reveal about an organisation’s detection and response capabilities (a simple illustration follows the list below):

  • Logging & monitoring: Are internal actions logged, and are alerts in place?
  • Data loss prevention (DLP): Can sensitive files be transferred to USB, personal email, or cloud apps?
  • Behaviour analytics: Are unusual login times or large file transfers detected?
  • HR + Security alignment: Are behavioural red flags being communicated and followed up?
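To make the behaviour analytics and monitoring bullets concrete, here is a minimal, hypothetical sketch of the kind of logic a detection team might prototype. It assumes audit events have been exported to a CSV file; the file name, column names, and thresholds are illustrative assumptions, not a reference to any specific product:

```python
"""Minimal behaviour-analytics sketch: flag out-of-hours logins and
unusually large outbound transfers from an exported audit log.

Assumed (hypothetical) input: events.csv with columns
timestamp (ISO 8601), user, event_type, bytes_out.
"""
import csv
from datetime import datetime

BUSINESS_HOURS = range(7, 19)            # 07:00-18:59 counts as normal
TRANSFER_THRESHOLD = 500 * 1024 * 1024   # flag transfers over ~500 MB

def load_events(path):
    # Parse each audit record, tolerating a missing bytes_out column.
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            row["timestamp"] = datetime.fromisoformat(row["timestamp"])
            row["bytes_out"] = int(row.get("bytes_out") or 0)
            yield row

def flag_anomalies(events):
    # Two toy rules; real deployments would baseline per user instead.
    for e in events:
        if e["event_type"] == "login" and e["timestamp"].hour not in BUSINESS_HOURS:
            yield ("out-of-hours login", e)
        if e["bytes_out"] > TRANSFER_THRESHOLD:
            yield ("large outbound transfer", e)

if __name__ == "__main__":
    for reason, event in flag_anomalies(load_events("events.csv")):
        print(f"[ALERT] {reason}: {event['user']} at {event['timestamp']}")
```

In practice, fixed thresholds like these generate noise; the point of an insider threat simulation is to test whether whatever logic you do have actually fires when a trusted account behaves abnormally.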

Insider threat scenarios are uncomfortable for many organisations. Many are aware they have blind spots and will struggle to detect and prevent these sorts of threats; however, it is precisely for these reasons that such scenarios should be included and tested.

If you would like to know more, please reach out and contact us:

Prism Infosec: Cyber Security Testing and Consulting Services

UK Government Proposes Ban on Public Sector Ransomware Payments

On 22nd July 2025, the UK Government announced a significant legislative proposal aimed at reducing the incentive for ransomware attacks. Under the proposed law, public sector bodies and operators of Critical National Infrastructure (CNI) — including schools, local councils, the NHS, utilities, and data centres — would be prohibited from paying ransoms to cybercriminals.

The intention behind this move is to make these organisations less attractive targets for financially motivated threat actors. By clearly signalling that ransom payments are not an option, the Government hopes to deter attacks on the public sector altogether.

While the ban would apply only to public sector and CNI organisations, private companies would still be permitted to consider paying ransoms — but with a new requirement: they must notify the UK Government of any intention to make such a payment. This step would allow the Government to offer guidance, and assess and advise whether the payment could breach existing laws, such as sanctions regulations.

The implementation timeline for this proposal has not yet been confirmed. However, the announcement follows a public consultation in which nearly 75% of respondents supported the measure.

At Prism Infosec, we support efforts to reduce the impact of ransomware and limit the profitability of these attacks. However, we recognise that the proposed legislation could have unintended consequences. Organisations may still be tempted to pay ransoms covertly, particularly if they feel they have no other viable recovery options. This approach carries serious risks, including legal, reputational, and operational consequences, especially if payments are made in breach of sanctions or reporting requirements. Furthermore, the proposal notes that penalties for breaching the legislation are being considered.

As always, we strongly encourage all organisations to prioritise robust cyber security measures, incident response planning, and open communication with authorities in the event of an attack.

Further details on the Government’s proposal can be found here: https://www.gov.uk/government/consultations/ransomware-proposals-to-increase-incident-reporting-and-reduce-payments-to-criminals

Why bother with Physical Breach Tests?

A physical red team (breach) test is a real-world simulation of a physical breach. Think: tailgating into a secure office, picking locks, planting rogue devices, or accessing server rooms without authorisation. Unlike standard security audits, red teamers think and act like real adversaries – covertly probing for the weakest link in physical security protocols, policies, and human behaviour.

We get asked on occasion to test organisations for this sort of breach (far too few organisations actually want this tested). Those who do understand that whilst most of their threats may try to come in through digital means, a physical approach can be more impactful, and easier to deliver. Some of the reasons we’ve seen for wanting this sort of test include:

Helpful Staff

No matter how high-tech your access control systems are, they mean little if an attacker can simply follow an employee through the door (a practice known as tailgating). Physical red team tests highlight how susceptible staff can be to social engineering tactics like impersonation, fake deliveries, or authoritative-sounding pretexts.

Exposed Infrastructure

Access to a single unsecured port in a server room or conference space can allow attackers to plug in malicious devices (like a Raspberry Pi or Bash Bunny), potentially leading to full network access. Red teamers often demonstrate just how quickly digital perimeters can be bypassed through a physical route.
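One simple compensating control worth exercising is reconciling what is actually on the network against what you expect to be there. The sketch below is a hypothetical illustration, assuming an asset inventory and exported DHCP leases in the plain-text formats described in the comments:

```python
"""Minimal sketch: surface potential rogue devices by comparing observed
MAC addresses against a known-asset inventory.

Assumed (hypothetical) inputs:
- inventory.txt: one approved MAC address per line
- dhcp_leases.txt: lines beginning with a MAC address (e.g. exported
  from your DHCP server)
"""

def load_macs(path):
    # Take the first whitespace-separated field of each non-empty line.
    with open(path) as f:
        return {line.split()[0].lower() for line in f if line.strip()}

known = load_macs("inventory.txt")
observed = load_macs("dhcp_leases.txt")

for mac in sorted(observed - known):
    print(f"[ALERT] unknown device on network: {mac}")
```

A reconciliation like this won’t stop a determined intruder, but it turns a silently planted implant into something that at least has a chance of being noticed.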

Security Culture

Physical red team tests uncover issues beyond technical flaws: they reveal complacency, unclear protocols, and lack of awareness. When employees don’t challenge strangers, or when policies are not enforced in practice, that’s not just a failure of security—it’s a cultural problem.

Regulatory Pressure

As industries face stricter compliance requirements (e.g., NIST, ISO 27001, PCI DSS), physical security is increasingly scrutinised. Some cyber insurance providers also now assess physical controls when pricing policies. Demonstrating that you’ve tested, and improved, your physical defences can reduce both regulatory risk and insurance premiums.

Actionable & Demonstrable

Unlike hypothetical risks or compliance checklists, red team results are concrete. They show exactly how an attacker got in, what assets were accessed, and where the defences broke down. These tests offer practical insights to improve training, upgrade systems, and harden physical defences.

Delivery of Testing

Before any physical red team test begins, legal authorisation is essential. Organisations should work with reputable providers who:

· Ensure written authorisation from executive leadership

· Clearly define the scope, targets, and rules of engagement

· Handle data collection, privacy, and evidence retention with care

· Respect employee dignity and avoid unnecessary disruption

This not only protects the business and the testers but ensures the activity remains ethical, controlled, and defensible.

At Prism Infosec, we not only have experience of conducting these sorts of engagements in a legal and risk-managed way, but we can also provide advice, guidance, and executive support in understanding and mitigating these sorts of threats.

If you would like to know more, please reach out and contact us:

Prism Infosec: Cyber Security Testing and Consulting Services

How We Got Here: A Brief Reflection on Cybersecurity’s Foundations

Computer technology as we know it has existed for the merest blip of time in human history. In less than 90 years we have gone from valves and plugboards to pushing the boundaries of quantum states in an attempt to perform computations that would otherwise take millions of years. We landed people on the moon with computers no more powerful than the graphing calculators available in schools in the 1990s. To me, that is astounding. You could argue that the field of cybersecurity, although known by another name, was born at the same time as Colossus, with the first code breakers using it to attack the Axis powers’ encryption.

Regardless, it wasn’t until computers became more accessible and people were given the opportunity to experiment more freely that the first virus, Creeper, was created in the early 1970s; the first anti-virus, Reaper, followed shortly after. Since then, offensive and defensive computer capabilities have escalated in lockstep. When the 1990s rolled around, computer interconnectivity exploded into homes and businesses around the world and the internet as we know it today took shape (in fairness, the internet’s precursors date back to the 1960s, but it didn’t become wholesale accessible until the 1990s).

Why mention this? Context. Computers, networks, and widespread computer literacy only became commonplace just over 30 years ago. People of that generation grew up with access to these tools and capabilities, yet those capabilities became widespread almost overnight, with businesses thrown into the deep end, needing to adapt and adopt to keep up and remain competitive. They did that without expertise in the boardroom, without considering how they would implement those capabilities securely, and had to learn the hard way what the impact of this technology would be.

Today we are dealing with the legacy of that adoption and the speed with which those systems came into play. The generation that grew up programming VCRs, coding on BBC Micros, and grabbing gaming magazines for cheat codes is now entering boardrooms and making decisions for the next generation. We have to keep in mind that when a red team comes in, looks at a network, and identifies issues like poor credential hygiene, poor network segmentation, ineffective access controls, and improper administration tiering, we are looking at a network that may have been designed, torn up, merged, reimplemented, and reconfigured multiple times over decades, with no one starting fresh and building to principles we now recognise as necessary for security rather than competitiveness. That does not excuse these issues, but we do need to be cognisant of how we got here. We are still in the infancy of changing mindsets as we adapt to the implications of this technology, and bolting on random security products will not solve the problems if we don’t address the foundations we started with.

If you’d like to explore how red teaming can help you uncover and address these foundational risks, feel free to get in touch.

Prism Infosec: Cyber Security Testing and Consulting Services

Abuses of AI

Much like Google and Anthropic, OpenAI have released their latest report on how threat actors are abusing AI for nefarious ends, such as scaling deceptive recruitment efforts or developing novel malware.

It is no surprise that, as AI has become more pervasive, cheaper to access, and readily available, threat actors are actively abusing it to further their own agendas. Having companies like Google, OpenAI, and Anthropic openly discuss the abuses they are seeing is therefore immensely helpful for understanding the threat landscape and the direction threat actors are taking.

These reports should be required reading at C-suite level. They contain nuggets of information that affect businesses, from recruitment practices to securing the perimeter, and best of all they are free to access:

Adversarial Misuse of Generative AI | Google Cloud Blog

Disrupting malicious uses of AI: June 2025

Detecting and Countering Malicious Uses of Claude \ Anthropic

At Prism Infosec, we not only use these reports to help inform our clients, but we also feed them into our tabletop exercises and red team scenarios, so we can help our clients prepare for, and defend against, threat actors abusing these technologies.

If you would like to know more, please reach out to us.

Prism Infosec: Cyber Security Testing and Consulting Services

Why Not Test in Dev?

We frequently get asked by clients if we can run our red team tests in their DEV or UAT environments instead of production. We are told it’s identical to production: same systems, dummy but representative data, same security controls, same user accounts, and so on.

We get it: DEV and UAT environments exist to de-risk production. However, no matter how closely they resemble production, they are not what threat actors are going to target. No matter how similar it is, a test environment won’t have the entire company working in it, helping to hide threat actor activity. And if alarms go off in it, are we absolutely certain they will be treated with the same priority as alarms from the production system, especially if multiple alarms are already going off in production?

Red team testing is only effective when it is conducted in the live, production environment, because we need to ensure that the organisation can defend the network that is most critical to the day-to-day running of the business. If your DEV or UAT environments go down, how long can your business operate compared to if your production systems go down?

At Prism Infosec we do appreciate the concerns about allowing red team testing on production environments. We do not want to disrupt your business. That’s why we have an exceptionally robust risk management strategy. We collaborate and manage risks to ensure the business can protect itself against realistic threats without unforeseen disruptions.

Talk to us today to find out more. Prism Infosec: Cyber Security Testing and Consulting Services

Bait and Switch – Are You Accidentally Recruiting Insider Threats?

Over the last couple of years, we have seen a marked increase in criminal groups infiltrating companies, either by using AI and stolen identities or by fronting interviews with disposable candidates until the contract is signed, at which point an alternative person shows up to start the job. In many cases, once they have their position, they attempt to request greater privileges to gain access to corporate repositories holding useful information they can steal. Even when caught, they will often simply vanish, corporate assets and all, leaving behind lengthy investigations, access audits, risk management headaches, and policy reviews of recruitment practices.

I have personal knowledge of one case where this actually happened to a multinational company. Whilst they were shocked and embarrassed to be the victim of such an attack, they caught the individual quickly and were satisfied that they didn’t lose any sensitive data, even though the individual did get away with a corporate device (for all I know, it is now being used to inefficiently mine Bitcoin). Regardless, this was a wake-up call for the company: they had heard about this sort of scam, assumed they could never fall victim to such an approach, and were utterly astonished when it happened. But they learned from it. They now factor it into their recruitment programmes and have put new safeguards in place, such as requiring the person to visit the office with their ID to collect their IT equipment rather than relying on remote verification, and devising interview questions which cannot be easily answered by AI.

This sort of scenario can be played out in a tabletop exercise for HR, Risk, Legal, and IT, to help you simulate what you would do should this happen to you. You can also play it out in a practical red team scenario, building on the tabletop exercise to understand how you can detect and defend against such an attack. At Prism Infosec we can help with both sorts of exercise, and with incident response should you ever be a victim yourself. Please feel free to reach out to us should you wish to know more.

Prism Infosec: Cyber Security Testing and Consulting Services

Underinvestment in Cybersecurity

In the last few decades IT systems have become a significant factor for every industry, increasing productivity, improving service offerings, and increasing the speed at which companies can deliver services. It is only right, therefore, that we ensure these systems are not abused, damaged, or misused in a manner which can undermine the organisation or its customers.

Whilst every industry wants to ensure that their IT systems continue to deliver massive benefits, the cost of securing such systems and keeping them secure is an area in which many companies underspend, as security is often viewed as a cost centre with no discernible benefit. This stems from a combination of economic, psychological, and organisational factors.

Cybersecurity is seen as a cost, not an investment. There is no immediate return on the investment, as it does not generate visible revenue, and it’s hard to quantify the benefit of an attack not happening: if security is working and effective, there is simply no loss of service.

Companies will also often underestimate the risks they are running. Too many believe they are too small or unimportant to be targeted, without considering that any income a threat actor can squeeze out of a business, regardless of its size, makes it a potential victim. This can also be attributed to a lack of awareness of how damaging cyberattacks can be. Not every attack needs to result in ransomware: sometimes you can be a victim purely because of who your clients are, the data you hold, or who you are affiliated with. Not to mention opportunistic criminals who would seek to abuse your IT systems to mine cryptocurrencies!

At an organisational level, many factors can drive underinvestment. It can result from the C-suite not really understanding technical threats or how to prioritise them in the context of the business. CISOs can struggle to make the business case for financial investment when competing with growth-oriented spending such as sales. There can also be overconfidence in existing defences: the fallacy that anti-virus and firewalls are all that is needed to keep you secure, combined with a “check-the-box” approach to compliance, can give a false sense of security.

What we see time and time again is that too many organisations only invest in their security after a breach or regulatory penalty. Security has traditionally only been prioritised after a failure, and not before one.

These issues have been identified by regulators in the financial industry and beyond. This is why schemes such as CBEST exist: not to force companies to spend money where they would rather not, but to validate security spending, demonstrate the impact of underinvestment at board level, and enable companies to move from a reactive culture to a proactive one. These regulator-led tests are not pass/fail events. They are about ensuring that organisations build resilience and capability, and maintain the trust they have worked hard to gain from their customers.

Prism Infosec are proud to be part of this industry – security should be a priority for every organisation and not just the regulated ones. We want to help our clients on their security journey, raising awareness, demonstrating the value of security investments, and supporting them to be trusted, secure and robust whilst achieving their goals.

If you would like to discuss how Prism Infosec can help your company, then please reach out to us:

Prism Infosec: Cyber Security Testing and Consulting Services

The Cost of a Breach

IBM’s 2024 Cost of a Data Breach report identified that the average cost of a data breach in the UK reached £3.58 million, an increase of 5% since 2023.

Verizon’s 2025 Data Breach Investigations Report suggested a 37% increase in reported ransomware attacks, with a median payout of $115,000, paid by 36% of victims, 88% of whom were smaller businesses. Keep in mind, this is just the ransom payment itself; when you factor in lost productivity, reputational damage, shareholder losses, service impacts, and potential fines, the cost skyrockets.

Even the European Union Agency for Cybersecurity (ENISA) has published a report discussing the impact of cybersecurity breaches across the financial sector; this reporting will only increase now that the Digital Operational Resilience Act (DORA) has come into force.

The news so far this year has identified a number of significant breaches: M&S, Co-Op, Harrods, Cartier, and North Face. More could be on the horizon, and the expectation is that this trend will only continue upwards.

Organisations do have tools to help them prepare for, and potentially prevent, these sorts of incidents. Companies such as Prism Infosec offer red team engagements where, for a fraction of the cost of dealing with a breach, we can simulate how these threat actors operate, help the organisation identify how they could be attacked and what they can do about it, and exercise how they would respond if or when it occurs, minimising the impact, disruption, and damage these actors profit from. If your organisation is serious about managing the risk of being breached, then do reach out to us at Prism Infosec: Cyber Security Testing and Consulting Services so we can discuss how we can help secure your business.

ENISA Threat landscape: Finance sector

2025 Data Breach Investigations Report | Verizon

Cost of a data breach 2024 | IBM

AI and Red Teaming

Red teaming is still fairly young as far as cybersecurity disciplines go – most of us in this part of the industry have come in from penetration testing consultancy, or have some sort of background in IT security with a good mix of coding and scripting skills to develop tools. Our work often requires us not only to simulate threat actors as closely as we can, but also to manage the risks of our operations to avoid impacting our client’s business. This dichotomy of outcomes (simulating a threat actor whose objective is to disrupt, whilst simultaneously trying not to disrupt) may seem confusing, but we need to remember what red teaming is for: helping our clients test their detection and response capabilities. The objective of the red team is almost incidental – it merely sets a direction for the consultants to work towards whilst we determine what our clients can and cannot detect, and what they do about it if a detection occurs. That latter part is where disruption is more likely to occur, but even there, we can manage the risks.

So where does AI come into it? Well, we have all seen the news about AI being set to take over jobs in a number of fields, and red teaming is not immune to those fears. The problem is, most AI systems these days are just really good guessers – I prefer to think of them almost as expert systems rather than a true intelligence. By that, I mean you can train them to be exceptional at specific tasks, but if you go too broad with them, they really struggle. They don’t explain themselves; they can’t repeat steps identically particularly well; and they often forget or hallucinate critical elements when faced with large and complex tasks. A red team is a very large and complex series of tasks, where forgetting or imagining critical steps will often lead to a poor outcome. Add live environments and risk management into that mix, and the dangers of impacting a client become uncomfortably high. As a result, I have not yet met a single professional in this industry who would be happy to take the risk of letting a red team run entirely on AI, and I don’t see that changing any time soon.

However, I do see a future in which AIs help co-pilot red teams. By this I mean that, if the privacy concerns can be addressed, I can foresee a point where a private, specialist red team LLM would be permitted to ingest data the red team acquires during an engagement (such as network mapping information, directory listings, Active Directory data, file contents, source code, etc.) and perform analysis on it. It could then provide suggestions on how the engagement should proceed. This would have the added benefit of answering the red team’s questions rapidly, helping them consider additional attack paths, identify further issues in the environment, and find other things to try. It could also quickly confirm whether the red team had interacted with particular systems in the client environment, to aid deconfliction if issues occur. In time, I could even see this becoming a real-time benefit for client control groups, who would be able to interrogate the LLM for quicker answers about what the red team is doing and what has been identified to date. A rough sketch of the idea follows below.
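To be clear about what is and is not real here: the snippet below is a purely hypothetical sketch of that co-pilot workflow. The endpoint URL, payload shape, and file names are all invented placeholders standing in for whatever privately hosted deployment would satisfy the privacy concerns above:

```python
"""Hypothetical sketch: submit engagement artefacts to a privately
hosted red team LLM and ask it an analysis question.

The endpoint, request format, and input file are illustrative
assumptions, not a real product or API.
"""
import json
import urllib.request

LLM_ENDPOINT = "http://redteam-llm.internal:8080/v1/analyse"  # hypothetical

def ask_copilot(artefact_type: str, artefact: str, question: str) -> str:
    # Bundle the artefact and question into a single JSON request.
    payload = json.dumps({
        "context": {"type": artefact_type, "data": artefact},
        "question": question,
    }).encode()
    req = urllib.request.Request(
        LLM_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["answer"]

if __name__ == "__main__":
    # ad_users_dump.txt is a stand-in for data gathered on an engagement.
    with open("ad_users_dump.txt") as f:
        print(ask_copilot(
            "active-directory-users",
            f.read(),
            "Which of these accounts look like service accounts worth reviewing?",
        ))
```

The key design point is that everything stays inside the engagement boundary: the model is private, the artefacts never leave the team’s infrastructure, and every query is logged to support deconfliction.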

AI is here now, and it’s evolving. We can’t really ignore it as it becomes a tool used more and more in everyday life, which means we need to find ways to make it work within the concerns we have. I personally feel that pushing these models into smaller, expert system roles is the right way forward, as this allows them to fulfil the role of an assistant more fully. We also need to acknowledge that the public models have been trained unethically on source data taken without consent from authors and copyright holders. As their use grows, not only is there a considerable environmental impact, but I believe they will start to show strain in the near future. As the public further embraces these tools and uses them to generate new content, that AI-generated content will also be absorbed by LLMs. This risks a situation where the snake eats its own tail, turning the LLMs into an echo chamber, and we will see the quality of their output drop considerably. This will likely be compounded by people losing critical thinking skills, which ultimately will harm us more than the AIs can help us.