Prism Infosec Achieves CBEST Accreditation

Prism Infosec, an established CHECK accredited Penetration Testing company, is pleased to announce that we have achieved accreditation as a Threat-Led Penetration Testing (TLPT) provider under CBEST, the Bank of England’s rigorous regulator-led scheme for improving the cyber resiliency of the UK’s financial services, supported by CREST.

This follows our recent accreditation as a STAR-FS Intelligence-led Penetration Testing (ILPT) provider in November 2024. These accreditations place us in a very exclusive set of UK providers who have demonstrated the skills, tradecraft, methodology, and ability to deliver risk-managed, complex testing engagements to the standard required for trusted testing of the UK’s critical financial sector organisations.

Financial Regulated Threat Led Penetration Testing (TLPT) / Red Teaming

The UK is a market leader when it comes to helping organisations improve their resiliency to cyber security threats. This is in part due to the skills, talent, and capabilities of our mature cybersecurity sector, developed thanks to accreditation and certification schemes introduced originally by the UK CHECK scheme for UK Government Penetration Testing in the mid-2000s. As the UK matured, new schemes covering more adversarial types of threat simulation began to evolve for additional sectors. Today, across the globe, other schemes have been rolled out to emulate what we in the UK have been delivering for financial markets since 2014 in terms of resiliency testing against cyber security threats. This post examines two of the financial-sector-oriented, UK-based frameworks – CBEST and STAR-FS – explaining how they work and how Prism Infosec can support our clients in these engagements.

What is CBEST?

CBEST (originally called the Cyber Security Testing Framework but now simply a title rather than an acronym) provides a framework for the financial regulators (both the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA)) to work with regulated financial firms to evaluate their resilience to a simulated cyber-attack. This enables firms to explore how the people, processes and technology behind their cyber security controls might be disrupted by an attack.

The aim of CBEST is to:

  • test a firm’s defences; 
  • assess its threat intelligence capability; and
  • assess its ability to detect and respond to a range of external attackers as well as people on the inside. 

Firms use the assessment to plan how they can strengthen their resilience.

The simulated attacks used in CBEST are based on current cyber threats. These include the approach a threat actor may take to attack a firm and how they might exploit a firm’s online information. The important thing to take away from CBEST is that it is not a pass or fail assessment. It is simply a tool to help the organisation evaluate and improve its resilience.

How does CBEST work?

A firm is selected for testing under one of the following criteria:

  • The firm/FMI is requested by the regulator to undertake a CBEST assessment as part of the supervisory cycle. The list of those requested to undertake a review is agreed by the PRA and FCA on a regular basis in line with any thematic focus and the supervisory strategy.
  • The firm/FMI has requested to undertake a CBEST as part of its own cyber resilience programme, when agreed in consultation with the regulator.
  • An incident or other event has occurred which has triggered the regulator to request a CBEST in support of post-incident remediation and validation, and consultation/agreement has been sought with the regulator.

CBEST is broken down into phases, each of which contains a number of activities and deliverables:

When the decision to hold a CBEST is made, the firm is notified in writing by the regulator that a CBEST should occur, and the firm has 40 working days to start the process. This occurs in the Initiation Phase of a CBEST. The firm will be required to scope the elements of the test, aligned with the implementation guide, before procuring suitably qualified and accredited Threat Intelligence Service Providers (TISP) and Penetration Testing Service Providers (PTSP) – such as Prism Infosec.

After procurement there is a Threat Intelligence Phase, which helps identify what information threat actors may gain access to, and which threat actors are likely to conduct attacks. This information is shared with the firm, the regulator and the PTSP, and used to develop the scenarios (usually three). A full set of Threat Intelligence reports is the expected output from this phase. After the Penetration Test Phase, the TISP will then conduct a Threat Intelligence Maturity Assessment. This is done after testing is complete to help maintain the secrecy of the testing phase.

The next phase is the Penetration Testing Phase – during this phase each of the scenarios is played out, with suitable risk management controls, to evaluate the firm’s ability to detect and respond to the threat. During this phase, the PTSP works closely with the firm’s control group and regular updates are provided to the regulator on progress. After testing, the PTSP then conducts an assessment of the Detection and Response (D&R) capability of the firm. Following these elements, the PTSP will provide a complete report on the activities they conducted, the vulnerabilities they identified and the firm’s D&R capability.

CBEST then moves into the Closure phase where a remediation plan is created by the firm and discussed with the regulator and debrief activities are carried out between the TISP, PTSP and the regulator.

The CBEST implementation guide can be found here:

CBEST Threat Intelligence-Led Assessments | Bank of England

What is STAR-FS?

Simulated Targeted Attack and Response – Financial Services (STAR-FS)

STAR-FS is a framework for providing Threat Intelligence-led simulated attacks against financial institutions in the UK, overseen by the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA). STAR-FS has less regulatory oversight in comparison to CBEST, but uses the same principles and is intended to be conducted by more organisations than CBEST. STAR-FS also uses the same four-phase model as CBEST.

How does STAR-FS work?

STAR-FS has been designed to replicate the rigorous approach defined within the CBEST framework that has been in use since 2015. However, STAR-FS allows for financial institutions to manage the tests themselves whilst still allowing for regulatory reporting. This means that STAR-FS can be self-initiated by a firm as part of their own cyber programme. Self-initiated STAR-FS testing could be recognised as a supervisory assessment if Regulators are notified of the STAR-FS and have the opportunity to input to the scope, and receive the remediation plan at the end of the assessment.

The Regulator, which includes the relevant Supervisory teams, receives the Regulator Summary of the STAR-FS assessment in order to inform their understanding of the Participant’s current position in terms of cyber security and to be confident that risk mitigation activities are being implemented. The Regulator’s responsibilities include receiving and acting upon any immediate notifications of issues that have been identified that would be relevant to their regulatory function. The Regulator will also review the STAR-FS assessment findings in order to inform sector specific thematic reports. Aside from these stipulations, the regulator is not involved in the delivery or monitoring of STAR-FS engagements and does not usually attend the update calls between the firm and TISP and PTSPs.

Like CBEST, there are also Initiation, Threat Intelligence, Penetration Testing and Closure phases, and accredited TI and PT suppliers must be used. In the Initiation and Closure phases, the firm is considered to have the lead role, whilst in the Threat Intelligence and Penetration Testing phases, the TISP and PTSPs are respectively expected to lead those elements. Again, a STAR-FS implementation guide is available to support firms undergoing testing:

STAR-FS UK Implementation Guide

How are we qualified to deliver Threat Led Penetration Testing?

Prism Infosec are one of a small handful of companies in the UK which have met the criteria mandated by the PRA and FCA to deliver STAR-FS and CBEST engagements as a Penetration Testing Service Provider. That mandate requires that engagements are led by a CCSAM (CREST Certified Simulated Attack Manager) and a CCSAS (CREST Certified Simulated Attack Specialist). Furthermore, the firm must have at least 14,000 hours of penetration testing experience, and the CCSAM and CCSAS must also have 4,000 hours of testing financial institutions. The firm must also have demonstrated its skills through delivery of penetration testing services for financial entities which are willing to act as references, delivered in the months prior to the application.

How do we deliver Threat Led Penetration Testing?

At Prism Infosec we pride ourselves on delivering a risk-managed approach to Threat Led Penetration Testing – ensuring we deliver an end-to-end test that evaluates all of the controls. Our goal is to help our clients understand and evaluate the risks of a cyber breach in a controlled manner which limits the impact to the business but still permits lessons to be learned and controls to be evaluated. Testing under CBEST, STAR-FS or commercial STAR engagements is supposed to help the firm, not hinder it, which is why we ensure our clients are kept fully informed and able to take risk-aware decisions on how best to proceed to get the best results from testing.

Prism Infosec will produce a test plan covering the scenarios, the pre-requisites, the objectives, the rules of the engagement and the contingencies required to support testing. A risk workshop will be held to discuss how risks will be minimised and agree clear communication pathways for the delivery of the engagement.

As each scenario progresses, Prism Infosec’s team will hold daily briefing calls with the client stakeholders to keep them informed, set expectations and answer questions. An out-of-band communications channel will also be set up to ensure that stakeholders and the consultants can contact each other as necessary, should the need arise. At the end of each week, a weekly update pack outlining what has been accomplished, risks identified, contingencies used and vulnerabilities identified will be provided to the stakeholders to ensure that everyone remains fully informed.

Once testing concludes, Prism Infosec would seek to hold a Detection and Risk Assessment (DRA) workshop which comprises two elements – the first a light-touch, GRC-led discussion with senior stakeholders to provide an evaluation of the business against the NIST CSF 2.0 framework; the second a more tactical workshop with members of the defence teams to examine specific elements of the engagement. This second workshop is invaluable for defensive teams as it helps them identify blind spots in their defensive tooling and gain an understanding of the tactics, techniques and procedures (TTPs) used by Prism Infosec’s consultants.

Following this, Prism Infosec will produce a comprehensive report covering how each scenario played out, severity-rated vulnerabilities, a summary of the DRA workshops and information on possible improvements to support and assist the defence teams. We will also produce an executive debriefing pack and deliver debriefings tailored to C-suite executives and regulators, along with a redacted version of the report which can be shared with the regulator, as required under CBEST.

DORA – What Does it Mean for Business?

The Digital Operational Resilience Act (DORA) is a European legislative act that applies from the 17th of January 2025 to all financial entities (except for microenterprises).

It is designed to strengthen European financial entities against cyber-attacks and ICT (Information and Communication Technology) disruptions. The full original text (in English) can be found here: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32022R2554&from=FR

and a second batch of documents which updated those articles were published here:

https://www.eiopa.europa.eu/publications/second-batch-policy-products-under-dora_en

Whilst DORA was written to focus on financial entities, it also applies to some entities typically excluded from financial regulations. For example, third-party service providers that supply financial firms with ICT systems and services – like cloud service providers and datacentres – must comply with DORA requirements, as they service the financial industry and therefore cannot be excluded. DORA also covers firms that provide critical third-party information services, such as credit rating services and data analytics providers.

DORA Requirements

DORA is composed of five pillars. Each pillar lays out requirements and expectations for different aspects of resilience.

Additionally, DORA explicitly defines the organisational and governance responsibilities for DORA compliance within an organisation.

  1. Risk Management – Chapter 2, Articles 5 to 16
  2. Incident Reporting – Chapter 3, Articles 17 to 23
  3. Digital Operational Resiliency Testing – Chapter 4, Articles 24 to 27
  4. ICT Third Party Risk – Chapter 5, Articles 28 to 44
  5. Information & Intelligence Sharing – Chapter 6, Article 45

Governance

The financial entity’s management body is responsible for establishing the organisation and governance structure to effectively manage ICT risk. DORA outlines a set of responsibilities and requirements that the management body must fulfil, one of which is for them to enhance and sustain their understanding of ICT risk.

Risk Management

This pillar underlines the need for financial entities to adopt a proactive approach to risk management.

It requires financial entities to precisely identify, assess and mitigate ICT-related risks with robust frameworks in place to continuously monitor key digital systems, data, and connections.

Incident Reporting

This pillar places a strong emphasis on standardising the process of Incident Reporting within the European Union’s financial sector. Under DORA, financial entities are required to implement management systems that enable them to monitor, describe, and report any significant ICT-based incidents to relevant authorities.

It is important to note that the reporting framework must include both internal and external reporting mechanisms:

Internal reporting refers to quickly identifying incidents and communicating them to all important internal stakeholders. Their impact must then be evaluated, and steps put into action for mitigating damage.

External incident reporting refers to alerting regulatory authorities in case of a disruptive incident. For cases such as a data breach, this may also include the affected customers who must be notified if their sensitive financial information has been compromised.

Digital Operational Resiliency Testing

DORA insists that financial institutions periodically assess their ICT risk management frameworks through digital operational resilience testing. Testing must be conducted by independent parties, either external or internal; if internal testers are used, sufficient resources must be allocated and conflicts of interest avoided in the design and execution of the tests.

These tests can include:

  • vulnerability assessments and scans,
  • open-source analyses,
  • network security assessments,
  • gap analyses,
  • physical security reviews,
  • questionnaires and scanning software solutions,
  • source code reviews where feasible,
  • scenario-based tests,
  • compatibility testing,
  • performance testing,
  • end-to-end testing, and
  • penetration testing.

Basic tests like vulnerability assessments and scenario-based tests must be run once a year.

Financial entities, however, must also undergo threat-led penetration testing (TLPT) at least every three years, and it was confirmed in July 2024 that TIBER-EU framework tests will satisfy this requirement provided they incorporate any additional DORA TLPT requirements. The three-year cycle may be relaxed or shortened at the discretion of the designated competent authority.

Each threat-led penetration test shall cover several or all critical or important functions of a financial entity, and shall be performed on live production systems supporting such functions. Critical third-party service providers are included in the scope of this and are expected to participate; however, where participation is reasonably expected to have an adverse impact on the quality of service delivery for customers outside of the financial entity, they can be excluded, but only if they enter into a contractual agreement permitting an external tester to conduct a pooled assessment under the direction of a designated financial entity.

TLPT also permits the use of internal testers (if the financial entity is not a significant credit institution) or external testers; however, an external provider must be used at least every three tests. Furthermore, if internal testers are used, the threat intelligence provider must be external to the financial entity.

Beyond that, testers must:

  • Be of high suitability and reputation;
  • Possess technical and organisational capabilities and specific expertise in threat intelligence, penetration testing and red team testing;
  • Be certified by an accreditation body in a Member State or adhere to formal codes of conduct and ethical frameworks;
  • Provide independent assurance or an audit report in relation to TLPT risk management;
  • Be insured, including against risks of misconduct and negligence.

ICT Third Party Risk

This pillar requires financial organisations to thoroughly conduct due diligence on ICT third parties.

It mandates that financial entities maintain strong contracts with their third-party service providers. They must ensure that their partners adopt high standards of digital security and operational resilience. Furthermore, certain ICT service providers can be designated as “critical” for financial entities. These will have even more obligations (further info below).

Article 30 of DORA contains an embedded list of contract requirements that financial services firms will want to implement for ICT service providers, but the bare minimum is this:

  • A clear and complete description of all functions and ICT services to be provided by ICT third parties.
  • The locations (regions or countries) where the contracted/subcontracted functions are to be provided, processed and stored, and a requirement to notify in advance if that changes.
  • Provisions for the availability, authenticity, integrity and confidentiality of data.
  • Provisions ensuring access, recovery and return, in an easily accessible format, of data processed by the financial entity in the event of the third party’s insolvency.
  • An obligation on the third-party provider to support the financial entity in an ICT incident related to the service provided, at no additional cost or at a cost determined ex-ante.
  • An obligation on the third-party provider to fully cooperate with competent authorities and representatives of the financial entity.
  • Termination rights and minimum notice periods for contractual arrangements.
  • Conditions for the participation of third-party providers in the financial entity’s ICT security awareness programmes.

Financial entities are also expected to document any risks observed with their third-party ICT providers. Importantly, DORA highlights the need for financial organisations to implement a multi-vendor ICT third-party risk strategy.

Critical ICT third-party service providers will be subject to direct oversight from relevant ESAs (European Supervisory Authorities). The European Commission is still developing the criteria for determining which providers are critical. However, at the time of this article, under existing law it is defined as: “a function whose disruption would materially impair the financial performance of a financial entity, or the soundness or continuity of its services and activities, or whose discontinued, defective or failed performance would materially impair the continuing compliance of a financial entity with the conditions and obligations of its authorisation, or with its other obligations under applicable financial services legislation”. Those that meet the standards will have one of the ESAs assigned as a lead overseer. In addition to enforcing DORA requirements on critical providers, lead overseers will be empowered to forbid providers from entering contracts with financial firms or other ICT providers that do not comply with the DORA.

Information & Intelligence Sharing

This pillar promotes a collaborative approach to managing cyber threats, ensuring that financial entities can collectively enhance their defences and respond more effectively to incidents.

Supporting Business

Whilst DORA is written with European financial entities in mind, organisations outside the EU that provide services to EU financial services firms must also comply. The designated European authorities and regulators will ultimately oversee the testing and will guide the entities being tested.

At Prism Infosec we are a CREST accredited company, which means we can deliver Threat-Led and Scenario-Based Penetration Testing services at the levels expected for DORA compliance. Furthermore, we offer GRC, Incident Response, Vulnerability Scanning and Penetration Testing services which align with many of the requirements across DORA’s five pillars.

If you want to know more about how we can help with compliance, then please reach out and contact us.

Capitalising on the Investment of a Red Team Engagement

Cybersecurity red teams are designed to evaluate an organisation’s ability to detect and respond to cybersecurity threats. They are modelled on real-life breaches, giving an organisation an opportunity to determine if it has the resiliency to withstand a similar breach. No two breaches are entirely alike, because each organisation’s infrastructure is the product of its own organic and planned growth. Environments are often built around their initial purpose before being subjected to acquisitions and evolutions driven by new requirements. As such, the first stage of every red team – and every real-world breach – is understanding the environment well enough to pick out the critical components which can springboard the next element of the breach. Hopefully, somewhere along that route detections will occur, and the organisation’s security team can stress test their ability to respond and mitigate the threat. Regardless of outcome, however, too often once the scenario is done the red team hands in their report documenting what they were asked to do, how it went, and what recommendations would make the organisation more resilient – but is that enough?

Detection and Response assessments are part of the methodology for the Bank of England and FCA’s CBEST regulated intelligence-led penetration testing (red teaming). However, their interpretation of it is more aligned with understanding response times and capabilities. At LRQA (formerly Nettitude), I learned the value of a more attuned Detection and Response Assessment, a lesson I brought with me and evolved at Prism Infosec.

At its heart, the Detection and Response Assessment takes the output of the red team and turns it on its head, examining the engagement through the eyes of the defender. We identify at least one instance of each of the critical steps of a breach – delivery, exploitation, discovery, privilege escalation, lateral movement, and action on objectives. For each of those, we look to identify whether the defenders received any telemetry. If they did, we look to see if any of that telemetry triggered a rule in their security products. If it triggered a rule, we look to see what sort of alert it generated. If an alert was generated, we then look to see what happened with it – was a response recorded? If a response was recorded, what did the team do about it? Was it closed as a false positive, or did it lead to the containment of the red team?
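The chain of questions above can be thought of as a funnel: each red team action either progresses to the next defensive stage or stops, and wherever it stops marks either a gap or a working control. A minimal sketch of that idea is below – the field names and sample data are hypothetical, purely for illustration, and do not represent Prism Infosec's actual tooling or reporting format.

```python
# Illustrative funnel over red-team actions. Each stage maps to one of the
# "so what" questions: did telemetry exist, did it trigger a rule, did the
# rule raise an alert, was a response recorded, was the red team contained?
FUNNEL = ["telemetry", "rule_triggered", "alert_raised",
          "response_recorded", "contained"]

def furthest_stage(action: dict) -> str:
    """Return the last funnel stage this red-team action reached."""
    reached = "none"
    for stage in FUNNEL:
        if action.get(stage):
            reached = stage
        else:
            break  # the funnel stops at the first missing stage
    return reached

# One record per critical step of the breach (hypothetical sample data).
actions = [
    {"step": "delivery", "telemetry": True, "rule_triggered": True,
     "alert_raised": True, "response_recorded": False},
    {"step": "lateral_movement", "telemetry": True, "rule_triggered": False},
    {"step": "action_on_objectives", "telemetry": False},
]

for a in actions:
    print(f'{a["step"]}: funnel stopped at "{furthest_stage(a)}"')
```

In this toy data, delivery was alerted on but never responded to, lateral movement produced telemetry that no rule matched, and the final objective generated no telemetry at all – each stopping point tells the defenders exactly where to invest next.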

Five “so what” questions, at the end of which we have either identified a gap in the security system/process or identified good, strong controls and behaviours. There is more to it than that of course, but from a technical delivery point of view, this is what will drive benefits for the organisation. A red team should be able to highlight the good behaviours as well as the ones that still require work, and a good Detection and Response Assessment not only results in the organisation validating its controls but also understanding why defences didn’t work as well as they should. This allows the red team to present the report with an important foil – how the organisation responded to the engagement. It shows the other side of the coin, in a report that will be circulated with the engagement information at a senior level, and can set the entire engagement in stark relief.

The results can be seen, digested and understood by C-suite executives. There is no point in having a red team and reporting to the board that, because of poor credential hygiene or outdated software, the organisation was breached and remains at risk. The board already knows that security is expensive and that they are at risk, but if a red team can also demonstrate the benefits, or direct security funding more efficiently by helping the organisation understand the value of that investment, then it becomes a much more powerful instrument of change. What’s even better is that it can become a measurable test – we can see how that investment improves things over time by comparing results between engagements and using that to tweak or adjust.

One final benefit is that security professionals on both sides of the divide (defenders and attackers) gain substantial amounts of knowledge from such assessments – both sides lift the curtain and explain the techniques, the motivations and the limitations of the tooling and methodology. As a result, both sides become much more effective, build greater respect, and are more willing to collaborate on future projects when not under direct test.

Next time your company is considering a red team, don’t just look at how long it will take to deliver or the cost, but also consider the return you are getting on that investment in the form of what will be delivered to your board. Please feel free to contact us at Prism Infosec if you would like to know more.

Our Red Team Services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Prism Infosec Achieves STAR-FS Accreditation

We’re thrilled to announce that Prism Infosec is now an accredited provider of STAR-FS (Simulated Targeted Attack & Response assessments for Financial Services), the threat-led penetration testing and red teaming framework launched by the Bank of England, PRA, and FCA this year for the UK finance sector.

The STAR-FS scheme represents a significant step forward in enhancing cyber resilience for financial institutions, providing an innovative approach to identifying and mitigating cyber risks through assessments that simulate real-world threats.

STAR-FS assessments offer:

– Enhanced Resilience: By assessing firms’ capabilities to protect, detect, and respond to sophisticated cyber threats.

– Firm-Led Model: Allowing organisations to proactively identify vulnerabilities within systems, processes, and people.

– Independent Assurance: Beyond the scope of traditional penetration testing, STAR-FS offers regulator-recognised assessments.

– Broader Accessibility: Making this assessment available to more financial institutions, enabling wider adoption and learning across the industry.

Prism Infosec is committed to helping financial institutions strengthen their cyber defences and meet regulatory expectations. Contact us to learn how STAR-FS can enhance your organisation’s resilience to cyber threats and enable a proactive approach to security.

Our Red Teaming Service:

Red teaming identifies organisational cyber security weaknesses.

Gone Phishing

Social engineering is extremely commonplace; we all experience it every day, and have done from an extremely early age. The most common social engineering we are exposed to is advertising, which sells the desire for goods or services using a variety of tactics designed to entice us. This is so socially acceptable that we barely even notice it, let alone comment on it, and it’s extremely successful. In cybersecurity, we view social engineering in a more sinister light. Here it is used to achieve specific goals that further a compromise of the organisation. Social engineering can take the form of physical interactions, but more often it is digital, expressing itself in the forms of phishing (emails), vishing (voice calls), and smishing (IM/SMS text messages). In this blog we’ll look at how we run each of these sorts of campaigns to model real-world threat actors.

Before we look at the individual techniques, it’s worth focusing on the target for a second. Often the victims of social engineering in cybersecurity are not selected for who they are, but rather for the access or role they currently hold in the organisation. The fact of the matter is that anyone can be a victim of social engineering; all it takes is the right lure at the wrong time to turn a user into a victim. We spend a significant amount of time training staff on social engineering; however, people are only a single (albeit vital) thread in the tapestry of what makes a social engineering attack successful. Users should never be the single control preventing an organisation from being hacked – they should not even be the first or final control. There need to be technical controls supporting users: preventing attacks from reaching them, flagging suspicious behaviours to them, and protecting the environment if a user does fall victim. Regardless of the outcome, a user should have confidence in their organisation to support them in reporting an incident and responding appropriately.

Phishing

Phishing often presents in one of two ways – mass, or spear. In mass phishing the attacker will send a cookie-cutter email to as many targets as possible. They have a low expectation of success against any one target, but instead are trading on probability to achieve any traction. Consider: if a mass phishing campaign has a 0.1% chance of turning any one recipient into a victim, then sending 10,000 emails will still result in around 10 victims. These attacks are cheap to set up, cheap to run, and even at such low return rates can still result in a profit. Fortunately, automated tools are particularly good at identifying these sorts of mass emails and classifying them as spam.
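The economics above are simple expected-value arithmetic, and they explain why mass phishing persists despite tiny success rates. The sketch below works through the numbers from the paragraph; the per-email cost figure is a purely hypothetical illustration, not real campaign data.

```python
# Back-of-the-envelope economics of a mass phishing campaign.
emails_sent = 10_000
success_rate = 0.001       # 0.1% chance any one recipient becomes a victim

# Expected number of victims is just the product of the two.
expected_victims = emails_sent * success_rate
print(expected_victims)    # 10.0

# Even with a (hypothetical) per-email sending cost, the campaign is
# profitable whenever the average return per victim exceeds this figure.
cost_per_email = 0.0001    # illustrative assumption
break_even_return_per_victim = (emails_sent * cost_per_email) / expected_victims
print(break_even_return_per_victim)
```

The asymmetry is the point: the attacker’s total outlay scales with emails sent, but at these rates each victim only needs to yield pennies for the campaign to break even.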

The alternative approach is a “spear phish” attack. This is when there are very few victims, but each email is carefully crafted to maximise the chance the victim will respond and follow through. They require research into the victim to identify approaches and likely contacts they will respond to. These attacks are much harder to spot, much more likely to succeed, but cost significantly more.

Vishing

Vishing is when a threat actor calls their victim. They will often be working from a script, and possibly from some seed data, to achieve their specific goal.

Vishing calls will usually employ impersonation – an attempt by the threat actor to pretend to be an authority or an individual their victim is likely to interact with. These sorts of attacks have become more pervasive in recent years thanks to generative AI advances which permit real-time voice imitation. Supporting the impersonation will often be additional tools such as spoofed caller ID.

Tactics employed in vishing calls usually make use of fear or greed, and an attempt to create a sense of urgency. Often the approach will have some sort of partial information (sometimes called seed information) which helps the threat actor drive the initial conversation.

Regardless of the approach taken, the threat actor will usually be seeking either to obtain sensitive information or to persuade the victim to grant remote access to a device. The remote access might be obvious, such as using Windows Quick Assist or installing a tool like ScreenConnect, AnyDesk, TeamViewer, or RustDesk, or more subtle, such as getting the victim to download and open a document on the attacker's behalf.

Smishing

Smishing is when a threat actor contacts their victim using instant messaging or the Short Message Service (SMS).

Smishing attacks are becoming more common, especially with the business move towards tools like Teams and Slack. For Microsoft Teams this can be particularly insidious, as the threat actor can create a throwaway Microsoft Azure account to obtain an “onmicrosoft” domain. They can then rename their account to look more legitimate before making a connection to their target. This massively helps sell the impersonation and makes targets far more likely to click links shared with them, because a significant number of defences are bypassed through this attack vector.

Mobile (SMS) smishing is also a threat; however, like mass phishing scams, these campaigns are much harder to succeed with, and again rely on sheer weight of numbers to produce a small number of successes.

Protecting yourself

The greatest user defence against phishing in all its forms is scepticism. However, it is impossible to be sceptical of every email, phone call, and instant message when your role requires interaction with other people, especially strangers. There are some steps which can help. For a delayed approach, such as phishing or smishing, the first thing to do is not respond immediately. The last thing a threat actor employing these tactics wants is for you to take your time: it removes the sense of urgency and gives you breathing space to counter the fear or greed being employed as part of the lure. Time also permits you to verify details – call your colleagues, or the organisation the sender claims to represent, to confirm their identity. For interactive approaches, such as vishing, if the call is unexpected then again take time to verify the caller. Even if they pass those checks or sound familiar, trust your instincts: if what a trusted source is asking you to do is unusual, even if it sounds reasonable, telling them you will call them back and asking for a number you can check will make a massive difference. A genuine caller will accept and provide the information, whilst an attacker will try to keep you on the line and change your mind, adding more pressure to the call. In such an event, hanging up and walking away is often the best course, and gives you space to gather your thoughts.

Security controls can help by preventing some of these approaches from reaching users, and by enabling suitable responses and protection from harm should those defences and scepticism fail. However, they are not a panacea, and need to be calibrated and exercised regularly to ensure they remain effective.

Final Thoughts

Crafting a phishing attack ethically is a challenge for cybersecurity companies. We tend to avoid lures which trade on fear or empty promises to achieve the goal of the engagement. For example, cybersecurity companies were careful to avoid using promises of COVID-19 vaccines in phishing lures during the pandemic, as exploiting people's fear of getting sick, and their hope for a prophylactic, during a time of global desperation and stress would have been unethical. Likewise, many cybersecurity companies will avoid topics which could have legal implications for client organisations, such as promised changes to salary, pensions, holidays, or working hours. A fine line does need to be trodden, however, as real-world cybercriminals will capitalise on exactly these sorts of topics to achieve their goals.

Ultimately the goal of any phishing test will be either to test staff training (in which case it's best if the results are anonymised, as the focus should be on how well the training was applied, not on calling out individuals), or to achieve a foothold for a red team as part of a threat scenario. In the former, we are testing just the training the user has received and how they put it to use protecting themselves and the organisation. In the latter, we are evaluating the technical controls AND the user. In red teaming we will also sometimes use an “assisted click”. This is when we don't want to test the user, just the technical controls: a briefed user simply follows the phish instructions if they receive them, no matter what they ask, but otherwise acts as if they had been successfully phished should the attack be detected and responded to.

Prism Infosec has a significant amount of experience in conducting social engineering engagements which includes phishing, smishing and vishing. If you would like to know more, then please feel free to contact us to discuss how we can help evaluate your defences and training.

Find out more here: Social engineering simulation mimics attacks on your organisation

Red Team Scenarios – Modelling the Threats

Introduction

Yesterday organisations were under cyber-attack, today even more organisations are under cyber-attack, and tomorrow this number will increase again. It has been increasing for years, and the trend will not reverse. Our world is getting smaller, threat actors are becoming more emboldened, and our defences continue to be tested. Any organisation can become a victim of a cyber security threat actor; you just need to have something they want – whether that is money, information, or a political stance or activity inimical to their ideology. Cybersecurity defences and security programmes will help your organisation be prepared for these threats, but like all defences, they need to be tested; staff need to understand how to use them, when they should be invoked, and what to do when a breach happens.

Cybersecurity red teaming is about testing those defences. Security professionals take on the role of a threat actor, and using a scenario, and appropriate tooling, conduct a real-world attack on your organisation to simulate the threat.

Scenarios

Scenarios form the heart of a red team service: they are defined by the objective, the threat actor, and the attack vector. Together these determine which defences, playbooks, and policies are going to be tested.

Scenarios are developed either from threat intelligence – i.e. the specific modus operandi of threat actors who are likely to target your organisation – or from a question the organisation wants answered in order to understand its security capabilities.

Regardless of the approach, all scenarios need to be realistic but also be delivered in a safe, secure, and above all, risk managed manner.
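To make the three elements concrete, a scenario could be captured in a simple structure. This is an illustrative sketch only; the class and example values are hypothetical, not a Prism Infosec tool:

```python
# Illustrative sketch: a red team scenario is the combination of an
# objective, a threat actor, and an attack vector. All names and
# example values below are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    objective: str      # what the simulated attacker is trying to reach
    threat_actor: str   # who is modelled as attacking (e.g. from threat intel)
    attack_vector: str  # how the attack starts

ransomware_sim = Scenario(
    objective="Administrative control of backup infrastructure",
    threat_actor="Medium-sophistication cybercriminal group",
    attack_vector="Spear phishing against finance staff",
)
print(ransomware_sim.attack_vector)  # → Spear phishing against finance staff
```

Fixing all three fields before delivery is what lets the planning phase risk-manage the engagement: each field maps to the defences, playbooks, and policies that will actually be exercised.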

Objectives

Most red team engagements start by defining the objective. This would be a system, privilege, or data which, if breached, would result in a specific outcome that a threat actor is seeking to achieve. Each scenario should have a primary target whose compromise would ultimately impact the organisation's finances (through theft or disruption such as ransomware), data (theft of Personally Identifiable Information (PII) or private research), or reputation (causing embarrassment or loss of trust through breach of services or privacy). Secondary and tertiary objectives can be defined, but these will often be milestones along the way to accomplishing the primary.

Objectives should be defined in terms of impacting Confidentiality (can threat actors read the data), Integrity (can threat actors change the data), or Availability (can threat actors deny legitimate access to the data). This determines the level of access the red team will seek to achieve to accomplish their goal.

Threat Actors 

Once an objective is chosen, we then need to understand who will attack it. This might be driven by threat intelligence, which will indicate who is likely to attack an organisation, or, for a more open test, we can define it by the sophistication level of the threat actor.

Not all threat actors are equal in terms of skill, capability, motivation, and financial backing. We often refer to this collection of attributes as the threat actor’s sophistication. Different threat actors also favour different attack vectors, and if the scenario is derived from threat intelligence, this will inform how that should be manifested.

High Sophistication

The most mature threat actors are usually referred to as nation state threat actors, though we have seen some cybercriminal gangs start to touch elements of that space. They are extremely well resourced, often with not only capability development teams but also linguists, financial networks, and a sizeable number of operators able to deliver 24/7 attacks. They will often have access to private tooling that is likely to evade most security products, and they are usually motivated by politics (causing political embarrassment to rivals, theft of data to uplift their country's research, extreme financial theft, or degrading services to cause real-world impact and hardship). Examples in this group include APT28, APT38, and WIZARD SPIDER.

Medium Sophistication

In the mid-tier maturity range we have a number of cybercriminal and corporate espionage threat actors. These will often have significant financial backing – able to afford some custom (albeit commercial) tooling obtained either legally or illegally. They may work solo, but will often be supported by a small team who can operate 24/7, though they will limit themselves to specific working patterns where possible. They may have some custom-written capabilities, but these will often be tweaked versions of open-source tools. They are often motivated by financial concerns – whether profiting from stolen research or directly extracting funds from their victim. Occasionally they are instead motivated by some form of activism, using their skills to target organisations which represent or deliver a service for a cause they disagree with. In this case they will often seek either to use the attack as a platform to voice their politics or to force the organisation to change its behaviour to one which aligns better with their beliefs. Examples of threat actors in this tier have included FIN13 and LAPSUS$.

Low Sophistication

At the lower tier of the maturity range, we are often faced with single threat actors rather than a team; insiders are often grouped into this category. Threat actors in this category often make use of open-source tooling, which may have light customisation depending on the skill set of the individual. They will often work fixed time zones based on their victim, and will often only have a single target at a time, or ever. Their motivation can be financial, but can also be personal belief or spite if they believe they have been wronged. Despite being considered the lowest sophistication of threat actor, they should never be underestimated – some of the most impactful cybersecurity breaches have been conducted by threat actors we would normally place in this category, such as Edward Snowden or Chelsea Manning.

Attack Vector

Finally, now that we know what will be attacked and who will be attacking, we need to define how the attack will start. Again, threat intelligence gathered on different threat actors will show their preferences for how they start an attack, and if the objective is realism, that should be the template. However, if we are running a more open test we can mix things up and use an alternative attack vector. This is not to say that specific threat actors won't change their attack vector, but they do have favourites.

Keep in mind, the attack vector determines which security boundary will be the initial focus of the attack, and they can be grouped into the following categories:

External (Direct External Attackers)

  • Digital Social Engineering (phishing/vishing/smishing)
  • Perimeter Breach (zero days)
  • Physical (geographical location breach leading to digital foothold)

Supply Chain (Indirect External Attackers)

  • Software compromise (backdoored/malicious software updates from trusted vendor)
  • Trusted link compromise (MSP access into organisation)
  • Hardware compromise (unauthorised modified device)

Insider (both Direct and Indirect Internal Attackers)

  • Willing Malicious Activity
  • Unwilling Sold/stolen access
  • Physical compromise

Each of these categories not only contains different attack vectors, but will often result in testing different security boundaries and controls. Whilst a phishing attack will likely result in a foothold on a user's desktop – the natural starting position for an insider conducting willing or unwilling attacks – the two test different things: an insider will not necessarily need to deploy tooling which might be detected, and will already hold passwords to potentially multiple systems in order to do their job. Understanding this is the first step in determining how you want to test your security.

Pulling it together

Once all these elements have been identified and defined, the scenario can move forward to the planning phase before delivery. This is where any pre-requisites for delivering the scenario, any scenario milestones, any contingencies to help simulate top-tier threat actors, and any tooling preparations can be completed to ensure the scenario can start. Keep in mind that whilst the scenario objective might be to compromise a system of note, the true purpose of the engagement is to determine whether the security teams, tools, and procedures can identify and respond to the threat. This can only be measured and understood if the security teams have no clue when or how they will be tested, as real-world threats will not give any notice either.

Even if the red team accomplish the goals, the scenario will still help security teams understand the gaps in their skills, tools, and policies so that they can react better in the future. Consider contacting Prism Infosec if you would like your security teams to reap these benefits too.

Our Red Team Services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Red Teams don’t go out of their way to get caught (except when they do)

Introduction

In testing an organisation, a red team will be seeking to emulate a threat actor by achieving a specific goal – whether that is gaining administrative control of the network and proving they can control backups (akin to how many ransomware operators work), proving access to financial systems, or gaining access to sensitive data repositories. They will employ tactics, tools, and capabilities aligned to the sophistication level of the threat actor they are pretending to be. The question always asked about red teams is “can the bad guys get to system X”, when it really should be “can we spot the bad guys before they get to system X AND do something effective about it”. The unfortunate answer is that with enough time and effort, the bad guys will always get to X. What we can do in red teaming is tell you how the bad guys will get to X, and help you understand whether you can spot them trying.

Red Team Outcomes

In assessing an organisation, engagements often go one of two ways. The first (and unfortunately more common) is that the red team operators achieve the objective of the attack – sometimes entirely without detection, and sometimes with a detection but unsuccessful containment. The other is when the team is successfully detected (usually early on) and containment and eradication are not only successful, but extremely effective.

So What?

In both cases, we have failed to answer some of the exam questions, namely the level of visibility the security teams have across the network.

In the first instance, we don't know why they failed to see us, why they failed to contain us, or why they didn't spot any of the myriad other activities we conducted. We need to understand whether the issue is one of process or effort: is the security team drinking from a firehose of alerts and we were there but lost in the noise; did the security team see nothing because they lack visibility in the network; or is there telemetry but no alerts for the sophistication level of the attacker's capabilities and tactics? The red team can help answer some of these questions by moving the engagement into “Detection Threshold Testing”, where the sophistication level of the Tactics, Techniques and Procedures is gradually lowered and the attack becomes noisier until a detection occurs and a response is observed. If the red team get to the point of dropping disabled, un-obfuscated copies of known-bad tools on domain controllers which are monitored by security tools and there are still no detections, then the organisation needs to know, and work out why. This is when a Detection and Response Assessment (DRA) Workshop can add real value in understanding the root causes of the issues.
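The stepping-down logic of detection threshold testing can be sketched as a simple loop. The tier names and the detection callback below are illustrative placeholders, not real tradecraft or tooling:

```python
# Sketch of the detection-threshold loop: tradecraft sophistication is
# stepped down (getting noisier) until the blue team registers a
# detection. Tier names and run_technique are illustrative placeholders.
from typing import Optional

NOISE_TIERS = [
    "custom implant",                # quietest: bespoke, evasive tooling
    "tweaked open-source tool",
    "off-the-shelf commercial tool",
    "known-bad tool, unobfuscated",  # noisiest: should always be caught
]

def detection_threshold(run_technique) -> Optional[str]:
    """Return the quietest tier the defenders detected, or None."""
    for tier in NOISE_TIERS:
        if run_technique(tier):  # True means the blue team detected it
            return tier
    return None  # no detections even at maximum noise: investigate why

# Simulated defender that only spots the two noisiest tiers.
caught_at = detection_threshold(
    lambda tier: tier in {"off-the-shelf commercial tool",
                          "known-bad tool, unobfuscated"})
print(caught_at)  # → off-the-shelf commercial tool
```

The interesting outcome is not just *whether* a detection fires, but *where* on the noise scale it first fires; a `None` result (no detection even at maximum noise) is exactly the case the text flags for a DRA Workshop.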

In the second instance we have observed a great detection and response capability, but we don't know the depth of the detection capabilities – i.e. if the red team changed tactics, or came in elsewhere, would the security team achieve a similar result? We can sometimes answer this with additional scenarios which model different threat actors; however, multi-scenario red teams can be costly, and what happens if the team is caught early in every scenario? I prefer to adopt an approach of trust but verify in these circumstances by moving the engagement into a “Declared Red Team”. Here, the security teams are congratulated on their skills but informed that the exercise will continue. They are told which host the red team is starting on, and they are to allow it to remain on the network, uncontained but monitored, whilst the red team continue testing. They will not be told the red team's objective or the date the test will end; they will, however, be informed when testing is concluded. If they detect suspicious activity elsewhere in the network during this period, they can deconflict the activity with a representative of the test control group. If it is the red team, it will be confirmed, and the security team will be asked to record what their next steps would have been. If it isn't, the security team is authorised to take full steps to mitigate the incident; any activity the red team fails to confirm will always be treated as malicious activity unrelated to the test. Once testing is concluded (the objective is achieved or time runs out), the security team is informed, and the test can move on to a Detection and Response Assessment (DRA) Workshop.

Next Steps

In both of these instances, you will have noted that the next step is a Detection and Response Assessment (DRA) Workshop. DRAs were introduced by the Bank of England's CBEST testing framework, LRQA (formerly Nettitude) refined the idea, and Prism Infosec has adapted it by fully integrating the NIST Cybersecurity Framework 2.0. At its heart, it is a chance to understand what happened, and what the security team did about it. The red team should provide the client security team with the main TTP events of the engagement – initial access, discovery which led to further compromise, privilege escalation, lateral movement, and actions on objectives. This should include timestamps and the locations/accounts abused to achieve each step. The security team should come equipped with logs, alerts, and playbooks to discuss what they saw, what they did about it, and what their response should have been. Where possible, this response should also have been exercised during the engagement so the red team can evaluate its effectiveness.

The output of this workshop should be a series of observations about areas of improvement for the organisation’s security teams, and areas of effective behaviours and capabilities. These observations need to be included in the red team report – and should be presented in the executive summary to help senior stakeholders understand the value and opportunities to improve their security capabilities, and why it matters.

Conclusion

Red teams help identify attack paths and let you know whether the bad guys can get to their targets, but more importantly they can and should help organisations understand how effective they are at detecting and responding to the threat before that happens. Red teams need to be caught to help organisations understand their limits so they can push them, demonstrate good capabilities to senior stakeholders, and identify opportunities for improvement. An effective red team exercise will not only engineer being caught into the test plan, but will ensure that when it happens, the test still adds value to the organisation.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

To you it’s a Black Swan, to me it’s a Tuesday…

Cybersecurity is a discipline with many moving parts. At its core, though, it is a tool to help organisations identify, protect, detect, respond, recover, and then adapt – through threat modelling – to the ever-evolving risks and threats that new technologies bring and the capabilities that threat actors employ. Sometimes these threats are minor, causing annoyance but no real damage; sometimes they are existential and unpredictable. These are known as Black Swan events.

They represent threats or attacks that fall outside the boundaries of standard threat models, often blindsiding organisations despite rigorous security practices.

In this post, we’ll explore the relationship between cybersecurity threat modelling and Black Swan events, and how to better prepare for the unexpected.

What Are Black Swan Events?

The term Black Swan was popularized by the statistician and risk analyst Nassim Nicholas Taleb. He described Black Swan events as:

  • Highly improbable: These events are beyond the scope of regular expectations, and no prior event or data hints at their occurrence.
  • Extreme impact: When they do happen, Black Swan events have widespread, often catastrophic, consequences.
  • Retrospective rationalization: After these events occur, people tend to rationalize them as being predictable in hindsight, even though they were not foreseen at the time.

In cybersecurity, Black Swan events can be seen as threats or attacks that emerge suddenly from unknown or neglected vectors—such as nation-state actors deploying novel zero-day exploits, or a completely new class of vulnerabilities being discovered in widely used software.

The Limits of Traditional Threat Modelling

Threat modelling is a systematic approach to identifying security risks within a system, application, or network.

It typically involves:

  • Identifying assets: What needs protection (e.g., data, services, infrastructure)?
  • Defining threats: What could go wrong? Common threats include malware, phishing, denial of service (DoS) attacks, and insider threats.
  • Assessing vulnerabilities: How could the threats exploit system weaknesses?
  • Evaluating potential impact: How severe would the consequences of an attack be?
  • Mitigating risks: What steps can be taken to reduce the likelihood and impact of threats?
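The final two steps above – evaluating impact and prioritising mitigation – can be sketched as a toy risk register. All likelihood and impact figures below are invented for illustration:

```python
# A toy risk register illustrating the final threat-modelling steps:
# score each threat by likelihood x impact, then rank so mitigation
# effort goes to the highest scores first. All figures are invented.
threats = [
    {"name": "Phishing",          "likelihood": 4, "impact": 3},
    {"name": "Insider threat",    "likelihood": 2, "impact": 4},
    {"name": "Denial of service", "likelihood": 3, "impact": 2},
]

for t in threats:
    t["score"] = t["likelihood"] * t["impact"]

# Highest-scoring threats get mitigation priority.
for t in sorted(threats, key=lambda t: t["score"], reverse=True):
    print(t["name"], t["score"])
```

The weakness this exposes is exactly the Black Swan problem discussed next: a threat nobody has listed gets a likelihood of zero, and so never appears in the ranking at all.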

While highly effective for many threats, traditional threat modelling is largely based on past experience and known attack methods. It relies on patterns, data, and risk profiles developed from historical analysis. However, Black Swan events, by their nature, evade these models because they represent unknown unknowns—threats that have never been seen before or that arise in ways no one could predict. This is where organisations often encounter significant challenges. Despite extensive security efforts, unknown vulnerabilities, unexpected technological changes, or even human error can expose them to unforeseen, high-impact cyber events.

Real-World Examples of Cybersecurity Black Swan Events

1. The SolarWinds Hack (2020)

The SolarWinds cyberattack, attributed to a nation-state actor, was one of the most devastating and unexpected breaches in recent history. Attackers compromised the software supply chain by embedding malicious code into SolarWinds’ Orion software updates, which were then distributed to thousands of organizations, including U.S. government agencies and Fortune 500 companies.

The sophistication of the attack and the sheer scale of its impact make it a classic Black Swan event. It was a novel approach to cyber espionage, and its implications were far-reaching, affecting critical systems and sensitive data across industries.

2. NotPetya (2017)

The Petya ransomware that launched in 2016 was a standard ransomware tool – designed to encrypt, demand payment, and then decrypt. NotPetya, however, was something different. It leveraged two changes. The first was that the encryption could not be reversed – once data was encrypted, it could not be recovered, making it a wiper rather than ransomware. The second was the ability to leverage the EternalBlue exploit, much like the WannaCry ransomware that attacked devices worldwide earlier that year – this allowed it to spread rapidly around unpatched Microsoft Windows networks.

NotPetya is believed to have infected victims through a compromised piece of Ukrainian tax software called M.E.Doc. This software was extremely widespread throughout Ukrainian businesses, and investigators found that a backdoor in its update system had been present for at least six weeks before NotPetya's outbreak.

At the time of the outbreak, Russia was still in the throes of conflict with the Ukrainian state, having annexed the Crimean peninsula three years prior, and the attack was timed to coincide with Constitution Day, a Ukrainian public holiday commemorating the signing of the post-Soviet Ukrainian constitution. As well as its political significance, the timing also ensured that businesses and authorities would be caught off guard and unable to respond. What the attackers did not consider, however, was how widespread that software was: any company, local or international, that did business in Ukraine likely had a copy. When the attackers struck, they hit multinationals including the massive shipping company A.P. Møller-Maersk, the pharmaceutical company Merck, the delivery company FedEx, and many others. Aside from crippling these companies, reverberations of the attack were felt in global shipping and across multiple business sectors.

NotPetya is believed to have resulted in more than $10 billion in total damages across the globe, making it one of, if not the, most expensive cyberattacks in history to date.

How to Prepare for Cybersecurity Black Swan Events

While it’s impossible to predict or completely prevent Black Swan events, there are steps that organisations can take to enhance their resilience and minimise potential damage:

1. Adopt a Resilience-Based Approach

Rather than solely focusing on known threats, build your cybersecurity strategy around resilience. This means being prepared to rapidly detect, respond to, and recover from attacks, regardless of their origin.

Organisations should prioritise:

  • Incident response plans: Have well-documented and tested response procedures in place for any type of security event.
  • Redundancy and backups: Ensure critical systems and data have redundant layers and secure backups that can be quickly restored.
  • Post-event recovery: Create strategies to mitigate the damage and recover swiftly, minimising long-term business disruption.

2. Encourage Continuous Security Research and Innovation

Security Testing: Many Black Swan events are the result of the exploitation of previously unknown vulnerabilities. Investing in continuous security research and vulnerability discovery (through bug bounty programs, penetration testing, etc.) can reduce the number of undiscovered vulnerabilities and improve overall system security.

Defence Engineering: Implement defensive measures such as application isolation, network segmentation, and behaviour monitoring to limit the damage if a zero-day exploit is discovered.

3. Utilize Cyber Threat Intelligence

Staying informed on emerging cybersecurity trends and participating in industry collaborations can give organisations an edge when it comes to detecting potential Black Swan events. By sharing information, organisations can learn from others’ experiences and uncover threats that might not have been apparent within their own systems.

4. Model Chaos and Test the Unthinkable

Chaos engineering, which involves intentionally introducing failures into systems to see how they respond, can be an effective way to test the robustness of an organization’s defences. These drills can help security teams explore what might happen during an unanticipated event and can uncover system weaknesses that might otherwise be overlooked.
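As a minimal sketch of the idea, a failure can be injected into an ordinary dependency call to exercise the caller's retry logic. All names, rates, and the fault sequence below are invented for illustration, not a real chaos-engineering framework:

```python
# Minimal chaos-engineering sketch: wrap a dependency call with fault
# injection to check that the caller's retry logic holds up.
import random

def chaotic(call, failure_rate=0.3, rng=random.random):
    """Return a version of `call` that randomly raises to simulate faults."""
    def wrapped(*args, **kwargs):
        if rng() < failure_rate:
            raise ConnectionError("injected fault")
        return call(*args, **kwargs)
    return wrapped

def fetch_balance():
    return 100  # stand-in for a real downstream dependency

def resilient_fetch(call, retries=5):
    """The behaviour under test: retry through injected faults."""
    for _ in range(retries):
        try:
            return call()
        except ConnectionError:
            continue
    raise RuntimeError("dependency unavailable after retries")

# Deterministic fault sequence: the first call fails, the second succeeds.
faults = iter([0.1, 0.9])
flaky = chaotic(fetch_balance, failure_rate=0.5, rng=lambda: next(faults))
print(resilient_fetch(flaky))  # → 100
```

Production chaos tools apply the same principle at the infrastructure level (killing processes, dropping packets), but the test question is identical: does the system degrade gracefully when a dependency misbehaves?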

5. Promote a Culture of Adaptive Security

Adopting an adaptive security mindset means continuously monitoring the threat landscape, adjusting security controls, and being willing to evolve when necessary. The concept of security-by-design—where security considerations are built into the very foundation of systems and software—will also help organisations stay ahead of new and unforeseen risks.

Black Swan events in cybersecurity may be rare, but their consequences can be catastrophic. The unpredictability of these threats poses a unique challenge, requiring organisations to shift from a purely reactive, known-threat approach to one that emphasises resilience, adaptation, and continuous learning.

Red Team engagements are one tool which can help organisations develop resilient security strategies designed to respond to Black Swans. What makes this possible are the key concepts, controls, and attitudes introduced during the planning stages of the engagement. The results of red team engagements using this approach help shape boardroom discussions around strategy, resilience, and capacity in a way that allows the business to anticipate Black Swans and be prepared should they ever arrive.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

The Value of Physical Red Teaming

Introduction

In testing an organisation, a red team will be seeking to emulate a threat actor by achieving a specific goal – whether that is gaining administrative control of the network and proving they can control backups (akin to how many ransomware operators work), proving access to financial systems, or gaining access to sensitive data repositories. They will employ tactics, tools, and capabilities aligned to the sophistication level of the threat actor they are pretending to be.

However, not all threat actors operate solely along the digital threat axis; some will instead seek to breach the organisation's premises to achieve their goal. Physical red teaming seeks to test an organisation's resilience and security culture, and is aimed more at testing people and physical security controls. The most common physical threat actor is the insider threat; however, nation state, criminal, industrial espionage, and activist threats also remain prevalent in the physical arena, though their motivations to cause digital harm will vary.

As part of an organisation’s layered defence we have to consider not only the digital defences but also the physical ones. Consider: would it be easier for a threat actor to achieve their goal by physically taking a computer than by digitally gaining a foothold, reaching the target and completing their activities? Taking a holistic approach to security makes a significant difference to an organisation.

Understanding Physical Red Teaming

Physical red teaming simulates attacks on physical security systems and behaviours to test defences. It accomplishes this by:

  • attempting to gain unauthorised access to buildings through:
    • the manipulation of locks,
    • use of social engineering techniques such as tailgating;
  • bypassing security protocols by:
    • using cloned access cards,
    • connecting rogue network devices,
    • or retrieving unattended documents from bins and printers;
  • or exploiting social behaviours and preconceptions by:
    • using props to appear to belong, or to be a person of authority, so as to avoid being challenged.

In digital red teaming we evaluate people and security controls in response to remote attacks. The threat actor must not only convince a user to complete actions on their behalf, but must then also bypass digital controls that are constantly updated and, potentially, monitored.

In comparison, physical security controls are rarely updated, partly for cost reasons, as they are integrated into the buildings themselves. Furthermore, people often behave very differently towards an approach made in person than one made online; confidence and assertiveness differ psychologically between the two. It is therefore important to test both the controls that keep threat actors out and, should those fail, whether staff feel empowered and supported to challenge individuals they believe do not belong, even figures of apparent authority, until their credentials have been verified.

Why Physical Security Matters in Cybersecurity

At the top end of the scale, consider the breach caused by Edward Snowden at the NSA in 2013, which affected the national security of multiple countries. A trusted employee abused his privileges as a system administrator to breach digital security controls, and compromised the credentials of other users who trusted him, to gain unauthorised access to highly sensitive information. He then breached physical security controls to extract that data, removing it not only from the organisation but from the country. The impact of that data breach was enormous, both in reputational damage and in the exposure of tools and techniques used by the security services. While he claimed his motivation was an underlying privacy concern, and the surveillance programme he exposed was later ruled unlawful by US courts, the damage his actions caused has, though impossible to prove distinctly, posed a significant threat to life for numerous individuals worldwide. Regardless, this breach was a failure of both physical controls (preventing material from leaving the premises) and digital ones (abusing trusted access to reach digital data stores).

Other attacks exist too. In 2008, a 14-year-old with a homemade transmitter deliberately attacked the tram system of the Polish city of Lodz, derailing four trams and injuring a dozen people. Using published material, he spent months studying the city’s tram lines to determine where best to create havoc; then, using nothing more than a converted TV remote, he inflicted significant damage. Here the digital failings lay in the published material describing the control systems and in the system acting on unauthenticated, unauthorised signals, while the physical failing was that an attacker could direct signals at the receivers at all.

Key Benefits of Physical Red Teaming

A key benefit of physical red teaming is testing and improving an organisation’s response to physical breaches or threats. Surveillance, access control systems, locks, and security staff can all be assessed for weaknesses, and the exercise can identify lapses in employee vigilance (e.g., tailgating or failure to challenge strangers).

This in turn can lead to improvements in behaviours, policies, and procedures for physical access management. Furthermore, physical red teaming encourages employees to take an active role in security practices and fosters an overall culture of security.

Challenges of Physical Red Teaming

However, delivering physical red teaming is fraught with ethical and legal risk; aside from trespassing, breaking and entering, and other criminal infringements, there may also be civil litigation concerns depending on the approach the consultants take.

It is therefore important to establish clear consent and guidelines with the organisation. These must include the agreed scope: what activities the consultants are permitted to carry out, when and where those activities will take place, and who at the client organisation is responsible for the test. Additional property considerations, such as shared tenancies or public/private events which may be affected by testing, also need to be factored into the scope and planning. It is not unusual for this information to be captured in a “get out of jail” letter provided to the testers, along with client points of contact who can verify the test and stand down a response.

This ensures that testing remains realistic while any disruption it causes is minimised.

Cost is also always a concern: consultants need time not only to travel to site, but also to conduct surveillance, prepare suitable props (some of which may need to be custom made), and develop and deploy tooling to bypass certain controls (such as locks and card readers) where the engagement requires it.

Conclusion

People have been attacking along the physical threat axis since time immemorial. In today’s world, however, we have shrunk distances with digital estates and established satellite offices beyond our traditional perimeters, and as a result increased the complexity of the environments we must defend. Red teaming lets an organisation assess all of these threat axes and recognise that physical and digital controls are not only required but need to be regularly exercised to ensure their effectiveness.

Readers of this post are therefore encouraged to consider the physical security of their locations – whether offices, factories, transit hubs, public buildings, or home offices – and ask themselves whether they have verified that their security controls are effective, and when those controls were last exercised.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Red Teams – Supporting Incident Response

Unauthorised access to remote computers has been around since the 1960s, but since those early days organisations and their IT systems have grown complex, and that complexity is increasing at an exponential rate, making those systems increasingly difficult to secure. Defence mechanisms like firewalls, antivirus software, and monitoring systems have become essential, but they are not enough on their own. Cybersecurity red teams – groups of ethical hackers tasked with simulating real-world attacks – increasingly play a pivotal role not only in identifying vulnerabilities but also in supporting incident response efforts. Red teams should be considered part training opportunity for defenders and part organisational security assessment. In this post, we’ll explore how red teams can actively contribute to the incident response (IR) process, helping organisations detect, mitigate, and recover from cyber incidents more effectively.

Proactive Detection and Prevention

Red teams conduct simulations that mimic threat actors of varying degrees of sophistication, including phishing attacks, insider threats, and other malicious activities, to evaluate the effectiveness of an organisation’s security defences. Incident response teams, also known as blue teams, are responsible for defending against and responding to active threats. Because red teams can simulate a wide range of attack scenarios, they provide the blue team with realistic training opportunities.

Key Contributions

  • Identify vulnerabilities: By testing both technical and human vulnerabilities, red teams can uncover gaps in systems, processes and controls that attackers could exploit. These insights help incident response teams prioritise fixes and harden defences.
  • Test detection capabilities: During simulations, red teams often use tactics that mimic real-world threat actor behaviour. This allows Security Operations Centres (SOCs) to evaluate whether current detection mechanisms are effective in identifying threats – ideally early on in a breach, providing a feedback loop to improve monitoring and alerting systems.
  • Highlight gaps in response: Beyond detection, red teams can uncover weaknesses in the organisation’s ability to respond. These exercises help refine playbooks and improve reaction times in case of a real attack; acting like a fire drill for the organisation’s security teams.
  • Simulation of real-world attacks: Red team exercises provide blue teams with exposure to the tactics, techniques, and procedures (TTPs) used by adversaries. This allows the incident response team to better understand the behaviour of attackers and improve their incident detection and response procedures.
  • Drills under pressure: Simulated attacks create controlled, high-pressure situations where the blue team must react as if the incident were real. This strengthens their ability to work effectively under stress during actual incidents.
  • Collaborative feedback loops: After red team exercises, post-mortem reviews and feedback sessions help blue teams understand what went wrong and what went right. This collaborative effort ensures continuous improvement in incident detection and response.

Ongoing Incident and Forensic Support

When an incident occurs, quick identification of the threat’s origin, scope, and impact is critical. Red teams, by virtue of their expertise in adversary tactics, can aid in threat hunting and digital forensics during an ongoing incident.

Key Contributions

  • Insight into threat actor behaviour: Since red teams specialise in mimicking attacker methodologies, they can offer unique insights into how a real adversary might have breached the system. This includes understanding common evasion techniques, lateral movement strategies, and exfiltration tactics.
  • Identification of blind spots: During live incidents, red teams can collaborate with blue teams to identify blind spots or areas where an attack might have gone unnoticed. Their understanding of complex attack chains helps guide incident responders toward detecting hidden malware or compromised accounts.
  • Improving forensic analysis: Red teams can aid in digital forensics by offering a detailed understanding of how an attack might unfold. They can help analyse compromised systems, logs, and network traffic to identify indicators of compromise (IoCs) and reconstruct the attack timeline more accurately based on their experience of what steps they would take, and an understanding of the footprints various tools leave on system logs.
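As a minimal illustration of this kind of forensic support, the sketch below matches a list of indicators of compromise against log lines and orders the hits chronologically into a rough attack timeline. The specific indicators, log format (ISO-8601 timestamp first), and log content are all hypothetical examples, not taken from any real engagement.

```python
from datetime import datetime

# Hypothetical IoCs a red team might supply: an attacker IP address
# and the filename of a tool they know leaves traces in logs.
IOCS = ["203.0.113.42", "sharphound.exe"]

def build_timeline(log_lines, iocs=IOCS):
    """Return (timestamp, line) pairs for entries containing any IoC,
    sorted chronologically to sketch an attack timeline."""
    hits = []
    for line in log_lines:
        if any(ioc.lower() in line.lower() for ioc in iocs):
            # Assume each line begins with an ISO-8601 timestamp.
            stamp = datetime.fromisoformat(line.split()[0])
            hits.append((stamp, line))
    return sorted(hits)

logs = [
    "2024-11-02T10:15:00 accepted login from 198.51.100.7",
    "2024-11-02T11:42:09 accepted login from 203.0.113.42",
    "2024-11-02T11:05:33 process started: sharphound.exe",
]
for stamp, line in build_timeline(logs):
    print(stamp, "->", line)
```

Note that even out-of-order log entries end up in the right place on the timeline; real tooling would of course handle multiple log sources, time zones, and far richer indicator types.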

Fostering a Culture of Continuous Improvement

One of the biggest challenges in cybersecurity is complacency. Organisations often become overconfident after implementing new security measures or surviving an attack. Red teams, by constantly pushing the boundaries and simulating sophisticated attacks, help prevent this.

Key Contributions:

  • Challenge security assumptions: Red teams encourage organisations to avoid a “set-it-and-forget-it” mindset by continually challenging the effectiveness of defences and forcing teams to stay agile and adaptable in their responses.
  • Promote proactive security: By moving to a consistent tempo of red team assessments and testing the organisation’s exposure to different tactics, techniques and procedures, the incident response team can take a proactive approach rather than a reactive one. Regular exercises help the blue team conduct routine threat hunting, using the results to improve their detections and identify weaknesses or gaps in network visibility so they can be addressed. This shift reduces the likelihood of severe incidents and ensures faster containment when they do occur.
  • Drive organisational awareness: Red teams don’t just work with security professionals; they also raise awareness across the organisation. They often test phishing or social engineering schemes, helping non-technical employees understand their role in cybersecurity, which indirectly supports better incident response.
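As a simple illustration of the kind of hunt such exercises enable, the sketch below flags users whose successful login follows a burst of failed attempts, a common footprint of password guessing. The event representation (chronological (user, result) tuples) and the threshold of three failures are illustrative assumptions, not a production detection rule.

```python
from collections import defaultdict

FAILURE_THRESHOLD = 3  # consecutive failures before a success we treat as suspicious

def hunt_brute_force(events, threshold=FAILURE_THRESHOLD):
    """events: iterable of (user, result) tuples in chronological order,
    where result is 'fail' or 'success'. Returns the set of users whose
    success was preceded by at least `threshold` consecutive failures."""
    fail_streak = defaultdict(int)
    flagged = set()
    for user, result in events:
        if result == "fail":
            fail_streak[user] += 1
        else:
            # A success resets the streak; flag it if the streak was long enough.
            if fail_streak[user] >= threshold:
                flagged.add(user)
            fail_streak[user] = 0
    return flagged

events = [
    ("alice", "fail"), ("alice", "fail"), ("alice", "fail"), ("alice", "success"),
    ("bob", "fail"), ("bob", "success"),
]
print(hunt_brute_force(events))  # alice tripped the threshold; bob did not
```

In practice a blue team would tune the threshold and time window against red team telemetry, which is exactly the feedback loop a regular testing tempo provides.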

Conclusion

In the complex world of cybersecurity, red teams are invaluable in supporting and strengthening incident response efforts: identifying vulnerabilities, training blue teams in real-world scenarios, aiding in threat hunting, and offering a proactive approach to defending against modern cyber threats. Organisations that foster red team and blue team collaboration can better detect, respond to, and recover from cyber incidents, significantly reducing risk and minimising damage.

Our Red Team Services: Red Teaming & Simulated Attack Archives – Prism Infosec

Have you had a breach? Contact us here for our Incident Response service: Have You Had A Security Breach? – Prism Infosec