Prism Infosec Achieves CBEST Accreditation

Prism Infosec, an established CHECK-accredited penetration testing company, is pleased to announce that we have achieved accreditation as a Threat-Led Penetration Testing (TLPT) provider under CBEST, the Bank of England’s rigorous regulator-led scheme, supported by CREST, for improving the cyber resiliency of the UK’s financial services.

This follows our recent accreditation as a STAR-FS Intelligence-led Penetration Testing (ILPT) provider in November 2024. These accreditations place us in a very exclusive set of UK providers who have demonstrated the skills, tradecraft, methodology, and ability to deliver risk-managed, complex testing to the standard required for trusted testing of the UK’s critical financial sector organisations.

Financial Regulated Threat Led Penetration Testing (TLPT) / Red Teaming

The UK is a market leader when it comes to helping organisations improve their resiliency to cyber security threats. This is in part due to the skills, talent, and capabilities of our mature cybersecurity sector, developed thanks to accreditation and certification schemes introduced originally by the UK CHECK scheme for UK Government penetration testing in the mid-2000s. As the UK market matured, new schemes covering more adversarial types of threat simulation evolved for additional sectors. Today, other schemes around the globe have been rolled out to emulate what the UK has been delivering for financial markets since 2014 in terms of resiliency testing against cyber security threats. This post examines two of the UK-based, financial-sector-oriented frameworks – CBEST and STAR-FS – explaining how they work and how Prism Infosec can support our clients in these engagements.

What is CBEST?

CBEST (originally called Cyber Security Testing Framework but now simply a title rather than an acronym) provides a framework for financial regulators (both the Prudential Regulation Authority (PRA) and Financial Conduct Authority (FCA)) to work with regulated financial firms to evaluate their resilience to a simulated cyber-attack. This enables firms to explore how the people, processes and technology that make up their cyber security controls might be disrupted by an attack.

The aim of CBEST is to:

  • test a firm’s defences; 
  • assess its threat intelligence capability; and
  • assess its ability to detect and respond to a range of external attackers as well as people on the inside. 

Firms use the assessment to plan how they can strengthen their resilience.

The simulated attacks used in CBEST are based on current cyber threats. These include the approach a threat actor may take to attack a firm and how they might exploit a firm’s online information. The important thing to take away from CBEST is that it is not a pass or fail assessment. It is simply a tool to help the organisation evaluate and improve its resilience.

How does CBEST work?

A firm is selected for testing under one of the following criteria:

  • The firm/FMI is requested by the regulator to undertake a CBEST assessment as part of the supervisory cycle. The list of those requested to undertake a review is agreed by the PRA and FCA on a regular basis in line with any thematic focus and the supervisory strategy.
  • The firm/FMI has requested to undertake a CBEST as part of its own cyber resilience programme, when agreed in consultation with the regulator.
  • An incident or other event has occurred which has triggered the regulator to request a CBEST in support of post-incident remediation and validation, and consultation/agreement has been sought with the regulator.

CBEST is broken down into phases, each of which contains a number of activities and deliverables.

When the decision to hold a CBEST is made, the firm is notified in writing by the regulator, and has 40 working days to start the process. This occurs in the Initiation Phase of a CBEST. A firm will be required to scope the elements of the test, aligned with the implementation guide, before procuring suitably qualified and accredited Threat Intelligence Service Providers (TISP) and Penetration Testing Service Providers (PTSP) – such as Prism Infosec.

After procurement there is a Threat Intelligence Phase, which helps identify what information threat actors may gain access to and which threat actors are likely to conduct attacks. This information is shared with the firm, the regulator and the PTSP, and is used to develop the scenarios (usually three). A full set of Threat Intelligence reports is the expected output from this phase. After the Penetration Testing Phase, the TISP will then conduct a Threat Intelligence Maturity Assessment; this is done once testing is complete to help maintain the secrecy of the testing phase.

The next phase is the Penetration Testing Phase, during which each of the scenarios is played out, with suitable risk management controls, to evaluate the firm’s ability to detect and respond to the threat. During this phase, the PTSP works closely with the firm’s control group and regular updates are provided to the regulator on progress. After testing, the PTSP conducts an assessment of the firm’s Detection and Response (D&R) capability. Following these elements, the PTSP will provide a complete report on the activities they conducted, the vulnerabilities they identified and the firm’s D&R capability.

CBEST then moves into the Closure phase where a remediation plan is created by the firm and discussed with the regulator and debrief activities are carried out between the TISP, PTSP and the regulator.

The CBEST implementation guide can be found here:

CBEST Threat Intelligence-Led Assessments | Bank of England

What is STAR-FS?

Simulated Targeted Attack and Response – Financial Services (STAR-FS)

STAR-FS is a framework for providing Threat Intelligence-led simulated attacks against financial institutions in the UK, overseen by the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA). STAR-FS has less regulatory oversight than CBEST, but uses the same principles and is intended to be adopted by a broader range of organisations. It follows the same four-phase model as CBEST.

How does STAR-FS work?

STAR-FS has been designed to replicate the rigorous approach defined within the CBEST framework that has been in use since 2015. However, STAR-FS allows financial institutions to manage the tests themselves whilst still allowing for regulatory reporting. This means that STAR-FS can be self-initiated by a firm as part of its own cyber programme. Self-initiated STAR-FS testing can be recognised as a supervisory assessment if the Regulators are notified of the STAR-FS, have the opportunity to input into the scope, and receive the remediation plan at the end of the assessment.

The Regulator, which includes the relevant Supervisory teams, receives the Regulator Summary of the STAR-FS assessment in order to inform their understanding of the Participant’s current position in terms of cyber security and to be confident that risk mitigation activities are being implemented. The Regulator’s responsibilities include receiving and acting upon any immediate notifications of issues that have been identified that would be relevant to their regulatory function. The Regulator will also review the STAR-FS assessment findings in order to inform sector specific thematic reports. Aside from these stipulations, the regulator is not involved in the delivery or monitoring of STAR-FS engagements and does not usually attend the update calls between the firm and TISP and PTSPs.

Like CBEST, there are also Initiation, Threat Intelligence, Penetration Testing and Closure phases, and accredited TI and PT suppliers must be used. In the Initiation and Closure phases, the firm is considered to have the lead role, whilst in the Threat Intelligence and Penetration Testing phases, the TISP and PTSPs are respectively expected to lead those elements. Again, a STAR-FS implementation guide is available to support firms undergoing testing:

STAR-FS UK Implementation Guide

How are we qualified to deliver Threat Led Penetration Testing?

Prism Infosec is one of a small handful of companies in the UK to have met the criteria mandated by the PRA and FCA to deliver STAR-FS and CBEST engagements as a Penetration Testing Service Provider. That mandate requires the provider to have, and to ensure engagements are led by, a CCSAM (CREST Certified Simulated Attack Manager) and a CCSAS (CREST Certified Simulated Attack Specialist). Furthermore, the firm must have at least 14,000 hours of penetration testing experience, and the CCSAM and CCSAS must also have 4,000 hours of testing financial institutions. The firm must also have demonstrated its skills through delivery of penetration testing services, in the months prior to the application, for financial entities willing to act as references.

How do we deliver Threat Led Penetration Testing?

At Prism Infosec we pride ourselves on delivering a risk-managed approach to Threat Led Penetration Testing – ensuring we deliver a test that helps us evaluate all the controls in an end-to-end engagement. Our goal is to help our clients understand and evaluate the risks of a cyber breach in a controlled manner which limits the impact to the business but still permits lessons to be learned and controls to be evaluated. Testing under CBEST, STAR-FS or simply commercial STAR engagements is supposed to help the firm, not hinder it, which is why we ensure our clients are kept fully informed and are able to take risk-aware decisions on how best to proceed to get the best results from testing.

Prism Infosec will produce a test plan covering the scenarios, the pre-requisites, the objectives, the rules of the engagement and the contingencies required to support testing. A risk workshop will be held to discuss how risks will be minimised and agree clear communication pathways for the delivery of the engagement.

As each scenario progresses, Prism Infosec’s team will hold daily briefing calls with the client stakeholders to keep them informed, set expectations and answer questions. An out-of-band communications channel will also be set up to ensure that stakeholders and the consultants can contact each other as necessary, should the need arise. At the end of each week, a weekly update pack outlining what has been accomplished, risks identified, contingencies used and vulnerabilities identified will be provided to the stakeholders to ensure that everyone remains fully informed.

Once testing concludes, Prism Infosec will seek to hold a Detection and Response Assessment (DRA) workshop which comprises two elements: the first is a light-touch, GRC-led discussion with senior stakeholders to provide an evaluation of the business against the NIST 2.0 framework; the second is a more tactical workshop with members of the defence teams to examine specific elements of the engagement. This second workshop is invaluable for defensive teams as it helps them identify blind spots in their defence tooling and gain insight into the tactics, techniques and procedures (TTPs) used by Prism Infosec’s consultants.

Following this, Prism Infosec will produce a comprehensive report covering how each scenario played out, severity-rated vulnerabilities, a summary of the DRA workshops and information on possible improvements to support and assist the defence teams. We will also produce an executive debriefing pack and deliver debriefings tailored to C-suite executives and regulators, together with a redacted version of the report which can be shared with the regulator, as required under CBEST.

Capitalising on the Investment of a Red Team Engagement

Cybersecurity red teams are designed to evaluate an organisation’s ability to detect and respond to cybersecurity threats. They are modelled on real-life breaches, giving an organisation an opportunity to determine whether it has the resiliency to withstand a similar breach. No two breaches are entirely alike, because each organisation’s infrastructure reflects its own organic and planned growth: networks are often built around their initial purpose before being subjected to acquisitions and evolutions driven by new requirements. As such, the first stage of every red team, and of every real-world breach, is understanding that environment well enough to pick out the critical components which can springboard the attacker to the next element of the breach. Hopefully, somewhere along that route detections will occur, and the organisation’s security team can stress-test its ability to respond and mitigate the threat. Regardless of outcome, however, too often once the scenario is done the red team hand in their report documenting what they were asked to do, how it went, and what recommendations would make the organisation more resilient – but is that enough?

Detection and Response assessments are part of the methodology for the Bank of England and FCA’s CBEST regulated intelligence-led penetration testing (red teaming). However, their interpretation is more aligned with understanding response times and capabilities. At LRQA (formerly LRQA Nettitude), I learned the value of a more attuned Detection and Response Assessment, a lesson I brought with me and evolved at Prism Infosec.

At its heart, the Detection and Response Assessment takes the output of the red team and turns it on its head, examining the engagement through the eyes of the defender. We identify at least one instance of each of the critical steps of the breach – the delivery, the exploitation, the discovery, the privilege escalation, the lateral movement, the action on objectives. For each of those, we look to identify whether the defenders received any telemetry. If they did, we look to see if any of that telemetry triggered a rule in their security products. If it triggered a rule, we look to see what sort of alert it generated. If an alert was generated, we then look to see what happened with it – was a response recorded? If a response was recorded, what did the team do about it? Was it closed as a false positive, or did it lead to the containment of the red team?

Five “so what” questions, at the end of which we have either identified a gap in the security system/process or identified good, strong controls and behaviours. There is more to it than that, of course, but from a technical delivery point of view, this is what will drive benefits for the organisation. A red team should be able to highlight the good behaviours as well as the ones that still require work, and a good Detection and Response Assessment not only results in the organisation validating its controls but also in understanding why defences didn’t work as well as they should. This allows the red team to present the report with an important foil: how the organisation responded to the engagement. It shows the other side of the coin in a report that will be circulated at a senior level alongside the engagement information, and can throw the entire engagement into stark relief.

The results can be seen, digested and understood by C-suite executives. There is no point in having a red team report to the board that, because of poor credential hygiene or outdated software, the organisation was breached and remains at risk. The board already knows that security is expensive and that they are at risk; but if a red team can also demonstrate the benefits, or direct the funding for security more efficiently by helping the organisation understand the value of that investment, then it becomes a much more powerful instrument of change. Better still, it becomes a measurable test – we can see how that investment improves things over time by comparing results between engagements and using that to tweak or adjust.

One final benefit is that security professionals on both sides of the divide, (defenders and attackers) gain substantial amounts of knowledge from such assessments – both sides lift the curtain, explain the techniques, the motivations and the limitations of the tooling and methodology. As a result both sides become much more effective, build greater respect, and are more willing to collaborate on future projects when not under direct test.

Next time your company is considering a red team, don’t just look at how long it will take to deliver or the cost, but also consider the return you are getting on that investment in the form of what will be delivered to your board. Please feel free to contact us at Prism Infosec if you would like to know more.

Our Red Team Services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Prism Infosec Achieves STAR-FS Accreditation

We’re thrilled to announce that Prism Infosec is now an accredited provider of STAR-FS (Simulated Targeted Attack & Response assessments for Financial Services), the threat-led penetration testing and red teaming framework launched by the Bank of England, PRA, and FCA this year for the UK finance sector.

The STAR-FS scheme represents a significant step forward in enhancing cyber resilience for financial institutions, providing an innovative approach to identifying and mitigating cyber risks through assessments that simulate real-world threats.

STAR-FS assessments offer:

– Enhanced Resilience: Assessing firms’ capabilities to protect against, detect, and respond to sophisticated cyber threats.

– Firm-Led Model: Allowing organizations to proactively identify vulnerabilities within systems, processes, and people.

– Independent Assurance: Beyond the scope of traditional penetration testing, STAR-FS offers regulatory-recognized assessments.

– Broader Accessibility: Making this assessment available to more financial institutions, enabling wider adoption and learning across the industry.

Prism Infosec is committed to helping financial institutions strengthen their cyber defences and meet regulatory expectations. Contact us to learn how STAR-FS can enhance your organisation’s resilience to cyber threats and enable a proactive approach to security.

Our Red Teaming Service:

Red teaming identifies organisational cyber security weaknesses.

Red Team Scenarios – Modelling the Threats

Introduction

Yesterday organisations were under cyber-attack; today even more organisations are under cyber-attack; and tomorrow that number will increase again. It has been increasing for years and will not reverse. Our world is getting smaller, threat actors are becoming more emboldened, and our defences continue to be tested. Any organisation can fall victim to a cyber security threat actor – you just need to have something they want, whether that is money, information, or a political stance or activity inimical to their ideology. Cybersecurity defences and security programmes will help your organisation be prepared for these threats, but like all defences they need to be tested; staff need to understand how to use them, when they should be invoked, and what to do when a breach happens.

Cybersecurity red teaming is about testing those defences. Security professionals take on the role of a threat actor, and using a scenario, and appropriate tooling, conduct a real-world attack on your organisation to simulate the threat.

Scenarios

Scenarios form the heart of a red team service: they are defined by the objective, the threat actor, and the attack vector. This will ultimately determine which defences, playbooks, and policies are going to be tested.

Scenarios are developed either out of threat intelligence – i.e. threat actors who are likely to target your organisation have a specific modus operandi in how they operate; or scenarios are developed out of a question the organisation wants answered to understand their security capabilities.

Regardless of the approach, all scenarios need to be realistic but also be delivered in a safe, secure, and above all, risk managed manner.

Objectives

Most red team engagements start by defining the objective. This would be a system, privilege or data which, if breached, would result in a specific outcome that a threat actor is seeking to achieve. Each scenario should have a primary target which, if impacted, would ultimately affect the organisation’s finances (through theft or disruption such as ransomware), its data (theft of Personally Identifiable Information (PII) or private research), or its reputation (causing embarrassment or loss of trust through breach of services or privacy). Secondary and tertiary objectives can be defined, but often these will be milestones along the way to accomplishing the primary.

Objectives should be defined in terms of impacting Confidentiality (can threat actors read the data), Integrity (can threat actors change the data), or Availability (can threat actors deny legitimate access to the data). This determines the level of access the red team will seek to achieve to accomplish their goal.

Threat Actors 

Once an objective is chosen, we then need to understand who will attack it. This might be driven by threat intelligence, which will indicate who is likely to attack an organisation, or, for a more open test, we can define it by the sophistication level of the threat actor.

Not all threat actors are equal in terms of skill, capability, motivation, and financial backing. We often refer to this collection of attributes as the threat actor’s sophistication. Different threat actors also favour different attack vectors, and if the scenario is derived from threat intelligence, this will inform how that should be manifested.

High Sophistication

The most mature threat actors are usually referred to as Nation State threat actors, although we have seen some cybercriminal gangs start to touch elements of that space. They are extremely well resourced, often with not only capability development teams, but also linguists, financial networks, and a sizeable number of operators able to deliver 24/7 attacks. They will often have access to private tooling that is likely to evade most security products, and they are usually motivated by politics – causing political embarrassment to rivals, theft of data to uplift their country’s research, extreme financial theft, or degrading services to cause real-world impact and hardship. Examples in this group include APT28, APT38, and WIZARD SPIDER.

Medium Sophistication

In the mid-tier maturity range we have a number of cybercriminal and corporate-espionage threat actors. These will often have some significant financial backing, able to afford some custom (albeit commercial) tooling obtained either legally or illegally. They may work solo, but will often be supported by a small team which can operate 24/7, although they will often limit themselves to specific working patterns where possible. They may have some custom-written capabilities, but these will often be tweaked versions of open-source tools. They are usually motivated by financial concerns – whether profiting from stolen research or directly extracting funding from their victim. Occasionally they will also be motivated by some form of activism, using their skills to target organisations which represent or deliver a service for a perceived cause they disagree with; in this case they will often either use the attack as a platform to voice their politics or try to force the organisation to change its behaviour to one which aligns better with their beliefs. Examples of threat actors in this tier have included FIN13 and LAPSUS$.

Low Sophistication

At the lower tier of the maturity range, we are often faced with single threat actors rather than a team; insiders are often grouped into this category. Threat actors in this category often make use of open-source tooling, which may have light customisation depending on the skill set of the individual. They will often work fixed time zones based on their victim, and will often only have a single target at a time, or only ever one target. Their motivation can be financial, but they can also be driven by personal belief or spite if they believe they have been wronged. Despite being considered the lowest sophistication of threat actor, they should never be underestimated – some of the most impactful cybersecurity breaches have been conducted by threat actors we would normally place in this category, such as Edward Snowden or Bradley Manning.

Attack Vector

Finally, now that we know what will be attacked and who will be attacking, we need to define how the attack will start. Again, threat intelligence gathered on different threat actors will show their preferences in terms of how they start an attack, and if the objective is to keep things realistic, that should be the template. However, if we are using a more open test, we can mix things up and use an alternative attack vector. This is not to say that specific threat actors won’t change their attack vector, but they do have favourites.

Keep in mind that the attack vector determines which security boundary will be the initial focus of the attack. Attack vectors can be grouped into the following categories:

External (Direct External Attackers)

  • Digital Social Engineering (phishing/vishing/smishing)
  • Perimeter Breach (zero days)
  • Physical (geographical location breach leading to digital foothold)

Supply Chain (Indirect External Attackers)

  • Software compromise (backdoored/malicious software updates from trusted vendor)
  • Trusted link compromise (MSP access into organisation)
  • Hardware compromise (unauthorised modified device)

Insider (both Direct and Indirect Internal Attackers)

  • Willing Malicious Activity
  • Unwilling Sold/stolen access
  • Physical compromise

Each of these categories not only contains different attack vectors, but will often result in testing different security boundaries and controls. Whilst a phishing attack will likely result in achieving a foothold on a user’s desktop – the likely natural starting position for an insider conducting willing or unwilling attacks – the two will test different things, as an insider will not necessarily need to deploy tooling which might be detected, and will already have passwords to potentially multiple systems to do their job. Understanding this is the first step in determining how you want to test your security.

Pulling it together

Once all these elements have been identified and defined, the scenario can move forward to the planning phase before delivery. This is where any pre-requisites to deliver the scenarios, any scenario milestones, and any contingencies can be prepared to help simulate top-tier threat actors, and any tooling preparations can be done to ensure the scenario can start. Keep in mind that whilst the scenario objective might be to compromise a system of note, the true purpose of the engagement is to determine whether the security teams, tools, and procedures can identify and respond to the threat. This can only be measured and understood if the security teams have no clue when or how they will be tested, as real-world threats will not give any notice either.
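As a rough illustration, the building blocks described above – objective, CIA impact, threat actor, attack vector and contingencies – can be captured in a small planning record. Every field name and value here is hypothetical, not part of any formal scoping template.

```python
from dataclasses import dataclass

# Hypothetical planning record for a red team scenario; the fields mirror
# the building blocks discussed above plus basic risk-management controls.
@dataclass
class Scenario:
    objective: str            # system, privilege or data to be impacted
    cia_impact: str           # "confidentiality", "integrity" or "availability"
    threat_actor: str         # named actor or sophistication tier
    attack_vector: str        # external, supply chain or insider sub-vector
    contingencies: list[str]  # pre-agreed fallbacks if a step stalls

# Example: a ransomware-style scenario built from the elements above.
ransomware_sim = Scenario(
    objective="domain backup infrastructure",
    cia_impact="availability",
    threat_actor="medium-sophistication cybercriminal group",
    attack_vector="external: digital social engineering (phishing)",
    contingencies=["assumed-breach foothold if phishing fails"],
)
print(ransomware_sim.threat_actor)
```

Recording contingencies alongside the objective keeps the risk-managed element explicit: if the realistic entry route fails, the pre-agreed fallback lets the rest of the scenario still be exercised.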

Even if the red team accomplish the goals, the scenario will still help security teams understand the gaps in their skills, tools, and policies so that they can react better in the future. Consider contacting Prism Infosec if you would like your security teams to reap these benefits too.

Our Red Team Services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Red Teams don’t go out of their way to get caught (except when they do)

Introduction

In testing an organisation, a red team will seek to emulate a threat actor by achieving a specific goal – whether that is gaining administrative control of the network and proving they can control backups (akin to how many ransomware operators work), through to proving access to financial systems, or even gaining access to sensitive data repositories. They will employ tactics, tools and capabilities aligned to the sophistication level of the threat actor they are pretending to be. The question asked about red teams is always “can the bad guys get to system X”, when it really should be “can we spot the bad guys before they get to system X AND do something effective about it”. The unfortunate answer is that, with enough time and effort, the bad guys will always get to X. What we can do in red teaming is tell you how the bad guys will get to X and help you understand whether you can spot them trying.

Red Team Outcomes

In assessing an organisation, we often see engagements go one of two ways. The first (and unfortunately more common) is that the red team operators achieve the objective of the attack – sometimes entirely without detection, and sometimes with a detection but unsuccessful containment. The other is when the team is successfully detected (usually early on) and containment and eradication are not only successful but extremely effective.

So What?

In both cases, we have failed to answer some of the exam questions, namely the level of visibility the security teams have across the network.

In the first instance, we don’t know why they failed to see us, why they failed to contain us, or why they didn’t spot any of the myriad other activities we conducted. We need to understand whether the issue is one of process or effort: is the security team drinking from a firehose of alerts, and we were there but lost in the noise; did the security team see nothing because they don’t have visibility in the network; or is there telemetry but no alerting for the sophistication level of the attacker’s capabilities and tactics? The red team can help answer some of these questions by moving the engagement to one of “Detection Threshold Testing”, where the sophistication level of the Tactics, Techniques and Procedures is gradually lowered and the attack becomes noisier until a detection occurs and a response is observed. If the red team get to the point of dropping disabled, un-obfuscated copies of known-bad tools on domain controllers which are monitored by security tools and there are still no detections, then the organisation needs to know, and work out why. This is when a Detection and Response Assessment (DRA) Workshop can add real value in understanding the root causes of the issues.

In the second instance we have observed a great detection and response capability, but we don’t know the depth of the detection capabilities – i.e. if the red team changed tactics or came in elsewhere, would the security team have a similar result? We can sometimes answer this with additional scenarios which model different threat actors; however, multiple-scenario red teams can be costly, and what happens if the team are caught early on in all three scenarios? I prefer to adopt a trust-but-verify approach in these circumstances by moving an engagement through to a “Declared Red Team”. In this circumstance, the security teams are congratulated on their skills but are informed that the exercise will continue. They are told the host the red team are starting on, and are to allow it to remain on the network, uncontained but monitored, while the red team continue testing. They will not be told what the red team objective is or on what date the test will end; they will, however, be informed when testing is concluded. If they detect suspicious activity elsewhere in the network during this period, they can deconflict the activity with a representative of the test control group. If it is the red team, it will be confirmed, and the security team will be asked to record what their next steps would be. If it isn’t, then the security team are authorised to take full steps to mitigate the incident; a failure by the red team to confirm will always be treated as malicious activity unrelated to the test. Once testing is concluded (objective achieved or time runs out), the security team is informed, and the test can move on to a Detection and Response Assessment (DRA) Workshop.

Next Steps

In both of these instances, you will have noted that the next step is a Detection and Response Assessment (DRA) Workshop. DRAs were introduced by the Bank of England’s CBEST testing framework; LRQA (formerly LRQA Nettitude) refined the idea, and Prism Infosec has adapted it by fully integrating the NIST Cybersecurity Framework (CSF) 2.0. At its heart, it is a chance to understand what happened and what the security team did about it. The red team should provide the client security team with the main TTP events of the engagement – initial access, discovery that led to further compromise, privilege escalation, lateral movement, and actions on objectives – including timestamps and the locations and accounts abused to achieve them. The security team should come equipped with logs, alerts, and playbooks to discuss what they saw, what they did about it, and what their response should have been. Where possible, this response should also have been exercised during the engagement so the red team can evaluate its effectiveness.

The output of this workshop should be a series of observations covering both areas for improvement and areas of effective behaviour and capability within the organisation’s security teams. These observations need to be included in the red team report, and should be presented in the executive summary to help senior stakeholders understand the value of, and opportunities for, improving their security capabilities, and why it matters.

Conclusion

Red teams help identify attack paths and tell you whether attackers can reach their targets, but more importantly they can and should help organisations understand how effectively they detect and respond to the threat before that happens. Red teams need to be caught to help organisations understand their limits so they can push them, demonstrate good capabilities to senior stakeholders, and identify opportunities for improvement. An effective red team will not only engineer being caught into its test plan, but will ensure that when it happens, the test still adds value to the organisation.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

To you it’s a Black Swan, to me it’s a Tuesday…

Cybersecurity is a discipline with many moving parts. At its core, though, it is a tool to help organisations identify, protect, detect, respond, recover, and then adapt – through threat modelling – to the ever-evolving risks that new technologies and threat-actor capabilities introduce. Sometimes these threats are minor, causing annoyance but no real damage; sometimes they are existential and unpredictable. These are known as Black Swan events.

They represent threats or attacks that fall outside the boundaries of standard threat models, often blindsiding organisations despite rigorous security practices.

In this post, we’ll explore the relationship between cybersecurity threat modelling and Black Swan events, and how to better prepare for the unexpected.

What Are Black Swan Events?

The term Black Swan was popularized by the statistician and risk analyst Nassim Nicholas Taleb. He described Black Swan events as:

  • Highly improbable: These events are beyond the scope of regular expectations, and no prior event or data hints at their occurrence.
  • Extreme impact: When they do happen, Black Swan events have widespread, often catastrophic, consequences.
  • Retrospective rationalization: After these events occur, people tend to rationalize them as being predictable in hindsight, even though they were not foreseen at the time.

In cybersecurity, Black Swan events can be seen as threats or attacks that emerge suddenly from unknown or neglected vectors—such as nation-state actors deploying novel zero-day exploits, or a completely new class of vulnerabilities being discovered in widely used software.

The Limits of Traditional Threat Modelling

Threat modelling is a systematic approach to identifying security risks within a system, application, or network.

It typically involves:

  • Identifying assets: What needs protection (e.g., data, services, infrastructure)?
  • Defining threats: What could go wrong? Common threats include malware, phishing, denial of service (DoS) attacks, and insider threats.
  • Assessing vulnerabilities: How could the threats exploit system weaknesses?
  • Evaluating potential impact: How severe would the consequences of an attack be?
  • Mitigating risks: What steps can be taken to reduce the likelihood and impact of threats?

While highly effective for many threats, traditional threat modelling is largely based on past experience and known attack methods. It relies on patterns, data, and risk profiles developed from historical analysis. However, Black Swan events, by their nature, evade these models because they represent unknown unknowns—threats that have never been seen before or that arise in ways no one could predict. This is where organisations often encounter significant challenges. Despite extensive security efforts, unknown vulnerabilities, unexpected technological changes, or even human error can expose them to unforeseen, high-impact cyber events.
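As a rough illustration of the steps above, a threat model can be captured in a simple risk register that scores each threat by likelihood × impact. Every name and score below is an invented example, not data from a real assessment, and real methodologies use richer scoring than this minimal sketch:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    asset: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (catastrophic)

    @property
    def risk(self) -> int:
        # Simple likelihood x impact scoring
        return self.likelihood * self.impact

threats = [
    Threat("Phishing credential theft", "User accounts", likelihood=4, impact=3),
    Threat("Ransomware on file servers", "Business data", likelihood=3, impact=5),
    Threat("Insider data exfiltration", "Customer records", likelihood=2, impact=4),
]

# Highest-risk items first, so mitigation effort goes where it matters most
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.risk:>2}  {t.name} -> {t.asset}")
```

Even a toy register like this makes the limits of the approach visible: every row is a *known* threat, which is precisely what a Black Swan event is not.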

Real-World Examples of Cybersecurity Black Swan Events

1. The SolarWinds Hack (2020)

The SolarWinds cyberattack, attributed to a nation-state actor, was one of the most devastating and unexpected breaches in recent history. Attackers compromised the software supply chain by embedding malicious code into SolarWinds’ Orion software updates, which were then distributed to thousands of organizations, including U.S. government agencies and Fortune 500 companies.

The sophistication of the attack and the sheer scale of its impact make it a classic Black Swan event. It was a novel approach to cyber espionage, and its implications were far-reaching, affecting critical systems and sensitive data across industries.

2. NotPetya (2017)

The Petya ransomware that launched in 2016 was a standard ransomware tool, designed to encrypt data, demand payment, and then decrypt once paid. NotPetya, however, was something different. It introduced two changes. The first was that the encryption could not be reversed – once data was encrypted, it could not be recovered – making it a wiper rather than ransomware. The second was the ability to leverage the EternalBlue exploit, much like the WannaCry ransomware that attacked devices worldwide earlier that year; this allowed it to spread rapidly through unpatched Microsoft Windows networks.

NotPetya is believed to have infected victims through a compromised piece of Ukrainian tax software called M.E.Doc. This software was extremely widespread throughout Ukrainian businesses, and investigators found that a backdoor in its update system had been present for at least six weeks before NotPetya’s outbreak.

At the time of the outbreak, Russia was still in the throes of conflict with the Ukrainian state, having annexed the Crimean peninsula three years prior; the attack was timed to coincide with Constitution Day, a Ukrainian public holiday commemorating the signing of the post-Soviet Ukrainian constitution. As well as its political significance, the timing ensured that businesses and authorities would be caught off guard and unable to respond. What the attackers did not consider, however, was how widespread that software was. Any company, local or international, that did business in Ukraine likely had a copy of it. When the attackers struck, they hit multinationals, including the massive shipping company A.P. Møller-Maersk, the pharmaceutical company Merck, the delivery company FedEx, and many others. Aside from crippling these companies, reverberations of the attack were felt in global shipping and across multiple business sectors.

NotPetya is believed to have resulted in more than $10 billion in total damages across the globe, making it one of, if not the, most expensive cyberattacks in history to date.

How to Prepare for Cybersecurity Black Swan Events

While it’s impossible to predict or completely prevent Black Swan events, there are steps that organisations can take to enhance their resilience and minimise potential damage:

1. Adopt a Resilience-Based Approach

Rather than solely focusing on known threats, build your cybersecurity strategy around resilience. This means being prepared to rapidly detect, respond to, and recover from attacks, regardless of their origin.

Organisations should prioritise:

  • Incident response plans: Have well-documented and tested response procedures in place for any type of security event.
  • Redundancy and backups: Ensure critical systems and data have redundant layers and secure backups that can be quickly restored.
  • Post-event recovery: Create strategies to mitigate the damage and recover swiftly, minimising long-term business disruption.

2. Encourage Continuous Security Research and Innovation

Security Testing: Many Black Swan events are the result of the exploitation of previously unknown vulnerabilities. Investing in continuous security research and vulnerability discovery (through bug bounty programs, penetration testing, etc.) can reduce the number of undiscovered vulnerabilities and improve overall system security.

Defence Engineering: Implement defensive measures such as application isolation, network segmentation, and behaviour monitoring to limit the damage if a zero-day exploit is discovered.

3. Utilize Cyber Threat Intelligence

Staying informed on emerging cybersecurity trends and participating in industry collaborations can give organisations an edge when it comes to detecting potential Black Swan events. By sharing information, organisations can learn from others’ experiences and uncover threats that might not have been apparent within their own systems.

4. Model Chaos and Test the Unthinkable

Chaos engineering, which involves intentionally introducing failures into systems to see how they respond, can be an effective way to test the robustness of an organization’s defences. These drills can help security teams explore what might happen during an unanticipated event and can uncover system weaknesses that might otherwise be overlooked.
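As an illustration of the idea, the sketch below injects random faults into a function and checks that a naive retry wrapper survives them – a toy version of what chaos tooling does at infrastructure scale. All names here are hypothetical, and the random source is seeded purely so the demo is repeatable:

```python
import random

def chaos(failure_rate=0.5, seed=42):
    """Decorator that randomly raises, simulating an unpredictable dependency fault."""
    rng = random.Random(seed)  # seeded so the demonstration is repeatable
    def wrap(fn):
        def inner(*args, **kwargs):
            if rng.random() < failure_rate:
                raise ConnectionError("injected fault")
            return fn(*args, **kwargs)
        return inner
    return wrap

@chaos(failure_rate=0.5)
def fetch_balance(account_id):
    # Stand-in for a call to a downstream service
    return {"account": account_id, "balance": 100}

def resilient_call(fn, *args, retries=5):
    """Naive retry loop; production code would add backoff, jitter, and alerting."""
    for _ in range(retries):
        try:
            return fn(*args)
        except ConnectionError:
            continue
    raise RuntimeError("service unavailable after retries")
```

Running the same drill with the retry loop removed shows immediately how brittle the un-hardened path is – which is exactly the kind of weakness these exercises are meant to surface before a real event does.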

5. Promote a Culture of Adaptive Security

Adopting an adaptive security mindset means continuously monitoring the threat landscape, adjusting security controls, and being willing to evolve when necessary. The concept of security-by-design—where security considerations are built into the very foundation of systems and software—will also help organisations stay ahead of new and unforeseen risks.

Black Swan events in cybersecurity may be rare, but their consequences can be catastrophic. The unpredictability of these threats poses a unique challenge, requiring organisations to shift from a purely reactive, known-threat approach to one that emphasises resilience, adaptation, and continuous learning.

Red Team engagements are one tool that can help organisations develop resilient security strategies designed to respond to Black Swans. What makes this possible are the key concepts, controls, and attitudes introduced during the planning stages of the engagement. The results of red team engagements using this approach help shape boardroom discussions around strategy, resilience, and capacity in a way that allows the business to anticipate Black Swans and be prepared should they ever arrive.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

The Value of Physical Red Teaming

Introduction

In testing an organisation, a red team will seek to emulate a threat actor by achieving a specific goal – whether that is gaining administrative control of the network and proving they can control backups (akin to how many ransomware operators work), proving access to financial systems, or gaining access to sensitive data repositories. They will employ tactics, tools, and capabilities aligned to the sophistication level of the threat actor they are emulating.

However, not all threat actors operate only along the digital threat axis; some will instead seek to breach the organisation physically to achieve their goal. Physical red teaming tests an organisation’s resilience and security culture, and is aimed more at people and physical security controls. The most common physical threat actor is the insider; however, nation-state, criminal, industrial-espionage, and activist threats also remain prevalent in the physical arena, though their motivations to cause digital harm will vary.

As part of an organisation’s layered defence, we have to consider not only the digital defences but also the physical ones. Consider: would it be easier for a threat actor to achieve their goal by physically taking a computer than by digitally gaining a foothold, reaching the target, and completing their activities? Taking a holistic approach to security makes a significant difference to an organisation.

Understanding Physical Red Teaming

Physical red teaming simulates attacks on physical security systems and behaviours to test defences. It accomplishes this by:

  • attempting to gain unauthorised access to buildings through:
    • the manipulation of locks,
    • the use of social engineering techniques such as tailgating;
  • bypassing security protocols through:
    • cloned access cards,
    • connecting rogue network devices,
    • or retrieving unattended documents from bins and printers;
  • or exploiting social behaviours and preconceptions:
    • using props to appear to belong, or to be a person of authority, so as to avoid being challenged.

In digital red teaming, we evaluate people and security controls in response to remote attacks. The threat actor must not only convince a user to complete actions on their behalf, but must then also bypass digital controls that are constantly updated and, potentially, monitored.

In comparison, physical security controls are rarely updated, largely for cost reasons, as they are integrated into the buildings themselves. Furthermore, people often respond very differently to an approach made in person than to one made online; confidence and assertiveness are psychologically different in the two settings. It is therefore important to test both the controls that keep threat actors out and, should those fail, whether staff feel empowered and supported to challenge individuals they believe do not belong – even a person of apparent authority – until credentials have been verified.

Why Physical Security Matters in Cybersecurity

At the top end of the scale, consider the breach caused by Edward Snowden at the NSA in 2013, which affected the national security of multiple countries. This was a trusted employee who abused his privileges as a system administrator to breach digital security controls, and who abused and compromised the credentials of other users who trusted him, to gain unauthorised access to highly sensitive information. He then breached physical security controls to extract that data and remove it, not only from the organisation, but also from the country. The impact of that data breach was enormous, in terms of both reputational damage and the exposure of tools and techniques used by the security services. Whilst he claimed his motivation was an underlying privacy concern (the surveillance programme he exposed was later ruled unlawful by US courts), the damage his actions caused has undoubtedly, though it is impossible to prove distinctly, posed a significant threat to life for numerous individuals worldwide. Regardless, this breach was a failing of both physical controls (preventing material from leaving the premises) and digital ones (abusing trusted access to reach digital data stores).

Other attacks exist, however. Consider 2008, when a 14-year-old with a homemade transmitter deliberately attacked the tram system of the Polish city of Lodz, derailing four trams and injuring a dozen people. Using published material, he spent months studying the city’s rail lines to determine where best to create havoc; then, using nothing more than a converted TV remote, he inflicted significant damage. In this instance, the digital failings lay in the published material about the control systems and in the system acting upon unauthenticated, unauthorised signals, whilst the physical failings lay in the attacker’s ability to direct signals at the receiver, which permitted the attack to occur.

Key Benefits of Physical Red Teaming

A benefit of physical red teaming is in testing and improving an organisation’s response to physical breaches or threats. Surveillance, access control systems, locks, and security staff can be assessed for weaknesses, and it can help identify lapses in employee vigilance (e.g., tailgating or failure to challenge strangers).

This in turn can lead to improvements in behaviours, policies, and procedures for physical access management. Furthermore, physical red teaming encourages employees to take an active role in security practices and fosters an overall culture of security.

Challenges of Physical Red Teaming

However, delivering physical red teaming is fraught with ethical and legal risk: aside from trespassing, breaking and entering, and other criminal infringements, there may also be civil litigation concerns depending on the approach the consultants take.

It is therefore important to establish clear consent and guidelines with the organisation. This must include the agreed scope – what activities the consultants are permitted to undertake, when and where those activities will take place, and who at the client organisation is responsible for the test. Additional property considerations, such as shared tenancies or public/private events that testing may impact, also need to be factored into the scope and planning. It is not unusual for this information to be captured in a “get out of jail” letter provided to the testers, along with client points of contact who can verify the test and stand down a response.

This ensures that testing remains realistic while minimising any disruption it causes.

Cost will always be a concern, as consultants need time not only to travel to site, but also to conduct surveillance, equip suitable props (some of which may need to be custom made), and develop and deploy tooling to bypass certain controls (such as locks and card readers) where the engagement requires it.

Conclusion

The physical threat axis is one that people have been attacking since time immemorial. In today’s world, however, we have shrunk distances with digital estates and established satellite offices beyond our traditional perimeters, and as a result we have increased the complexity of the environments we must defend. Red teaming permits an organisation to assess all of these threat axes and to recognise that physical and digital controls are not only required but must be regularly exercised to ensure their effectiveness.

Readers of this post are therefore encouraged to consider the physical security of their locations – whether offices, factories, transit hubs, or public buildings, through to the security of home offices – and to ask themselves whether they have verified that their security controls are effective, and when those controls were last exercised.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Flawed Foundations – Issues Commonly Identified During Red Team Engagements

Cybersecurity Red Team engagements are exercises designed to simulate adversarial threats to organisations. They are founded on real world Tactics, Techniques, and Procedures that cybercriminals, nation states, and other threat actors employ when attacking an organisation. It is a tool for exercising detection and response capabilities and to understand how the organisation would react in the event of a real-world breach.

One of the outcomes of such exercises is an increased awareness of vulnerabilities, misconfigurations, and gaps in systems and security controls that could result in the organisation’s compromise and impact business delivery, causing reputational, financial, and legal damage.

Threat actors rarely need to employ cutting-edge capabilities or “zero day” exploits to compromise an organisation. Organisations grow organically – they exist to deliver their business – and as a result security is often not a key consideration from their founding. This means critical issues can exist in the foundations of an organisation’s IT that threat actors will be more than happy to abuse.

This post covers five of the most common vulnerabilities we regularly see when conducting red team engagements for our clients. Its purpose is to raise awareness among IT professionals and business leaders about potential security risks.

Insufficient privilege management

This issue presents when accounts are granted greater privileges within the organisation than they require to conduct their work. It can appear as users with local administrator privileges, accounts that have been given indirect administrator privileges, or overly privileged service accounts.

Some examples include:

  • Users who are all local administrators on their work devices – this gives them the ability to install any software they might need to conduct their work, but also exposes the organisation to significant risk should the device or user account become compromised. If users do require privileges on their laptops, they should also be provided with a corporate virtual device (cloud or host based) that has different credentials from the base laptop and is the only device permitted to connect to the corporate infrastructure. This limits the exposure while permitting staff to continue to operate. In a red team, local administrator rights permit us to abuse a machine account and bypass numerous security tools and controls that would normally impede our ability to operate.
  • Users with indirect administrator privileges – in Microsoft Windows domains, users can belong to groups, but groups can also belong to other groups, and users can inherit privileges through this nesting. Even where it was never the intention to grant a user administrator privileges, and the user is unaware they have been given this power, such a misconfiguration can arise quite easily and exposes the organisation to considerable risk. It can only be addressed through in-depth analysis of Active Directory and consistent auditing, combined with sound system architecture. This sort of subtle misconfiguration only really becomes apparent when a threat actor or red team starts to enumerate the Active Directory environment; when found, though, it rapidly leads to a full organisational compromise.
  • Overly privileged service accounts – service accounts exist to ensure that specific systems, such as databases or applications, can authenticate users accessing them from the domain, and to provide domain resources to the system. A common misconfiguration is granting them high levels of privilege during installation even though they do not require them. Service accounts, by the way they operate, need to be exposed, and threat actors who identify overly privileged accounts can attempt to capture an authentication involving the service. This can be attacked offline to retrieve the password, which can then lead to greater compromise within the estate. Service account privileges should be regularly audited and, where possible, removed or restricted. If a service cannot use a managed service account (introduced in Windows Server 2008 R2, with group managed service accounts following in Windows Server 2012), then giving it a password of at least 16 characters, recorded securely in case it is needed in future, will severely restrict a threat actor’s ability to abuse it. Abuse of service accounts is becoming rarer, but legacy systems that do not support long passwords mean significant numbers of these accounts remain. Whether they can be abused is often tied to whether they have logon rights across the network, as identifying their compromise can be problematic if the threat actor or red team operates securely.
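The nested-group inheritance described above can be illustrated with a small sketch. The group names and membership data here are invented, and a real audit would enumerate this graph from Active Directory itself rather than from hard-coded dictionaries:

```python
# NESTING maps each group to the groups it is itself a member of.
NESTING = {
    "Domain Admins": set(),
    "IT-Ops": {"Domain Admins"},       # IT-Ops is nested inside Domain Admins...
    "Helpdesk-Tier2": {"IT-Ops"},      # ...and Helpdesk-Tier2 inside IT-Ops
}
MEMBERS = {
    "IT-Ops": {"alice"},
    "Helpdesk-Tier2": {"bob"},
}

def effective_groups(group, nesting=NESTING, seen=None):
    """All groups whose privileges `group` inherits via nesting (including itself)."""
    seen = set() if seen is None else seen
    if group not in seen:
        seen.add(group)
        for parent in nesting.get(group, ()):
            effective_groups(parent, nesting, seen)
    return seen

def indirect_admins(admin_group="Domain Admins"):
    """Users who end up with admin rights without being direct members."""
    found = set()
    for group, users in MEMBERS.items():
        if group != admin_group and admin_group in effective_groups(group):
            found |= users
    return found
```

In this toy graph, neither alice nor bob appears anywhere near Domain Admins in a naive membership listing, yet both inherit its privileges – exactly the kind of finding that only surfaces when the nesting is walked in full.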

Poor credential complexity and hygiene

This issue presents when users are given no corporately supported method of storing credential material; as a result, the passwords chosen are often easy to guess or predict, and they are stored in browsers, in clear-text files on shared network drives, or on individual hosts.

  • Credential storage – staff will often use plain-text files, Excel documents, emails, OneNote notebooks, Confluence pages, or browsers to store credentials when there is no corporately provided solution. The problem with all of these is that they are insecure: the passwords can be retrieved using trivial methods, which means organisations are often one step away from a significant breach. Password vaults such as LastPass, Bitwarden, KeePass, or 1Password, whilst themselves targets for threat actors, offer considerably greater protection, as long as the credential used to unlock them is not single factor and is not stored with the vault. It is standard practice for red teams and threat actors to try to locate clear-text credentials, and attacking vaults significantly increases the difficulty and complexity of the tradecraft required when the material to unlock the vault uses MFA or is not stored locally alongside it.
  • Credential complexity – over the last 20 years, the advice on password complexity has changed considerably. We used to advise staff to rotate passwords every 30/60/90 days and choose random mixes of uppercase, lowercase, numbers, and punctuation of a minimum length; today we advise against regular rotation and instead recommend a phrase, or three random, easy-to-memorise words combined with punctuation and numbers. As computational power has increased, short passwords, regardless of composition, have become easier to break; and when staff rotated passwords regularly, the result was often just a changed number rather than an entirely new password, making them easy to predict. Education is critical in addressing this, and many password vaults offer a password generator that makes compliance easier for staff. Too often I have seen weak passwords that technically complied with complexity policies, because people will seek the simplest way to comply. Credential complexity buys an organisation time – time to notice a breach – and raises the effort a threat actor must invest to attack the organisation effectively.
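The “three random words” approach is easy to support in tooling. The sketch below uses a deliberately tiny, illustrative wordlist; a real generator would load a large curated list (for example a diceware-style list of several thousand words), but even this toy version comfortably exceeds a 16-character minimum:

```python
import secrets
import string

# Illustrative wordlist only; a real deployment would use thousands of words
WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern",
         "meadow", "quartz", "velvet", "anchor", "summit", "cobalt"]

def passphrase(n_words=3, sep="-"):
    """Three random, memorable words joined with punctuation, plus a digit."""
    words = [secrets.choice(WORDS) for _ in range(n_words)]
    return sep.join(words) + sep + secrets.choice(string.digits)
```

Using the `secrets` module rather than `random` matters here: it draws from the operating system’s cryptographically secure source, which is the appropriate choice for anything credential-related.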

Insufficient Network Segregation

This issue occurs when a network is kept flat: hosts are allowed to connect to any server or workstation in the environment on any exposed port, regardless of department or geographical region. It also covers cases where clients connecting to the network over VPN are not isolated from other clients.

  • VPN isolation – clients that connect through the VPN to access domain resources such as file shares can often be communicated with directly by other clients. Threat actors can abuse this by seeding network resources with materials that force any client loading them to connect to a compromised host, often a compromised client device. When this occurs, the connecting host transmits encrypted user credentials to authenticate with the device; these can be taken offline by the threat actor and cracked, potentially leading to greater compromise of the network. Isolating hosts on a VPN limits where a threat actor or red team can pivot their attacks, and makes malicious activity easier to identify and contain.
  • Flat networks – networks are often implemented to let the business operate efficiently, and the easiest implementation is a flat network in which any networked resource is available to staff regardless of department or geographical location, with access managed purely by credentials and role-based access control (RBAC). Unfortunately, this configuration often exposes administrative ports and devices that can be attacked, and once a threat actor recovers privileged credentials, a flat network offers them significant advantages for further compromise of the organisation. Segregating management ports and services, separating regions and departments, and restricting access to resources based on need will severely restrict and delay a threat actor’s – and a red team’s – ability to move around the network and impact services.
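The kind of exposure a flat network creates can be illustrated with a small sketch that flags management services reachable from a user segment. The hosts and port data here are invented; in practice they would come from an authorised scan run from a user VLAN:

```python
# Common remote-management services that workstations rarely need to reach
MGMT_PORTS = {22: "SSH", 445: "SMB", 3389: "RDP", 5985: "WinRM"}

# Invented scan results: host -> set of ports reachable from a user VLAN
observed = {
    "fileserver-01": {445, 3389},
    "hr-workstation": {445},
    "db-cluster": {1433, 5985},
}

def segmentation_findings(scan, mgmt=MGMT_PORTS):
    """Flag management services that a user segment should not be able to reach."""
    findings = []
    for host, ports in sorted(scan.items()):
        for port in sorted(ports & mgmt.keys()):
            findings.append(f"{host}: {mgmt[port]} ({port}) reachable from user VLAN")
    return findings
```

On a well-segmented network, a run like this from a workstation subnet should produce an empty list; every finding represents a path a threat actor with stolen credentials could use to move laterally.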

Weak Endpoint Security

Workstations are often the first foothold achieved by threat actors when attacking an organisation. As a result, they require constant monitoring and controls to ensure they stay secure. This can be achieved through a combination of maintained anti-virus, effective Endpoint Detection and Response, and application control. Furthermore, controlling which endpoint devices are allowed to connect to the network will limit the organisation’s exposure.

  • Unmanaged devices – endpoints that are not regularly monitored or managed increase risk. Permitting Bring Your Own Device (BYOD) can boost productivity, as staff use devices they have customised; however, it also exposes the organisation, as these devices may not comply with its security requirements. It compounds matters when a threat is detected, since identifying a rogue device becomes much harder when every BYOD device must be treated as potentially rogue, and you have little insight into where else these devices have been used or who else has used them. Permitting only managed devices on your network, and severely restricting what BYOD devices can access if they must be used, limits this exposure. Restrictions on managed devices can be bypassed, but doing so raises the complexity and sophistication of the tradecraft required, which takes longer and increases the chance of detection.
  • Anti-virus – anti-virus products used to be the hallmark of device security. However, the majority work on signatures, meaning they are only effective against threats that have been identified and listed in their definition files. Threat actors know this and will often modify their malware so it no longer matches a signature and can therefore evade detection. The protection anti-virus offers is consequently limited, but if well maintained it can reduce the organisation’s exposure to common attacks and act as a tripwire defence should a capable adversary deploy tooling that has previously been signatured. Bypassing anti-virus can be trivial, but it provides an additional layer of defence that increases the complexity of a red team’s or threat actor’s activities.
  • Lack of Endpoint Detection and Response (EDR) configuration – EDR goes a step beyond anti-virus, examining all events occurring on a device to identify suspicious tools, behaviours, and activities that could indicate a breach. Like anti-virus, it typically works with detection heuristics and rules that can be centrally managed, and it also permits the organisation to isolate suspect devices. However, EDR requires significant time to tune for the environment – normal activity in one organisation may be suspicious in another – and can be costly to implement and maintain correctly, while only being effective when deployed to every device. Too often, organisations do not invest the time, or do not understand the difference between default and tuned rule sets; false positives then impact the business and erode trust in the tooling. Lacking an EDR product severely restricts an organisation’s ability to detect and respond to threats in a capable, effective manner. Well-maintained EDR operated by a well-resourced, exercised security team significantly impacts threat actor and red team activities, often bringing the mean time to detect a breach down from days or weeks to hours or days.
  • Application Control – when application allowlisting was first introduced it was clunky and frequently broke business applications. It has evolved considerably since those early days, but is still not well implemented by many organisations. It takes significant initial investment to implement properly, but it strongly restricts a threat actor’s ability to operate in an environment. Good implementations are based on user roles: most employees require only a browser and basic office applications to do their work, with additional applications allowed depending on role; users to whom application control cannot be applied are given segregated devices, which helps limit exposure. Without application control, threat actors and red teams can run tools that most users have no business using in their day jobs. Its absence also encourages shadow IT, as users introduce portable apps to their devices, which makes incident investigation difficult by muddying the waters as to whether activity is legitimate use or threat actor activity.
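The role-based, deny-by-default allowlisting described above can be sketched with a simple hash check. This is a minimal illustration only – the role names, binary hashes, and `is_allowed` helper are all hypothetical, and real products (such as AppLocker or WDAC) enforce this at the operating-system level, often by publisher signature as well as by file hash:

```python
import hashlib

# Hypothetical stand-ins for the SHA-256 hashes of approved binaries.
BROWSER_HASH = hashlib.sha256(b"browser-build-1.0").hexdigest()
OFFICE_HASH = hashlib.sha256(b"office-build-1.0").hexdigest()
IDE_HASH = hashlib.sha256(b"ide-build-1.0").hexdigest()

# Role-based allowlists: most staff get only the basics; extra
# applications are granted per role.
ROLE_ALLOWLISTS = {
    "standard_user": {BROWSER_HASH, OFFICE_HASH},
    "developer": {BROWSER_HASH, OFFICE_HASH, IDE_HASH},
}

def is_allowed(role: str, file_digest: str) -> bool:
    """Deny by default: only hashes on the role's allowlist may execute."""
    return file_digest in ROLE_ALLOWLISTS.get(role, set())
```

A standard user attempting to run an application outside their allowlist – a portable app, or a red team’s tooling – is denied by default, which is exactly the friction that frustrates an intruder’s ability to operate.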

Insufficient Logging and Monitoring

If an incident does occur – and remember that red team engagements are also about exercising the organisation’s ability to respond – logging and monitoring become paramount to an effective response. When we have exercised organisations in the past, we often find that at this stage of the engagement a number of issues quickly become apparent that prevent the security teams from being effective. These are almost always linked to a lack of centralised logging, poor incident detection, and log retention issues.

  • Lack of Centralised Logging: threat actors have been known to wipe logs during their activities; when this happens on compromised devices it makes detecting their activities difficult and reconstructing them impossible. Centralising logs allows additional tooling to be deployed as a secondary defence to detect malicious activity so that devices can be isolated, and it makes reconstruction of events significantly easier. Many EDR products support centralised logging, but only on devices with agents installed and on supported operating systems; to make centralisation effective, additional tooling such as syslog and Sysmon may be needed to ensure logs are forwarded to central hosts for analysis and curation. Centralised logs are also easier to store for longer periods, permitting effective investigation into how, where, and what the threat actor or red team has been doing, and what they accomplished before detection and containment activities were undertaken.
  • Poor Incident Detection: organisations that do not regularly exercise their security teams often respond poorly when an incident occurs. Staff need to practise using SIEM (Security Information and Event Management) tooling and to develop playbooks and queries that can be run against the monitoring software in order to locate and classify threats. Without this practice, distinguishing genuine threats from background user activity becomes tedious, difficult, and ineffective, resulting in poor containment and a weak response. In red teams, this can result in alerts being ignored or classed as false positives, which exacerbates an incident.
  • Log Retention Issues: many organisations keep at most 30 days of logs. Furthermore, many believe they retain more than this because they keep 180 days of alerts, not realising that alerts and logs are different things. As a result, we can often review alerts as far back as six months, but can only see what happened around those alerts for 30 days. Many threat actors know about this shortcoming and, once established in the network, will wait 30 days before conducting their activities, making it difficult for responders to know how they got in, how long they have been there, and where else they have been. This often comes up in red teams, as many engagements run for at least four weeks, if not longer, to deliver a scenario, which makes exercising detection and response difficult when this issue is present.
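The retention gap described in the last bullet can be illustrated with simple date arithmetic. This is a minimal sketch using the hypothetical 30-day log / 180-day alert figures from the scenario above; the `investigable_window` helper is illustrative, not a real tool:

```python
from datetime import date, timedelta

LOG_RETENTION_DAYS = 30     # raw event logs
ALERT_RETENTION_DAYS = 180  # alert records only, with no surrounding events

def investigable_window(today: date, intrusion_date: date):
    """Return (days of the dwell covered by raw logs, whether the
    initial access event is still in the logs). An attacker who waits
    out the log retention period before acting leaves no raw events
    from their initial access."""
    oldest_log = today - timedelta(days=LOG_RETENTION_DAYS)
    dwell_days = (today - intrusion_date).days
    covered = min(dwell_days, LOG_RETENTION_DAYS)
    initial_access_visible = intrusion_date >= oldest_log
    return covered, initial_access_visible

today = date(2025, 6, 30)
# An attacker who gained access 45 days ago and waited a month to act:
covered, visible = investigable_window(today, today - timedelta(days=45))
# Only 30 of the 45 dwell days are covered, and the initial access
# event has already aged out of the raw logs.
```

Responders in this position can still see six months of alerts, but the events needed to answer “how did they get in?” no longer exist.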

Conclusion

These are just the five most common issues we identify when conducting a red team engagement; they are not, however, the only issues we come across. They are fundamental issues, ingrained in organisations through a mixture of culture and a lack of deliberate architectural design.

Red team engagements not only help shine a light on these sorts of issues but also allow the business to plan how to address them at a pace that works for it, rather than as a consequence of a breach. Additionally, red team engagements can identify areas where focused follow-up testing would exercise further controls, provide a deeper understanding of identified issues, and validate controls implemented after the engagement.

In short, a red team engagement is a starting point or milestone marker in an organisation’s security journey. It is used in tandem with other security frameworks and capabilities to deliver a layered, effective security function that helps an organisation adapt, protect, detect, respond, and recover in the face of an ever-evolving world of cybersecurity threats.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Let’s Go Phishing

Kian J recounts a recent simulated phishing engagement delivered to a major financial organisation

We recently completed a project for a major financial organisation which saw us deliver a red team engagement covering three scenarios. The first involved a simulated phishing attack and we thought it worth sharing the procedures used by our consultants to gain complete, persistent, unauthorised access to the company’s internal network.

Before we embarked upon the exercise, we needed to assess the requirements of the phishing campaign and pick a campaign profile that was a best fit for the use case. Examples of possible attacks included:

  • Email Phishing
  • URL/HTTPS Phishing
  • Spear Phishing
  • Whale Phishing
  • Vishing
  • Smishing
  • Angler Phishing
  • Pharming
  • Clone Phishing

Note: this is not a complete list of attacks, but only a handful that would be considered in a remote phishing engagement.

Given the engagement requirements, we decided that the best approach would be a multi-pronged campaign consisting of vishing, email phishing, and URL phishing.

Initially we used email; however, it quickly became apparent that users had been trained in this area, resulting in burnt accounts, which we were able to diagnose from a high bounce-back rate on our emails. Despite running the assault over a couple of days with various attack vectors, it all led to the same result: our account or domain being blacklisted.

At this stage the natural conclusion would have been that the staff had received adequate phishing training. However, we decided to give it one last shot with a vishing campaign combined with URL phishing.

We continued with our OSINT efforts, specifically scraping phone numbers from sites such as rocketreach.io and lusha.com to put together a new target list. Ideally, we wanted this list to consist of higher-value targets such as developers or technical leadership roles, so that once we landed in the environment we would have more privileges with which to escalate access. This resulted in a target list of 31 phone numbers. The next step was to get the staff either to visit a malicious site or to give us their username, password, and MFA token over the phone. We figured the first option would have the better outcome (this is where the URL phishing comes into play). So we went through the endpoints we had access to, decided to clone a Citrix site, and created the following page:

Citrix Login
Citrix Gateway Login Screen

After credentials were submitted, the page would then ask for an MFA token:

Citrix Login
Login Requesting an MFA Token Value

Great, now we had a target list and a malicious site (hosted in AWS to bypass any proxy filtering) and so were primed and ready to begin the vishing attack. 

On our first call we managed to get hold of someone I will refer to as “Mark”. We then ran through a simple script with him, explaining that we were swapping over Citrix environments and needed to test that the database changeover had worked.

Mark was a great help throughout the assessment, but on this call specifically he gave us a vital piece of information: the Citrix authentication was being handled by Microsoft single sign-on (SSO), and the page wasn’t sending him the SMS. We quickly got another consultant on the case to process the request (by submitting the credentials into the legitimate site), which forced the SMS process to kick off.

We then called Mark back and, as we were already friendly with him, went through the same process. Mark then submitted the newly generated MFA token, an example of the output of which can be seen here:

Phished Credentials
Receiving the Credentials from our Target

Mark was then forwarded straight to the legitimate landing page, where it appeared as though sign-in had been successful; this was possible because their session time-out periods were overly long.

Perfect! We now had Mark’s username, password, and an MFA token, but to access the Citrix environment consistently we would need multiple tokens. As it worked out, we had an easy solution: the Microsoft Authenticator application. We proceeded to log in to the app with Mark’s details:

Authenticator Screen
Microsoft Authenticator App

This was then reflected on the website, adding two new options for the user:

Login Screen with Auth Options
The login then allowed use of the Authenticator

The two new options, “Approve a request on my Microsoft Authentication app” and “Use a verification code from my mobile app”, were now the only indicators that the user had been compromised; however, this did not lead to the campaign being discovered.

Finally, after a week of attempts, we had established a means of gaining complete, persistent, unauthorised access to the company’s internal network. From this point we were able to compromise another two accounts, totalling three, before deciding that further compromise would provide no additional advantage and disclosing our access to the client.

In conclusion, we think there are two key recommendations, not just for the company concerned but for anyone else who thinks they have covered the bases when it comes to phishing attacks. Firstly, we advise that staff are trained in the different forms of phishing attack, such as email-based and voice-based (vishing) attacks; staff can quickly let their guard down when a different channel is used. Secondly, we advise that any unmanaged devices are blocked, or at least have heavy restrictions placed upon them.

If you’d like to talk to us about how we can help test your resilience to a phishing attack, do contact us at contact@prisminfosec.com or call us on 01242 652 100.

Prism Infosec achieves CREST STAR Certification

Prism Infosec is delighted to announce that its approach and methodologies for the delivery of Simulated Target Attack and Response (STAR) Intelligence-Led Penetration Testing (red teaming) services have been assessed and approved by CREST.

Prism Infosec has therefore been awarded CREST STAR membership status.

To book a red team engagement aligned to our STAR methodology, see our https://prisminfosec.com/services/red-teaming/ page and request a callback!