Capitalising on the Investment of a Red Team Engagement

Cybersecurity red teams are designed to evaluate an organisation’s ability to detect and respond to cybersecurity threats. Engagements are modelled on real-life breaches, giving an organisation an opportunity to determine whether it has the resiliency to withstand a similar breach. No two breaches are entirely alike, because no two environments are: each organisation’s infrastructure grows both organically and by design, often built around its initial purpose before being reshaped by acquisitions and evolving requirements. As such, the first stage of every red team, and of every real-world breach, is understanding the environment well enough to pick out the critical components that can springboard the attacker to the next element of the breach. Hopefully, somewhere along that route detections will occur, and the organisation’s security team can stress test its ability to respond to and mitigate the threat. Regardless of the outcome, however, too often once the scenario is done the red team hand in a report documenting what they were asked to do, how it went, and what recommendations would make the organisation more resilient. But is that enough?

Detection and Response assessments are part of the methodology for the Bank of England and FCA’s CBEST regulated intelligence-led penetration testing (red teaming). However, their interpretation is aligned more with understanding response times and capabilities. At LRQA (formerly LRQA Nettitude), I learned the value of a more attuned Detection and Response Assessment, a lesson I brought with me and evolved at Prism Infosec.

At its heart, the Detection and Response Assessment takes the output of the red team and turns it on its head, examining the engagement through the eyes of the defender. We identify at least one instance of each of the critical steps of the breach: the delivery, the exploitation, the discovery, the privilege escalation, the lateral movement, and the action on objectives. For each of those, we look to identify whether the defenders received any telemetry. If they did, we look to see whether any of that telemetry triggered a rule in their security products. If it triggered a rule, we look to see what sort of alert it generated. If an alert was generated, we then look to see what happened with it – was a response recorded? If a response was recorded, what did the team do about it? Was it closed as a false positive, or did it lead to the containment of the red team?

Five “so what” questions, at the end of which we have either identified a gap in the security system or process, or identified good, strong controls and behaviours. There is more to it than that, of course, but from a technical delivery point of view this is what will drive benefits for the organisation. A red team should be able to highlight the good behaviours as well as the ones that still require work, and a good Detection and Response Assessment not only lets the organisation validate its controls but also helps it understand why defences didn’t work as well as they should. This gives the red team report an important foil – how the organisation responded to the engagement. It shows the other side of the coin in a report that will be circulated at a senior level, and can set the entire engagement in a usefully stark light.
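The cascade of questions above can be sketched as a simple decision chain. This is purely illustrative pseudologic, not Prism Infosec’s actual tooling; the field names are invented for the example:

```python
# Illustrative sketch: walking the five "so what" questions for a single
# red team action, stopping at the first gap in the defensive chain.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DefenderView:
    """What the defenders had for one red team action."""
    telemetry: bool          # did any sensor record the activity?
    rule_triggered: bool     # did the telemetry match a detection rule?
    alert_raised: bool       # did the rule produce an alert an analyst saw?
    response_recorded: bool  # did anyone act on the alert?
    contained: bool          # did the response contain the red team?

# The five questions, in the order they are asked.
QUESTIONS = ["telemetry", "rule_triggered", "alert_raised",
             "response_recorded", "contained"]

def first_gap(view: DefenderView) -> Optional[str]:
    """Return the first failed question, or None if every control held."""
    for question in QUESTIONS:
        if not getattr(view, question):
            return question
    return None

# Example: telemetry existed and a rule fired, but no alert reached an analyst.
print(first_gap(DefenderView(True, True, False, False, False)))  # alert_raised
```

Each answer of “no” pinpoints where investment is needed; a chain of five “yes” answers is the validation of strong controls the assessment is designed to surface.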

The results can be seen, digested, and understood by C-suite executives. There is little value in a red team reporting to the board only that the organisation was breached, and remains at risk, because of poor credential hygiene or outdated software. The board already knows that security is expensive and that they are at risk. But if a red team can also demonstrate the benefits of the existing security investment, and help direct funding more efficiently, it becomes a much more powerful instrument of change. Better still, it becomes a measurable test – we can see how that investment improves things over time by comparing results between engagements and using that to tweak or adjust.

One final benefit is that security professionals on both sides of the divide (defenders and attackers) gain substantial knowledge from such assessments. Both sides lift the curtain and explain their techniques, motivations, and the limitations of their tooling and methodology. As a result, both sides become much more effective, build greater respect, and are more willing to collaborate on future projects when not under direct test.

Next time your company is considering a red team, don’t just look at how long it will take to deliver or the cost, but also consider the return you are getting on that investment in the form of what will be delivered to your board. Please feel free to contact us at Prism Infosec if you would like to know more.

Our Red Team Services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Gone Phishing

Social engineering is extremely commonplace; we all experience it every day, and have done from an extremely early age. The most common social engineering we are exposed to is advertising: selling the desire to obtain goods or services using a variety of tactics designed to entice us. This is so socially acceptable that we barely even notice it, let alone comment on it, and it is extremely successful. In cybersecurity we see social engineering in a more sinister light. Here it is used to achieve specific goals that further a compromise of the organisation. The social engineering can take the form of physical interactions, but more often it is digital, expressing itself as Phishing (emails), Vishing (voice calls), and Smishing (IM/SMS text messages). In this blog we’ll look at how we run each of these sorts of campaigns to model real-world threat actors.

Before we look at the individual techniques, it’s worth focusing on the target for a second. The victims of social engineering in cybersecurity are often selected not for who they are, but for the access or role they currently hold in the organisation. The fact of the matter is that anyone can be a victim of social engineering; all it takes is the right lure at the wrong time to turn a user into a victim. We spend a significant amount of time training staff on social engineering; however, people are only a single (albeit vital) thread in the tapestry of what makes a social engineering attack successful. Users should never be the single control preventing an organisation from being hacked; they should not even be the first or final control. There need to be technical controls supporting users: preventing attacks from reaching them, flagging suspicious behaviours to them, and protecting the environment if a user does fall victim. Regardless of the outcome, a user should have confidence that their organisation will support them in reporting an incident and responding appropriately.

Phishing

Phishing often presents in one of two ways: mass or spear. In mass phishing the attacker sends a cookie-cutter email to as many targets as possible. They have a low expectation of success against any one target, and instead trade on probability. Consider: if a mass phishing campaign has a 0.1% chance of claiming any one recipient as a victim, then sending 10,000 emails will still yield around 10 victims. These attacks are cheap to set up, cheap to run, and even at such low return rates can still turn a profit. Fortunately, automated tools are particularly good at identifying these sorts of mass emails and classifying them as spam.
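The arithmetic behind that trade-off is trivial but worth making explicit; the figures below are the illustrative numbers from the paragraph above:

```python
# Back-of-the-envelope sketch of why mass phishing pays off at tiny hit rates.
emails_sent = 10_000
success_rate = 0.001  # 0.1% chance that any one recipient falls for the lure

expected_victims = emails_sent * success_rate
print(expected_victims)  # roughly 10 victims per campaign
```

Because the marginal cost of each extra email is near zero, the attacker’s economics improve simply by sending more, which is exactly why volume-based spam filtering is such an effective countermeasure.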

The alternative approach is the “spear phish”. Here there are very few targets, but each email is carefully crafted to maximise the chance the victim will respond and follow through. These attacks require research into the victim to identify approaches and likely contacts they will respond to. They are much harder to spot and much more likely to succeed, but they cost significantly more.

Vishing

Vishing is when a threat actor calls their victim, often working from a script and possibly some seed data, to achieve a specific goal.

Vishing calls usually employ impersonation: the threat actor pretends to be an authority, or an individual the victim is likely to interact with. These attacks have become more pervasive in recent years thanks to generative AI advances that permit real-time voice imitation. The impersonation will often be supported by additional tools such as spoofed caller ID.

Tactics employed in vishing calls usually play on fear or greed, and attempt to create a sense of urgency. Often the approach is backed by partial information (sometimes called seed information) which helps the threat actor drive the initial conversation.

Regardless of the approach taken, the threat actor will usually be seeking either to obtain sensitive information or to get the victim to grant remote access to a device. The remote access might be obvious, such as using Windows Quick Assist or installing a tool like ScreenConnect, AnyDesk, TeamViewer, or RustDesk, or the victim may simply be asked to download and open a document on the attacker’s behalf.

Smishing

Smishing is when a threat actor contacts their victim using instant messaging or the Short Message Service (SMS).

Smishing attacks are becoming more common, especially with the business movement towards tools like Teams and Slack. For Microsoft Teams this can be particularly insidious, as the threat actor can create a throwaway Microsoft Azure account to obtain an “onmicrosoft” domain. They can then rename the account to look more legitimate before making a connection to their target. This massively helps sell the impersonation and makes targets far more likely to click links shared with them, because a significant number of defences are bypassed through this attack vector.
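As a rough illustration of one possible mitigation, a defender could flag external senders still using a default tenant domain. The field handling, domain list, and logic below are assumptions for the sketch, not a Teams API:

```python
# Illustrative heuristic only: flag chat messages from external senders whose
# address uses a default "<tenant>.onmicrosoft.com" domain, a pattern abused
# by throwaway Azure tenants. All names here are invented for the example.

OWN_DOMAINS = {"example.com"}  # assumption: the organisation's own domains

def is_suspicious_sender(sender_email: str) -> bool:
    """True if the sender is external and on a default tenant domain."""
    domain = sender_email.rsplit("@", 1)[-1].lower()
    if domain in OWN_DOMAINS:
        return False  # internal senders are out of scope for this check
    return domain.endswith(".onmicrosoft.com")

print(is_suspicious_sender("it.support@helpdesk-corp.onmicrosoft.com"))  # True
print(is_suspicious_sender("alice@example.com"))                         # False
```

A legitimate partner may of course also use a default tenant domain, so a check like this is a triage signal for review rather than a block rule.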

Mobile smishing is also a threat; however, like mass phishing scams, these campaigns are much more difficult to succeed with, and again rely on sheer weight of numbers to produce a small number of successes.

Protecting yourself

The greatest user defence against phishing in all its forms is scepticism. However, it is impossible to be sceptical of every email, phone call, and IM when your role requires interaction with other people, especially strangers. There are some steps which can help. For a delayed approach, such as phishing or smishing, the first thing to do is not to respond immediately. The last thing a threat actor employing these tactics wants is for you to take your time: it removes the sense of urgency and gives you breathing space to counter the fear or greed being employed in the lure. Time also permits you to verify details – call your colleagues, or the organisation the sender claims to be from, to confirm their identity. For interactive approaches, such as vishing, if the call is unexpected, again take time to verify the caller. Even if they pass those checks or sound familiar, trust your instincts: if what a trusted source is asking you to do is unusual, even if it sounds reasonable, telling them you will call them back and asking for a number you can check will make a massive difference. A genuine caller will accept and provide the information; an attacker will try to keep you on the line and change your mind, adding more pressure to the call. In that event, hanging up and walking away is often the best course, and gives you space to gather your thoughts.

Security controls can help by preventing some of these approaches from reaching users, or by enabling suitable responses and protecting against harm should those defences and scepticism fail. However, they are not a panacea, and need to be calibrated and exercised regularly to ensure they remain effective.

Final Thoughts

Crafting a phishing attack ethically is a challenge for cybersecurity companies. We try to avoid lures which trade on fear or empty promises to achieve the goal of the engagement. For example, cybersecurity companies were careful to avoid promising COVID vaccines in phishing lures during the pandemic: exploiting people’s fear of getting sick, and their hope for a prophylactic, during a time of global desperation and stress would have been unethical. Likewise, many cybersecurity companies will avoid topics which could have legal implications for the client organisation, such as promised changes to salary, pensions, holidays, or working hours. A fine line does need to be trodden, however, as real-world cybercriminals will capitalise on exactly these sorts of topics to achieve their goals.

Ultimately, the goal of any phishing test is either to test staff training (in which case it’s best if the results are anonymised, as the focus should be on how well training was applied by staff, not on calling out individuals) or to achieve a foothold for a red team as part of a threat scenario. In the former, we are testing just the training the user has received and how they put it to use protecting themselves and the organisation. In the latter, we are evaluating the technical controls AND the user. In red teaming we will also sometimes use an “assisted click”. This is when we don’t want to test the user, just the technical controls: a briefed user follows the phish instructions if they receive them, no matter what they ask, but otherwise acts as if they had been successfully phished in the event the attack is detected and responded to.

Prism Infosec has a significant amount of experience in conducting social engineering engagements which includes phishing, smishing and vishing. If you would like to know more, then please feel free to contact us to discuss how we can help evaluate your defences and training.

Find out more here: Social engineering simulation mimics attacks on your organisation

Red Team Scenarios – Modelling the Threats

Introduction

Yesterday organisations were under cyber-attack; today even more organisations are under cyber-attack, and tomorrow the number will increase again. It has been increasing for years, and the trend will not reverse. Our world is getting smaller, threat actors are becoming more emboldened, and our defences continue to be tested. Any organisation can become the victim of a cybersecurity threat actor; you just need to have something they want – whether that is money, information, or a political stance or activity inimical to their ideology. Cybersecurity defences and security programmes will help your organisation be prepared for these threats, but like all defences they need to be tested; staff need to understand how to use them, when they should be invoked, and what to do when a breach happens.

Cybersecurity red teaming is about testing those defences. Security professionals take on the role of a threat actor and, using a scenario and appropriate tooling, conduct a real-world attack on your organisation to simulate the threat.

Scenarios

Scenarios form the heart of a red team service: they are defined by the objective, the threat actor, and the attack vector. Together these determine which defences, playbooks, and policies are going to be tested.

Scenarios are developed either from threat intelligence – i.e. the specific modus operandi of threat actors likely to target your organisation – or from a question the organisation wants answered in order to understand its security capabilities.

Regardless of the approach, all scenarios need to be realistic but also be delivered in a safe, secure, and above all, risk managed manner.

Objectives

Most red team engagements start by defining the objective. This is a system, privilege, or data set which, if breached, would result in a specific outcome the threat actor is seeking. Each scenario should have a primary target whose compromise would ultimately impact the organisation’s finances (through theft or disruption, such as ransomware), data (theft of Personally Identifiable Information (PII) or private research), or reputation (embarrassment or loss of trust through a breach of services or privacy). Secondary and tertiary objectives can be defined, but these are often milestones on the way to accomplishing the primary.

Objectives should be defined in terms of impacting Confidentiality (can threat actors read the data?), Integrity (can threat actors change the data?), or Availability (can threat actors deny legitimate access to the data?). This determines the level of access the red team will seek in order to accomplish their goal.

Threat Actors 

Once an objective is chosen, we need to understand who will attack it. This might be driven by threat intelligence, which indicates who is likely to attack the organisation; for a more open test, we can instead define the attacker by sophistication level.

Not all threat actors are equal in terms of skill, capability, motivation, and financial backing. We often refer to this collection of attributes as the threat actor’s sophistication. Different threat actors also favour different attack vectors, and if the scenario is derived from threat intelligence, this will inform how that should be manifested.

High Sophistication

The most mature threat actors are usually referred to as nation-state threat actors, though we have seen some cybercriminal gangs start to touch the edges of that space. They are extremely well resourced, often with not only capability development teams but also linguists, financial networks, and a sizeable number of operators able to deliver 24/7 attacks. They will often have access to private tooling that is likely to evade most security products, and they are usually motivated by politics (causing political embarrassment to rivals, stealing data to uplift national research, extreme financial theft, or degrading services to cause real-world impact and hardship). Examples in this group include APT28, APT38, and WIZARD SPIDER.

Medium Sophistication

In the mid-tier maturity range we find a number of cybercriminal and corporate-espionage threat actors. These often have significant financial backing, able to afford custom (albeit commercial) tooling obtained either legally or illegally. They may work solo, but are often supported by a small team that can operate 24/7, though they will usually limit themselves to specific working patterns where possible. They may have some custom-written capabilities, but these are often tweaked versions of open-source tools. They are usually motivated by financial concerns, whether profiting from stolen research or extracting funds directly from their victim. Occasionally they are motivated by some sort of activism, targeting organisations that represent or deliver a service for a perceived cause they disagree with; in this case they will often seek to use the attack as a platform to voice their politics, or to force the organisation to change its behaviour to align better with their beliefs. Examples of threat actors in this tier have included FIN13 and LAPSUS$.

Low Sophistication

At the lower tier of the maturity range, we are often faced with a single threat actor rather than a team; insiders are often grouped into this category. Threat actors in this category typically use open-source tooling, perhaps with light customisation depending on the individual’s skill set. They will often work fixed time zones based on their victim, and will often have only a single target at a time, or ever. Their motivation can be financial, but can also be personal belief or spite if they believe they have been wronged. Despite being considered the lowest sophistication of threat actor, they should never be underestimated: some of the most impactful cybersecurity breaches have been conducted by threat actors we would normally place in this category, such as Edward Snowden or Bradley Manning.

Attack Vector

Finally, now that we know what will be attacked and who will be attacking, we need to define how the attack will start. Again, threat intelligence gathered on different threat actors will show their preferred ways of starting an attack, and if the objective is realism, that should be the template. With a more open test, however, we can mix things up and use an alternative attack vector. This is not to say that specific threat actors won’t change their attack vector, but they do have favourites.

Keep in mind, the attack vector determines which security boundary will be the initial focus of the attack, and they can be grouped into the following categories:

External (Direct External Attackers)

  • Digital Social Engineering (phishing/vishing/smishing)
  • Perimeter Breach (zero days)
  • Physical (geographical location breach leading to digital foothold)

Supply Chain (Indirect External Attackers)

  • Software compromise (backdoored/malicious software updates from trusted vendor)
  • Trusted link compromise (MSP access into organisation)
  • Hardware compromise (unauthorised modified device)

Insider (both Direct and Indirect Internal Attackers)

  • Willing (malicious activity)
  • Unwilling (sold or stolen access)
  • Physical compromise

Each of these categories not only contains different attack vectors, but will often test different security boundaries and controls. A phishing attack will likely result in a foothold on a user’s desktop – the natural starting position for an insider conducting willing or unwilling attacks – yet the two will test different things: an insider will not necessarily need to deploy tooling that might be detected, and will already have passwords to potentially multiple systems as part of their job. Understanding this is the first step in determining how you want to test your security.
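For readers who prefer it structured, the grouping above can be expressed as a simple lookup from starting vector to the security boundary it exercises first. The boundary labels are our illustrative shorthand, not a formal standard:

```python
# The attack vector taxonomy as a lookup table: category -> vector -> the
# security boundary that vector exercises first (illustrative labels only).

ATTACK_VECTORS = {
    "external": {
        "digital social engineering": "mail/voice/IM filtering and user awareness",
        "perimeter breach": "internet-facing services and patching",
        "physical": "building access controls and network port security",
    },
    "supply chain": {
        "software compromise": "update integrity and vendor assurance",
        "trusted link compromise": "third-party (e.g. MSP) access controls",
        "hardware compromise": "device provenance and hardware checks",
    },
    "insider": {
        "willing malicious activity": "internal monitoring and least privilege",
        "unwilling sold/stolen access": "credential hygiene and MFA",
        "physical compromise": "on-site device and data handling controls",
    },
}

def boundary_for(category: str, vector: str) -> str:
    """Look up which boundary a given starting vector tests first."""
    return ATTACK_VECTORS[category][vector]

print(boundary_for("supply chain", "software compromise"))
```

A table like this makes the scenario-design conversation concrete: picking a row is picking which of your boundaries gets stress tested.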

Pulling it together

Once all these elements have been identified and defined, the scenario can move into the planning phase before delivery. This is where any prerequisites for delivering the scenario, any scenario milestones, any contingencies to help simulate top-tier threat actors, and any tooling preparations can be completed so the scenario can start. Keep in mind that whilst the scenario objective might be to compromise a system of note, the true purpose of the engagement is to determine whether the security teams, tools, and procedures can identify and respond to the threat. This can only be measured and understood if the security teams have no clue when or how they will be tested, because real-world threats will not give any notice either.

Even if the red team accomplish the goals, the scenario will still help security teams understand the gaps in their skills, tools, and policies so that they can react better in the future. Consider contacting Prism Infosec if you would like your security teams to reap these benefits too.

Our Red Team Services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Red Teams don’t go out of their way to get caught (except when they do)

Introduction

In testing an organisation, a red team will seek to emulate a threat actor by achieving a specific goal – whether that is gaining administrative control of the network and proving they can control backups (akin to how many ransomware operators work), proving access to financial systems, or gaining access to sensitive data repositories. They will employ tactics, tools, and capabilities aligned to the sophistication level of the threat actor they are pretending to be. The question asked about red teams is always “can the bad guys get to system X?”, when it really should be “can we spot the bad guys before they get to system X, AND do something effective about it?”. The unfortunate answer is that with enough time and effort, the bad guys will always get to X. What we can do in red teaming is tell you how the bad guys will get to X, and help you understand whether you can spot them trying.

Red Team Outcomes

In assessing an organisation, engagements tend to go one of two ways. The first (and unfortunately more common) is that the red team operators achieve the objective of the attack – sometimes entirely without detection, and sometimes with a detection but unsuccessful containment. The other is that the team is successfully detected (usually early on) and containment and eradication are not only successful but extremely effective.

So What?

In both cases, we have failed to answer some of the exam questions – namely, the level of visibility the security teams have across the network.

In the first instance, we don’t know why they failed to see us, why they failed to contain us, or why they didn’t spot any of the myriad other activities we conducted. We need to understand whether the issue is one of process or effort: is the security team drinking from a firehose of alerts, with our activity lost in the noise? Did they see nothing because they lack visibility into the network? Or is there telemetry but no alerting tuned to the sophistication level of the attacker’s capabilities and tactics? The red team can help answer these questions by moving the engagement into “Detection Threshold Testing”, where the sophistication of the Tactics, Techniques and Procedures is gradually lowered and the attack becomes noisier until a detection occurs and a response is observed. If the red team gets to the point of dropping disabled, un-obfuscated copies of known-bad tools on domain controllers that are monitored by security tools and there are still no detections, then the organisation needs to know, and work out why. This is where a Detection and Response Assessment (DRA) Workshop can add real value in understanding the root causes of the issues.
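Detection Threshold Testing can be thought of as a walk down a ladder of TTPs, from quietest to noisiest, stopping at the first detection. The ladder entries and detection check below are placeholders for illustration, not a real test plan:

```python
# Sketch of "Detection Threshold Testing": replay TTPs from most to least
# sophisticated (quietest to noisiest) until the defenders catch one.

from typing import Callable, Optional

# Ordered from stealthiest to noisiest (illustrative labels only).
TTP_LADDER = [
    "custom in-memory tooling",
    "tweaked open-source tooling",
    "well-known tooling, lightly obfuscated",
    "un-obfuscated known-bad tool on a domain controller",
]

def find_detection_threshold(detected: Callable[[str], bool]) -> Optional[str]:
    """Return the stealthiest TTP the defenders caught, or None if nothing was."""
    for ttp in TTP_LADDER:
        if detected(ttp):
            return ttp
    return None  # no detections at all: an urgent finding in itself

# Example: defenders only catch the noisiest rung of the ladder.
threshold = find_detection_threshold(
    lambda ttp: ttp == "un-obfuscated known-bad tool on a domain controller"
)
print(threshold)
```

The rung at which detection first occurs is the organisation’s current detection threshold; everything above it is the gap the DRA workshop then investigates.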

In the second instance, we have observed a great detection and response capability, but we don’t know its depth – if the red team changed tactics, or came in elsewhere, would the security team achieve a similar result? We can sometimes answer this with additional scenarios modelling different threat actors; however, multi-scenario red teams can be costly, and what happens if the team gets caught early in every scenario? In these circumstances I prefer a trust-but-verify approach, moving the engagement into a “Declared Red Team”. The security teams are congratulated on their skills, but informed that the exercise will continue. They are told which host the red team is starting from, and are asked to leave it on the network, uncontained but monitored, while the red team continues testing. They are not told what the red team objective is or on what date the test will end; they will, however, be informed when testing has concluded. If they detect suspicious activity elsewhere in the network during this period, they can deconflict the activity with a representative of the test control group. If it is the red team, this will be confirmed, and the security team will be asked to record what their next steps would have been. If it isn’t, the security team is authorised to take full steps to mitigate the incident; a failure by the red team to confirm will always be treated as malicious activity unrelated to the test. Once testing is concluded (the objective is achieved or time runs out), the security team is informed, and the test can move on to a Detection and Response Assessment (DRA) Workshop.

Next Steps

In both of these instances, you will have noted that the next step is a Detection and Response Assessment (DRA) Workshop. DRAs were introduced by the Bank of England’s CBEST testing framework; LRQA (formerly LRQA Nettitude) refined the idea, and Prism Infosec has adapted it by fully integrating NIST CSF 2.0. At its heart, it is a chance to understand what happened, and what the security team did about it. The red team should provide the client security team with the main TTP events of the engagement – initial access, discovery which led to further compromise, privilege escalation, lateral movement, and action on objectives – including timestamps and the locations and accounts abused to achieve them. The security team should come equipped with logs, alerts, and playbooks to discuss what they saw, what they did about it, and what their response should be. Where possible, this response should also have been exercised during the engagement so the red team can evaluate its effectiveness.
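The evidence a DRA workshop walks through can be pictured as one record per key TTP event, paired with what the defenders saw. A minimal sketch, with illustrative field names and data:

```python
# Sketch of DRA workshop evidence: one record per key TTP event, recording
# both the red team action and the defenders' view of it. Illustrative only.

from dataclasses import dataclass

@dataclass
class DRAEvent:
    timestamp: str      # when the red team performed the action
    ttp: str            # e.g. "initial access", "lateral movement"
    location: str       # host or account abused
    telemetry: bool     # did defenders have any data for it?
    alerted: bool       # did an alert fire?
    responded: bool     # was a response recorded?

def visibility_gaps(events):
    """Return the TTPs for which the defenders had no telemetry at all."""
    return [e.ttp for e in events if not e.telemetry]

# Hypothetical two-event timeline for the example.
timeline = [
    DRAEvent("2024-05-01T09:12Z", "initial access", "WKSTN-014", True, True, True),
    DRAEvent("2024-05-02T14:30Z", "lateral movement", "svc-backup", False, False, False),
]
print(visibility_gaps(timeline))  # ['lateral movement']
```

Tabulating the engagement this way makes the workshop discussion concrete: every row with a `False` is a specific, timestamped conversation about telemetry, alerting, or response.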

The output of this workshop should be a series of observations about areas of improvement for the organisation’s security teams, and areas of effective behaviours and capabilities. These observations need to be included in the red team report – and should be presented in the executive summary to help senior stakeholders understand the value and opportunities to improve their security capabilities, and why it matters.

Conclusion

Red teams will help identify attack paths and let you know whether the bad guys can get to their targets, but more importantly they can and should help organisations understand how effective they are at detecting and responding to the threat before that happens. Red teams need to be caught for organisations to understand their limits so they can push them, to show good capabilities to senior stakeholders, and to identify opportunities for improvement. An effective red team exercise will not only engineer being caught into the test plan, but ensure that when it happens, the test still adds value to the organisation.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

To you it’s a Black Swan, to me it’s a Tuesday…

Cybersecurity is a discipline with many moving parts. At its core, though, it is a tool to help organisations identify, protect against, detect, respond to, and recover from – and then adapt to – the ever-evolving risks and threats posed by new technologies and threat actor capabilities, using threat modelling. Sometimes these threats are minor, causing annoyance but no real damage; sometimes they are existential and unpredictable. The latter are known as Black Swan events.

They represent threats or attacks that fall outside the boundaries of standard threat models, often blindsiding organisations despite rigorous security practices.

In this post, we’ll explore the relationship between cybersecurity threat modelling and Black Swan events, and how to better prepare for the unexpected.

What Are Black Swan Events?

The term Black Swan was popularised by the statistician and risk analyst Nassim Nicholas Taleb. He described Black Swan events as:

  • Highly improbable: These events are beyond the scope of regular expectations, and no prior event or data hints at their occurrence.
  • Extreme impact: When they do happen, Black Swan events have widespread, often catastrophic, consequences.
  • Retrospective rationalization: After these events occur, people tend to rationalize them as being predictable in hindsight, even though they were not foreseen at the time.

In cybersecurity, Black Swan events can be seen as threats or attacks that emerge suddenly from unknown or neglected vectors—such as nation-state actors deploying novel zero-day exploits, or a completely new class of vulnerabilities being discovered in widely used software.

The Limits of Traditional Threat Modelling

Threat modelling is a systematic approach to identifying security risks within a system, application, or network.

It typically involves:

  • Identifying assets: What needs protection (e.g., data, services, infrastructure)?
  • Defining threats: What could go wrong? Common threats include malware, phishing, denial of service (DoS) attacks, and insider threats.
  • Assessing vulnerabilities: How could the threats exploit system weaknesses?
  • Evaluating potential impact: How severe would the consequences of an attack be?
  • Mitigating risks: What steps can be taken to reduce the likelihood and impact of threats?
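
The steps above can be sketched as a simple qualitative risk-scoring exercise. The threats, scores, and severity thresholds below are illustrative assumptions, not a prescribed methodology:

```python
# Minimal threat-modelling sketch: score each threat by likelihood x impact,
# then rank to decide where mitigation effort goes first.
# All threats and scores below are illustrative assumptions.

threats = [
    # (name, likelihood 1-5, impact 1-5)
    ("Phishing against finance staff", 4, 4),
    ("Ransomware via unpatched VPN gateway", 3, 5),
    ("Insider exfiltration of customer data", 2, 5),
    ("DoS against public website", 3, 2),
]

def risk_score(likelihood: int, impact: int) -> int:
    """Classic qualitative risk matrix: likelihood multiplied by impact."""
    return likelihood * impact

ranked = sorted(threats, key=lambda t: risk_score(t[1], t[2]), reverse=True)

for name, likelihood, impact in ranked:
    score = risk_score(likelihood, impact)
    band = "HIGH" if score >= 15 else "MEDIUM" if score >= 8 else "LOW"
    print(f"{score:>2} {band:<6} {name}")
```

The point of the exercise is the ranking, not the absolute numbers: it makes the prioritisation of mitigations explicit and reviewable, which is exactly the step Black Swan events escape.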

While highly effective for many threats, traditional threat modelling is largely based on past experience and known attack methods. It relies on patterns, data, and risk profiles developed from historical analysis. However, Black Swan events, by their nature, evade these models because they represent unknown unknowns—threats that have never been seen before or that arise in ways no one could predict. This is where organisations often encounter significant challenges. Despite extensive security efforts, unknown vulnerabilities, unexpected technological changes, or even human error can expose them to unforeseen, high-impact cyber events.

Real-World Examples of Cybersecurity Black Swan Events

1. The SolarWinds Hack (2020)

The SolarWinds cyberattack, attributed to a nation-state actor, was one of the most devastating and unexpected breaches in recent history. Attackers compromised the software supply chain by embedding malicious code into SolarWinds’ Orion software updates, which were then distributed to thousands of organisations, including U.S. government agencies and Fortune 500 companies.

The sophistication of the attack and the sheer scale of its impact make it a classic Black Swan event. It was a novel approach to cyber espionage, and its implications were far-reaching, affecting critical systems and sensitive data across industries.

2. NotPetya (2017)

The Petya ransomware that launched in 2016 was a standard ransomware tool, designed to encrypt data, demand payment, and then decrypt it once the ransom was paid. NotPetya, however, was something different. It introduced two changes. The first was that the encryption could not be reversed: once data was encrypted, it could not be recovered, making it a wiper rather than ransomware. The second was that it could leverage the EternalBlue exploit, much like the WannaCry ransomware that attacked devices worldwide earlier that year, allowing it to spread rapidly across unpatched Microsoft Windows networks.

NotPetya is believed to have infected victims through a compromised piece of Ukrainian tax software called M.E.Doc. This software was extremely widespread throughout Ukrainian businesses, and investigators found that a backdoor in its update system had been present for at least six weeks before NotPetya’s outbreak.

At the time of the outbreak, Russia was still in the throes of conflict with the Ukrainian state, having annexed the Crimean peninsula in 2014, and the attack was timed to coincide with Constitution Day, a Ukrainian public holiday commemorating the signing of the post-Soviet Ukrainian constitution. As well as its political significance, the timing ensured that businesses and authorities would be caught off guard and unable to respond. What the attackers did not consider, however, was how widespread that software was. Any company, local or international, that did business in Ukraine likely had a copy of it. When the attackers struck, they hit multinationals, including the massive shipping company A.P. Møller-Maersk, the pharmaceutical company Merck, the delivery company FedEx, and many others. Aside from crippling these companies, reverberations of the attack were felt in global shipping and across multiple business sectors.

NotPetya is believed to have resulted in more than $10 billion in total damages across the globe, making it one of, if not the, most expensive cyberattacks in history to date.

How to Prepare for Cybersecurity Black Swan Events

While it’s impossible to predict or completely prevent Black Swan events, there are steps that organisations can take to enhance their resilience and minimise potential damage:

1. Adopt a Resilience-Based Approach

Rather than solely focusing on known threats, build your cybersecurity strategy around resilience. This means being prepared to rapidly detect, respond to, and recover from attacks, regardless of their origin.

Organisations should prioritise:

  • Incident response plans: Have well-documented and tested response procedures in place for any type of security event.
  • Redundancy and backups: Ensure critical systems and data have redundant layers and secure backups that can be quickly restored.
  • Post-event recovery: Create strategies to mitigate the damage and recover swiftly, minimising long-term business disruption.

2. Encourage Continuous Security Research and Innovation

Security Testing: Many Black Swan events are the result of the exploitation of previously unknown vulnerabilities. Investing in continuous security research and vulnerability discovery (through bug bounty programs, penetration testing, etc.) can reduce the number of undiscovered vulnerabilities and improve overall system security.

Defence Engineering: Implement defensive measures such as application isolation, network segmentation, and behaviour monitoring to limit the damage if a zero-day exploit is discovered.

3. Utilise Cyber Threat Intelligence

Staying informed on emerging cybersecurity trends and participating in industry collaborations can give organisations an edge when it comes to detecting potential Black Swan events. By sharing information, organisations can learn from others’ experiences and uncover threats that might not have been apparent within their own systems.

4. Model Chaos and Test the Unthinkable

Chaos engineering, which involves intentionally introducing failures into systems to see how they respond, can be an effective way to test the robustness of an organisation’s defences. These drills can help security teams explore what might happen during an unanticipated event and can uncover system weaknesses that might otherwise be overlooked.

5. Promote a Culture of Adaptive Security

Adopting an adaptive security mindset means continuously monitoring the threat landscape, adjusting security controls, and being willing to evolve when necessary. The concept of security-by-design—where security considerations are built into the very foundation of systems and software—will also help organisations stay ahead of new and unforeseen risks.

Black Swan events in cybersecurity may be rare, but their consequences can be catastrophic. The unpredictability of these threats poses a unique challenge, requiring organisations to shift from a purely reactive, known-threat approach to one that emphasises resilience, adaptation, and continuous learning.

Red Team engagements are one tool that can help organisations develop resilient security strategies designed to respond to Black Swans. This is made possible by key concepts, controls, and attitudes introduced during the planning stages of the engagement. The results of red team engagements run this way help shape boardroom discussions around strategy, resilience, and capacity in a way that allows the business to anticipate Black Swans and be prepared should they ever arrive.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

The Value of Physical Red Teaming

Introduction

In testing an organisation, a red team seeks to emulate a threat actor by achieving a specific goal, whether that is gaining administrative control of the network and proving they can control backups (akin to how many ransomware operators work), through to proving access to financial systems, or even gaining access to sensitive data repositories. They will employ tactics, tools and capabilities aligned to the sophistication level of the threat actor they are pretending to be.

However, not all threat actors operate solely along the digital threat axis; some will instead seek to breach the organisation’s premises to achieve their goal. Physical red teaming tests an organisation’s resilience and security culture, and is aimed more at people and physical security controls. The most common physical threat actor is the insider; however, nation-state, criminal, industrial espionage, and activist threats also remain prevalent in the physical arena, though their motivations to cause digital harm will vary.

As part of an organisation’s layered defence we not only have to consider the digital defences but also the physical ones. Consider, would it be easier for the threat actor to achieve their goal by physically taking a computer rather than try to digitally gain a foothold and then get to the target and complete their activities? Taking a holistic approach to security makes a significant difference to an organisation.

Understanding Physical Red Teaming

Physical red teaming simulates attacks on physical security systems and behaviours to test defences. It accomplishes this by:

  • attempting to gain unauthorised access to buildings through:
    • the manipulation of locks,
    • the use of social engineering techniques such as tailgating;
  • bypassing security protocols, such as:
    • using cloned access cards,
    • connecting rogue network devices,
    • or retrieving unattended documents from bins and printers;
  • or exploiting social behaviours and abusing preconceptions:
    • using props to appear as though you belong or are a person of authority, to avoid being challenged.

In digital red teaming we are evaluating people and security controls in response to remote attacks. The threat actor must not only convince a user to complete actions on their behalf, but must also then bypass the digital controls that are constantly being updated and potentially, monitored.

In comparison, physical security controls are rarely updated, for cost reasons, as they are integrated into buildings. Furthermore, people often act very differently towards an approach conducted in person rather than online; this comes down to confidence and assertiveness, which psychologically differ between the two settings. It is therefore important to test the controls that keep threat actors out, and, should those fail, that staff feel empowered and supported to challenge individuals they believe do not belong, even a person of apparent authority, until their credentials have been verified.

Why Physical Security Matters in Cybersecurity

At the top end of the scale, consider the breach caused by Edward Snowden at the NSA in 2013, which affected the national security of multiple countries. This was a trusted employee who abused his privileges as a system administrator to breach digital security controls, and who compromised the credentials of other users who trusted him, in order to gain unauthorised access to highly sensitive information. He then breached physical security controls to extract that data and remove it not only from the organisation but also from the country. The impact of that data breach was enormous in terms of reputational damage, as well as the exposure of tools and techniques used by the security services. Whilst he claimed his motivation was an underlying privacy concern (the bulk collection programme he exposed was later ruled unlawful by US courts), the damage his actions caused has undoubtedly, though it is impossible to prove distinctly, posed a significant threat to life for numerous individuals worldwide. Regardless, this breach was a failing of both physical controls (preventing material from leaving the premises) and digital ones (abusing trusted access to reach digital data stores).

Other attacks exist too. Consider 2008, when a 14-year-old with a homemade transmitter deliberately attacked the tram system of the Polish city of Lodz, derailing four trams and injuring a dozen people. Using published material, he spent months studying the city’s rail lines to determine where best to create havoc; then, using nothing more than a converted TV remote, he inflicted significant damage. In this instance, the digital control failures related to the published material describing the control systems and to the system acting on unauthenticated, unauthorised signals, whilst the physical control failures lay in an attacker being able to direct signals at the receiver at all.

Key Benefits of Physical Red Teaming

A benefit of physical red teaming is in testing and improving an organisation’s response to physical breaches or threats. Surveillance, access control systems, locks, and security staff can be assessed for weaknesses, and it can help identify lapses in employee vigilance (e.g., tailgating or failure to challenge strangers).

This in turn can lead to improvements in behaviours, policies, and procedures for physical access management. Furthermore, physical red teaming encourages employees to take an active role in security practices and fosters an overall culture of security.

Challenges of Physical Red Teaming

However, delivering physical red teaming is fraught with ethical and legal risk; aside from trespassing, breaking and entering, and other criminal infringements, there could also be civil litigation concerns depending on the approach the consultants take.

It is therefore important to establish clear consent and guidelines with the organisation. These must include the agreed scope: what activities the consultants are permitted to carry out, when and where those activities will take place, and who at the client organisation is responsible for the test. Additional property considerations, such as shared tenancies or public/private events which may be affected by testing, also need to be factored into the scope and planning. It is not unusual for this information to be captured in a “get out of jail” letter provided to the testers, along with client points of contact who can verify the test and stand down a response.

This is to ensure that testing can remain realistic but also any disruption caused by it can be minimised.

Cost is always also going to be a concern, as it takes time for consultants to not only travel to site, but also conduct surveillance, equip suitable props (some of which may need to be custom made), and develop and deploy tooling to bypass certain controls (such as locks and card readers) if that is required in the engagement.

Conclusion:

The physical threat axis is one that people have been attacking since time immemorial. In today’s world, however, we have shrunk distances using digital estates and established satellite offices beyond our traditional perimeters, and as a result we have increased the complexity of the environments we must defend. Red teaming permits an organisation to assess all of these threat axes and to recognise that physical and digital controls are not only required but need to be regularly exercised to ensure their effectiveness.

Readers of this post are therefore encouraged to consider the physical security of their locations – whether that is their offices, factories, transit hubs, public buildings through to security of home offices, and ask themselves if they have verified their security controls are effective and when they were last exercised.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Red Teams – Supporting Incident Response

Unauthorised access to remote computers has been around since the 1960s, but since those early days organisations and their IT systems have become complex, and that complexity is increasing at an exponential rate, making securing those systems increasingly difficult. Defence mechanisms like firewalls, antivirus software, and monitoring systems have become essential, but they aren’t enough on their own. Cybersecurity red teams—groups of ethical hackers tasked with simulating real-world attacks—are increasingly playing a pivotal role not only in identifying vulnerabilities but also in supporting incident response efforts. Red teams should be considered part training opportunity for defenders and part organisational security assessment. In this post, we’ll explore how red teams can actively contribute to the incident response (IR) process, helping organisations detect, mitigate, and recover from cyber incidents more effectively.

Proactive Detection and Prevention

Red teams conduct simulations that mimic threat actors of varying degrees of sophistication, including phishing attacks, insider threats, and other malicious activities, to evaluate the effectiveness of an organisation’s security defences. Incident response teams, also known as blue teams, are responsible for defending against and responding to active threats. Because red teams can simulate a wide range of attack scenarios, they provide the blue team with realistic training opportunities.

Key Contributions

  • Identify vulnerabilities: By testing both technical and human vulnerabilities, red teams can uncover gaps in systems, processes and controls that attackers could exploit. These insights help incident response teams prioritise fixes and harden defences.
  • Test detection capabilities: During simulations, red teams often use tactics that mimic real-world threat actor behaviour. This allows Security Operations Centres (SOCs) to evaluate whether current detection mechanisms are effective in identifying threats – ideally early on in a breach, providing a feedback loop to improve monitoring and alerting systems.
  • Highlight gaps in response: Beyond detection, red teams can uncover weaknesses in the organisation’s ability to respond. These exercises help refine playbooks and improve reaction times in case of a real attack; acting like a fire drill for the organisation’s security teams.
  • Simulation of real-world attacks: Red team exercises provide blue teams with exposure to the tactics, techniques, and procedures (TTPs) used by adversaries. This allows the incident response team to better understand the behaviour of attackers and improve their incident detection and response procedures.
  • Drills under pressure: Simulated attacks create controlled, high-pressure situations where the blue team must react as if the incident were real. This strengthens their ability to work effectively under stress during actual incidents.
  • Collaborative feedback loops: After red team exercises, post-mortem reviews and feedback sessions help blue teams understand what went wrong and what went right. This collaborative effort ensures continuous improvement in incident detection and response.

Ongoing Incident and Forensic Support

When an incident occurs, quick identification of the threat’s origin, scope, and impact is critical. Red teams, by virtue of their expertise in adversary tactics, can aid in threat hunting and digital forensics during an ongoing incident.

Key Contributions

  • Insight into threat actor behaviour: Since red teams specialize in mimicking attacker methodologies, they can offer unique insights into how a real adversary might have breached the system. This includes understanding common evasion techniques, lateral movement strategies, and exfiltration tactics.
  • Identification of blind spots: During live incidents, red teams can collaborate with blue teams to identify blind spots or areas where an attack might have gone unnoticed. Their understanding of complex attack chains helps guide incident responders toward detecting hidden malware or compromised accounts.
  • Improving forensic analysis: Red teams can aid in digital forensics by offering a detailed understanding of how an attack might unfold. They can help analyse compromised systems, logs, and network traffic to identify indicators of compromise (IoCs) and reconstruct the attack timeline more accurately based on their experience of what steps they would take, and an understanding of the footprints various tools leave on system logs.
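
As a rough illustration of the IoC-matching part of this work, the sketch below sweeps log lines for known indicators. The indicators, log entries, and the `find_hits` helper are all invented for this example; real sweeps run against SIEM data with curated threat-intelligence feeds:

```python
# Toy IoC sweep: check log lines against known indicators of compromise.
# All indicators and log lines below are invented for illustration.

iocs = {
    "185.220.101.7",                      # hypothetical known-bad IP
    "update-checker.example-bad.com",     # hypothetical C2 domain
}

log_lines = [
    "2024-06-01T10:02:11 conn src=10.0.0.5 dst=185.220.101.7 port=443",
    "2024-06-01T10:02:15 dns query=login.microsoftonline.com",
    "2024-06-01T10:03:40 dns query=update-checker.example-bad.com",
]

def find_hits(lines, indicators):
    """Return (line, indicator) pairs where a line contains a known IoC."""
    return [(line, ioc) for line in lines for ioc in indicators if ioc in line]

hits = find_hits(log_lines, iocs)
for line, ioc in hits:
    print(f"IoC {ioc} seen in: {line}")
```

A red team's contribution here is supplying the right indicators and knowing which log sources an adversary's tradecraft would actually touch.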

Fostering a Culture of Continuous Improvement

One of the biggest challenges in cybersecurity is complacency. Organisations often become overconfident after implementing new security measures or surviving an attack. Red teams, by constantly pushing the boundaries and simulating sophisticated attacks, help prevent this.

Key Contributions:

  • Challenge security assumptions: Red teams encourage organisations to avoid a “set-it-and-forget-it” mindset by continually challenging the effectiveness of defences and forcing teams to stay agile and adaptable in their responses.
  • Promote proactive security: By moving to a consistent tempo of red team assessments and testing organisational exposure to different Tactics, Techniques and Procedures, the incident response team can take a proactive approach rather than a reactive one. This works by helping the blue team conduct regular threat hunting, using it to improve their detections and to identify weaknesses in their detections or gaps in network visibility so they can be addressed. This shift reduces the likelihood of severe incidents and ensures faster containment if they do occur.
  • Drive organisational awareness: Red teams don’t just work with security professionals; they also raise awareness across the organisation. They often test phishing or social engineering schemes, helping non-technical employees understand their role in cybersecurity, which indirectly supports better incident response.

Conclusion

In the complex world of cybersecurity, red teams are invaluable in supporting and strengthening incident response efforts: identifying vulnerabilities, training blue teams against real-world scenarios, aiding in threat hunting, and offering a proactive approach to defending against modern cyber threats. Organisations that leverage collaboration between red and blue teams can better detect, respond to, and recover from cyber incidents, significantly reducing risk and minimising damage.

Our Red Team Services: Red Teaming & Simulated Attack Archives – Prism Infosec

Have you had a breach? Contact us here for our Incident Response service: Have You Had A Security Breach? – Prism Infosec

Flawed Foundations – Issues Commonly Identified During Red Team Engagements

Cybersecurity Red Team engagements are exercises designed to simulate adversarial threats to organisations. They are founded on real world Tactics, Techniques, and Procedures that cybercriminals, nation states, and other threat actors employ when attacking an organisation. It is a tool for exercising detection and response capabilities and to understand how the organisation would react in the event of a real-world breach.

One of the outcomes of such exercises is an increased awareness of vulnerabilities, misconfigurations and gaps in systems and security controls which could result in the organisation’s compromise and impact business delivery, causing reputational, financial, and legal damage.

Threat actors rarely need to employ cutting-edge capabilities or “zero day” exploits in order to compromise an organisation. Organisations grow organically; they exist to deliver their business, and as a result security is rarely a key consideration from their founding. This means that critical issues can exist in the foundations of an organisation’s IT which threat actors will be more than happy to abuse.

This post covers five of the most common vulnerabilities we regularly see when conducting red team engagements for our clients. Its purpose is to raise awareness among IT professionals and business leaders about potential security risks.

Insufficient privilege management

This issue arises when accounts are granted greater privileges within the organisation than they require to conduct their work. It can present as: users who have local administrator privileges, accounts that have been given indirect administrator privileges, or overly privileged service accounts.

Some examples include:

  • Users who are all local administrators on their work devices – this gives them the ability to install any software they might need for their work, but also exposes the organisation to significant risk should that device or user account become compromised. If users do require privileges on their laptops, then they should also be provided with a corporate virtual device (either cloud- or host-based) which has different credentials from their base laptop and is the only device permitted to connect to the corporate infrastructure. This will limit the exposure to risk while permitting staff to continue to operate. In a red team, local administrator access permits us to abuse a machine account and bypass numerous security tools and controls which would normally impede our ability to operate.
  • Users with indirect administrator privileges – in Microsoft Windows domains, users can belong to groups, but groups can also belong to other groups, and as a result users can inherit privileges through this nesting. Whilst it was never the intention to grant a user administrator privileges, and whilst the user may be unaware they have been given this power, such a misconfiguration can arise quite easily and exposes the organisation to considerable risk. It can only be addressed through in-depth analysis of Active Directory and consistent auditing, combined with system architecture. This sort of subtle misconfiguration only really becomes apparent when a threat actor or red team starts to enumerate the Active Directory environment; when found, though, it rapidly leads to full organisational compromise.
  • Overly privileged service accounts – service accounts exist to ensure that specific systems, such as databases or applications, can authenticate users accessing them from the domain and can provide domain resources to the system. A common misconfiguration is granting them high levels of privilege during installation even though they do not require them. Service accounts, due to the way they operate, need to be exposed, and threat actors who identify overly privileged accounts can attempt to capture an authentication using the service. This can be attacked offline to retrieve the password, which can then lead to greater compromise within the estate. Service accounts should be regularly audited for their privileges, and where possible these should be removed or restricted. If it is not a domain-managed service account (group Managed Service Accounts have been available since Windows Server 2012), then ensuring the service account has a password of at least 16 characters, recorded in a secure fashion in case it is required in the future, will severely restrict threat actors’ ability to abuse it. Abuse of service accounts is becoming rarer, but legacy systems which do not support long passwords mean there are still significant numbers of these accounts present. Abuse of these accounts can often be tied to whether they have logon rights across the network, as identifying their compromise can be problematic if the threat actor or red team operates in a secure manner.
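
The nested-group problem described above is essentially a graph-traversal exercise. The sketch below flattens transitive group membership to surface users who hold privileges indirectly; the group and user names and the `effective_members` helper are illustrative assumptions, and a real audit would enumerate memberships from Active Directory rather than a hard-coded dictionary:

```python
# Sketch: resolving transitive (nested) group membership to surface users who
# inherit administrative rights indirectly. Names are invented for illustration.

group_members = {
    "Domain Admins": ["IT-Ops"],            # a group nested inside a group
    "IT-Ops": ["Helpdesk-Tier2", "alice"],
    "Helpdesk-Tier2": ["bob"],
}

def effective_members(group, membership, seen=None):
    """Walk nested groups depth-first and return the flattened set of users."""
    seen = set() if seen is None else seen
    users = set()
    for member in membership.get(group, []):
        if member in membership:            # member is itself a group
            if member not in seen:          # guard against membership cycles
                seen.add(member)
                users |= effective_members(member, membership, seen)
        else:
            users.add(member)
    return users

admins = effective_members("Domain Admins", group_members)
print(sorted(admins))  # bob holds admin rights through two levels of nesting
```

The same traversal, run over real directory data, is how both auditors and attackers surface accounts that nobody intended to be administrators.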

Poor credential complexity and hygiene

This issue presents when users are given no corporately supported method of storing credential material; as a result, chosen passwords are often easy to guess or predict, and they are stored in browsers, in clear-text files on network shared drives, or on individual hosts.

  • Credential Storage – staff will often use plain-text files, Excel documents, emails, OneNote, Confluence, or browsers to store credentials when there is no corporately provided solution. The problem with all of these options is that they are insecure: the passwords can be retrieved using trivial methods, which means organisations are often one step away from a significant breach. Password vaults such as LastPass, Bitwarden, KeePass, 1Password, etc., whilst targets for threat actors, offer considerably greater protection, as long as the credentials used to unlock them are not single-factor and are not stored alongside the vault. It is standard practice for red teams and threat actors to try to locate clear-text credentials, and attacking vaults significantly increases the difficulty and complexity of the tradecraft required when the material to unlock the vault uses MFA or is not stored locally alongside it.
  • Credential Complexity – over the last 20 years the advice on password complexity has changed considerably. We used to advise staff to rotate passwords every 30/60/90 days, to choose random mixes of uppercase, lowercase, numbers and punctuation, and to meet a minimum length; today we advise against rotating passwords regularly, and instead recommend choosing a phrase or three random, easy-to-memorise words combined with punctuation and numbers. The reason is that, as computational power has increased, shorter passwords, regardless of their composition, have become easier to break. Furthermore, when staff rotated passwords regularly, it would often result in just a number changing rather than an entirely new password, making them easy to predict. Education is critical in addressing this, and many password vaults also offer a password generator that makes management easier for staff whilst still complying with policies. Too often I have seen weak passwords which complied with password complexity policies, because people will seek the simplest way to comply. Credential complexity buys an organisation time: time to notice a breach, and it raises the effort a threat actor must invest in order to attack the organisation effectively.
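
A back-of-envelope calculation illustrates why length dominates forced complexity. The guess rate below is an assumed figure for offline cracking of a fast, unsalted hash; real-world rates vary enormously with hardware and hashing algorithm, so treat the outputs as relative comparisons only:

```python
# Rough comparison of password keyspaces, illustrating that length beats
# forced complexity. GUESSES_PER_SECOND is an assumption, not a benchmark.

GUESSES_PER_SECOND = 1e11  # assumed offline attack rate on a fast hash

def years_to_exhaust(alphabet_size: int, length: int) -> float:
    """Worst-case time to brute-force the full keyspace, in years."""
    keyspace = alphabet_size ** length
    return keyspace / GUESSES_PER_SECOND / (60 * 60 * 24 * 365)

for alphabet, length, label in [
    (95, 8,  "8 chars, full complexity "),   # ~95 printable ASCII symbols
    (26, 16, "16 chars, lowercase only "),   # e.g. a three-word passphrase
    (95, 16, "16 chars, full complexity"),
]:
    print(f"{label}: {years_to_exhaust(alphabet, length):.3g} years")
```

Under these assumptions an 8-character fully complex password falls in under a day, while a 16-character lowercase-only string survives for millennia, which is why modern guidance prioritises length and memorability over forced character classes.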

Insufficient Network Segregation

This issue occurs when a network is kept flat: hosts are allowed to connect to any server or other workstation within the environment on any exposed port, regardless of department or geographical region. It also covers cases where clients connecting to the network over VPN are not isolated from one another.

  • VPN Isolation – clients which connect to the network through a VPN to access domain resources, such as file shares, can be communicated with directly by other clients. This can be abused by threat actors, who seed network resources with materials that force clients loading them to connect to a compromised host, often a compromised client device. When this occurs, the connecting host transmits encrypted user credentials to authenticate with the device. These can be taken offline by the threat actor and cracked, which could result in greater compromise of the network. Isolating hosts on a VPN limits where the threat actor, or red team, can pivot their attacks, and makes it easier to identify and isolate malicious activities.
  • Flat Networks – networks are often implemented to ensure that the business can operate efficiently, and the easiest implementation is a flat network in which any networked resource is made available to staff regardless of department or geographical location, with access managed purely by credentials and role-based access control (RBAC). Unfortunately, this configuration will often expose administrative ports and devices which can be attacked. When a threat actor manages to recover privileged credentials, a flat network offers them significant advantages for further compromise of the organisation. Segregating management ports and services, breaking up regions and departments, and restricting access to resources based on requirements will severely restrict and delay a threat actor’s, or a red team’s, ability to move around the network and impact services.
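
Segmentation policy can be audited as data before it is tested on the wire. The sketch below checks whether workstation segments can reach server administrative ports under an allow-list rule set; the segment names, rules, port list, and helper functions are all invented for illustration:

```python
# Sketch of a segmentation policy check: given segment-to-segment allow rules,
# confirm that workstation segments cannot reach server administrative ports.

ADMIN_PORTS = {22, 3389, 5985}  # SSH, RDP, WinRM

# allow-list rules: (source segment, destination segment, destination port)
rules = {
    ("workstations-eu", "servers-eu", 443),
    ("workstations-eu", "servers-eu", 445),
    ("mgmt-jump", "servers-eu", 3389),
    ("mgmt-jump", "servers-eu", 22),
}

def allowed(src: str, dst: str, port: int) -> bool:
    """True if the allow-list permits src -> dst on this port."""
    return (src, dst, port) in rules

def audit_admin_exposure(src: str, dst: str):
    """Return the admin ports one segment can reach on another."""
    return sorted(p for p in ADMIN_PORTS if allowed(src, dst, p))

print(audit_admin_exposure("workstations-eu", "servers-eu"))  # ideally []
print(audit_admin_exposure("mgmt-jump", "servers-eu"))        # jump host only
```

Running a check like this against exported firewall rules gives a quick view of whether management interfaces are reachable only from dedicated administrative segments.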

Weak Endpoint Security

Workstations are often the first foothold achieved by threat actors when attacking an organisation. As a result, they require constant monitoring and controls to ensure they stay secure. This can be achieved through a combination of maintained antivirus, effective Endpoint Detection and Response, and application control. Furthermore, controlling which endpoint devices are allowed to connect to the network will limit the exposure of the organisation.

  • Unmanaged Devices – endpoints that are not regularly monitored or managed increase risk. Permitting Bring Your Own Device (BYOD) can increase productivity, as staff can use devices they have customised; however, it also exposes the organisation, as these devices may not comply with its security requirements. It also compounds issues when a threat is detected: identifying a rogue device becomes much more difficult when every BYOD device must be treated as potentially rogue. Furthermore, you have little insight into where else these devices have been used, or who else has used them. By permitting only managed devices on your network, and ensuring that BYOD devices – if they must be used – are severely restricted in what they can access, you can limit your exposure to risk. Restrictions on managed devices can be bypassed, but doing so raises the complexity and sophistication of the tradecraft required, which means it takes longer and there is a greater chance of detection.
  • Anti-Virus – it used to be the case that anti-virus products were the hallmark of device security. However, the majority work on signatures, which means they are only effective against threats that have been identified and listed in their definition files. Threat actors know this and will often change their malware so that it no longer matches a signature and therefore evades detection. The protection anti-virus offers is therefore limited, but if well maintained it can reduce the organisation's exposure to common attacks and provide a tripwire defence should a capable adversary deploy tooling that has previously been signatured. Bypassing anti-virus can be trivial, but it provides an additional layer of defence which increases the complexity of a red team's or threat actor's activities.
  • Lack of Endpoint Detection and Response (EDR) configuration – EDR goes one step beyond anti-virus and looks at all of the events occurring on a device to identify suspicious tools, behaviours, and activities that could indicate a breach. Like anti-virus, it will often work with detection heuristics and rules which can be centrally managed; however, these require significant time to tune for the environment, as normal activity in one organisation may be suspicious in another. EDR also permits the organisation to isolate suspected devices. Unfortunately, EDR can be costly, both to implement and to maintain correctly – and it is only effective when it is on every device. Too often, organisations will not invest time in using it, or in understanding the difference between basic rules and tuned rules; the resulting false positives can impact the business and lead to a lack of trust in the tooling. Lacking an EDR product severely restricts an organisation's ability to detect and respond to threats in a capable and effective manner. Well-maintained, effective EDR operated by a well-resourced, exercised security team significantly impacts threat actor and red team activities, often bringing the Mean Time to Detect a breach down from days or weeks to hours or days.
  • Application Control – when application allowlisting was first introduced, it was clunky and often broke business applications. It has evolved since those early days, but is still not well implemented by organisations. It takes significant initial investment to implement properly, but strongly restricts a threat actor's ability to operate in an environment. Good implementations are based on user roles: most employees require only a browser and basic office applications to conduct their work. From there, additional applications can be allowed depending on the role, and users who cannot have application control applied are given segregated devices to operate on, which helps limit exposure. Without this, threat actors and red teams can often run multiple tools which most users have no business using during their day jobs; it can also result in shadow IT as users introduce portable apps to their devices, which makes incident investigation difficult by muddying the water over whether activity is legitimate use or threat actor activity.
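The role-based approach described above can be sketched as a simple allowlist decision. The role and executable names below are entirely hypothetical, chosen only to illustrate the shape of the policy:

```python
# Hypothetical sketch of role-based application control: every role gets a
# common baseline, extras depend on the role, and anything outside that
# set is blocked. Names are illustrative, not from any real product.

BASELINE = {"browser.exe", "word.exe", "excel.exe", "outlook.exe"}

ROLE_EXTRAS = {
    "developer": {"code.exe", "git.exe"},
    "finance": {"accounting.exe"},
}

def allowed_apps(role: str) -> set:
    """Most staff get only the baseline; extras depend on the role."""
    return BASELINE | ROLE_EXTRAS.get(role, set())

def can_run(role: str, executable: str) -> bool:
    return executable in allowed_apps(role)

# A developer running approved tooling is fine; a generic user launching a
# tunnelling tool - exactly what a red team relies on - is denied.
print(can_run("developer", "git.exe"))   # True
print(can_run("sales", "chisel.exe"))    # False
```

Real application-control products key on signatures, publishers, and paths rather than bare filenames, but the per-role default-deny logic is the same.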

Insufficient Logging and Monitoring

If an incident does occur – and remember that red team engagements are also about exercising the organisation's ability to respond – then logging and monitoring become paramount for an effective response. When we have exercised organisations in the past, we often find that at this stage of the engagement a number of issues quickly become apparent that prevent the security teams from being effective. These are almost always linked to a lack of centralised logging, poor incident detection, and log retention issues.

  • Lack of Centralised Logging: threat actors have been known to wipe logs during their activities; when this occurs on compromised devices, it makes detecting activity difficult and reconstructing threat actor activity impossible. Centralising logs allows additional tooling to be deployed as a secondary defence to detect malicious activity so that devices can be isolated; it also makes reconstruction of events significantly easier. Many EDR products support centralised logging, but only on devices which have agents installed and on supported operating systems; to make this effective, additional tooling such as syslog and Sysmon may be needed to ensure logging is sent to centralised hosts for analysis and curation. Centralised logs can also be stored for longer periods, permitting effective investigation of how, where and what the threat actor or red team have been doing, and what they accomplished, before detection and containment activities are undertaken.
  • Poor Incident Detection: organisations which do not regularly exercise their security teams will act poorly when an incident occurs. Staff need to practise using SIEM (Security Information and Event Management) tooling, and develop playbooks and queries that can be run against the monitoring software in order to locate and classify threats. When this does not happen, separating genuine threats from background user activity becomes tedious, difficult, and ineffective, resulting in poor containment and ineffective response behaviours. In red teams, this can result in alerts being ignored or classed as false positives, which exacerbates an incident.
  • Log Retention Issues: many organisations keep at most 30 days of logs – and many think they have longer retention than this because they have 180 days of alert retention, not realising that alerts and logs are different things. As a result, we can often review alerts as far back as six months, but can only see what happened around those alerts for 30 days. Many threat actors know about this shortcoming and, once established in the network, will wait 30 days before conducting their activities, making it difficult for responders to know how they got in, how long they have been there, and where else they have been. This often comes up in red teams, as many engagements run for at least four weeks, if not longer, to deliver a scenario – which makes exercising detection and response difficult when this issue is present.
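The alert-versus-log retention gap can be made concrete with a small sketch, using the illustrative figures from the text (180 days of alert retention, 30 days of raw logs):

```python
# Toy illustration of the retention gap: an alert can still be visible in
# the console long after the raw logs needed to investigate it have aged
# out. Retention figures are the illustrative ones from the text.
from datetime import date, timedelta

ALERT_RETENTION_DAYS = 180
LOG_RETENTION_DAYS = 30

def alert_retained(alert_date: date, today: date) -> bool:
    """The alert record itself is still visible in the console."""
    return (today - alert_date).days <= ALERT_RETENTION_DAYS

def investigable(alert_date: date, today: date) -> bool:
    """An alert is only fully investigable while the raw logs around it
    still exist - not merely while the alert itself is retained."""
    return (today - alert_date).days <= LOG_RETENTION_DAYS

today = date(2024, 6, 1)
old_alert = today - timedelta(days=90)

print(alert_retained(old_alert, today))  # True  - the alert is still visible
print(investigable(old_alert, today))    # False - its surrounding logs are gone
```

Anything in the window between the two retention periods – here, days 31 to 180 – is exactly the blind spot a patient threat actor waits for.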

Conclusion

These are just the five most common issues we identify when conducting a red team engagement; however, they are not the only issues we come across. They are fundamental issues which become ingrained in organisations through a mixture of culture and a lack of deliberate architectural design considerations.

Red team engagements not only help shine a light on these sorts of issues, but also allow the business to plan how to address them at a pace that works for it, rather than as a consequence of a breach. Additionally, red team engagements can identify areas where focused follow-up testing can evaluate additional controls, provide a deeper understanding of identified issues, and exercise controls implemented after the engagement.

Ultimately, a red team engagement is just the start of – or a milestone marker in – an organisation's security journey. It should be used in tandem with other security frameworks and capabilities to deliver a layered, effective security function which helps an organisation adapt, protect, detect, respond and recover effectively in an ever-evolving world of cybersecurity threats.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Why CISOs Need an Adversarial Mindset in Cybersecurity

Chief Information Security Officers (CISOs) are tasked with safeguarding an organisation's most valuable assets: its data, intellectual property, and reputation. The role of a CISO has evolved from overseer of IT security to strategic leader who must anticipate and mitigate complex cyber threats, act as the board's expert on cybersecurity matters which can affect the business, and recognise and balance the risks, costs and timescales of different activities to enhance an organisation's security capabilities. One way to navigate this challenging terrain effectively is to adopt an adversarial mindset – one that thinks like the enemy, predicts their moves, and pre-emptively counters their tactics.

Understanding the Adversarial Mindset

An adversarial mindset involves thinking like a hacker or cybercriminal. It is about understanding the motivations, strategies, and techniques that threat actors use to infiltrate and conduct their activities. By adopting this perspective, CISOs can proactively find vulnerabilities, predict potential attacks, and implement robust defences.

This approach is not about being paranoid; it is about being prepared. It helps CISOs stay ahead of the curve and protect their organisations from an ever-evolving threat landscape.

Why CISOs Need to Think Like Hackers

Predicting and Pre-empting Attacks

Hackers are innovative and constantly evolving their methods. By thinking like them, CISOs can predict a cybercriminal's next move and act before an attack occurs. This proactive approach enables the security team to find potential weaknesses and address them before they can be exploited, and can be cultivated with threat intelligence to understand who is likely to target the organisation and what their motivations are.

Building Resilient Systems:

A CISO with an adversarial mindset will scrutinise systems from an attacker's perspective. This means questioning every aspect of the security architecture, finding weak points, and reinforcing them. This can be achieved by melding security teams, developers, and system architects together when designing new systems, supported by robust security testing. This should then be combined with annual or biennial red team tests to understand how those systems have been integrated into the organisation and the attack paths an adversary is likely to take to compromise them.

Understanding the Human Element:

Cybersecurity is not just about technology; it is also about people. Social engineering attacks, like phishing, rely on exploiting human behaviour. CISOs who think like attackers can better educate their employees on recognising and avoiding these traps, thus reducing the risk of human error leading to a breach. The right phish at the wrong time makes any individual vulnerable; CISOs who understand this, and embrace a culture where it is accepted and expected, can address it effectively – and an employee is then more likely to report a breach, resulting in a higher likelihood of successful mitigation.

Adapting to Emerging Threats:

The threat landscape is dynamic, with new vulnerabilities and attack vectors emerging regularly. An adversarial mindset keeps CISOs on their toes, encouraging continuous learning and adaptation, and fosters a culture of vigilance within the organisation, ensuring that the security posture evolves alongside the threat landscape. This can be enhanced by sharing knowledge of emerging threats across the business rather than hoarding it within security teams. A well-informed business can react more effectively, introducing additional controls and procedures to address threats and support the security teams in protecting it.

Enhanced Incident Response:

When a breach occurs, the speed and effectiveness of the response are critical. CISOs who understand an attacker's mindset can more quickly identify the nature of the attack, trace its origin, and contain it before it causes considerable damage. This ability to think like the enemy can significantly reduce the impact of a breach. Like any response capability, it needs to be regularly exercised – both theoretically with tabletop exercises and practically with red teams. As with a fire drill, staff, tools, and policies need to be tried out under safe conditions before they can be relied upon in an emergency. A good CISO will arrange for their IR provider to be involved in at least one major exercise a year in which the full process is enacted and any third-party support is fully assessed as well.

Cultivating an Adversarial Mindset

To develop this mindset, CISOs need to engage in continuous learning and stay updated on the latest threat intelligence. Collaborating with ethical hackers, taking part in cybersecurity exercises, and regularly reviewing and updating security protocols are essential practices. Moreover, fostering a culture within the organisation that values security and encourages employees to think critically about potential threats can amplify the effectiveness of the CISO’s efforts.

Additionally, networking with peers in the industry and taking part in cybersecurity communities can offer valuable insights into emerging threats and effective countermeasures. This collective knowledge-sharing can be a powerful tool in staying one step ahead of cyber-threat actors.

Conclusion

The adversarial mindset is a crucial part of a successful cybersecurity strategy. For CISOs, thinking like an attacker is not just a defensive tactic; it is a proactive approach to safeguarding the organisation. By anticipating threats, building resilient systems, and fostering a culture of security awareness, CISOs can ensure that their organisations are not just reacting to cyber threats, but staying ahead of them.

Layered Defences: Building Blocks of Secure Organisations

Every organisation is different in terms of how it uses data, how its processes work, and how its staff conduct themselves. As a result, no single security tool, deployment, implementation, or capability can protect an organisation on its own.

Layered defences, also known as “defence in depth,” is the approach of implementing multiple layers of security controls to protect against a wide range of threats, ensuring that if one layer fails, others are in place to mitigate the risk. Furthermore, each layer is designed to address specific types of threats, creating a comprehensive shield that protects against potential attacks.

The concept of layered defences is ancient. Our most striking example comes from a time before the computer, when threats manifested themselves physically against nation-states: the castle is the epitome of layered defence. The combination of moats, drawbridges, walls, battlements, keeps, towers, turrets, guards, and gatehouses provided a multi-layered defence system that protected not only the castle, but also its inhabitants.

Regardless of whether we are talking about fortifications or digital estates, by diversifying defences across various points of vulnerability, organisations can reduce the likelihood of a successful breach and limit the impact of security incidents.

The Core Layers of Cybersecurity Defence

To build an effective layered defence strategy, organisations must consider various aspects of their IT environment and implement appropriate security measures at each level. Below are the core layers typically involved in a robust cybersecurity defence:

Perimeter Security

Perimeter security is the first line of defence, focusing on preventing unauthorised access to the network. Common controls at this layer include firewalls which support domain reputation services, intrusion detection and prevention systems (IDPS), secure gateways, mail filters, and intercepting SSL/TLS inspection proxies. These tools help monitor and filter traffic, blocking malicious activity before it reaches the internal network.

Network Security

Once traffic passes through the perimeter, network security controls come into play. These measures include network segmentation, virtual private networks (VPNs), and network access control (NAC). Network security ensures that even if an adversary gains access to the perimeter, they are limited in their ability to move laterally within the network.

Endpoint Security

With the proliferation of remote work and mobile devices, securing endpoints has become increasingly important. Endpoint security involves installing antivirus software, endpoint detection and response (EDR) tools, and ensuring that devices are patched and up to date. This layer helps protect individual devices from being compromised and becoming entry points for adversaries.

Application Security

Adversaries often target applications due to their complexity and potential vulnerabilities. Application security focuses on securing software applications through secure coding practices, regular updates, and the use of web application firewalls (WAFs). By protecting applications, organisations can prevent attacks such as SQL injection, cross-site scripting (XSS), and other common exploits which may result in an adversary gaining an additional foothold or obtaining material which could further an attack.
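One of the secure coding practices alluded to here can be shown in a few lines: parameterised queries, which defeat SQL injection by keeping user input out of the query structure. This sketch uses an in-memory SQLite database purely for illustration:

```python
# Illustrative sketch of parameterised queries preventing SQL injection.
# The table, rows, and payload below are hypothetical examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(name: str):
    # The '?' placeholder ensures the input is treated as data, never as
    # SQL - a classic injection payload simply matches no rows.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))        # [('alice', 'admin')]
print(find_user("' OR '1'='1"))  # [] - the injection attempt fails
```

Had the query been built by string concatenation instead, the second call would have returned every row in the table.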

Data Security

At the heart of every cybersecurity strategy is the protection of data. Data security measures include encryption, data loss prevention (DLP) tools, and access controls that ensure only authorised users can access sensitive information. By securing data both at rest and in transit, organisations can reduce the risk of data breaches and ensure compliance with regulations.
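As a toy example of one DLP building block – not how any particular product works – outbound text can be scanned for candidate payment-card numbers, validated with the Luhn checksum so that arbitrary 16-digit strings are not flagged:

```python
# Hedged sketch of a DLP-style scanner: find 16-digit candidates and keep
# only those passing the Luhn checksum used by payment cards. The sample
# numbers are well-known test values, not real card data.
import re

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum: double every second digit from the right."""
    checksum = 0
    for i, d in enumerate(int(c) for c in reversed(number)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str) -> list:
    candidates = re.findall(r"\b\d{16}\b", text)
    return [c for c in candidates if luhn_valid(c)]

# The order reference fails the checksum; the test card number passes.
print(find_card_numbers("order ref 1234567812345678, card 4111111111111111"))
# -> ['4111111111111111']
```

A real DLP deployment would pair detection like this with policy actions (block, quarantine, alert) and cover data at rest as well as in transit.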

Identity and Access Management (IAM)

IAM is crucial for ensuring that only the right individuals have access to the right resources at the right time. Implementing strong authentication methods, such as multi-factor authentication (MFA), and managing user privileges through role-based access control (RBAC) are essential components of IAM. This layer helps prevent unauthorised access, reduces the risk of insider threats, and limits an adversary's ability to make rapid progress should they manage to compromise an endpoint and its user.
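A minimal sketch of how RBAC and MFA combine into a single access decision might look like the following – the roles, resources, and policy here are hypothetical, chosen only to show the layering:

```python
# Hypothetical IAM decision combining RBAC with step-up MFA for sensitive
# resources: the right role, the right resource, and a sufficiently strong
# authentication event must all line up.

ROLE_PERMISSIONS = {
    "hr": {"hr-records"},
    "finance": {"payroll", "ledger"},
}

MFA_REQUIRED = {"payroll", "hr-records"}   # sensitive resources

def access_allowed(role: str, resource: str, mfa_passed: bool) -> bool:
    if resource not in ROLE_PERMISSIONS.get(role, set()):
        return False               # RBAC: not a resource for this role
    if resource in MFA_REQUIRED and not mfa_passed:
        return False               # step-up authentication required
    return True

# A stolen password alone is no longer enough for sensitive resources.
print(access_allowed("finance", "payroll", mfa_passed=False))    # False
print(access_allowed("finance", "payroll", mfa_passed=True))     # True
print(access_allowed("finance", "hr-records", mfa_passed=True))  # False
```

The design point is that each check fails independently: compromising a credential defeats neither the role restriction nor the MFA requirement.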

Security Awareness and Training

The human element is often both the weakest and the strongest link in cybersecurity. Providing regular security awareness training and promoting a security-conscious culture are vital components of a layered defence strategy. Educating employees on phishing, social engineering, and safe online practices can significantly reduce the likelihood of human error leading to a security incident. Furthermore, motivated and supported staff are more willing to report unusual behaviour which could be indicative of an ongoing threat. Giving staff the tools to report effectively – and regularly praising, listening to, and rewarding behaviours that protect the organisation – benefits the whole business. Businesses which dictate security, punish one-off breaches, or have a culture that derides or ridicules staff who have fallen victim to an adversary will often suffer more in the long term, as staff become fearful of reporting incidents that could harm their careers.

Incident Response and Recovery

Despite the best defences, breaches can and will still occur – no organisation will achieve 100% security and stay in business. Having a robust incident response and recovery plan is essential for minimising the impact of a security incident. This layer includes incident detection, response planning, regular drills, and data backups. Being prepared to respond quickly and effectively can make all the difference in mitigating damage and restoring normal operations.

The Benefits of a Layered Defence Approach

  • Redundancy and Resilience: A single security control can be bypassed or fail, but multiple layers ensure that an attack must overcome several hurdles, increasing the chances of detection and prevention.
  • Comprehensive Protection: Different layers address different types of threats, ensuring that the organisation is protected from various angles. This multi-faceted approach is more effective than relying on a single line of defence.
  • Reduced Attack Surface: By implementing security measures at various points, organisations can minimise their attack surface, making it more difficult for adversaries to find vulnerabilities.
  • Improved Incident Response: Layered defences provide multiple opportunities to detect and respond to threats, allowing for quicker identification and mitigation of attacks.

Trust and Verify

Implementing these defences is only one part of the story: they need to be regularly exercised and maintained. This is where vulnerability scans can identify missing patches, misconfigured ports, and exposed appliances; penetration tests can evaluate individual layers; purple teaming can enhance detection capabilities; and red teams can examine end-to-end attack paths, exercising as many layers as possible to identify gaps and exercise incident response. This can occur in both the digital and physical environments of the organisation. Conducting these tests verifies that defences are not drifting, which in turn acts as an additional layer of defence.

Conclusion

A layered defence strategy is not just an option – it is a necessity. By implementing multiple layers of security controls and assessing them, organisations can better protect their assets, reduce the risk of successful attacks, and ensure a more resilient cybersecurity posture.

Investing in layered defences means thinking holistically about security, considering all potential vulnerabilities, and preparing for the unexpected. In the long run, this approach will not only safeguard your organisation’s digital assets but also build trust with customers, partners, and stakeholders who rely on your commitment to security.