Red Team Scenarios – Modelling the Threats

Introduction

More organisations come under cyber-attack every year, and the trend shows no sign of reversing. Our world is getting smaller, threat actors are becoming more emboldened, and our defences are continually tested. Any organisation can become the victim of a cyber security threat actor; you just need to have something they want, whether that is money, information, or a political stance or activity inimical to their ideology. Cybersecurity defences and security programmes will help your organisation prepare for these threats, but like all defences, they need to be tested: staff need to understand how to use them, when they should be invoked, and what to do when a breach happens.

Cybersecurity red teaming is about testing those defences. Security professionals take on the role of a threat actor and, using a scenario and appropriate tooling, conduct a real-world attack on your organisation to simulate the threat.

Scenarios

Scenarios form the heart of a red team service: they are defined by the objective, the threat actor, and the attack vector. Together, these determine which defences, playbooks, and policies are going to be tested.

Scenarios are developed either from threat intelligence (threat actors likely to target your organisation have a specific modus operandi, which the scenario can reproduce) or from a question the organisation wants answered in order to understand its security capabilities.

Regardless of the approach, all scenarios need to be realistic, but they must also be delivered in a safe, secure, and above all risk-managed manner.

Objectives

Most red team engagements start by defining the objective: a system, privilege, or set of data which, if breached, would produce the specific outcome a threat actor is seeking to achieve. Each scenario should have a primary target whose compromise would impact the organisation’s finances (through theft or disruption, such as ransomware), its data (theft of Personally Identifiable Information (PII) or private research), or its reputation (embarrassment and loss of trust through a breach of services or privacy). Secondary and tertiary objectives can be defined, but these are often milestones along the way to accomplishing the primary.

Objectives should be defined in terms of impacting Confidentiality (can threat actors read the data?), Integrity (can threat actors change the data?), or Availability (can threat actors deny legitimate access to the data?). This determines the level of access the red team will seek in order to accomplish its goal.

Threat Actors 

Once an objective is chosen, we then need to understand who will attack it. This might be driven by threat intelligence, which will indicate who is likely to attack the organisation, or, for a more open test, we can define the attacker by sophistication level.

Not all threat actors are equal in skill, capability, motivation, and financial backing. We often refer to this collection of attributes as the threat actor’s sophistication. Different threat actors also favour different attack vectors, and if the scenario is derived from threat intelligence, this will inform how the attack should be manifested.

High Sophistication

The most mature threat actors are usually referred to as Nation State threat actors, though we have seen some cybercriminal gangs start to touch elements of that space. They are extremely well resourced, often with dedicated capability development teams as well as linguists, financial networks, and enough operators to deliver attacks 24/7. They will often have access to private tooling that is likely to evade most security products, and they are usually motivated by politics: causing political embarrassment to rivals, stealing data to advance national research, committing financial theft at scale, or degrading services to cause real-world impact and hardship. Examples in this group include APT28, APT38, and WIZARD SPIDER.

Medium Sophistication

In the mid-tier maturity range we find a number of cybercriminal and corporate espionage threat actors. These often have significant financial backing, enough to afford custom (albeit commercial) tooling obtained either legally or illegally. They may work solo, but are often supported by a small team that could operate 24/7 yet tends to keep to specific working patterns where possible. They may have some custom-written capabilities, but these are often tweaked versions of open-source tools. They are usually motivated by money, whether profiting from stolen research or extorting funds directly from their victims. Occasionally they are motivated by activism instead, targeting organisations that represent or deliver a service for a cause they disagree with; in these cases they will often use the attack as a platform to voice their politics, or try to force the organisation to change its behaviour to align better with their beliefs. Examples of threat actors in this tier have included FIN13 and LAPSUS$.

Low Sophistication

At the lower tier of the maturity range, we are often faced with individual threat actors rather than teams; insiders are often grouped into this category too. Threat actors in this category typically use open-source tooling, perhaps lightly customised depending on the individual’s skill set. They will often work fixed hours aligned to their victim’s time zone, and usually pursue a single target at a time, or only ever one. Their motivation can be financial, but can also be personal belief, or spite if they believe they have been wronged. Despite being considered the lowest sophistication tier, they should never be underestimated: some of the most impactful cybersecurity breaches have been conducted by threat actors we would normally place in this category, such as Edward Snowden or Chelsea Manning.

Attack Vector

Finally, now that we know what will be attacked and who will be attacking it, we need to define how the attack will start. Again, threat intelligence gathered on different threat actors will show their preferred ways of starting an attack, and if the objective is realism, that should be the template. If we are running a more open test, however, we can mix things up and use an alternative attack vector. This is not to say that specific threat actors won’t change their attack vector, but they do have favourites.

Keep in mind that the attack vector determines which security boundary will be the initial focus of the attack. Vectors can be grouped into the following categories:

External (Direct External Attackers)

  • Digital Social Engineering (phishing/vishing/smishing)
  • Perimeter Breach (zero days)
  • Physical (geographical location breach leading to digital foothold)

Supply Chain (Indirect External Attackers)

  • Software compromise (backdoored/malicious software updates from trusted vendor)
  • Trusted link compromise (Managed Service Provider (MSP) access into the organisation)
  • Hardware compromise (unauthorised modified device)

Insider (both Direct and Indirect Internal Attackers)

  • Willing (deliberate malicious activity)
  • Unwilling (sold or stolen access)
  • Physical compromise

Each of these categories not only contains different attack vectors, but will often test different security boundaries and controls. A phishing attack will likely achieve a foothold on a user’s desktop, which is also the natural starting position for an insider conducting willing or unwilling attacks; yet the two test different things, because an insider does not necessarily need to deploy tooling that might be detected, and will already hold passwords to potentially multiple systems as part of their job. Understanding this is the first step in determining how you want to test your security.

Pulling it together

Once all these elements have been identified and defined, the scenario can move to the planning phase before delivery. This is where any prerequisites for delivering the scenario, any scenario milestones, any contingencies (to help simulate top-tier threat actors), and any tooling preparations are put in place so the scenario can start. Keep in mind that whilst the scenario objective might be to compromise a system of note, the true purpose of the engagement is to determine whether the security teams, tools, and procedures can identify and respond to the threat. This can only be measured and understood if the security teams have no prior knowledge of when or how they will be tested; real-world threats will not give any notice either.
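
To make this concrete, the sketch below (in Python, purely illustrative; the field names and example values are our own invention, not a prescribed format) shows how the objective, threat actor, and attack vector can be captured as a single scenario definition:

    from dataclasses import dataclass, field
    from enum import Enum

    class Impact(Enum):
        CONFIDENTIALITY = "can the actor read the data?"
        INTEGRITY = "can the actor change the data?"
        AVAILABILITY = "can the actor deny access to the data?"

    @dataclass
    class Scenario:
        primary_objective: str   # system, privilege, or data targeted
        impact: Impact           # which leg of the CIA triad is threatened
        threat_actor: str        # named actor or generic profile
        sophistication: str      # "low", "medium", or "high"
        attack_vector: str       # external, supply chain, or insider
        milestones: list = field(default_factory=list)  # secondary objectives

    # A hypothetical ransomware-style scenario, for illustration only
    scenario = Scenario(
        primary_objective="backup infrastructure",
        impact=Impact.AVAILABILITY,
        threat_actor="WIZARD SPIDER-like cybercriminal group",
        sophistication="high",
        attack_vector="external: digital social engineering (phishing)",
        milestones=["initial foothold", "privilege escalation", "domain admin"],
    )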

Even if the red team accomplishes its goals, the scenario will still help security teams understand the gaps in their skills, tools, and policies so that they can react better in the future. Consider contacting Prism Infosec if you would like your security teams to reap these benefits too.

Our Red Team Services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Red Teams don’t go out of their way to get caught (except when they do)

Introduction

In testing an organisation, a red team will be seeking to emulate a threat actor by achieving a specific goal, whether that is gaining administrative control of the network and proving they could control backups (akin to how many ransomware operators work), through to proving access to financial systems, or even gaining access to sensitive data repositories. They will employ tactics, tools, and capabilities aligned to the sophistication level of the threat actor they are pretending to be. The question always asked of red teams is “can the bad guys get to system X”, when it really should be “can we spot the bad guys before they get to system X AND do something effective about it?”. The unfortunate answer is that with enough time and effort, the bad guys will always get to X. What we can do in red teaming is show you how the bad guys will get to X, and help you understand whether you can spot them trying.

Red Team Outcomes

In assessing an organisation, engagements tend to go one of two ways. The first (and unfortunately more common) is that the red team operators achieve the objective of the attack, sometimes entirely without detection, and sometimes with a detection but unsuccessful containment. The other is that the team are successfully detected (usually early on) and containment and eradication are not only successful but extremely effective.

So What?

In both cases, we have failed to answer some of the exam questions, chief among them how much visibility the security teams have across the network.

In the first instance, we don’t know why they failed to see us, why they failed to contain us, or why they didn’t spot any of the myriad other activities we conducted. We need to understand whether the issue is one of process or effort: is the security team drinking from a firehose of alerts, so we were there but lost in the noise; did they see nothing because they lack visibility in the network; or is there telemetry but no alerting for the sophistication level of the attacker’s capabilities and tactics? The red team can help answer some of these questions by moving the engagement to “Detection Threshold Testing”, in which the sophistication level of the Tactics, Techniques and Procedures is gradually lowered and the attack becomes noisier until a detection occurs and a response is observed. If the red team get to the point of dropping disabled, unobfuscated copies of known-bad tools onto domain controllers monitored by security tools and there are still no detections, the organisation needs to know, and to work out why. This is when a Detection and Response Assessment (DRA) Workshop can add real value in understanding the root causes of the issues.
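
As a rough illustration of the logic of Detection Threshold Testing, the sketch below steps down through hypothetical sophistication tiers until the defenders respond. The helper functions are stand-ins; in practice this is a human judgement call made by the test control group, not an automated loop:

    # Hypothetical tiers, noisiest last; real engagements are judged by humans.
    TIERS = [
        "custom tooling, heavy obfuscation",
        "tweaked open-source tooling",
        "well-known tooling, light obfuscation",
        "disabled, unobfuscated known-bad tools on monitored hosts",
    ]

    def run_threshold_test(execute_ttps, detection_observed):
        """execute_ttps and detection_observed are stand-in callables for
        red team activity and observed defender response."""
        for tier in TIERS:
            execute_ttps(tier)
            if detection_observed():
                return f"detection threshold reached at: {tier}"
        # No detections even at the noisiest tier: a root-cause workshop
        # (such as a DRA) is needed to understand why.
        return "no detections at any tier; investigate visibility gaps"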

In the second instance we have observed a great detection and response capability, but we don’t know its depth: if the red team changed tactics or came in elsewhere, would the security team achieve a similar result? We can sometimes answer this with additional scenarios modelling different threat actors, but multi-scenario red teams can be costly, and what happens if the team is caught early in every scenario? In these circumstances I prefer an approach of trust but verify, moving the engagement to a “Declared Red Team”. The security teams are congratulated on their skills, but informed that the exercise will continue. They are told which host the red team is starting from, and they are to allow it to remain on the network, uncontained but monitored, while the red team continue testing. They will not be told what the red team objective is or on what date the test will end; they will, however, be informed when testing has concluded. If they detect suspicious activity elsewhere in the network during this period, they can deconflict the activity with a representative of the test control group. If it is the red team, this will be confirmed, and the security team will be asked to record what their next steps would have been. If it isn’t, the security team are authorised to take full steps to mitigate the incident; a failure by the red team to confirm will always be treated as malicious activity unrelated to the test. Once testing is concluded (the objective is achieved or time runs out), the security team is informed, and the test can move on to a Detection and Response Assessment (DRA) Workshop.
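
The deconfliction flow can be summarised in a short sketch. This is a hypothetical illustration of the decision logic described above, not a prescribed tool:

    def deconflict(activity, confirm_with_control_group):
        """Sketch of the deconfliction decision; confirm_with_control_group
        is a stand-in returning True (red team), False (not red team),
        or None (no confirmation received)."""
        confirmed = confirm_with_control_group(activity)
        if confirmed is True:
            # Red team activity: record intended next steps, but leave the
            # foothold monitored and uncontained.
            return "record intended response; continue monitoring"
        # "Not red team" and "no confirmation" are both treated as real
        # malicious activity unrelated to the test.
        return "full incident response: contain and eradicate"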

Next Steps

In both of these instances, you will have noted that the next step is a Detection and Response Assessment (DRA) Workshop. DRAs were introduced by the Bank of England’s CBEST testing framework, LRQA (formerly LRQA Nettitude) refined the idea, and Prism Infosec has adapted it by fully integrating NIST CSF 2.0. In essence, it is a chance to understand what happened and what the security team did about it. The red team should provide the client security team with the main TTP events of the engagement: initial access, discovery which led to further compromise, privilege escalation, lateral movement, and actions on objectives. This should include timestamps and the locations/accounts abused at each step. The security team should come equipped with logs, alerts, and playbooks to discuss what they saw, what they did about it, and what their response should have been. Where possible, this response should also have been exercised during the engagement so the red team can evaluate its effectiveness.
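
As an illustration of what those TTP events might look like, the hypothetical timeline below uses invented hosts, accounts, and timestamps:

    # Invented hosts, accounts, and timestamps, for illustration only
    ttp_events = [
        {"time": "2024-03-04T09:12Z", "phase": "initial access",
         "detail": "phishing payload executed", "host": "WS-0412", "account": "j.smith"},
        {"time": "2024-03-05T14:03Z", "phase": "discovery",
         "detail": "Active Directory enumeration", "host": "WS-0412", "account": "j.smith"},
        {"time": "2024-03-07T11:47Z", "phase": "privilege escalation",
         "detail": "credential theft on file server", "host": "SRV-FILE01", "account": "svc-backup"},
        {"time": "2024-03-08T10:30Z", "phase": "lateral movement",
         "detail": "remote service creation", "host": "DC-01", "account": "svc-backup"},
        {"time": "2024-03-09T16:21Z", "phase": "actions on objectives",
         "detail": "access to target data repository", "host": "DB-04", "account": "da-admin"},
    ]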

The output of this workshop should be a series of observations covering both areas for improvement and areas of effective behaviour and capability. These observations need to be included in the red team report, and should be presented in the executive summary to help senior stakeholders understand the value, the opportunities to improve their security capabilities, and why it matters.

Conclusion

Red teams will help identify attack paths and tell you whether the bad guys can get to their targets, but more importantly they can and should help organisations understand how effective they are at detecting and responding to the threat before that happens. Red teams need to be caught to help organisations understand their limits so they can push them, demonstrate good capabilities to senior stakeholders, and identify opportunities for improvement. An effective red team will not only engineer being caught into its test plan, but will ensure that when it happens, the test still adds value to the organisation.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

How AI is Transforming Cyber Threat Detection and Prevention

The number of global cyber-attacks is increasing each year at a rapid rate.

According to a study by Cybersecurity Ventures, in 2023 a cyberattack took place every 39 seconds, or over 2,200 times per day (86,400 seconds in a day ÷ 39 ≈ 2,215 attacks), a 12.8% increase from 2022. Attackers are getting more sophisticated and are increasingly using AI tools to automate and scale their attacks, and traditional defences are struggling to keep up.

Security Operations Centre (SOC) analysts and real-time monitoring teams are turning to AI-driven solutions to combat them. Below is a brief summary of how platforms like CrowdStrike, Splunk, and Sentry are leveraging AI for cyber threat detection and prevention.

The Power of AI in Cybersecurity

AI’s ability to analyse large amounts of data at lightning speed is a game-changer. It can identify patterns and anomalies that would take humans far longer to spot. Speed is not the only advantage, however; there is also precision and foresight. AI can predict potential threats before they manifest, giving SOC analysts a proactive stance rather than a reactive one. It also addresses a problem many SOC analysts experience: working nights or a rotating shift pattern can affect a person’s concentration and judgement. Fatigue and disrupted sleep schedules are common, leading to slower reaction times and an increased likelihood of human error.

However, AI-powered solutions operate consistently and effectively around the clock, helping cybersecurity professionals on the front line maintain a high level of vigilance and reducing the risk of missed threats.

Furthermore, AI systems can continuously learn from new data, evolving and improving their threat detection capabilities over time. This dynamic adaptation ensures that AI stays ahead of emerging threats and evolving tactics used by cybercriminals.
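
As a toy illustration of the kind of statistical anomaly detection described here, the sketch below flags event rates that deviate sharply from a baseline. It is a minimal stand-in, not any vendor’s implementation:

    import statistics

    def find_anomalies(event_counts, threshold=3.0):
        """Flag positions whose count sits more than `threshold` standard
        deviations from the mean; a toy stand-in for the ML models that
        commercial platforms apply at far greater scale."""
        mean = statistics.mean(event_counts)
        stdev = statistics.stdev(event_counts)
        return [i for i, count in enumerate(event_counts)
                if stdev and abs(count - mean) / stdev > threshold]

    # Hourly failed-login counts; the spike at index 5 is the anomaly.
    counts = [12, 9, 11, 10, 13, 240, 11, 12, 10, 9, 11, 12]
    print(find_anomalies(counts))  # -> [5]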

CrowdStrike

CrowdStrike’s AI-powered Falcon platform uses machine learning to detect and block malicious activity. By analysing billions of events in real time, it identifies patterns that indicate a threat. This means less time sifting through logs and more time focusing on critical incidents. CrowdStrike’s AI also provides valuable insights into attackers’ tactics, techniques, and procedures (TTPs), enabling better preparedness and response.

CrowdStrike also offers Charlotte AI, a generative AI ‘security analyst’ that can help an analyst write playbooks for dealing with an attack from conversational prompts. This aims to speed up incident response and to reduce the time it takes a new analyst to become familiar with the CrowdStrike platform. It leverages AI to streamline operations, making the entire cybersecurity process more efficient and effective.

Splunk

Splunk is another heavyweight in the AI cybersecurity arena. Its platform turns machine data into actionable insights. With AI-driven analytics, Splunk can pinpoint unusual behaviour across an organisation’s infrastructure. SOC analysts benefit from this by getting clear, concise alerts about potential threats without the noise of false positives. Splunk’s AI also helps in automating responses, making it quicker to neutralise threats and reducing the workload on human analysts.

Splunk also offers a conversational AI assistant, Splunk AI Assistant, which allows a user to search through data, or generate queries, using plain English prompts. This makes it easier for analysts of all skill levels to interact with the system and quickly get the information they need, enhancing productivity and response times.

Sentry

Sentry focuses on error monitoring and application performance. Its AI capabilities are crucial for detecting anomalies that could indicate a security issue. Utilising what it calls Whole Network AI Analysis, Sentry’s real-time device and network traffic monitoring automatically blocks excess traffic to any endpoint on the network.

By continuously monitoring and learning from network traffic patterns, Sentry’s AI can adapt to new threats and reduce false positives, providing SOC analysts with more accurate and reliable alerts. This leads to faster resolution times and a more secure network environment.

Summary

AI is a powerful tool, but it’s also more than that. It’s an assistive technology that helps frontline cybersecurity professionals sift through data and formulate a response faster than ever. It handles the heavy lifting of data analysis, threat detection, and even the initial response, freeing up human analysts to focus on more strategic tasks. AI-powered solutions like CrowdStrike, Splunk, and Sentry are not only improving the efficiency and effectiveness of cybersecurity operations but are also paving the way for a future where cyber threats are anticipated and neutralised before they can cause harm.

As the number of global threats increases each year, AI assistive technologies are helping analysts not just respond to threats, but outsmart the attackers too.

This post was written by Chris Hawkins.