Red Team Scenarios – Modelling the Threats

Introduction

Yesterday organisations were under cyber-attack; today even more are, and tomorrow that number will rise again. It has been climbing for years and shows no sign of reversing. Our world is getting smaller, threat actors are becoming more emboldened, and our defences continue to be tested. Any organisation can become the victim of a cyber security threat actor – you only need to have something they want, whether that is money, information, or a political stance or activity inimical to their ideology. Cybersecurity defences and security programmes will help your organisation prepare for these threats, but like all defences they need to be tested; staff need to understand how to use them, when they should be invoked, and what to do when a breach happens.

Cybersecurity red teaming is about testing those defences. Security professionals take on the role of a threat actor and, using a scenario and appropriate tooling, conduct a real-world attack on your organisation to simulate the threat.

Scenarios

Scenarios form the heart of a red team service: they are defined by the objective, the threat actor, and the attack vector. Together, these determine which defences, playbooks, and policies are going to be tested.

Scenarios are developed either from threat intelligence – the threat actors most likely to target your organisation have a specific modus operandi that can be modelled – or from a question the organisation wants answered about its security capabilities.

Regardless of the approach, all scenarios need to be realistic, but they must also be delivered in a safe, secure, and, above all, risk-managed manner.

Objectives

Most red team engagements start by defining the objective: a system, privilege, or data set which, if breached, would deliver the outcome a threat actor is seeking. Each scenario should have a primary target whose compromise would ultimately impact the organisation's finances (through theft or disruption, such as ransomware), its data (theft of Personally Identifiable Information (PII) or private research), or its reputation (embarrassment or loss of trust through a breach of services or privacy). Secondary and tertiary objectives can be defined, but these are often milestones along the way to accomplishing the primary.

Objectives should be defined in terms of impacting Confidentiality (can threat actors read the data), Integrity (can threat actors change the data), or Availability (can threat actors deny legitimate access to the data). This determines the level of access the red team will seek to achieve to accomplish their goal.

Threat Actors 

Once an objective is chosen, we then need to understand who will attack it. This might be driven by threat intelligence, which will indicate who is likely to attack an organisation, or, for a more open test, we can define it by the sophistication level of the threat actor.

Not all threat actors are equal in terms of skill, capability, motivation, and financial backing. We often refer to this collection of attributes as the threat actor's sophistication. Different threat actors also favour different attack vectors, and if the scenario is derived from threat intelligence, this will inform how the attack should be manifested.

High Sophistication

The most mature threat actors are usually referred to as Nation State threat actors, although some cybercriminal gangs have started to touch elements of that space. They are extremely well resourced – often with not only capability development teams, but also linguists, financial networks, and a sizeable number of operators able to deliver 24/7 attacks. They will often have access to private tooling that is likely to evade most security products, and they are usually motivated by politics: causing political embarrassment to rivals, theft of data to uplift their country's research, extreme financial theft, or degrading services to cause real-world impact and hardship. Examples in this group include APT28, APT38, and WIZARD SPIDER.

Medium Sophistication

In the mid-tier maturity range we find a number of cybercriminal and corporate espionage threat actors. These will often have significant financial backing – able to afford some custom (albeit commercial) tooling obtained either legally or illegally. They may work solo, but will often be supported by a small team that can operate 24/7, although they tend to limit themselves to specific working patterns where possible. They may have some custom-written capabilities, but these will often be tweaked versions of open-source tools. They are usually motivated by financial concerns – whether profiting from stolen research or extracting funds directly from their victims. Occasionally they are motivated by some form of activism, using their skills to target organisations which represent or deliver a service for a cause they do not agree with. In that case they will often seek to use the attack as a platform to voice their politics, or to force the organisation to change its behaviour to align better with their beliefs. Examples of threat actors in this tier have included FIN13 and LAPSUS$.

Low Sophistication

At the lower end of the maturity range we are often faced with individual threat actors rather than teams; insiders are often grouped into this category. Threat actors here typically use open-source tooling, perhaps with light customisation depending on the individual's skill set. They will often work fixed time zones based on their victim, and will usually have only a single target at a time – or only ever one. Their motivation can be financial, but can also stem from personal belief or spite if they believe they have been wronged. Despite being considered the least sophisticated threat actors, they should never be underestimated – some of the most impactful cybersecurity breaches have been conducted by actors we would normally place in this category, such as Edward Snowden or Chelsea Manning.

Attack Vector

Finally, now that we know what will be attacked and who will be attacking, we need to define how the attack will start. Again, threat intelligence gathered on different threat actors will show their preferences for how they start an attack, and if the objective is to keep things realistic, that should be the template. However, if we are running a more open test, we can mix things up and use an alternative attack vector. This is not to say that specific threat actors won't change their attack vector, but they do have favourites.

Keep in mind, the attack vector determines which security boundary will be the initial focus of the attack, and they can be grouped into the following categories:

External (Direct External Attackers)

  • Digital Social Engineering (phishing/vishing/smishing)
  • Perimeter Breach (zero days)
  • Physical (geographical location breach leading to digital foothold)

Supply Chain (Indirect External Attackers)

  • Software compromise (backdoored/malicious software updates from trusted vendor)
  • Trusted link compromise (MSP access into organisation)
  • Hardware compromise (unauthorised modified device)

Insider (both Direct and Indirect Internal Attackers)

  • Willing (malicious activity)
  • Unwilling (sold/stolen access)
  • Physical compromise

Each of these categories not only contains different attack vectors, but will often result in testing different security boundaries and controls. A phishing attack will likely achieve a foothold on a user's desktop – also the natural starting position for an insider conducting willing or unwilling attacks – yet the two test different things: an insider does not necessarily need to deploy tooling which might be detected, and will already hold passwords to potentially multiple systems in order to do their job. Understanding this is the first step in determining how you want to test your security.

Pulling it together

Once all these elements have been identified and defined, the scenario can move forward to the planning phase before delivery. This is where any prerequisites for delivering the scenario, scenario milestones, contingencies to help simulate top-tier threat actors, and tooling preparations are put in place so the scenario can start. Keep in mind that whilst the scenario objective might be to compromise a system of note, the true purpose of the engagement is to determine whether the security teams, tools, and procedures can identify and respond to the threat. This can only be measured and understood if the security teams have no clue when or how they will be tested, as real-world threats will not give any notice either.
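To make those planning inputs concrete, here is a minimal sketch (in Python, purely illustrative – the field names and values are our own assumptions, not a formal standard) of how an objective, threat actor, attack vector, milestones, and contingencies might be captured during planning:

```python
# A minimal, illustrative way of capturing a red team scenario for planning.
# All field names and example values are assumptions, not a prescribed format.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    objective: str                      # primary target, e.g. backup management servers
    impact: str                         # "confidentiality", "integrity" or "availability"
    threat_actor: str                   # e.g. "medium-sophistication cybercriminal group"
    attack_vector: str                  # e.g. "digital social engineering (phishing)"
    milestones: list[str] = field(default_factory=list)    # secondary/tertiary objectives
    contingencies: list[str] = field(default_factory=list) # fallbacks agreed with the control group

ransomware_rehearsal = Scenario(
    objective="gain administrative control of backup infrastructure",
    impact="availability",
    threat_actor="medium-sophistication cybercriminal group",
    attack_vector="digital social engineering (phishing)",
    milestones=["initial foothold", "privilege escalation", "lateral movement to backup VLAN"],
    contingencies=["provide assumed-breach access if no foothold after week two"],
)
print(ransomware_rehearsal.objective)
```

Capturing the scenario in a structured form like this makes it easier to agree milestones and contingencies with the control group before delivery starts.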

Even if the red team accomplish the goals, the scenario will still help security teams understand the gaps in their skills, tools, and policies so that they can react better in the future. Consider contacting Prism Infosec if you would like your security teams to reap these benefits too.

Our Red Team Services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Red Teams don’t go out of their way to get caught (except when they do)

Introduction

In testing an organisation, a red team will be seeking to emulate a threat actor by achieving a specific goal – whether that is gaining administrative control of the network and proving they can control backups (akin to how many ransomware operators work), through to proving access to financial systems, or gaining access to sensitive data repositories. They will employ tactics, tools and capabilities aligned to the sophistication level of the threat actor they are pretending to be. The question asked of red teams is always "can the bad guys get to system X", when it really should be "can we spot the bad guys before they get to system X AND do something effective about it". The unfortunate answer is that, with enough time and effort, the bad guys will always get to X. What we can do in red teaming is show you how the bad guys will get to X and help you understand whether you can spot them trying.

Red Team Outcomes

In assessing an organisation, engagements tend to go one of two ways. The first (and unfortunately more common) is that the red team operators achieve the objective of the attack – sometimes entirely without detection, and sometimes with a detection but unsuccessful containment. The other is when the team are successfully detected (usually early on) and containment and eradication are not only successful, but extremely effective.

So What?

In both cases, we have failed to answer some of the exam questions, namely the level of visibility the security teams have across the network.

In the first instance, we don't know why they failed to see us, why they failed to contain us, or why they didn't spot any of the myriad other activities we conducted. We need to understand whether the issue is one of process or effort: is the security team drinking from a firehose of alerts and we were there but lost in the noise; did the security team see nothing because they don't have visibility of the network; or is there telemetry but no alerting for the sophistication level of the attacker's capabilities and tactics? The red team can help answer some of these questions by moving the engagement into "Detection Threshold Testing", where the sophistication level of the Tactics, Techniques and Procedures is gradually lowered and the attack becomes noisier until a detection occurs and a response is observed. If the red team get to the point of dropping disabled, un-obfuscated copies of known-bad tools on domain controllers which are monitored by security tooling and there are still no detections, then the organisation needs to know and work out why. This is when a Detection and Response Assessment (DRA) Workshop can add real value in understanding the root causes of the issues.
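As an illustration only, the loop below sketches the detection threshold testing idea in Python; execute_ttp() and detection_raised() are hypothetical placeholders for the red team's tooling and the agreed deconfliction channel, not real functions:

```python
# A minimal sketch of "Detection Threshold Testing": activity is replayed from the
# quietest tier to the noisiest until the defenders raise a detection.
# execute_ttp() and detection_raised() are hypothetical placeholders (assumptions).
import time

TTP_TIERS = [
    "custom tooling, in-memory only",           # high sophistication
    "tweaked open-source tooling",              # medium sophistication
    "default open-source tooling",              # low sophistication
    "un-obfuscated known-bad binaries on disk", # should always be caught
]

def execute_ttp(tier: str) -> None:
    print(f"[red team] executing activity at tier: {tier}")

def detection_raised() -> bool:
    # In practice this is confirmed with the control group / blue team.
    return False

for tier in TTP_TIERS:
    execute_ttp(tier)
    time.sleep(1)  # in a real exercise this observation window is hours or days
    if detection_raised():
        print(f"detection threshold reached at tier: {tier}")
        break
else:
    print("no detections at any tier - escalate to a Detection and Response Assessment")
```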

In the second instance we have observed a great detection and response capability, but we don't know the depth of those detection capabilities – if the red team changed tactics, or came in elsewhere, would the security team achieve a similar result? We can sometimes answer this with additional scenarios that model different threat actors, but multi-scenario red teams can be costly, and what happens if the team are caught early in every scenario? I prefer to adopt an approach of trust but verify in these circumstances by moving the engagement into a "Declared Red Team". The security teams are congratulated on their skills, but are informed that the exercise will continue. They are told which host the red team are starting on, and are asked to allow it to remain on the network, uncontained but monitored, while the red team continue testing. They are not told what the red team objective is or on what date the test will end – they will, however, be informed when testing has concluded. If they detect suspicious activity elsewhere in the network during this period, they can deconflict the activity with a representative of the test control group. If it is the red team, this will be confirmed, and the security team will be asked to record what their next steps would have been. If it is not, then the security team are authorised to take full steps to mitigate the incident; any activity the red team cannot confirm as theirs must always be treated as malicious activity unrelated to the test. Once testing is concluded (the objective is achieved or time runs out), the security team is informed, and the test can move on to a Detection and Response Assessment (DRA) Workshop.

Next Steps

In both of these instances, you will have noted that the next step is a Detection and Response Assessment (DRA) Workshop. DRAs were introduced by the Bank of England's CBEST testing framework; LRQA (formerly Nettitude) refined the idea, and Prism Infosec has adapted it by fully integrating the NIST CSF 2.0 into it. Regardless of the framework, it is essentially a chance to understand what happened and what the security team did about it. The red team should provide the client security team with the main TTP events of the engagement – initial access, discovery which led to further compromise, privilege escalation, lateral movement, and actions on objectives – including timestamps and the locations and accounts abused to achieve them. The security team should come equipped with logs, alerts, and playbooks to discuss what they saw, what they did about it, and what their response should be. Where possible, this response should also have been exercised during the engagement so the red team can evaluate its effectiveness.
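As a purely illustrative sketch (the field names and values are assumptions, not a prescribed handover format), the timeline a red team brings to a DRA workshop might look something like this:

```python
# A minimal, illustrative engagement timeline for a DRA workshop.
# Every field name and value here is an assumption for the example.
from datetime import datetime

ttp_timeline = [
    {"when": datetime(2024, 5, 1, 9, 30), "phase": "initial access",
     "detail": "phishing payload executed", "account": "j.smith", "host": "WKSTN-042"},
    {"when": datetime(2024, 5, 3, 14, 5), "phase": "privilege escalation",
     "detail": "local admin obtained", "account": "j.smith", "host": "WKSTN-042"},
    {"when": datetime(2024, 5, 8, 11, 20), "phase": "lateral movement",
     "detail": "RDP to file server", "account": "svc-backup", "host": "FS-01"},
]

# Print the events in time order so they can be compared against the blue team's alerts.
for event in sorted(ttp_timeline, key=lambda e: e["when"]):
    print(f'{event["when"]:%Y-%m-%d %H:%M}  {event["phase"]:<22} {event["host"]:<10} {event["detail"]}')
```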

The output of this workshop should be a series of observations about areas of improvement for the organisation’s security teams, and areas of effective behaviours and capabilities. These observations need to be included in the red team report – and should be presented in the executive summary to help senior stakeholders understand the value and opportunities to improve their security capabilities, and why it matters.

Conclusion

Red Teams will help identify attack paths and let you know if the bad guys can get to their targets, but more importantly they can and should help organisations understand how effective they are at detecting and responding to the threat before that happens. Red Teams need to be caught to help organisations understand their limits so they can push them, demonstrate good capabilities to senior stakeholders, and identify opportunities for improvement. An effective red team will not only engineer being caught into its test plan, but will ensure that when it happens, the test still adds value to the organisation.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

To you it’s a Black Swan, to me it’s a Tuesday…

Cybersecurity is a discipline with many moving parts. At its core, though, it is a tool to help organisations identify, protect, detect, respond, recover, and then adapt – through threat modelling – to the ever-evolving risks posed by new technologies and the capabilities that threat actors employ. Sometimes these threats are minor, causing annoyance but no real damage; sometimes they are existential and unpredictable. These are known as Black Swan events.

They represent threats or attacks that fall outside the boundaries of standard threat models, often blindsiding organisations despite rigorous security practices.

In this post, we’ll explore the relationship between cybersecurity threat modelling and Black Swan events, and how to better prepare for the unexpected.

What Are Black Swan Events?

The term Black Swan was popularized by the statistician and risk analyst Nassim Nicholas Taleb. He described Black Swan events as:

  • Highly improbable: These events are beyond the scope of regular expectations, and no prior event or data hints at their occurrence.
  • Extreme impact: When they do happen, Black Swan events have widespread, often catastrophic, consequences.
  • Retrospective rationalization: After these events occur, people tend to rationalize them as being predictable in hindsight, even though they were not foreseen at the time.

In cybersecurity, Black Swan events can be seen as threats or attacks that emerge suddenly from unknown or neglected vectors—such as nation-state actors deploying novel zero-day exploits, or a completely new class of vulnerabilities being discovered in widely used software.

The Limits of Traditional Threat Modelling

Threat modelling is a systematic approach to identifying security risks within a system, application, or network.

It typically involves:

  • Identifying assets: What needs protection (e.g., data, services, infrastructure)?
  • Defining threats: What could go wrong? Common threats include malware, phishing, denial of service (DoS) attacks, and insider threats.
  • Assessing vulnerabilities: How could the threats exploit system weaknesses?
  • Evaluating potential impact: How severe would the consequences of an attack be?
  • Mitigating risks: What steps can be taken to reduce the likelihood and impact of threats?
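A brief worked example of how those assessment steps can be turned into a simple prioritisation – scoring each identified threat by likelihood and impact – is sketched below; the assets, threats, and scores are illustrative assumptions only:

```python
# A minimal, illustrative likelihood x impact scoring of identified threats.
# Assets, threats and scores are assumptions used purely for the example.
threats = [
    {"asset": "customer database", "threat": "phishing-led credential theft", "likelihood": 4, "impact": 5},
    {"asset": "public web app",    "threat": "denial of service",             "likelihood": 3, "impact": 3},
    {"asset": "build pipeline",    "threat": "malicious insider",             "likelihood": 2, "impact": 5},
]

# Highest-scoring threats bubble to the top so mitigations can be prioritised.
for t in sorted(threats, key=lambda t: t["likelihood"] * t["impact"], reverse=True):
    print(f'{t["likelihood"] * t["impact"]:>2}  {t["asset"]:<20} {t["threat"]}')
```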

While highly effective for many threats, traditional threat modelling is largely based on past experience and known attack methods. It relies on patterns, data, and risk profiles developed from historical analysis. However, Black Swan events, by their nature, evade these models because they represent unknown unknowns—threats that have never been seen before or that arise in ways no one could predict. This is where organisations often encounter significant challenges. Despite extensive security efforts, unknown vulnerabilities, unexpected technological changes, or even human error can expose them to unforeseen, high-impact cyber events.

Real-World Examples of Cybersecurity Black Swan Events

1. The SolarWinds Hack (2020)

The SolarWinds cyberattack, attributed to a nation-state actor, was one of the most devastating and unexpected breaches in recent history. Attackers compromised the software supply chain by embedding malicious code into SolarWinds’ Orion software updates, which were then distributed to thousands of organizations, including U.S. government agencies and Fortune 500 companies.

The sophistication of the attack and the sheer scale of its impact make it a classic Black Swan event. It was a novel approach to cyber espionage, and its implications were far-reaching, affecting critical systems and sensitive data across industries.

2. NotPetya (2017)

The Petya ransomware that launched in 2016 was a standard ransomware tool – designed to encrypt data, demand payment, and then allow decryption. NotPetya, however, was something different. It introduced two changes. The first was that the encryption could not be reversed – once data was encrypted, it could not be recovered – which made it a wiper rather than ransomware. The second was that it leveraged the EternalBlue exploit, much like the WannaCry ransomware that attacked devices worldwide earlier that year, allowing it to spread rapidly across unpatched Microsoft Windows networks.

NotPetya is believed to have infected victims through a compromised piece of Ukrainian tax software called M.E.Doc. This software was extremely widespread throughout Ukrainian businesses, and investigators found that a backdoor in its update system had been present for at least six weeks before NotPetya’s outbreak.

At the time of the outbreak, Russia was still in conflict with the Ukrainian state, having annexed the Crimean peninsula in 2014, and the attack was timed to coincide with Constitution Day, a Ukrainian public holiday commemorating the signing of the post-Soviet Ukrainian constitution. As well as its political significance, the timing also ensured that businesses and authorities would be caught off guard and unable to respond. What the attackers did not account for, however, was how widespread that software was: any company, local or international, that did business in Ukraine likely had a copy. When the attackers struck, they hit multinationals including the shipping giant A.P. Møller-Maersk, the pharmaceutical company Merck, the delivery company FedEx, and many others. Aside from crippling these companies, reverberations of the attack were felt across global shipping and multiple business sectors.

NotPetya is believed to have resulted in more than $10 billion in total damages across the globe, making it one of – if not the – most expensive cyberattacks in history to date.

How to Prepare for Cybersecurity Black Swan Events

While it’s impossible to predict or completely prevent Black Swan events, there are steps that organisations can take to enhance their resilience and minimise potential damage:

1. Adopt a Resilience-Based Approach

Rather than solely focusing on known threats, build your cybersecurity strategy around resilience. This means being prepared to rapidly detect, respond to, and recover from attacks, regardless of their origin.

Organisations should prioritise:

  • Incident response plans: Have well-documented and tested response procedures in place for any type of security event.
  • Redundancy and backups: Ensure critical systems and data have redundant layers and secure backups that can be quickly restored.
  • Post-event recovery: Create strategies to mitigate the damage and recover swiftly, minimising long-term business disruption.

2. Encourage Continuous Security Research and Innovation

Security Testing: Many Black Swan events are the result of the exploitation of previously unknown vulnerabilities. Investing in continuous security research and vulnerability discovery (through bug bounty programs, penetration testing, etc.) can reduce the number of undiscovered vulnerabilities and improve overall system security.

Defence Engineering: Implement defensive measures such as application isolation, network segmentation, and behaviour monitoring to limit the damage if a zero-day exploit is discovered.

3. Utilize Cyber Threat Intelligence

Staying informed on emerging cybersecurity trends and participating in industry collaborations can give organisations an edge when it comes to detecting potential Black Swan events. By sharing information, organisations can learn from others’ experiences and uncover threats that might not have been apparent within their own systems.

4. Model Chaos and Test the Unthinkable

Chaos engineering, which involves intentionally introducing failures into systems to see how they respond, can be an effective way to test the robustness of an organization’s defences. These drills can help security teams explore what might happen during an unanticipated event and can uncover system weaknesses that might otherwise be overlooked.
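As a purely illustrative sketch, a chaos-style security drill might look something like the following; inject() and alert_observed() are hypothetical stand-ins for whatever injection and monitoring mechanisms the organisation agrees to use in a test environment:

```python
# A minimal sketch of a chaos-style security drill: pick a random, pre-agreed
# failure, inject it in a test environment, and measure how long detection takes.
# inject() and alert_observed() are hypothetical placeholders (assumptions).
import random
import time

FAILURES = [
    "stop log forwarding on one host",
    "expire a TLS certificate in the lab",
    "disable an EDR agent on a test device",
    "remove a firewall rule in the staging network",
]

def inject(failure: str) -> None:
    print(f"[drill] injecting failure: {failure}")

def alert_observed() -> bool:
    # In a real drill this would poll the SIEM or ticket queue.
    return random.random() < 0.5

failure = random.choice(FAILURES)
start = time.monotonic()
inject(failure)
time.sleep(1)  # stand-in for the observation window
detected = alert_observed()
print(f"failure={failure!r} detected={detected} elapsed={time.monotonic() - start:.1f}s")
```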

5. Promote a Culture of Adaptive Security

Adopting an adaptive security mindset means continuously monitoring the threat landscape, adjusting security controls, and being willing to evolve when necessary. The concept of security-by-design—where security considerations are built into the very foundation of systems and software—will also help organisations stay ahead of new and unforeseen risks.

Black Swan events in cybersecurity may be rare, but their consequences can be catastrophic. The unpredictability of these threats poses a unique challenge, requiring organisations to shift from a purely reactive, known-threat approach to one that emphasises resilience, adaptation, and continuous learning.

Red Team engagements are one tool which can help organisations develop resilient security strategies designed to respond to Black Swans. What makes this possible are some of the key concepts, controls, and attitudes introduced during the planning stages of the engagement. The results of red team engagements run with this approach help shape boardroom discussions around strategy, resilience, and capacity in a way that allows the business to anticipate Black Swans and be prepared should they ever arrive.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

The Value of Physical Red Teaming

Introduction

In testing an organisation, a red team will be seeking to emulate a threat actor by achieving a specific goal – whether that is gaining administrative control of the network and proving they can control backups (akin to how many ransomware operators work), through to proving access to financial systems, or gaining access to sensitive data repositories. They will employ tactics, tools and capabilities aligned to the sophistication level of the threat actor they are pretending to be.

However, not all threat actors operate solely along the digital threat axis; some will instead seek to physically breach the organisation itself to achieve their goal. Physical red teaming seeks to test an organisation's resilience and security culture, and is aimed more at testing people and physical security controls. The most common physical threat actor is the insider; however, nation state, criminal, industrial espionage, and activist threats also remain prevalent in the physical arena, though their motivations for causing digital harm will vary.

As part of an organisation’s layered defence we have to consider not only the digital defences but also the physical ones. Consider: would it be easier for a threat actor to achieve their goal by physically taking a computer rather than gaining a digital foothold, working their way to the target, and completing their activities remotely? Taking a holistic approach to security makes a significant difference to an organisation.

Understanding Physical Red Teaming

Physical red teaming simulates attacks on physical security systems and behaviours to test defences. It accomplishes this by:

  • attempting to gain unauthorised access to buildings through:
    • the manipulation of locks,
    • the use of social engineering techniques such as tailgating;
  • bypassing security protocols through:
    • cloned access cards,
    • connecting rogue network devices,
    • or recovering unattended documents from bins and printers;
  • or exploiting social behaviours and preconceptions:
    • using props to appear as though you belong, or are a person of authority, in order to avoid being challenged.

In digital red teaming we are evaluating people and security controls in response to remote attacks. The threat actor must not only convince a user to complete actions on their behalf, but must also bypass digital controls that are constantly being updated and, potentially, monitored.

In comparison, physical security controls are rarely updated, largely for cost reasons, as they are integrated into the buildings themselves. Furthermore, people often react very differently to an approach made in person than to one made online; confidence and assertiveness are psychologically different face-to-face. It is therefore important to test the controls that keep threat actors out and, if those controls fail, to confirm that staff feel empowered and supported to challenge individuals they believe do not belong – even a person of apparent authority – until their credentials have been verified.

Why Physical Security Matters in Cybersecurity

At the top end of the scale, consider the breach caused by Edward Snowden at the NSA in 2013, which affected the national security of multiple countries. This was a trusted contractor who abused his privileges as a system administrator to breach digital security controls, and who abused and compromised the credentials of other users who trusted him, in order to gain unauthorised access to highly sensitive information. He then breached physical security controls to extract that data and remove it, not only from the organisation, but from the country. The impact of that data breach was enormous in terms of reputational damage, as well as exposure of tools and techniques used by the security services. Whilst he claimed his motivation was an underlying privacy concern (the bulk data collection he exposed was later ruled unlawful by US courts), the damage his actions caused has undoubtedly, though it is impossible to prove distinctly, created a significant threat to life for numerous individuals worldwide. Regardless, this breach was a failure of both physical controls (preventing material from leaving the premises) and digital controls (abusing trusted access to reach digital data stores).

Other examples exist too. Back in 2008, a 14-year-old with a homemade transmitter deliberately attacked the tram system of the Polish city of Lodz, derailing four trams and injuring a dozen people. Using published material, he spent months studying the city's rail lines to determine where best to create havoc; then, using nothing more than a converted TV remote, he inflicted significant damage. In this instance the digital failings related to the published material describing the control systems and to unauthenticated, unauthorised signals being acted upon by the system, whilst the physical failings lay in being able to direct those signals at the receivers, which is what permitted the attack to occur.

Key Benefits of Physical Red Teaming

A benefit of physical red teaming is in testing and improving an organisation’s response to physical breaches or threats. Surveillance, access control systems, locks, and security staff can be assessed for weaknesses, and it can help identify lapses in employee vigilance (e.g., tailgating or failure to challenge strangers).

This in turn can lead to improvements in behaviours, policies, and procedures for physical access management. Furthermore, physical red teaming encourages employees to take an active role in security practices and fosters an overall culture of security.

Challenges of Physical Red Teaming

However, delivering physical red teaming is fraught with ethical and legal risk: aside from trespass, breaking and entering, and other criminal offences, there can also be civil litigation concerns depending on the approach the consultants take.

It is therefore important to establish clear consent and guidelines from the organisation. This must include the agreed scope – what activities the consultants are permitted to carry out, when and where those activities will take place, and who at the client organisation is responsible for the test. Additional property considerations, such as shared tenancies or public/private events which may be affected by testing, also need to be factored into the scope and planning. It is not unusual for this information to be captured in a “get out of jail” letter provided to the testers, along with client points of contact who can verify the test and stand down a response.

This ensures that testing remains realistic while any disruption it causes is minimised.

Cost is also always a concern: consultants need time not only to travel to site, but also to conduct surveillance, prepare suitable props (some of which may need to be custom made), and develop and deploy tooling to bypass certain controls (such as locks and card readers) if the engagement requires it.

Conclusion

The physical threat axis is one that attackers have exploited since time immemorial. In today's world, however, we have shrunk distances with digital estates and established satellite offices beyond our traditional perimeters, increasing the complexity of the environments we must defend. Red teaming permits an organisation to assess all of these threat axes and to recognise that physical and digital controls are not only required but need to be regularly exercised to ensure their effectiveness.

Readers of this post are therefore encouraged to consider the physical security of their locations – whether offices, factories, transit hubs, public buildings, or home offices – and to ask themselves whether they have verified that their security controls are effective and when those controls were last exercised.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Red Teams – Supporting Incident Response

Unauthorised access to remote computers has been around since the 1960s, but since those early days organisations and their IT systems have grown far more complex – and that complexity is increasing at an exponential rate – making securing those systems increasingly difficult. Defence mechanisms like firewalls, antivirus software, and monitoring systems have become essential, but they aren’t enough on their own. Cybersecurity red teams – groups of ethical hackers tasked with simulating real-world attacks – are increasingly playing a pivotal role in not only identifying vulnerabilities but also supporting incident response efforts. Red teams need to be considered part training opportunity for defenders, and part organisational security assessment. In this post, we’ll explore how red teams can actively contribute to the incident response (IR) process, helping organisations detect, mitigate, and recover from cyber incidents more effectively.

Proactive Detection and Prevention

Red teams conduct simulations that mimic threat actors of varying degrees of sophistication, including phishing attacks, insider threats, and other malicious activities, to evaluate the effectiveness of an organisation’s security defences. Incident response teams, also known as blue teams, are responsible for defending against and responding to active threats. Because red teams can simulate a wide range of attack scenarios, they provide the blue team with realistic training opportunities.

Key Contributions

  • Identify vulnerabilities: By testing both technical and human vulnerabilities, red teams can uncover gaps in systems, processes and controls that attackers could exploit. These insights help incident response teams prioritize fixes and harden defences.
  • Test detection capabilities: During simulations, red teams often use tactics that mimic real-world threat actor behaviour. This allows Security Operations Centres (SOCs) to evaluate whether current detection mechanisms are effective in identifying threats – ideally early on in a breach, providing a feedback loop to improve monitoring and alerting systems.
  • Highlight gaps in response: Beyond detection, red teams can uncover weaknesses in the organisation’s ability to respond. These exercises help refine playbooks and improve reaction times in the event of a real attack, acting like a fire drill for the organisation’s security teams.
  • Simulation of real-world attacks: Red team exercises provide blue teams with exposure to the tactics, techniques, and procedures (TTPs) used by adversaries. This allows the incident response team to better understand the behaviour of attackers and improve their incident detection and response procedures.
  • Drills under pressure: Simulated attacks create controlled, high-pressure situations where the blue team must react as if the incident were real. This strengthens their ability to work effectively under stress during actual incidents.
  • Collaborative feedback loops: After red team exercises, post-mortem reviews and feedback sessions help blue teams understand what went wrong and what went right. This collaborative effort ensures continuous improvement in incident detection and response.

Ongoing Incident and Forensic Support

When an incident occurs, quick identification of the threat’s origin, scope, and impact is critical. Red teams, by virtue of their expertise in adversary tactics, can aid in threat hunting and digital forensics during an ongoing incident.

Key Contributions

  • Insight into threat actor behaviour: Since red teams specialize in mimicking attacker methodologies, they can offer unique insights into how a real adversary might have breached the system. This includes understanding common evasion techniques, lateral movement strategies, and exfiltration tactics.
  • Identification of blind spots: During live incidents, red teams can collaborate with blue teams to identify blind spots or areas where an attack might have gone unnoticed. Their understanding of complex attack chains helps guide incident responders toward detecting hidden malware or compromised accounts.
  • Improving forensic analysis: Red teams can aid in digital forensics by offering a detailed understanding of how an attack might unfold. They can help analyse compromised systems, logs, and network traffic to identify indicators of compromise (IoCs) and reconstruct the attack timeline more accurately based on their experience of what steps they would take, and an understanding of the footprints various tools leave on system logs.
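As a simple illustration of that kind of timeline reconstruction (the log format and the flagged account are assumptions for the example), exported log lines can be filtered to a known indicator and ordered by timestamp:

```python
# A minimal, illustrative timeline reconstruction from exported log lines.
# The log format and the "compromised" account name are assumptions only.
from datetime import datetime

raw_logs = [
    "2024-05-08T11:20:31Z FS-01 logon account=svc-backup src=10.1.4.17",
    "2024-05-01T09:30:02Z WKSTN-042 process-start account=j.smith image=winword.exe",
    "2024-05-03T14:05:44Z WKSTN-042 group-add account=j.smith group=Administrators",
]

indicator = "j.smith"  # account flagged as an indicator of compromise
timeline = sorted(
    (line for line in raw_logs if f"account={indicator}" in line),
    key=lambda line: datetime.fromisoformat(line.split()[0].replace("Z", "+00:00")),
)
for event in timeline:
    print(event)
```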

Fostering a Culture of Continuous Improvement

One of the biggest challenges in cybersecurity is complacency. Organisations often become overconfident after implementing new security measures or surviving an attack. Red teams, by constantly pushing the boundaries and simulating sophisticated attacks, help prevent this.

Key Contributions:

  • Challenge security assumptions: Red teams encourage organisations to avoid a “set-it-and-forget-it” mindset by continually challenging the effectiveness of defences and forcing teams to stay agile and adaptable in their responses.
  • Promote proactive security: By moving to a consistent tempo of red team assessments and testing organisational exposure to different Tactics, Techniques and Procedures, the incident response team can take a proactive approach rather than a reactive one. This works by helping the blue team conduct regular threat hunting, using the results to improve their detections and to identify weaknesses or gaps in network visibility so they can be addressed. This shift reduces the likelihood of severe incidents and ensures faster containment if they do occur.
  • Drive organisational awareness: Red teams don’t just work with security professionals; they also raise awareness across the organisation. They often test phishing or social engineering schemes, helping non-technical employees understand their role in cybersecurity, which indirectly supports better incident response.

Conclusion

In the complex world of cybersecurity, red teams are invaluable in supporting and strengthening incident response efforts: identifying vulnerabilities, training blue teams through real-world scenarios, aiding threat hunting, and promoting a proactive approach to defending against modern cyber threats. Organisations that leverage red team and blue team collaboration can better detect, respond to, and recover from cyber incidents, significantly reducing risk and minimising damage.

Our Red Team Services: Red Teaming & Simulated Attack Archives – Prism Infosec

Have you had a breach? Contact us here for our Incident Response service: Have You Had A Security Breach? – Prism Infosec

Flawed Foundations – Issues Commonly Identified During Red Team Engagements

Cybersecurity Red Team engagements are exercises designed to simulate adversarial threats to organisations. They are founded on the real-world Tactics, Techniques, and Procedures that cybercriminals, nation states, and other threat actors employ when attacking an organisation. They are a tool for exercising detection and response capabilities and for understanding how the organisation would react in the event of a real-world breach.

One of the outcomes of such exercises is increased awareness of the vulnerabilities, misconfigurations, and gaps in systems and security controls which could result in the organisation’s compromise, impacting business delivery and causing reputational, financial, and legal damage.

Threat actors rarely need to employ cutting-edge capabilities or “zero day” exploits in order to compromise an organisation. Organisations grow organically – they exist to deliver their business – and as a result security is rarely a key consideration from their founding. This means that critical issues can exist in the foundations of an organisation’s IT which threat actors will be more than happy to abuse.

This post covers five of the most common vulnerabilities we regularly see when conducting red team engagements for our clients. Its purpose is to raise awareness among IT professionals and business leaders about potential security risks.

Insufficient privilege management

This issue arises when accounts are granted greater privileges within the organisation than they require to conduct their work. It can present as users who have local administrator privileges, accounts that have been given indirect administrator privileges, or overly privileged service accounts.

Some examples include:

  • Users who are local administrators on their work devices – this gives them the ability to install any software they might need for their work, but also exposes the organisation to significant risk should that device or user account become compromised. If users do require privileges on their laptops, they should also be provided with a corporate virtual device (cloud or host based) which uses different credentials from their base laptop and is the only device permitted to connect to the corporate infrastructure. This limits the exposure while still permitting staff to operate. In a red team, local administrator rights permit us to abuse the machine account and bypass numerous security tools and controls which would normally impede our ability to operate.
  • Users with indirect administrator privileges – in Microsoft Windows domains, users can belong to groups, but groups can also belong to other groups, and users can inherit privileges through this nesting. Even where it was never the intention to grant a user administrator privileges, and the user is unaware they have been given this power, such a misconfiguration can arise quite easily and exposes the organisation to considerable risk. It can only be addressed through in-depth analysis of Active Directory and consistent auditing, combined with sound system architecture. This sort of subtle misconfiguration only really becomes apparent when a threat actor or red team starts to enumerate the Active Directory environment; once found, though, it rapidly leads to full organisational compromise (a small audit sketch follows this list).
  • Overly privileged service accounts – service accounts exist so that specific systems, such as databases or applications, can authenticate users accessing them from the domain and can consume domain resources. A common misconfiguration is granting them high levels of privilege during installation even though they do not require them. Service accounts, by the way they operate, need to be exposed, and threat actors who identify overly privileged accounts can attempt to capture an authentication involving the service. This can be attacked offline to retrieve the password, which can then lead to greater compromise within the estate. Service accounts should be regularly audited for their privileges and, where possible, those privileges should be removed or restricted. If a (group) managed service account (available from Windows Server 2012 onwards) cannot be used, then ensuring the service account has a password of at least 16 characters, recorded securely in case it is required in the future, will severely restrict a threat actor’s ability to abuse it. Abuse of service accounts is becoming rarer, but legacy systems which do not support long passwords mean significant numbers of these accounts are still present. Whether their abuse is spotted often comes down to whether they have logon rights across the network, as identifying their compromise can be problematic if the threat actor or red team operates carefully.
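As an illustration of the nested-group audit mentioned above, the following sketch (assuming the third-party ldap3 Python package, read access to the directory, and example server names, credentials, and DNs) resolves nested membership of a privileged group using Active Directory's LDAP_MATCHING_RULE_IN_CHAIN matching rule:

```python
# A minimal sketch (assumptions: "pip install ldap3", a read-only account, and
# example hostnames/DNs) of listing everyone who holds Domain Admin rights once
# nested group membership is resolved server-side by Active Directory.
from ldap3 import Server, Connection, SUBTREE

server = Server("dc01.example.local")
conn = Connection(server, user="audit.reader@example.local", password="********", auto_bind=True)

group_dn = "CN=Domain Admins,CN=Users,DC=example,DC=local"
conn.search(
    search_base="DC=example,DC=local",
    # The OID 1.2.840.113556.1.4.1941 expands nested group membership in AD.
    search_filter=f"(&(objectClass=user)(memberOf:1.2.840.113556.1.4.1941:={group_dn}))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName"],
)

# Any account listed here holds the group's rights, directly or through nesting.
for entry in conn.entries:
    print(entry.sAMAccountName)
```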

Poor credential complexity and hygiene

This issue arises when users are given no corporately supported method of storing credential material; as a result, the passwords chosen are often easy to guess or predict, and they are stored in browsers or in clear-text files on network shares or individual hosts.

  • Credential Storage – staff will often use plain text files, Excel documents, emails, OneNote notebooks, Confluence pages, or browsers to store credentials when no corporately provided solution exists. The problem with all of these options is that they are insecure – the passwords can be retrieved using trivial methods – which means organisations are often one step away from a significant breach. Password vaults such as LastPass, Bitwarden, KeePass, or 1Password, whilst themselves targets for threat actors, offer considerably greater protection, provided the credentials used to unlock them are not single factor and are not stored alongside the vault. It is standard practice for red teams and threat actors to hunt for clear-text credentials, and attacking vaults significantly increases the difficulty and complexity of the tradecraft required when the material to unlock the vault uses MFA or is not stored locally alongside it.
  • Credential Complexity – over the last 20 years the advice on password complexity has changed considerably. We used to advise staff to rotate passwords every 30/60/90 days, choose random mixes of uppercase, lowercase, numbers and punctuation, and meet a minimum length; today we advise against regular rotation and instead recommend a phrase or three random, easy-to-memorise words combined with punctuation and numbers (a small generation sketch follows this list). The reason is that, as computational power has increased, shorter passwords, regardless of their composition, have become easier to break. Furthermore, when staff rotated passwords regularly, the result was often just a number changing rather than an entirely new password, making them easy to predict. Education is critical in addressing this, and many password vaults also offer a password generator that makes management easier for staff whilst still complying with policy. Too often we have seen weak passwords which complied with complexity policies, because people will seek the simplest way to comply. Credential complexity buys an organisation time – time to notice a breach – and raises the effort a threat actor must invest to attack the organisation effectively.
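As a small illustration of that passphrase advice, the sketch below uses Python's standard secrets module; the tiny word list is only a stand-in for a proper one:

```python
# A minimal sketch of generating a passphrase of three random, memorable words
# joined with punctuation and a number. The word list is an illustrative stand-in;
# a real generator would draw from a much larger list.
import secrets

WORDS = ["copper", "lantern", "orbit", "meadow", "glacier", "saffron", "harbour", "pixel"]

words = [secrets.choice(WORDS) for _ in range(3)]
separator = secrets.choice("!-_.@#")
passphrase = separator.join(words) + str(secrets.randbelow(100))
print(passphrase)  # e.g. "lantern-orbit-saffron42"
```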

Insufficient Network Segregation

This issue occurs when a network is kept flat – hosts are allowed to connect to any server or other workstation within the environment on any exposed port, regardless of department or geographical region. It also covers cases where clients which connect to the network using a VPN are not isolated from other clients.

  • VPN Isolation – clients which connect to the network through a VPN to access domain resources, such as file shares, can sometimes be communicated with directly by other clients. Threat actors can abuse this by seeding network resources with material which forces any client that loads it to connect back to a compromised host – often another compromised client device. When this occurs, the connecting host transmits encrypted user credentials to authenticate with the device; these can be taken offline by the threat actor and cracked, which can result in greater compromise of the network. Isolating hosts on a VPN limits where the threat actor or red team can pivot their attacks and makes it easier to identify and isolate malicious activities (a short isolation check follows this list).
  • Flat Networks – networks are often implemented so that the business can operate efficiently, and the easiest implementation is a flat network where any networked resource is made available to staff regardless of department or geographical location, with access managed purely by credentials and role-based access control (RBAC). Unfortunately, this configuration often exposes administrative ports and devices which can be attacked. When a threat actor manages to recover privileged credentials, a flat network offers them significant advantages for further compromise of the organisation. Segregating management ports and services, breaking up regions and departments, and restricting access to resources based on requirements will severely restrict and delay a threat actor’s – and a red team’s – ability to move around the network and impact services.
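As a simple illustration of verifying VPN client isolation (the peer address below is an assumption, and such probing must only be performed against systems you are authorised to test), a quick reachability check from one connected client to another might look like this:

```python
# A minimal sketch: from one VPN client, attempt a TCP connection to another
# client's SMB port. Success means peers can reach each other directly.
# The peer address is an assumption; only probe hosts you are authorised to test.
import socket

peer_vpn_client = "10.8.0.23"  # another client on the same VPN (assumed)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(3)
    result = s.connect_ex((peer_vpn_client, 445))

print("peer reachable on 445 - client isolation NOT in place" if result == 0
      else "connection failed - clients appear isolated on this port")
```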

Weak Endpoint Security

Workstations are often the first foothold achieved by threat actors when attacking an organisation. As a result, they require constant monitoring and controls to ensure they stay secure. This can be achieved through a combination of maintained anti-virus, effective Endpoint Detection and Response, and application control. Furthermore, controlling which endpoint devices are allowed to connect to the network will limit the organisation’s exposure.

  • Unmanaged Devices – endpoints that are not regularly monitored or managed increase risk. Permitting Bring Your Own Device (BYOD) can increase productivity, as staff can use devices they have customised; however, it also exposes the organisation, as these devices may not comply with its security requirements. It also compounds problems when a threat is detected: identifying a rogue device becomes much more difficult when every BYOD device must be treated as potentially rogue, and you have little insight into where else these devices have been used or who else has used them. By permitting only managed devices on your network, and ensuring that BYOD devices, if they must be used, are severely restricted in what they can access, you limit your exposure to risk. Restrictions on managed devices can be bypassed, but doing so raises the complexity and sophistication of the tradecraft required, which means it takes longer and there is a greater chance of detection.
  • Anti-Virus – it used to be the case that anti-virus products were the hallmark of device security. However, the majority work on signatures, which means they are only effective against threats that have already been identified and listed in their definition files. Threat actors know this and will often change their malware so that it no longer matches the signature and thereby evades detection. The protection anti-virus offers is therefore limited, but if well maintained it can reduce the organisation’s exposure to common attacks and provide a tripwire defence should a capable adversary deploy tooling that has previously been signatured. Bypassing anti-virus can be trivial, but it provides an additional layer of defence which increases the complexity of a red team’s or threat actor’s activities.
  • Lack of Endpoint Detection and Response (EDR) configuration – EDR goes a step beyond anti-virus and looks at all of the events occurring on a device to identify suspicious tools, behaviours, and activities that could indicate a breach. Like anti-virus, it will often work with detection heuristics and rules which can be centrally managed, and it permits the organisation to isolate suspected devices. However, EDR requires significant time to tune for the environment, as normal activity in one organisation may be suspicious in another. It can also be costly, both to implement and to maintain correctly, and is only effective when it is on every device. Too often, organisations do not invest time in using it, or do not understand the difference between the default rules and tuned rules; false positives can then impact the business and lead to a lack of trust in the tooling. Lacking an EDR product severely restricts an organisation’s ability to detect and respond to threats in a capable and effective manner. Well-maintained and effective EDR operated by a well-resourced, exercised security team significantly impacts threat actor and red team activities, often bringing the Mean Time to Detect a breach down from days or weeks to hours or days.
  • Application Control – when application allowlisting was first introduced it was clunky and often broke business applications. It has evolved considerably since those early days, but is still not well implemented by organisations. It takes significant initial investment to implement properly, but it can strongly restrict a threat actor’s ability to operate in an environment. Good implementations are based on user roles: most employees require a browser and basic office applications to conduct their work, additional applications can be allowed depending on the role, and users who cannot have application control applied are given segregated devices to operate on, which helps limit exposure. Without this, threat actors and red teams can often run multiple tools which most users would have no business using in their day jobs; it can also result in shadow IT, as users introduce portable apps to their devices, which muddies the waters during incident investigation when trying to distinguish legitimate use from threat actor activity.

Insufficient Logging and Monitoring

If an incident does occur – and remember that red team engagements are also about exercising the organisation’s ability to respond – then logging and monitoring become paramount to an effective response. When we have exercised organisations in the past, we often find that at this stage of the engagement a number of issues quickly become apparent which prevent the security teams from being effective. These are almost always linked to a lack of centralised logging, poor incident detection, and log retention issues.

  • Lack of Centralised Logging: threat actors have been known to wipe logs during their activities; when this occurs on compromised devices it makes detecting activity difficult and reconstructing the threat actor’s actions impossible. Centralising logs allows additional tooling to be deployed as a secondary defence to detect malicious activity so that devices can be isolated, and it makes the reconstruction of events significantly easier. Many EDR products support centralised logging, but only on devices which have agents installed and on supported operating systems; to make this effective, additional tooling such as syslog and Sysmon may be needed to ensure logging is sent to centralised hosts for analysis and curation (a minimal forwarding sketch follows this list). Centralised logs can also be retained for longer periods, permitting effective investigation of how, what, and where the threat actor or red team has been operating and what they accomplished before detection and containment activities are undertaken.
  • Poor Incident Detection: organisations which do not exercise their security teams often act poorly when an incident occurs. Staff need to practise using SIEM (Security Information and Event Management) tooling and to develop playbooks and queries that can be run against the monitoring software to locate and classify threats. When this does not happen, separating genuine threats from background user activity becomes tedious, difficult, and ineffective, resulting in poor containment and ineffective response behaviours. In red teams, this can result in alerts being ignored or classed as false positives, which exacerbates an incident.
  • Log Retention Issues: Many organisations keep at most 30 days of logs. Furthermore, many organisations think they have longer retention than this because they keep 180 days of alerts, not realising that alerts and logs are different things. As a result, we can often review alerts as far back as six months, but can only see what happened around those alerts for 30 days. Many threat actors know about this shortcoming and will wait 30 days once established in the network before conducting their activities, making it difficult for responders to know how they got in, how long they have been there, and where else they have been. This often comes up in red teams, as many engagements run for at least four weeks, if not longer, to deliver a scenario, which makes exercising detection and response difficult when this issue is present.
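
As a minimal illustration of the syslog forwarding mentioned above, the sketch below sends application events to a central collector using Python's standard SysLogHandler. The collector address is an assumption for illustration; in practice agents such as Sysmon, an EDR sensor, or a syslog daemon would forward host and security events rather than a script.

    import logging
    import logging.handlers

    # Hypothetical central collector; replace with your SIEM or syslog host.
    COLLECTOR = ("logs.example.internal", 514)

    def build_logger() -> logging.Logger:
        """Create a logger that forwards events to a central collector over UDP syslog."""
        logger = logging.getLogger("endpoint-events")
        logger.setLevel(logging.INFO)
        handler = logging.handlers.SysLogHandler(address=COLLECTOR)
        handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
        logger.addHandler(handler)
        return logger

    if __name__ == "__main__":
        log = build_logger()
        # Events sent here leave the host immediately, so they survive local log wiping.
        log.info("example event: new scheduled task created on host WS-0042")

The point of the design is that a copy of every event exists somewhere the threat actor cannot easily reach, which is what makes later reconstruction possible.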

Conclusion

These are just the five most common issues we identify when conducting a red team engagement; however, they are not the only issues we come across. They are fundamental issues that become ingrained in organisations through a mixture of culture and a lack of deliberate architectural design consideration.

Red team engagements not only help shine a light on these sorts of issues but also allow the business to plan how to address them at a pace that works for them, rather than as a consequence of a breach. Additionally, red team engagements can identify areas where focused follow-up testing can exercise additional controls, provide a deeper understanding of identified issues, and validate controls implemented after the engagement.

Ultimately, a red team engagement is a starting point or milestone in an organisation’s security journey. It is used in tandem with other security frameworks and capabilities to deliver a layered, effective security function that supports an organisation to adapt, protect, detect, respond, and recover effectively in an ever-evolving world of cybersecurity threats.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Linux RCE – Critical Vulnerability (CVSS 9.9) in CUPS – Security Awareness Messaging

An inadvertent data leak from a GitHub push identified an unauthenticated Remote Code Execution (RCE) vulnerability chain in the Linux Common UNIX Printing System (CUPS) service, with a CVSS score of 9.9.

The vulnerabilities:

  • CVE-2024-47176 | cups-browsed <= 2.0.1 binds on UDP INADDR_ANY:631 trusting any packet from any source to trigger a Get-Printer-Attributes IPP request to an attacker controlled URL.
  • CVE-2024-47076 | libcupsfilters <= 2.1b1 cfGetPrinterAttributes5 does not validate or sanitize the IPP attributes returned from an IPP server, providing attacker controlled data to the rest of the CUPS system.
  • CVE-2024-47175 | libppd <= 2.1b1 ppdCreatePPDFromIPP2 does not validate or sanitize the IPP attributes when writing them to a temporary PPD file, allowing the injection of attacker controlled data in the resulting PPD.
  • CVE-2024-47177 | cups-filters <= 2.0.1 foomatic-rip allows arbitrary command execution via the FoomaticRIPCommandLine PPD parameter.

CUPS and cups-browsed (a service responsible for discovering new printers and automatically adding them to the system) ship with many versions of UNIX, including most GNU/Linux distributions, but can also be installed on BSD, Oracle Solaris, and even Google’s ChromeOS.

Essentially, the vulnerability chain permits an unauthenticated attacker who can reach the cups-browsed service port (UDP 631) to replace existing printers, or install new ones, pointing at a malicious IPP URL without generating any alerts. This can result in arbitrary command execution on the affected computer when a print job is started.

Because the service requires no authentication, systems that expose UDP port 631 to the internet while running the affected CUPS components are particularly at risk; however, an adversary who has established a foothold within a network can achieve similar results against internal hosts.

Recommendation:

In terms of hardening against the vulnerability, removing cups-browsed if it is not needed is probably the easiest solution. Failing that, ensure the CUPS packages are updated on affected systems; if they cannot be updated, use firewalling to ensure only trusted hosts can connect to UDP port 631.
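
As a minimal, Linux-only triage sketch (an assumption-laden illustration, not a definitive vulnerability test), the following Python script checks whether the cups-browsed service is active and whether anything on the host is listening on UDP port 631; the results should be interpreted alongside the installed package versions listed above.

    import subprocess

    def cups_browsed_active() -> bool:
        """Ask systemd whether the cups-browsed unit is currently active."""
        result = subprocess.run(
            ["systemctl", "is-active", "--quiet", "cups-browsed"],
            check=False,
        )
        return result.returncode == 0

    def udp_631_listening() -> bool:
        """Check /proc/net/udp and /proc/net/udp6 for a socket bound to port 631 (hex 0277)."""
        for table in ("/proc/net/udp", "/proc/net/udp6"):
            try:
                with open(table) as handle:
                    for line in handle.readlines()[1:]:
                        local_address = line.split()[1]   # e.g. 00000000:0277
                        if local_address.endswith(":0277"):
                            return True
            except FileNotFoundError:
                continue
        return False

    if __name__ == "__main__":
        print(f"cups-browsed active: {cups_browsed_active()}")
        print(f"UDP port 631 bound:  {udp_631_listening()}")

If both checks return False after remediation, the easiest exposure paths described above have been closed, although firewall rules should still be verified separately.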

Further information can be found at:

Attacking UNIX Systems via CUPS, Part I (evilsocket.net)

https://github.com/OpenPrinting/cups-browsed/issues/36

The Dark Side of AI: How Cybercriminals Exploit Artificial Intelligence

Cybercriminals and security professionals are in an AI arms race. As quickly as cybersecurity teams on the front lines utilise AI to speed up their response to real-time threats, criminals are using AI to automate and refine their attacks.

Tools that generate images and conversational AI are improving in quality and accuracy at an increasing pace. The DALL-E text-to-image generator reached its third version within around three years of its initial release, while ChatGPT moved on to its fourth-generation model within two years of launch.

The prevalence of AI-assisted attacks has become much more apparent in recent times.

In line with this accelerated evolution of AI tools, the range of malicious uses for AI is also expanding rapidly, from social engineering techniques such as spoofing and phishing to speeding up the writing of malicious code.

(Deep)fake it till you make it

AI-generated deepfakes have been in the news several times; the higher-profile stories tend to involve political attacks designed to destabilise governments or defame people in the public eye, such as the deepfake video released in March 2022 [1] that appeared to show Ukrainian president Volodymyr Zelensky urging his military to lay down their weapons and surrender to invading Russian forces. Sophisticated scammers are now using deepfaked audio and video to impersonate CEOs, financial officers, and estate agents to defraud people.

In February 2024, a finance worker in Hong Kong was duped into paying out USD 25.6 million [2] to scammers in an elaborate ruse that involved the criminals impersonating the company’s chief financial officer, and several other staff members, on a group live video chat. The victim originally received a message purportedly from the UK-based CFO asking for the funds to be transferred. The request seemed out of the ordinary, so the worker went on a video call to clarify whether it was a legitimate request. Unknown to them, they were the only real person on the call. Everyone else was a real-time deepfake.

The general public is also being targeted by deepfakes, most famously by a faked video purporting to show Elon Musk encouraging people to invest in a fraudulent cryptocurrency [3]. Unsuspecting victims, believing in Musk’s credibility, are lured into transferring their funds.

Authorities are warning the public to be vigilant and verify any investment opportunities, especially those that seem too good to be true.

One such video, which was quickly identified as fake, also had a convincing AI-generated voice of Elon Musk dubbed over it, instructing viewers to scan a QR code.

Police forces all over the world are also reporting an increase in deepfakes being used to fool facial recognition software by imitating people’s photos on their identity cards.

Evolution of scamming

Aside from high-profile cases like those above, scammers are also using AI in more simple ways. Not too long ago, phishing emails were relatively easy to spot. Bad grammar and misspellings were well-known red flags, but now criminals can easily craft professional-sounding, well-written emails by using Large Language Models (LLMs).

Spear-phishing has been refined too, using AI to craft a targeted email that uses personal information, scraped from social media, to sound personally written for the target. These attacks can also be sent out at a larger scale than manual attacks.

In place of generic emails, AI allows attackers to send out targeted messages to people at a larger scale, which can also adapt and improve based on the responses received.

WormGPT

LLMs like ChatGPT have restrictions in place to stop them from being used for malicious purposes or answering questions regarding illegal activity.
In the past, carefully written prompts have allowed users to temporarily bypass these restrictions.

However, there are LLMs available without any restrictions at all, such as WormGPT and FraudGPT. These chatbots are offered to hackers on a subscription model and specialise in creating undetectable malware, writing malicious code, finding leaks and vulnerabilities, creating phishing pages, and teaching hacking.

At the risk of this becoming a shopping list of depressing scenarios, a brief mention should also be given to how AI is speeding up the time that it takes to crack passwords. By using generative adversarial networks to learn patterns from millions of breached passwords, tools like PassGAN can anticipate and generate likely future passwords. This makes it even more critical for individuals and organisations to use strong, unique passwords and adopt multi-factor authentication.

In summary

Looking ahead, the future of AI in cybercrime is both fascinating and concerning. As AI continues to evolve, so too will its malicious applications. We will see AI being used to find and exploit zero-day vulnerabilities, craft even more convincing social engineering attacks, or automate reconnaissance to identify high-value targets.

This ongoing arms race between attackers and defenders will shape the landscape of cybersecurity for years to come. AI is being exploited by cybercriminals in ways that were unimaginable just a few years ago. However, by raising awareness, investing in robust cybersecurity measures, and fostering collaboration across sectors, we can stay one step ahead in this high-stakes game of Whack-A-Mole.

This post was written by Chris Hawkins.

[1] https://www.wired.com/story/zelensky-deepfake-facebook-twitter-playbook/

[2] https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html

[3] https://finance.yahoo.com/news/elon-musk-deepfake-crypto-scam-093000545.html

Blog Post: Top 3 Common Networking Attacks

Prism Infosec’s Senior Security Consultant, Aaron, reviews the "Top 3 Common Networking Attacks"

During this unprecedented period when much of the world’s population is affected by lockdown measures and limited activities, cyber criminals have intensified their attacks. The state of fear and uncertainty has provided them with a new “business opportunity” and whilst most of us are spending more time on the Internet than ever before, several types of cyber-attacks have seen a drastic increase over the last few months.

1. Phishing Attacks

Amid this chaotic situation, many people are seeking out COVID-19 related information online, hoping to find reliable guidelines to stay safe and well. At the same time, hackers are taking advantage of this by ramping up "phishing" attacks that trick internet users into opening malicious files or links that purport to provide COVID-19 information.

Cyber criminals do this by impersonating trusted organisations and sending out convincing emails containing attachments that are laden with malicious payloads. On opening, the attachments execute the code and allow an attacker unauthorised access to system resources and data, along with the capability to execute further attacks on other networked devices or resources.

In other phishing attacks, unsuspecting users are tricked into following links that lead the user to realistic login pages for trusted brands. On logging in, the valid usernames and passwords are captured and later used by criminals to conduct financial fraud and impersonation. 

Phishing attacks can be mitigated in several ways:

  • Implement anti-spoofing policies along with malware and spam filters on mail servers to keep malicious emails from reaching employees.
  • Implement email authentication measures such as SPF, DKIM and DMARC. These increase assurance that a message genuinely originates from the domain it claims to come from, allowing impersonation attempts to be detected and the offending emails to be kept out of inboxes (a simple lookup sketch follows this list).
  • Train employees on how to identify phishing attempts and the actions to take when they suspect phishing or have already opened an attachment or followed a link.
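
As a simple illustration of checking whether a domain publishes SPF and DMARC records, the sketch below uses the third-party dnspython package (assumed to be installed via pip). It only inspects DNS; it does not validate DKIM signatures or prove that enforcement policies are correct.

    import dns.resolver  # third-party package: dnspython

    def txt_records(name: str) -> list[str]:
        """Return the TXT records for a DNS name, or an empty list if none exist."""
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []
        return [b"".join(record.strings).decode() for record in answers]

    def check_domain(domain: str) -> None:
        """Report whether SPF and DMARC records are published for a domain."""
        spf = [r for r in txt_records(domain) if r.lower().startswith("v=spf1")]
        dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.lower().startswith("v=dmarc1")]
        print(f"{domain}: SPF {'present' if spf else 'missing'}, "
              f"DMARC {'present' if dmarc else 'missing'}")

    if __name__ == "__main__":
        check_domain("example.com")   # substitute your own domain

A missing or permissive record does not mean a domain is being abused, but it does make impersonation easier, which is why these checks belong in routine email security reviews.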

2. Denial of Service (DoS) and Distributed Denial of Service (DDoS) Attack

At a time when Internet connections are required more than ever, a successful Denial of Service attack will have a more damaging impact than ever before.

In a Distributed Denial-of-Service (DDoS) attack, a collection of computers is infected with malicious code and controlled as a group (a botnet). The botnet is then directed at another Internet service, such as a website, which is flooded with traffic to deny service to legitimate users. The outcome of a DDoS attack is operational disruption, achieved when systems and services are taken offline. Furthermore, attackers can disrupt organisations by threatening to shut down business services unless large sums of money are paid.

DDoS attacks can be mitigated in several ways, including:

  • Utilising a Web Application Firewall (WAF)
  • Implementing rate limiting (a simple rate-limiting sketch follows this list)
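
To make the rate-limiting control more concrete, below is a minimal token-bucket sketch in Python. It is illustrative only: production rate limiting is normally enforced at the WAF, load balancer, or CDN rather than in application code, and the limits shown are arbitrary.

    import time

    class TokenBucket:
        """Allow roughly `rate` requests per second, with short bursts up to `capacity`."""

        def __init__(self, rate: float, capacity: float):
            self.rate = rate
            self.capacity = capacity
            self.tokens = capacity
            self.updated = time.monotonic()

        def allow(self) -> bool:
            """Spend one token if available; otherwise reject the request."""
            now = time.monotonic()
            # Refill tokens in proportion to elapsed time, capped at capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    if __name__ == "__main__":
        bucket = TokenBucket(rate=5, capacity=10)   # ~5 requests/second, burst of 10
        decisions = [bucket.allow() for _ in range(15)]
        print(decisions)   # the first ~10 succeed; the rest are rejected until tokens refill

Rate limiting does not stop a large, distributed flood on its own, but combined with a WAF and upstream filtering it blunts the simpler volumetric and application-layer attacks.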

It is crucial that organisations understand Denial of Service attacks and are always prepared to defend against them.

3. Remote Desktop Server Attack

Recently, many organisations have turned to Microsoft’s Remote Desktop Protocol (RDP) as a method of allowing remote workers access to corporate resources. The number of corporate services that need to be remotely accessible has increased sharply, and with it the requirement to support remote working; however, so has the number of reported RDP attacks.

RDP is a simple and cost-efficient method of facilitating remote working and access to corporate resources such as applications or desktops. However, the protocol is not sufficiently secure to be exposed to the internet. Without adequate security configurations in place, it can be easily compromised allowing an external attacker to gain a foothold into internal networks.

RDP attacks typically involve brute-forcing usernames and passwords, attempting all possible combinations until the correct one is found. Upon discovery of a correct combination, an attacker can gain full desktop access to a computer in the target network.
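
As an illustration of how this kind of brute forcing can be spotted in logs, the sketch below counts failed logons per source address from a hypothetical CSV export of Windows Security events (event ID 4625, an account failed to log on) and flags noisy sources. The column names, file name, and threshold are assumptions for illustration rather than any product's schema.

    import csv
    from collections import Counter

    FAILED_LOGON_EVENT = "4625"   # Windows Security log: an account failed to log on
    THRESHOLD = 20                # illustrative alerting threshold

    def noisy_sources(csv_path: str) -> dict[str, int]:
        """Count failed-logon events per source IP from a CSV export of security events."""
        counts: Counter = Counter()
        with open(csv_path, newline="") as handle:
            for row in csv.DictReader(handle):
                # Assumed column names in the export: EventID, IpAddress
                if row.get("EventID") == FAILED_LOGON_EVENT:
                    counts[row.get("IpAddress", "unknown")] += 1
        return {ip: total for ip, total in counts.items() if total >= THRESHOLD}

    if __name__ == "__main__":
        for ip, total in noisy_sources("security_events.csv").items():
            print(f"possible RDP brute force from {ip}: {total} failed logons")

In a monitored environment the same logic lives in a SIEM rule, with account lockout and source blocking as the response actions.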

If your organisation must enable RDP, it is crucial that the following protection measures are in place:

  • Unique, long, and random passwords are used to protect the systems
  • Two-factor authentication is enforced
  • RDP use is limited to devices connecting over a corporate VPN
  • Security options such as Network Level Authentication are enabled
  • The system hosting the RDP service is not joined to a corporate domain

If RDP access is not required, then it should be disabled and access to port 3389 should be blocked at the firewall.
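
As a quick sanity check that the firewall rule is effective, the sketch below attempts a TCP connection to port 3389 from outside the network, against a host you own and are authorised to test (the address shown is a documentation-range placeholder). A refused or timed-out connection suggests the port is not reachable.

    import socket

    def rdp_port_reachable(host: str, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to port 3389 on `host` succeeds."""
        try:
            with socket.create_connection((host, 3389), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        target = "203.0.113.10"   # placeholder address from the TEST-NET documentation range
        print(f"RDP reachable on {target}: {rdp_port_reachable(target)}")

This is no substitute for a proper external port scan, but it is a useful spot check after firewall changes.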

Conclusion

In conclusion, cyber-crime is bound to increase for the rest of 2020 as cyber criminals are constantly engineering new methods to attack business operations. Hence, it is crucial that businesses stay ahead of cyber threats by maintaining good security practices, such as:

  • Regularly review network security – Audit the security controls in place to ensure that network perimeters are well protected and unnecessary access is removed. Continue to monitor all systems and networks for unusual activity.
  • Maintain user education and awareness – Constantly remind employees of the importance of both physical and cyber security awareness. Develop home working policies and train employees to adhere to them.
  • Ensure Malware prevention is in place – Ensure that all anti-virus solutions are updated daily and anti-malware policies are in place.
  • Maintain secure configuration on all systems – Make sure that all servers and end user devices are patched up to date. Ensure that all remote working devices are subject to integrity checks before they are allowed access into corporate networks.
  • Secure remote access configurations – All remote solutions should utilise secure authentication, encryption technologies and have multifactor authentication enforced where possible.
  • Monitor user activities and privileges – Continue to monitor user activities for potentially malicious behaviour and ensure that the principle of least privilege is actively applied.
  • Incident response plan – Always be alert and prepared for potential cyber-attacks, ensure that an incident response plan is in place to deal with any emergencies.

Blog Post: Home Working Cyber Security Guidance

During these uncertain times, Prism Infosec are doing their utmost to support the community with information security guidance and advice.

To start, Prism Infosec has published a blog post (longer read) and quick guide (key points) as essential updates for ensuring systems and data availability without compromising security.

A PDF of our full blog post can be downloaded from here.

For the quick guide, this can be downloaded here.