LATEST CYBER SECURITY NEWS AND VIEWS

Home > News > Exploring Chat Injection Attacks in AI Systems

Latest news

Exploring Chat Injection Attacks in AI Systems

Posted on

Introduction to AI Chat Systems

What are they?

AI powered chat systems, often referred to as chatbots or conversational AI, are computer programs that are designed to simulate human conversation and interaction using artificial intelligence (AI). They can understand and respond to text or voice input from users and it make it seem like you are just talking to another person. They can handle a variety of tasks from answering questions and providing information to offering support or even chatting casually to the end user.

Since the release of OpenAI’s ChatGPT towards the end of 2022, you have probably seen a huge increase in these types of systems being used by businesses. They are used on platforms such as online retail websites or banking apps, where they can assist with placing orders, answering account questions, or help with troubleshooting. They can also perform a huge variety of more complex tasks to such as integrating with calendars and scheduling appointments, responding to emails, or even writing code for you (brilliant we know!). As you can imagine they are super powerful, have huge benefits to both businesses and consumers and will only get more intelligent as time goes on.

How do they work?

You may be wondering how they work, well it’s not a little robot sat at a desk typing on a keyboard and drinking coffee that’s for sure. AI chat systems use complex data sets, and something called natural language processing (NLP) to interpret your messages and then generate responses based on their understanding of the conversation’s context and their existing knowledge base. This allows them to communicate with you in a way that feels like you are talking to a real person, making interactions feel more natural and intuitive.

Here is a basic step by step workflow of how they work:

  1. A user initiates a chat by typing a message in the prompt or speaking to the chatbot.
  2. The chatbot then employs natural language processing (NLP) to examine the message, identifying words and phrases to gauge the user’s intent.
  3. It then looks through its library of responses to find the most relevant answer.
  4. A response is sent back to the user through the interface.
  5. The user can then continue the conversation and the cycle repeats until the chat concludes.

Natural language processing (NLP) is made up of multiple components which all work together to achieve the required results, some of these components include the following:

  • Natural Language Understanding (NLU): This part focuses on comprehending the intent behind the user’s input and identifying important entities such as names, locations, dates, or other key information.
  • Natural Language Generation (NLG): This component handles generating human like responses based on the input and context.
  • Machine Learning (ML): Chatbots often use machine learning algorithms to improve their performance over time. They can learn from user interactions and feedback to provide more accurate and relevant responses in the future.
  • Pre-built Knowledge Bases: Chat systems can be built with pre-existing knowledge bases that provide information on specific topics, services, or products. These can be enhanced with machine learning to offer more nuanced responses.
  • Context and State Management: AI chat systems keep track of the conversation’s context, allowing them to remember past interactions and tailor responses accordingly. This context awareness enables the chatbot to offer more personalised responses.
  • Integration with Backend Systems: AI chat systems can integrate with other software or databases to retrieve data or execute tasks, such as processing a payment or booking an appointment.
  • Training Data: Chatbots are often trained using large datasets of human conversation to learn language patterns and user intents. The more diverse and representative the data, the better the chatbot’s performance.
  • Deployment: Once built and trained, AI chat systems can be deployed on various platforms such as websites, messaging apps, or voice assistants to interact with users.

Chat Injection Attacks

Introduction to Chat Injection Attacks

AI chat systems can be a real game changer when it comes to getting things done efficiently, but it’s worth noting that they do come with some risks. In this section we are going to explore one of the main attack vectors that we see with AI chat systems, something called Chat Injection, also known as chatbot injection or prompt injection. This vulnerability is number one on the OWASP Top 10 list of vulnerabilities for LLMs 2023.

Chat injection is a security vulnerability that happens when an attacker tricks the chatbot’s conversation flow or large language model (LLM), making it do things it isn’t supposed to do. Attackers can therefore manipulate the behaviour to serve their own interests, compromising users, revealing sensitive information, influencing critical decisions, or bypassing safeguards that are in place. It’s similar to other versions of injection attacks such as SQL injection or command injection, where an attacker can target the user input to manipulate the system’s output in order to compromise the confidentiality, integrity or availability of systems and data.

There are two types of chat injection vulnerabilities, direct and indirect. Below we have detailed the differences between the two:

  • Direct Chat Injections: This is when an attacker exposes or alters the system prompt. This can let attackers take advantage of backend systems by accessing insecure functions and data stores linked to the language model. We often refer to this as ‘jailbreaking’.
  • Indirect Chat Injections: This is when a language model accepts input from external sources like websites, pdf documents or audio files that an attacker can control. The attacker can hide a prompt injection within this content, taking over the conversation’s context. This lets the attacker manipulate either the user or other systems the language model can access. Indirect prompt injections don’t have to be obvious to human users; if the language model processes the text, the attack can be carried out.

Chat Injection Methods

AI chat injection attacks can take various forms, depending on the techniques and vulnerabilities being exploited. Here are some of the common methods of AI chat injection:

  • Crafting Malicious Input: An attacker could create a direct prompt injection for the language model being used, telling it to disregard the system prompts set by the application’s creator. This allows the model to carry out instructions that might change the bot’s behaviour or manipulate the conversation flow.
  • Prompt Engineering: Attackers can use prompt engineering techniques to craft specific inputs designed to manipulate the chatbot’s responses. By subtly altering prompts, they can steer the conversation towards their goals.
  • Exploiting Context or State Management: Chatbots keep track of the conversation context to provide coherent responses. Attackers may exploit this context management by injecting misleading or harmful data, causing the bot to maintain a false state or context.
  • Manipulating Knowledge Bases or APIs: If a chatbot integrates with external data sources or APIs, attackers may attempt to manipulate these integrations by injecting specific inputs that trigger unwanted queries, data retrieval, or actions.
  • Phishing & Social Engineering: Attackers can manipulate the conversation to extract sensitive information from the chatbot or trick the chatbot into taking dangerous actions, such as visiting malicious websites or providing personal data.
  • Malicious Code Execution: In some cases, attackers may be able to inject code through the chatbot interface, which can lead to unintended execution of actions or commands.
  • Spamming or DOS Attacks: Attackers may use chatbots to send spam or malicious content to other users or overwhelm a system with excessive requests.
  • Input Data Manipulation: Attackers may provide inputs that exploit weaknesses in the chatbot’s data validation or sanitisation processes. This can lead to the bot behaving in unexpected ways or leaking information.

Below is an example of a chat injection attack which tricks the chatbot into disclosing a secret password which it should not disclose:

As you can see the way in which the message is phrased it confuses the chatbot into revealing the secret password.

Impact on Businesses & End Users

As you can see, AI chat injection attacks can pose significant risks to both businesses and end-users alike. For businesses, these types of attacks can lead to the chatbot performing unexpected actions, such as sharing incorrect information, exposing confidential data, or disrupting their services or processes. These issues can tarnish a company’s reputation and erode customer trust, as well as potentially lead to legal challenges. Therefore, it is important that businesses implement safeguarding techniques to reduce the risk of chat injection attacks happening and prevent any compromises of systems and data.

There are also various risks for end users too. Interacting with a compromised chatbot can result in falling victim to phishing scams, system compromises or disclosing personal information. An example would be the chatbot sending a malicious link to a user which when they click it, they could wither be presenting with a phishing page to harvest their credentials or bank details or it could be a web page to entice the user to download some malware to their system which could give the attacker remote access to their device. To mitigate these risks users should remain vigilant when engaging with AI chat systems.

Mitigating the Risks

It is important for both businesses consumers to reduce the likelihood of being a victim of a chat injection attack. Although in some cases it is difficult to prevent, there are some mitigations that can be put in to play which will help protect you. This last section of the blog will go through some of these protections.

The first mitigating step that chatbot developers can use is input validation and sanitising messages. These can minimise the impact of potentially malicious inputs.

Another mitigating tactic to use would be rate limiting, such as throttling user requests and implementing automated lockouts. This can also help deter rapid fire injection attempts or automated tools/scripts.

Regular testing of the AI models/chatbots as part of the development lifecycle can also help in protecting users and businesses as this will allow any vulnerabilities to be discovered and fixed prior to public release.

User authentication and verification along with IP and device monitoring can help deter anonymous online attackers as they would need to provide some sort of identification before using the service. The least privilege principle should be applied to ensure that the chatbot can only access what it needs to access. This will minimise the attack surface.

From a user’s perspective, you should be cautious when sharing sensitive information with chat bots to prevent data theft.

It would be a good idea to incorporate human oversight for critical operations to add a layer of validation which will act as a safeguard against unintended or potentially malicious actions.

Lastly, any systems that the chatbot integrates with should be secured to a good standard to minimise impact should there be a compromise.

Get Tested

If you are integrating or have already integrated AI or chatbots into your systems, reach out to us. Our comprehensive range of testing and assurance services will ensure your implementation is smooth and secure: https://prisminfosec.com/services/artificial-intelligence-ai-testing/

This post was written by Callum Morris

FILTER RESULTS

Latest tweets

A great conference @BSidesLondon, thanks for having us at #BSidesLDN2024! Looking forward to continuing the relationship next year!

Prism Infosec is proud to be a gold sponsor of @BSidesLondon 2024! Come and visit us on our stand and join in our cyber scavenger hunt! #CyberSecurity #bsides

Sign up to our newsletter

  • Fields marked with an * are mandatory

  • This field is for validation purposes and should be left unchanged.