AI powered chat systems, often referred to as chatbots or conversational AI, are computer programs that are designed to simulate human conversation and interaction using artificial intelligence (AI). They can understand and respond to text or voice input from users and it make it seem like you are just talking to another person. They can handle a variety of tasks from answering questions and providing information to offering support or even chatting casually to the end user.
Since the release of OpenAI’s ChatGPT towards the end of 2022, you have probably seen a huge increase in these types of systems being used by businesses. They are used on platforms such as online retail websites or banking apps, where they can assist with placing orders, answering account questions, or help with troubleshooting. They can also perform a huge variety of more complex tasks to such as integrating with calendars and scheduling appointments, responding to emails, or even writing code for you (brilliant we know!). As you can imagine they are super powerful, have huge benefits to both businesses and consumers and will only get more intelligent as time goes on.
You may be wondering how they work, well it’s not a little robot sat at a desk typing on a keyboard and drinking coffee that’s for sure. AI chat systems use complex data sets, and something called natural language processing (NLP) to interpret your messages and then generate responses based on their understanding of the conversation’s context and their existing knowledge base. This allows them to communicate with you in a way that feels like you are talking to a real person, making interactions feel more natural and intuitive.
Here is a basic step by step workflow of how they work:
Natural language processing (NLP) is made up of multiple components which all work together to achieve the required results, some of these components include the following:
AI chat systems can be a real game changer when it comes to getting things done efficiently, but it’s worth noting that they do come with some risks. In this section we are going to explore one of the main attack vectors that we see with AI chat systems, something called Chat Injection, also known as chatbot injection or prompt injection. This vulnerability is number one on the OWASP Top 10 list of vulnerabilities for LLMs 2023.
Chat injection is a security vulnerability that happens when an attacker tricks the chatbot’s conversation flow or large language model (LLM), making it do things it isn’t supposed to do. Attackers can therefore manipulate the behaviour to serve their own interests, compromising users, revealing sensitive information, influencing critical decisions, or bypassing safeguards that are in place. It’s similar to other versions of injection attacks such as SQL injection or command injection, where an attacker can target the user input to manipulate the system’s output in order to compromise the confidentiality, integrity or availability of systems and data.
There are two types of chat injection vulnerabilities, direct and indirect. Below we have detailed the differences between the two:
AI chat injection attacks can take various forms, depending on the techniques and vulnerabilities being exploited. Here are some of the common methods of AI chat injection:
Below is an example of a chat injection attack which tricks the chatbot into disclosing a secret password which it should not disclose:
As you can see the way in which the message is phrased it confuses the chatbot into revealing the secret password.
As you can see, AI chat injection attacks can pose significant risks to both businesses and end-users alike. For businesses, these types of attacks can lead to the chatbot performing unexpected actions, such as sharing incorrect information, exposing confidential data, or disrupting their services or processes. These issues can tarnish a company’s reputation and erode customer trust, as well as potentially lead to legal challenges. Therefore, it is important that businesses implement safeguarding techniques to reduce the risk of chat injection attacks happening and prevent any compromises of systems and data.
There are also various risks for end users too. Interacting with a compromised chatbot can result in falling victim to phishing scams, system compromises or disclosing personal information. An example would be the chatbot sending a malicious link to a user which when they click it, they could wither be presenting with a phishing page to harvest their credentials or bank details or it could be a web page to entice the user to download some malware to their system which could give the attacker remote access to their device. To mitigate these risks users should remain vigilant when engaging with AI chat systems.
It is important for both businesses consumers to reduce the likelihood of being a victim of a chat injection attack. Although in some cases it is difficult to prevent, there are some mitigations that can be put in to play which will help protect you. This last section of the blog will go through some of these protections.
The first mitigating step that chatbot developers can use is input validation and sanitising messages. These can minimise the impact of potentially malicious inputs.
Another mitigating tactic to use would be rate limiting, such as throttling user requests and implementing automated lockouts. This can also help deter rapid fire injection attempts or automated tools/scripts.
Regular testing of the AI models/chatbots as part of the development lifecycle can also help in protecting users and businesses as this will allow any vulnerabilities to be discovered and fixed prior to public release.
User authentication and verification along with IP and device monitoring can help deter anonymous online attackers as they would need to provide some sort of identification before using the service. The least privilege principle should be applied to ensure that the chatbot can only access what it needs to access. This will minimise the attack surface.
From a user’s perspective, you should be cautious when sharing sensitive information with chat bots to prevent data theft.
It would be a good idea to incorporate human oversight for critical operations to add a layer of validation which will act as a safeguard against unintended or potentially malicious actions.
Lastly, any systems that the chatbot integrates with should be secured to a good standard to minimise impact should there be a compromise.
If you are integrating or have already integrated AI or chatbots into your systems, reach out to us. Our comprehensive range of testing and assurance services will ensure your implementation is smooth and secure: https://prisminfosec.com/services/artificial-intelligence-ai-testing/
This post was written by Callum Morris