Insights

Abuses of AI

Much like Google and Anthropic, OpenAI has released its latest report on how threat actors are abusing AI for nefarious ends, such as scaling deceptive recruitment efforts or developing novel malware.

It is no surprise that, as AI has become more pervasive, cheaper, and more readily accessible, threat actors are actively abusing it to further their own agendas. Having companies like Google, OpenAI and Anthropic openly discuss the abuses they are seeing is immensely helpful for understanding the threat landscape and the direction that threat actors are taking.

These reports should be required reading at C-suite level. They contain nuggets of information that affect businesses, from recruitment practices to securing the perimeter, and best of all they are free to access.

Adversarial Misuse of Generative AI | Google Cloud Blog

Disrupting malicious uses of AI: June 2025

Detecting and Countering Malicious Uses of Claude \ Anthropic

At Prism Infosec, we not only use these reports to help inform our clients, but also feed them into our tabletop exercises and red team scenarios, so we can help our clients prepare for and defend against threat actors abusing these technologies.

If you would like to know more, please reach out to us.

Prism Infosec: Cyber Security Testing and Consulting Services

About the author

Prism Infosec
Prism Infosec’s innovative approach to the delivery of PCI projects and technical security testing was recognised with a PCI Award for Technical Excellence in January 2020. The award was presented for the delivery of a client project that was considered by the review panel to be an outstanding example of best practice.
