
The Dark Side of AI: How Cybercriminals Exploit Artificial Intelligence

Cybercriminals and security professionals are in an AI arms race. As quickly as cybersecurity teams on the front lines utilise AI to speed up their response to real-time threats, criminals are using AI to automate and refine their attacks.

Image-generation tools and conversational AI are improving in quality and accuracy at increasing speed. The DALL-E text-to-image generator reached version 3 roughly three years after its initial release, while ChatGPT is already on its fourth version only two years after launch.

In line with this accelerated evolution of AI tools, the range of malicious uses for AI is expanding just as rapidly, and its misuse has become far more visible in recent times: from social engineering techniques such as spoofing and phishing to speeding up the writing of malicious code.

(Deep)fake it till you make it

AI-generated deepfakes have been in the news several times, and the higher-profile stories tend to involve political attacks designed to destabilise governments or defame people in the public eye, such as the deepfake video released in March 2022 [1] that appeared to show Ukrainian president Volodymyr Zelensky urging his military to lay down their weapons and surrender to invading Russian forces. Sophisticated scammers are now using deepfaked audio and video to impersonate CEOs, financial officers, and estate agents to defraud people.

In February 2024, a finance worker in Hong Kong was duped into paying out USD 25.6 million [2] to scammers in an elaborate ruse in which the criminals impersonated the company’s chief financial officer and several other staff members on a group live video chat. The victim originally received a message purportedly from the UK-based CFO asking for the funds to be transferred. The request seemed out of the ordinary, so the worker joined a video call to check whether it was legitimate. Unbeknownst to them, they were the only real person on the call; everyone else was a real-time deepfake.

The general public is also being targeted by deepfakes, most famously by a faked video purporting to show Elon Musk encouraging people to invest in a fraudulent cryptocurrency [3]. Unsuspecting victims, believing in Musk’s credibility, are lured into transferring their funds.

Authorities are warning the public to be vigilant and verify any investment opportunities, especially those that seem too good to be true.

One such video, which was quickly identified as a fake, also had a convincing AI-generated voice of Elon Musk dubbed over it, instructing viewers to scan a QR code.

Police forces all over the world are also reporting an increase in deepfakes being used to fool facial recognition software by mimicking the photos on people’s identity cards.

Evolution of scamming

Aside from high-profile cases like those above, scammers are also using AI in simpler ways. Not too long ago, phishing emails were relatively easy to spot: bad grammar and misspellings were well-known red flags. Now, criminals can easily craft professional-sounding, well-written emails using Large Language Models (LLMs).

Spear-phishing has been refined too: AI can craft a targeted email that weaves in personal information scraped from social media, so the message reads as though it were written personally for the target. Where attackers once had to choose between generic bulk emails and labour-intensive manual targeting, AI lets them send tailored messages at scale, and even adapt and improve those messages based on the responses received.

WormGPT

LLMs like ChatGPT have restrictions in place to stop them from being used for malicious purposes or answering questions about illegal activity. In the past, carefully written prompts, known as jailbreaks, have allowed users to temporarily bypass these restrictions.

However, there are LLMs available without any restrictions at all, such as WormGPT and FraudGPT. These chatbots are offered to hackers on a subscription model and specialise in creating undetectable malware, writing malicious code, finding leaks and vulnerabilities, creating phishing pages, and teaching hacking.

At the risk of this becoming a shopping list of depressing scenarios, a brief mention should also be given to how AI is speeding up password cracking. By training generative adversarial networks on millions of breached passwords, tools like PassGAN learn the patterns people follow when choosing passwords and use them to guess new ones far faster than brute force alone. This makes it even more critical for individuals and organisations to use strong, unique passwords and adopt multi-factor authentication, as the sketch below illustrates.
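By way of illustration, here is a minimal Python sketch of that advice in practice (it is not from any tool mentioned above, and the 20-character length is an arbitrary choice for the example). It uses the standard library’s secrets module, a cryptographically secure random source, so the resulting passwords contain none of the human habits a model like PassGAN is trained to exploit.

    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        """Return a cryptographically random password.

        Uses the `secrets` module, which draws from the operating
        system's secure random source, rather than `random`, whose
        output follows predictable patterns; predictability is
        exactly what tools like PassGAN learn to exploit.
        """
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # Generate a fresh, unique password for each account.
    print(generate_password())

A password manager achieves the same effect in practice; the point is that a password drawn from a secure random source leaves no pattern for an AI model to learn.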

In summary

Looking ahead, the future of AI in cybercrime is both fascinating and concerning. As AI continues to evolve, so too will its malicious applications. We are likely to see AI being used to find and exploit zero-day vulnerabilities, craft even more convincing social engineering attacks, and automate reconnaissance to identify high-value targets.

This ongoing arms race between attackers and defenders will shape the landscape of cybersecurity for years to come. AI is being exploited by cybercriminals in ways that were unimaginable just a few years ago. However, by raising awareness, investing in robust cybersecurity measures, and fostering collaboration across sectors, we can stay one step ahead in this high-stakes game of Whack-A-Mole.

This post was written by Chris Hawkins.

[1] https://www.wired.com/story/zelensky-deepfake-facebook-twitter-playbook/

[2] https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html

[3] https://finance.yahoo.com/news/elon-musk-deepfake-crypto-scam-093000545.html
