
AI-powered impostors perpetrating identity theft and financial fraud

A new report by Transmit Security, "The GenAI-Fueled Threat Landscape: A Dark Web Research Report," compiled by the company's fraud analysts, sheds light on the increasing use of blackhat generative AI tools in sophisticated scams and fraud, particularly in Australia and New Zealand.

The report reveals that these tools help fraudsters build new fraud campaigns with unprecedented sophistication, speed, and scale. One such tool, FraudGPT, is easily accessible on the dark web and requires minimal skill to use; WormGPT, another blackhat generative AI platform, poses a similar threat.

Fraudsters are using these tools to generate synthetic identity data and high-quality fake IDs that bypass security checks, including AI-driven identity verification systems. This is a major concern, as it opens the door to a wide range of fraudulent activities, such as unauthorised payments and identity theft.

The Australian Payment Fraud Report indicates a 35.6% increase in payment card fraud in the 12 months to June 2023, amounting to AUD 677.5 million. Meanwhile, New Zealand's Banking Ombudsman has highlighted a rise in sophisticated unauthorised payment scams, which cost New Zealanders more than NZD 200 million annually.

Dark web marketplaces, notorious for supporting a wide range of illegal activities, offer services such as remote desktop protocol (RDP) access and credit card checkers, backed by high seller ratings and escrow services. These marketplaces are becoming a breeding ground for blackhat generative AI tools.

To combat this growing threat, Transmit Security recommends implementing converged fraud prevention, identity verification, and customer identity management services powered by AI, generative AI, and machine learning. A unified, smart defence is crucial to removing data silos, closing security gaps, and detecting and stopping advanced fraud with accuracy and speed.

Organisations like Regula, a global developer of identity verification and forensic solutions, have reported a growing number of identity attacks and fraud incidents carried out with generative artificial intelligence, including AI-generated deepfakes and synthetic identities. Roughly one-third of companies worldwide are affected.

The implications of these findings are clear: the fight against fraud is becoming more complex, and organisations must adapt to stay ahead.

In the face of these advancements, it is essential for businesses to stay vigilant and invest in robust security measures. By doing so, they can protect their customers, their reputation, and their bottom line.
