Banks must combat deepfakes with improved artificial intelligence, Barr says
Banks are under pressure to bolster their cyber defenses against the growing threat of deepfake attacks, but building effective defenses takes time and resources. To close that gap, financial institutions can employ advanced analytics, invest in human controls, and evolve their use of artificial intelligence (AI).
According to Federal Reserve Governor Michael Barr, banks must increase their investment in AI to combat deepfake attacks, including through facial recognition, voice analysis, and behavioral biometrics. Preventing deepfake attacks, however, is not banks' responsibility alone: customers, regulators, and law enforcement agencies also play a crucial role.
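As a concrete illustration, the sketch below fuses facial, voice, and behavioral scores into a single verification decision. The weights, thresholds, and scorer names are hypothetical assumptions for illustration; a real deployment would draw each score from dedicated recognition models.

```python
# A minimal sketch of multi-signal identity verification, combining the
# facial-recognition, voice-analysis, and behavioral-biometric checks the
# article mentions. All weights and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class VerificationSignals:
    face_match: float      # 0.0-1.0 similarity from a face-recognition model
    voice_match: float     # 0.0-1.0 similarity from a speaker-verification model
    behavior_match: float  # 0.0-1.0 typing/navigation-pattern similarity

def verify_identity(signals: VerificationSignals,
                    threshold: float = 0.85) -> bool:
    """Fuse independent biometric signals with hypothetical weights.

    Requiring several uncorrelated checks to agree raises the bar for a
    deepfake, which typically spoofs only one channel (e.g. voice) well.
    """
    score = (0.4 * signals.face_match
             + 0.3 * signals.voice_match
             + 0.3 * signals.behavior_match)
    # Also require a floor on each individual signal, so one strong match
    # cannot mask a clearly spoofed channel.
    floor_ok = min(signals.face_match, signals.voice_match,
                   signals.behavior_match) >= 0.5
    return score >= threshold and floor_ok

# Example: strong face and behavior matches, but a suspect voice sample fails.
print(verify_identity(VerificationSignals(0.95, 0.40, 0.90)))  # False
```

The design choice worth noting is the per-signal floor: a deepfake that clones a voice convincingly still fails if the behavioral channel disagrees.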
Regulators can work with law enforcement to make generative AI-driven crimes more costly. This can be achieved through public-private collaboration, investment in AI-powered crime detection tools, updated legal frameworks, and real-time alerting systems.
Public-private partnerships enable the sharing of information and coordinated responses to AI-enabled crime, allowing faster detection and disruption of attacks involving deepfakes or synthetic identities. Investment in AI capabilities for detecting behavioral anomalies, synthetic identities, and AI-generated fraud can enhance law enforcement’s ability to trace perpetrators and prevent damage proactively.
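A simplified example of this kind of behavioral-anomaly detection might apply an off-the-shelf outlier detector to session features. The features, data, and thresholds below are illustrative assumptions, not a production design.

```python
# A minimal sketch of behavioral-anomaly detection using scikit-learn's
# IsolationForest on synthetic session features. Feature choices and the
# contamination rate are assumptions made for illustration.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" sessions: [login_hour, transfer_amount, sessions_per_day]
normal = np.column_stack([
    rng.normal(14, 3, 500),     # daytime logins
    rng.normal(200, 80, 500),   # routine transfer sizes
    rng.normal(2, 0.5, 500),    # typical session frequency
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A session consistent with a synthetic-identity takeover: 3 a.m. login,
# unusually large transfer, burst of sessions in one day.
suspect = np.array([[3.0, 5000.0, 12.0]])
print(model.predict(suspect))   # [-1] flags the session as anomalous
```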
Real-time AI-enabled alerting frameworks can notify financial institutions and other stakeholders quickly about suspicious activity tied to deepfake scams, enabling timely administrative holds on illicit assets to raise the cost and risk for criminals. Updating legal and regulatory frameworks ensures authorities have clear mandates and tools to investigate and prosecute AI-enabled offenses.
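A bare-bones version of such an alerting loop might look like the following sketch. The risk indicators, threshold, and hold action are hypothetical stand-ins for institution-specific models and systems.

```python
# A minimal sketch of a real-time alerting loop: score incoming events,
# notify stakeholders, and place an administrative hold when a deepfake-scam
# indicator fires. The scorer and alert sink are illustrative placeholders.

import json
import queue

alerts: "queue.Queue[dict]" = queue.Queue()

def score_event(event: dict) -> float:
    """Hypothetical rule-based risk score; real systems would use trained models."""
    risk = 0.0
    if event.get("voice_liveness_failed"):
        risk += 0.6
    if event.get("amount", 0) > 10_000:
        risk += 0.3
    if event.get("new_payee"):
        risk += 0.2
    return min(risk, 1.0)

def process(event: dict, hold_threshold: float = 0.8) -> None:
    risk = score_event(event)
    if risk >= hold_threshold:
        alert = {"account": event["account"], "risk": risk, "action": "hold"}
        alerts.put(alert)                    # fan out to stakeholders
        print("ALERT:", json.dumps(alert))   # stand-in for a webhook/SIEM push

process({"account": "A-1001", "amount": 25_000,
         "new_payee": True, "voice_liveness_failed": True})
```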
Regulatory emphasis on developing standards to detect, verify, and respond to AI-generated content can fortify industries vulnerable to misinformation or impersonation, increasing the operational risks and expenses for perpetrators. Coordination with agencies such as the FBI, Homeland Security Investigations (HSI), IRS Criminal Investigation (IRS-CI), and the Department of Justice, supported by new mechanisms for information sharing and enforcement, is critical for closing gaps exploited by AI-driven criminals.
Together, these efforts create a multi-layered defense, combining detection, prevention, enforcement, and regulatory clarity, that raises the operational complexity and cost for deepfake attackers and other generative AI-enabled criminals.
If generative AI tools become more accessible to criminals and fraud detection technology does not keep pace, everyone is at risk of a deepfake attack. A 2024 business.com survey reported that deepfakes have already affected one in 10 companies. Voice detection technology used by banks for identity verification could also become vulnerable to generative AI tools. Therefore, it is crucial for all parties involved to stay vigilant and proactive in the fight against deepfake attacks.
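One way a bank might harden voice verification against pre-generated deepfake audio is a challenge-response liveness check, sketched below. Here `transcribe()` is a hypothetical placeholder for a real speech-to-text service; the word list and phrase length are likewise assumptions.

```python
# A minimal sketch of a challenge-response liveness check: the caller must
# speak a freshly generated random phrase, so replayed or pre-rendered
# deepfake audio cannot match the challenge.

import secrets

WORDS = ["amber", "falcon", "ledger", "quartz", "willow", "cobalt", "meadow"]

def make_challenge(n: int = 4) -> str:
    """Random phrase the caller must speak, unpredictable before the call."""
    return " ".join(secrets.choice(WORDS) for _ in range(n))

def transcribe(audio: bytes) -> str:
    raise NotImplementedError("placeholder for a real speech-to-text call")

def passes_liveness(audio: bytes, challenge: str) -> bool:
    """Accept only if the spoken words match the just-issued challenge."""
    return transcribe(audio).strip().lower() == challenge.lower()

print(make_challenge())  # e.g. "quartz willow amber falcon"
```

A randomized challenge does not stop real-time voice cloning outright, but it removes the cheapest attack, replaying audio generated in advance.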
Bank regulators, for their part, can invest in AI-powered crime detection tools that make generative AI-driven crimes more costly for criminals. Technology companies, meanwhile, can harden their own cybersecurity measures, particularly the voice verification systems that generative AI tools now threaten.