Using AI to Prevent Deepfakes and Other Bank Frauds

Imagine this: you get a video call from your CEO requesting a multimillion-dollar transfer. The tone is serious. Her voice is unmistakable. The background noise sounds like her office. But it’s not her. It’s a deepfake created with less than a minute of voice data and publicly available images. You comply, only to realize too late that you’ve authorized a scam.

The scale of AI-driven fraud is growing fast

In one study, participants who were explicitly told to look for deepfakes correctly classified, on average, only 12 out of 20 videos as real or fake.

AI-driven fraud is escalating at an alarming pace.

  • Deepfake attacks rose by over 2100% in the past three years (Consult Hyperion & Signicat, 2024).
  • GenAI-based fraud losses are projected to more than triple, from $12.3B in 2023 to $40B in 2027 (Deloitte).
  • 1 in 10 executives report deepfake incidents at their companies—yet few have implemented countermeasures (Federal Reserve Bank of New York, 2024).
  • Gartner predicts that by 2026, 30% of enterprises will move away from relying solely on face biometrics due to deepfake risks.

Not all AI-enabled fraud relies on deepfakes. GenAI also powers phishing scams in which attackers impersonate executives via email, a tactic the FBI consistently cites as one of the most common and costly forms of bank fraud. Moreover, recovering stolen funds from financial fraud is rarely successful without real-time detection: a 2022 report by the Association for Financial Professionals found that 44% of fraud cases resulted in a total loss, while only 27% recovered 75% or more of the stolen funds.

Although not every fraud case involves deepfakes, deepfakes are particularly disconcerting because they challenge our fundamental assumption that seeing and hearing is believing. A deepfake may be audio only, a still image, or full video with audio. We typically think of deepfakes as impersonations of people, but they can also forge documents. Audio and video deepfakes may be pre-recorded, but they can also be generated in real time. Voice synthesis alone can use as little as a minute of publicly posted audio to match not only a person’s voice but also their accent, phrasing patterns, tone, and inflection.

Consider the case of Arup in January 2024. An employee received a request to transfer $25.6 million. Initially skeptical, the employee joined a video call with what appeared to be the company’s CFO and other known colleagues; reassured by seeing and hearing them, he completed the transfer. None of the people on the call were real. All were AI-generated deepfakes. This was not a failure of policy or intent, but a sophisticated social engineering attack that exploited human trust in visual and auditory cues.

To address the known weaknesses of authentication by account names and passwords, many financial institutions have adopted voice recognition or other biometric security systems. These systems are generally more secure than account names and passwords, but they too can be spoofed, voice recognition in particular. And while biometric spoofing may be less glamorous than social engineering attacks, its cumulative effect on financial losses may be larger over time.

AI can detect what humans miss

Fortunately, AI serves as a powerful defense against deepfakes and fraud, leveraging its strengths in pattern recognition and continuous learning. AI can identify subtle signs of deepfakes that are difficult for humans to detect, including the signals below; a brief scoring sketch after the list shows how they might be combined:

  • Inconsistent blinking patterns
  • Irregular lighting, reflections, or shadows
  • Lip-sync mismatches
  • Reverse image search matches with images on the web
  • Audio pacing inconsistencies and unnatural pauses
  • Voice signatures that are biologically implausible

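As a minimal illustration of how such signals might be combined, the sketch below assumes hypothetical per-signal detector outputs (each scaled to the range 0 to 1) and fuses them into a single deepfake-likelihood score with a weighted average. The signal names, weights, and review threshold are illustrative assumptions, not any particular vendor’s method.

```python
# Minimal sketch: fuse hypothetical per-signal deepfake detector outputs
# into one likelihood score. Signal names, weights, and the review
# threshold are illustrative assumptions, not a production model.

SIGNAL_WEIGHTS = {
    "blink_irregularity": 0.20,    # inconsistent blinking patterns
    "lighting_anomaly": 0.15,      # irregular lighting, reflections, shadows
    "lip_sync_mismatch": 0.25,     # audio/video misalignment
    "reverse_image_match": 0.10,   # source images found on the web
    "audio_pacing_anomaly": 0.15,  # unnatural pauses and pacing
    "voice_implausibility": 0.15,  # biologically implausible voice signature
}

def deepfake_score(signals: dict[str, float]) -> float:
    """Weighted average of per-signal scores, each expected in [0, 1]."""
    total_weight = sum(SIGNAL_WEIGHTS.values())
    weighted = sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
                   for name in SIGNAL_WEIGHTS)
    return weighted / total_weight

if __name__ == "__main__":
    call_signals = {
        "blink_irregularity": 0.7,
        "lip_sync_mismatch": 0.8,
        "audio_pacing_anomaly": 0.6,
    }
    score = deepfake_score(call_signals)
    print(f"Deepfake likelihood: {score:.2f}")
    if score > 0.4:  # assumed review threshold
        print("Flag call for manual verification before acting on it.")
```

In practice, each signal would come from a dedicated detector model; the point of the sketch is that no single cue has to be conclusive for the combined score to warrant escalation.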
AI tools can also analyze behavioral biometrics, such as typing speed or mouse movements, to detect deviations from typical user behavior. These signs can be subtle, but when analyzed at scale, they become powerful indicators of fraud.
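As a sketch of how behavioral biometrics might feed anomaly detection, the example below trains scikit-learn’s IsolationForest on a user’s historical typing and mouse features and scores a new session against that history. The feature set, sample values, and contamination rate are assumptions for illustration only.

```python
# Sketch: flag sessions whose behavioral biometrics deviate from a user's
# history using an Isolation Forest. Features and data are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical sessions for one user:
# columns = [mean keystroke interval (ms), typing speed (chars/s),
#            mean mouse speed (px/s), mouse idle ratio]
history = np.array([
    [182, 5.1, 410, 0.22],
    [175, 5.4, 395, 0.25],
    [190, 4.9, 420, 0.20],
    [185, 5.2, 405, 0.24],
    [178, 5.3, 415, 0.23],
    [188, 5.0, 400, 0.21],
])

model = IsolationForest(contamination=0.05, random_state=0).fit(history)

# A new session with markedly different typing and mouse behavior.
new_session = np.array([[95, 9.8, 760, 0.05]])

# decision_function: higher means more normal; predict: -1 means anomaly.
anomaly_score = model.decision_function(new_session)[0]
is_anomalous = model.predict(new_session)[0] == -1

print(f"Anomaly score: {anomaly_score:.3f}")
if is_anomalous:
    print("Session deviates from this user's typical behavior; step up authentication.")
```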

Machine learning as a key to AI-based fraud detection

With behavioral biometrics, changes in user behavior can be captured and fraud rules adjusted accordingly. More broadly, ongoing machine learning allows detection models to incorporate new ways of perpetrating fraud: a model retrained on recent attacks can recognize similar patterns in new activity.

While some frauds can be identified with certainty, other pattern inconsistencies can be flagged by AI with a score indicating the likelihood of fraud. Such cases can then be escalated immediately to a security specialist or risk manager for further examination.

The fact that most AI analysis of potential fraud happens in real time is particularly valuable: because most financial losses from fraud are never recovered, it is critical to detect and stop fraudulent transactions before the funds leave the institution. A minimal sketch of score-based triage follows.
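The sketch below trains a gradient boosting classifier on labeled historical transactions and routes each new transaction by its fraud score: block, escalate for human review, or allow. The features, synthetic data, and thresholds are illustrative assumptions, not a production configuration.

```python
# Sketch: score incoming transactions for fraud likelihood and triage them.
# Features, synthetic data, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic history: [amount (USD), hour of day, new payee (0/1), cross-border (0/1)]
legit = np.column_stack([
    rng.normal(2_000, 800, 500).clip(10, None),
    rng.integers(8, 18, 500),
    np.zeros(500),
    np.zeros(500),
])
fraud = np.column_stack([
    rng.normal(40_000, 15_000, 40).clip(10, None),
    rng.integers(0, 24, 40),
    np.ones(40),
    np.ones(40),
])
X = np.vstack([legit, fraud])
y = np.array([0] * 500 + [1] * 40)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

def triage(transaction: list[float]) -> str:
    """Return an action based on the model's fraud probability."""
    p_fraud = model.predict_proba([transaction])[0, 1]
    if p_fraud > 0.90:   # assumed block threshold
        return f"BLOCK (score {p_fraud:.2f}): hold funds, notify fraud team"
    if p_fraud > 0.50:   # assumed review threshold
        return f"REVIEW (score {p_fraud:.2f}): escalate to a risk manager"
    return f"ALLOW (score {p_fraud:.2f})"

print(triage([1_800, 14, 0, 0]))   # routine payment during business hours
print(triage([38_000, 2, 1, 1]))   # large, off-hours, new cross-border payee
```

The two-threshold design mirrors the triage described above: only high-confidence cases are blocked automatically, while ambiguous scores go to a specialist rather than being silently allowed or rejected.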

Choosing the right tools and partners

Financial institutions, especially smaller banks, face important decisions about whether to build or buy fraud detection solutions. A growing range of vendors now offer specialized tools in both traditional fraud detection and deepfake identification.

Common AI-powered fraud prevention tools include platforms from Jack Henry, Fiserv, and FIS. Deepfake-specific tools include Sensity, Reality Defender, Sentinel, OpenAI Deepfake Detector, and Deepware. Larger institutions often integrate multiple commercial products with proprietary machine learning models in layered defense systems.

Technology alone isn’t enough

AI tools are vital, but they cannot stand alone. The most effective fraud defense strategies combine:

  • AI-powered detection
  • Multi-factor authentication
  • General cybersecurity hygiene
  • Employee and executive training

Multi-factor authentication reduces the risk of unauthorized access by requiring two or more verification methods: something you know (a password), something you have (a device in your possession), or something you are (a biometric trait). Each additional layer increases the difficulty of a successful attack.
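As a small illustration of one common second factor, the sketch below uses the pyotp library to enroll a time-based one-time password (TOTP) and verify a code at login. The enrollment flow and names are illustrative; a real deployment would store the per-user secret securely on the server and combine this check with the user’s primary credential.

```python
# Sketch: time-based one-time password (TOTP) as a second factor using pyotp.
# Enrollment flow and names are illustrative; secrets must be stored securely.
import pyotp

# Enrollment: generate a per-user secret and a provisioning URI that an
# authenticator app can import (usually rendered as a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
uri = totp.provisioning_uri(name="analyst@example-bank.com",
                            issuer_name="Example Bank")
print("Provisioning URI:", uri)

# Login: the user enters the 6-digit code shown by their authenticator app.
# valid_window=1 tolerates small clock drift between client and server.
submitted_code = totp.now()  # stand-in for user input in this sketch
if totp.verify(submitted_code, valid_window=1):
    print("Second factor accepted; proceed with the password-verified session.")
else:
    print("Invalid code; deny access and log the attempt.")
```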

Employee education is equally essential. Team members must be empowered to verify suspicious requests, even those that appear to come from executives. A well-trained employee should never fear reprisal for questioning a request’s legitimacy; they should be thanked for their diligence.

Conclusion

As we move into a world where the nature of truth and identity is increasingly in question, financial institutions must be prepared to take full advantage of AI to combat deepfakes and other fraud. Combining these efforts with multi-factor authentication, general cybersecurity measures, and employee and executive training will help protect both the customers a financial institution serves and the profitability of the institution itself.

To explore the full set of strategic AI recommendations for financial institutions — including explainable AI, AI governance, and adoption frameworks — download our white paper, Banking on AI.
