The $25 Million Video Call That Changed Everything

In February 2024, a finance worker at Arup, the multinational engineering firm, joined a video conference call with his company’s CFO and several senior colleagues. The CFO explained a confidential transaction and instructed the finance worker to wire $25 million to a designated account. The video quality was slightly grainy, but the faces looked right, the voices matched, and the CFO made his characteristic hand gestures while explaining the deal.

Every person on that call was a deepfake. Not one of them was actually present. The finance worker transferred $25 million to criminals before anyone realized what had happened.

That incident was a turning point. But what followed in 2025 was an avalanche. North America alone lost over $200 million to deepfake fraud in just the first quarter of 2025, according to Wall Street Journal reporting and data tracked by Resemble AI.

By Q2, losses climbed to $347.2 million. Resemble tracked 487 discrete deepfake incidents in Q2 2025, a 312% year-over-year increase, and then 2,031 incidents in Q3. The Deloitte Center for Financial Services projects that US fraud losses facilitated by generative AI will climb from $12.3 billion in 2023 to $40 billion by 2027, compounding at 32% annually.

If your organization has not conducted a formal deepfake risk assessment, you are operating with a control gap that regulators, auditors, and attackers have already noticed. The Allianz Risk Barometer 2026 ranked AI as the number two global business risk.

The FBI has issued specific warnings about voice-cloning scams targeting enterprises. And Gartner predicts that by 2026, 30% of enterprises will no longer consider identity verification and authentication solutions reliable in isolation due to AI-generated deepfakes.

This guide walks you through a practical deepfake risk assessment: how to identify where your organization is exposed, what controls actually work, and how to integrate deepfake risk into your existing enterprise risk management framework.

The Deepfake Threat Landscape in 2026: What Risk Managers Need to Know

Before you can assess deepfake risk, you need to understand what you are actually defending against. The threat has evolved significantly beyond the novelty face-swap videos that most people associate with the word “deepfake.” Here is what the current attack landscape looks like for US enterprises:

Voice Cloning: The Fastest-Growing Attack Vector

Voice cloning fraud rose 680% year-over-year in 2024, according to the Pindrop 2025 Voice Intelligence and Security Report. Attackers need as little as three seconds of audio to create a voice clone with an 85% match to the original speaker. Your CEO’s earnings call, your CFO’s conference presentation, their LinkedIn video post: all of it is raw material.

The attack pattern is straightforward. The attacker clones an executive’s voice from publicly available audio. They call a finance team member, accounts payable clerk, or treasury analyst with an urgent payment request.

The voice sounds exactly like the executive. The caller ID may be spoofed to match the executive’s number. The request is framed as urgent and confidential. In 2019, a UK energy firm lost €220,000 to a deepfaked voice clone of its CEO. By 2025, CEO fraud using deepfakes was targeting at least 400 companies per day.

Video Deepfakes: Real-Time Impersonation on Zoom and Teams

The Arup $25 million loss was a video deepfake attack conducted over a live video conference. In March 2025, a similar attack hit a multinational firm in Singapore, where attackers impersonated the CFO and multiple executives in a Zoom call to authorize a wire transfer. What made these attacks devastating was that they bypassed the one control most organizations rely on: visual confirmation that you are talking to the right person.

Video deepfakes now account for the largest share of deepfake incidents. In Q1 2025, video was the most common format at 46% of all incidents, followed by images (32%) and audio (22%). The technology has reached a point where 68% of deepfakes are now considered “nearly indistinguishable from genuine media,” and an iProov study found that only 0.1% of participants correctly identified all deepfakes shown to them.

Authentication Bypass: Defeating Biometrics with Synthetic Media

Deepfakes are not just used for social engineering. They are actively being used to bypass biometric authentication systems. Injection attacks, where synthetic deepfake video is fed directly into a biometric verification system rather than presented to a camera, increased 200% in 2023 according to Gartner.

Deepfakes now make up 6.5% of all fraud attacks globally, a 2,137% rise since 2022. This is why Gartner predicts that 30% of enterprises will consider standalone identity verification and authentication solutions unreliable by 2026: the technology that was supposed to replace passwords is being undermined by the same AI that created it.

Brand and Reputation Attacks: The Weaponization of Executive Likeness

Beyond financial fraud, deepfakes are weaponized against brand reputation. A fabricated video of a CEO making inflammatory statements, a fake earnings call leaked to social media, or a synthetic audio clip of an executive discussing illegal activity can move stock prices, trigger customer exodus, and generate regulatory scrutiny before the organization even identifies it as fake.

In 2025, the US Treasury’s Financial Crimes Enforcement Network warned about a surge in deepfake scams targeting banks, insurers, mortgage brokers, and casino operators.

Deepfake Threat Statistics: The Numbers That Matter

| Metric | Data |
|---|---|
| Average loss per deepfake incident (2024) | $500,000; up to $680,000 for large enterprises |
| North America deepfake fraud losses, Q1 2025 | $200+ million |
| North America deepfake fraud losses, Q2 2025 | $347.2 million |
| Deepfake incidents tracked, Q3 2025 | 2,031 (up 1,500% since 2023) |
| Voice cloning fraud growth (YoY 2024) | 680% increase (Pindrop) |
| Audio needed to clone a voice | 3 seconds for 85% voice match |
| CEO fraud attempts using deepfakes (daily) | 400+ companies targeted per day |
| Human detection accuracy for deepfakes | Only 0.1% correctly identify all deepfakes (iProov) |
| Deepfakes as share of all fraud attacks | 6.5% of all fraud attacks (up 2,137% since 2022) |
| Gartner prediction for biometric auth | 30% of enterprises will view standalone IDV as unreliable by 2026 |
| US generative AI fraud projection (2027) | $40 billion (Deloitte, 32% CAGR from $12.3B in 2023) |
| Deepfake detection market forecast | 9.9 billion detection checks by 2027, ~$5B revenue |

How to Conduct a Deepfake Risk Assessment

A deepfake risk assessment follows the same inherent-to-residual methodology you already use in your enterprise risk management program. The difference is the threat vectors. Here is a step-by-step approach you can implement this quarter.

Step 1: Map Your Attack Surface

Deepfake attacks exploit publicly available media of your people. Your first step is understanding exactly how exposed your organization is. Conduct an executive media audit:

  • Earnings calls and investor presentations: These provide minutes of high-quality audio and video of your CEO, CFO, and other C-suite executives speaking in a professional context. Attackers use these as training data for voice clones.
  • Conference appearances and media interviews: Speaking engagements, panel discussions, and TV interviews give attackers diverse audio samples across different emotional tones and speaking cadences.
  • LinkedIn and social media video content: Executive thought leadership videos, company announcements, and social media clips are easily scraped and provide both face and voice training data.
  • Internal video communications: Recorded all-hands meetings, town halls, and training videos that may be accessible through compromised credentials or insider threats.
  • Contact center and IVR recordings: Voice biometric enrollment data, customer call recordings, and automated phone system voice prompts can all be targeted.

For each executive and high-value target, document: how many minutes of publicly available audio exist, how many minutes of publicly available video exist, what platforms host this content, and whether the content can be removed or restricted. This becomes your deepfake attack surface inventory.
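This inventory is easy to hold as structured data so it rolls straight into KRI reporting. A minimal Python sketch; the field names are illustrative, and the traffic-light cut-offs simply mirror the <30 / 30-120 / >120 minute bands suggested in the KRI dashboard later in this guide:

```python
from dataclasses import dataclass, field

@dataclass
class MediaExposure:
    """Publicly available media for one executive (illustrative fields)."""
    name: str
    role: str
    audio_minutes: float           # total minutes of public audio
    video_minutes: float           # total minutes of public video
    platforms: list = field(default_factory=list)
    removable: bool = False        # can the content be removed or restricted?

    def exposure_band(self) -> str:
        """Traffic-light band: <30 min green, 30-120 amber, >120 red."""
        total = self.audio_minutes + self.video_minutes
        if total < 30:
            return "green"
        return "amber" if total <= 120 else "red"

inventory = [
    MediaExposure("Jane Doe", "CFO", audio_minutes=95, video_minutes=40,
                  platforms=["earnings calls", "LinkedIn"]),
]
print(inventory[0].exposure_band())  # red (135 total minutes)
```

A structure like this also makes the quarterly refresh trivial: re-run the audit, update the minutes, and the bands recompute themselves.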

Step 2: Identify High-Value Deepfake Scenarios

Not all deepfake attacks carry the same risk. Map your specific threat scenarios using the cause-event-consequence structure from your risk assessment methodology:

| Scenario | Attack Vector | Target | Potential Impact | Likelihood (2026) |
|---|---|---|---|---|
| CFO payment fraud | Voice clone phone call or deepfake video conference | Finance/Treasury team | Direct financial loss ($500K-$25M+), regulatory scrutiny, reputational damage | High |
| Executive impersonation BEC | AI-generated email + voice clone callback verification | Accounts payable, procurement, legal | Wire fraud, contract manipulation, data breach | High |
| Biometric auth bypass | Injection attack using synthetic video/audio fed into IDV system | Customer onboarding, employee access | Account takeover, unauthorized system access, compliance breach | Medium-High |
| Brand/reputation attack | Fabricated video of executive making inflammatory statements | Investors, media, customers, regulators | Stock price impact, customer churn, regulatory inquiry, litigation | Medium |
| Board meeting impersonation | Real-time deepfake on video conferencing platform | Board members, corporate secretary | Unauthorized strategic decisions, M&A manipulation, insider trading risk | Medium |
| Vendor/partner impersonation | Voice clone of key vendor contact requesting banking change | Procurement, accounts payable | Payment diversion, supply chain disruption | High |
| Employee hiring fraud | Deepfake video interview by impostor seeking employment | HR, hiring managers | Insider threat placement, IP theft, espionage (DOJ: 300+ companies hired North Korean impostors in 2024) | Medium |
| Customer service impersonation | Voice clone of account holder calling contact center | Customer service, call center agents | Account takeover, unauthorized transactions, PII exposure | High |

Step 3: Assess Your Current Controls

For each scenario, document what controls you currently have in place and honestly evaluate whether they would detect or prevent a deepfake attack. Most organizations discover their existing controls were designed for a pre-deepfake world. Common gaps include:

  • Callback verification to a known number: This works against email spoofing but fails when the attacker uses a voice clone on a spoofed caller ID. The person you call back may sound exactly like the executive.
  • Visual confirmation on video calls: After the Arup incident, we know video presence is no longer sufficient verification. The finance worker saw the CFO’s face and heard his voice. Both were fabricated.
  • Voice biometric authentication: Gartner’s prediction that 30% of enterprises will view standalone biometric auth as unreliable by 2026 reflects the reality that voice biometrics alone cannot distinguish a high-quality clone.
  • Email-based approvals: AI-generated emails that perfectly mimic executive writing style, combined with voice clone callback verification, create a multi-channel attack that defeats single-channel controls.
  • Manager approval thresholds: These work against unauthorized employees but not against deepfake attacks where the “approver” is the impersonated executive.

Step 4: Score and Prioritize Risks

Apply your standard likelihood-times-impact scoring. For deepfake risks, remember that the likelihood is rising exponentially (deepfake incidents increased 1,500% between 2023 and Q3 2025), while impact severity remains very high (average $500,000 per incident, with outliers at $25 million). Any deepfake risk scenario targeting financial processes or authentication systems should be scored at minimum as High on your risk register.
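The scoring arithmetic itself is simple to encode. A sketch on a 5×5 scale; the band cut-offs are illustrative, chosen here to match the register template in the next section rather than any standard:

```python
def score_risk(likelihood: int, impact: int) -> tuple[int, str]:
    """Likelihood x impact on a 5x5 scale. Band cut-offs are
    illustrative and should match your risk appetite framework."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be 1-5")
    score = likelihood * impact
    if score >= 20:
        band = "Critical"
    elif score >= 12:
        band = "High"
    elif score >= 6:
        band = "Medium"
    else:
        band = "Low"
    return score, band

print(score_risk(5, 5))  # (25, 'Critical')
print(score_risk(3, 4))  # (12, 'High')
```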

Deepfake Risk Register Template

Here is a ready-to-use deepfake risk register that slots into your existing enterprise risk register. Adapt the scoring scales to your organization’s risk appetite framework.

| Risk ID | Risk Description | Attack Vector | Inherent Risk (L×I) | Current Controls | Residual Risk (L×I) | Risk Owner | Review |
|---|---|---|---|---|---|---|---|
| DF-001 | Deepfake voice clone of CFO used to direct finance team to authorize wire transfer | Voice cloning + spoofed caller ID | 5×5 = 25 Critical | Dual authorization for transfers >$50K; callback to registered number | 4×5 = 20 Critical | CFO / Treasury | Monthly |
| DF-002 | Video deepfake impersonation of executives on conferencing platform to approve transaction | Real-time video deepfake on Zoom/Teams | 5×5 = 25 Critical | No current control for video authenticity verification | 5×5 = 25 Critical | CISO / CFO | Monthly |
| DF-003 | Synthetic voice bypasses voice biometric authentication in contact center | Voice clone injection into IVR/call center | 4×4 = 16 High | Voice biometrics + knowledge-based questions | 3×4 = 12 High | CISO / Head of CX | Quarterly |
| DF-004 | Fabricated executive video damages brand reputation and stock price | AI-generated video published to social media/news | 4×5 = 20 Critical | Social media monitoring; no deepfake detection tool deployed | 4×4 = 16 High | CCO / Comms | Quarterly |
| DF-005 | Deepfake video interview places impostor in employment (insider threat/espionage) | Pre-recorded or real-time deepfake in video interview | 3×5 = 15 High | In-person final round for sensitive roles; ID verification at onboarding | 2×5 = 10 Medium | CHRO / Security | Quarterly |
| DF-006 | Vendor impersonation using voice clone to redirect payments | Voice clone of vendor contact requesting bank detail change | 4×4 = 16 High | Vendor master change requires written confirmation + dual approval | 3×3 = 9 Medium | Head of Procurement | Quarterly |

Notice that DF-002, video deepfake impersonation, scores 25/25 for both inherent and residual risk. That is because most organizations today have zero controls specifically designed to verify video call authenticity. If your organization is in this position, that single line item should drive immediate action.

Practical Controls That Actually Work Against Deepfakes

Technology alone will not solve this problem. The most effective deepfake defense combines procedural controls, detection technology, and human awareness. Here is what works in practice:

Procedural Controls: The First Line of Defense

These controls cost nothing or very little to implement and address the highest-risk scenarios immediately:

1. Code Word / Shared Secret Protocol

Establish a rotating code word or shared secret between executives and their direct reports who handle financial transactions. Any request for payment, wire transfer, or sensitive action must include the current code word. Rotate weekly or after any suspected compromise. This is a simple, effective, non-technical control that defeats both voice clone and video deepfake attacks because the attacker does not know the current code word.
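One way to make the weekly rotation mechanical is to derive each week's word from a shared secret, so the executive and the finance team can each compute it offline without ever transmitting it. A hedged sketch only; the wordlist and derivation scheme are purely illustrative, and distributing a random word through a secure out-of-band channel works just as well:

```python
import datetime
import hashlib
import hmac

# Illustrative wordlist; a real deployment would use a much larger one.
WORDLIST = ["harbor", "copper", "meadow", "lantern",
            "gravel", "willow", "ember", "summit"]

def weekly_code_word(shared_secret: bytes, today: datetime.date) -> str:
    """Derive this ISO week's code word from a shared secret using HMAC,
    so both parties compute the same word independently."""
    year, week, _ = today.isocalendar()
    msg = f"{year}-W{week:02d}".encode()
    digest = hmac.new(shared_secret, msg, hashlib.sha256).digest()
    index = int.from_bytes(digest[:4], "big") % len(WORDLIST)
    return WORDLIST[index]

secret = b"provisioned-in-person-at-onboarding"
print(weekly_code_word(secret, datetime.date.today()))
```

The design choice that matters is not the cryptography but the provisioning: the secret must be established in person or through a channel the attacker provably cannot reach.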

2. Multi-Channel Verification with Channel Switching

When you receive a high-stakes request via one channel (phone call, video call), verify it through a completely different channel. Receive a voice call from the CFO? Verify via encrypted messaging app. Receive a video call request? Confirm via a phone call you initiate to the known mobile number. The key is that you initiate the verification contact through a different medium, not that you call back on the same channel the attacker may control.
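The rule reduces to two boolean checks, which makes it easy to bake into a payment workflow rather than leave to memory. A minimal sketch (function and channel names are illustrative):

```python
def verification_valid(request_channel: str, verify_channel: str,
                       verify_initiated_by_recipient: bool) -> bool:
    """The two rules from this section: the recipient initiates the
    verification contact, and on a different medium than the request."""
    return verify_initiated_by_recipient and request_channel != verify_channel

# A video-call request verified by a phone call the recipient placed:
print(verification_valid("video call", "phone call", True))   # True
# Calling back on the same (possibly attacker-controlled) channel:
print(verification_valid("phone call", "phone call", True))   # False
```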

3. Mandatory Cooling-Off Period for High-Value Transactions

Deepfake attacks almost always use urgency as a pressure tactic: “This acquisition is confidential and must close today.” Implement a mandatory 4-hour cooling-off period for any wire transfer above your materiality threshold that was initiated by verbal instruction only. No exceptions. If the CFO actually needs an urgent transfer, they can wait four hours. If it is an attacker, those four hours give your verification procedures the time to catch the fraud.
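The cooling-off rule can be encoded directly in a payment release workflow. A sketch; the $50K threshold is an illustrative placeholder to be calibrated to your own materiality threshold:

```python
import datetime

COOLING_OFF = datetime.timedelta(hours=4)
THRESHOLD_USD = 50_000  # illustrative materiality threshold

def earliest_release(amount_usd: float, verbal_only: bool,
                     initiated_at: datetime.datetime) -> datetime.datetime:
    """Earliest time a wire may be released under the cooling-off rule:
    verbal-only instructions above the threshold wait four hours."""
    if verbal_only and amount_usd > THRESHOLD_USD:
        return initiated_at + COOLING_OFF
    return initiated_at

t0 = datetime.datetime(2026, 3, 2, 9, 0)
print(earliest_release(250_000, True, t0))   # 2026-03-02 13:00:00
print(earliest_release(250_000, False, t0))  # 2026-03-02 09:00:00
```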

4. Enhanced Dual Authorization with Physical Presence Requirement

For transactions above a defined threshold (calibrate to your risk appetite), require at least one in-person authorization. Not video. Not phone. Physical presence with photo ID. This is operationally inconvenient, and that is precisely the point. The inconvenience is proportionate to the risk when a single deepfake attack averages $500,000 in losses.

5. Vendor Master Data Change Controls

Vendor payment fraud using deepfaked voices of vendor contacts is a growing attack vector. Any change to vendor banking details must require: written confirmation on vendor letterhead, verification call to a number from your records (not the number the “vendor” provides), and dual approval from procurement and finance.

Document this in your business continuity management procedures as well, since vendor payment disruption can cascade into supply chain failures.
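These requirements can be enforced as a simple gate in the vendor master workflow, so a change cannot proceed while any control is outstanding. A sketch with illustrative check names:

```python
def vendor_bank_change_gate(checks: dict) -> list:
    """Return the controls that have not yet passed; an empty list
    means the banking-detail change may proceed. Keys are illustrative."""
    required = [
        "written_confirmation_on_letterhead",
        "callback_to_number_on_file",   # from YOUR records, not the caller's
        "procurement_approval",
        "finance_approval",
    ]
    return [c for c in required if not checks.get(c, False)]

pending = vendor_bank_change_gate({
    "written_confirmation_on_letterhead": True,
    "callback_to_number_on_file": False,  # called the number the "vendor" gave
    "procurement_approval": True,
    "finance_approval": True,
})
print(pending)  # ['callback_to_number_on_file']
```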

Detection Technology: Layered Defense

Procedural controls are essential but not sufficient. You also need technology that can detect deepfakes in real time. The detection market is maturing rapidly, with the Biometric Update and Goode Intelligence report forecasting 9.9 billion deepfake detection checks by 2027 and nearly $5 billion in market revenue. Here are the categories of detection technology to evaluate:

| Category | How It Works | Key Vendors (2026) | Best For |
|---|---|---|---|
| Voice Deepfake Detection | Analyzes audio for synthetic speech artifacts, acoustic fingerprinting, behavioral voice biometrics, and phonetic anomalies invisible to human ears | Pindrop Pulse (99% accuracy, 2-second detection, 20M+ audio file training set; TIME Best Inventions 2025); Resemble AI Detect (90% out-of-box detection across all GenAI vendors); ValidSoft | Contact centers, treasury functions, executive call verification, IVR authentication |
| Video Deepfake Detection | Multi-model detection analyzing physiological cues (blood flow, micro-expressions), artifact patterns, temporal inconsistencies, and model-based authenticity signals | Reality Defender RealScan (multi-modal: video, audio, image; RSA Innovation Award; JPMorganChase 2025 Hall of Innovation); Intel FakeCatcher (biological signal analysis); Sensity (threat intelligence + detection) | Video conferencing verification, media content authentication, brand protection monitoring |
| Biometric Liveness Detection | Presentation attack detection (PAD) combined with injection attack detection (IAD) and image inspection to verify genuine human presence | iProov; Facia (up to 90% accuracy, diverse training datasets); Persona (multi-layered signals: device metadata, behavioral analytics, link analysis) | Customer onboarding, employee identity verification, access management, KYC/AML processes |
| Content Provenance | Cryptographic verification of media origin and chain of custody using C2PA (Coalition for Content Provenance and Authenticity) standards | Amber Authenticate (cryptographic capture verification); Adobe Content Credentials; Truepic | Publishing, marketing, investor relations, legal evidence, regulatory documentation |

A critical caveat from the research: defensive AI detection tools lose 45-50% of their effectiveness against real-world deepfakes outside controlled lab conditions. This is why layered defense (procedural controls plus detection technology plus human awareness) is essential. No single technology layer is sufficient on its own.

Human Awareness: Training That Reflects the Actual Threat

More than half of business leaders say their employees have not received any training on identifying or addressing deepfake attacks. Meanwhile, 32% of leaders have no confidence their employees could recognize deepfake fraud attempts. Your existing security awareness training probably covers phishing emails and password hygiene. It almost certainly does not cover:

  • What a voice clone sounds like (play real examples in training sessions)
  • How to respond to an urgent payment request from an executive on a video call (follow the code word / multi-channel verification protocol regardless of how convincing the caller appears)
  • Red flags specific to deepfake attacks: unexpected urgency, requests to bypass normal approval processes, first-time-ever direct contact from a senior executive to a junior finance employee, and refusal to verify through alternative channels
  • The 3-second rule: if you feel pressured to act immediately on a high-value request, that pressure itself is the red flag. Pause. Verify. Follow protocol.
  • Tabletop exercises that simulate deepfake attack scenarios against your actual processes (see the incident response section below)

Deepfake Risk KRI Dashboard

Integrate these deepfake-specific key risk indicators into your existing risk monitoring and board reporting framework:

| KRI | Measurement | Frequency | Threshold |
|---|---|---|---|
| Executive Media Exposure Score | Total minutes of publicly available audio/video per executive | Quarterly | Green: <30 min; Amber: 30-120 min; Red: >120 min |
| Deepfake Incident Count | Attempted or successful deepfake attacks detected | Monthly | Green: 0; Amber: 1-2 attempts; Red: any successful attack |
| Verification Protocol Compliance | % of high-value transactions that followed multi-channel verification | Monthly | Green: >95%; Amber: 85-95%; Red: <85% |
| Detection Tool Alert Volume | Synthetic media detections flagged by AI detection platform | Weekly | Green: baseline; Amber: 2x baseline; Red: >3x baseline |
| Employee Training Coverage | % of finance/treasury/HR staff completing deepfake awareness training | Quarterly | Green: >90%; Amber: 75-90%; Red: <75% |
| Biometric Auth Override Rate | % of identity verification requiring step-up auth due to liveness failure | Monthly | Green: <2%; Amber: 2-5%; Red: >5% |
| Vendor Master Change Anomalies | Banking detail change requests flagged as suspicious | Monthly | Green: 0-1; Amber: 2-3; Red: >3 |
| Mean Time to Deepfake Detection | Average hours from deepfake exposure to organizational identification | Per incident | Green: <4 hrs; Amber: 4-24 hrs; Red: >24 hrs |
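The traffic-light logic in the dashboard above is straightforward to automate against your monitoring feeds. A sketch; note that some KRIs worsen as values rise (incident counts, alert volume) while others worsen as they fall (compliance and training coverage), so the direction is a parameter:

```python
def kri_status(value: float, amber_at: float, red_at: float,
               higher_is_worse: bool = True) -> str:
    """Map a KRI reading to a traffic-light status. Thresholds and
    direction come from your dashboard definitions."""
    if higher_is_worse:
        if value >= red_at:
            return "red"
        return "amber" if value >= amber_at else "green"
    if value <= red_at:
        return "red"
    return "amber" if value <= amber_at else "green"

# Verification protocol compliance: green >95%, amber 85-95%, red <85%
print(kri_status(88, amber_at=95, red_at=85, higher_is_worse=False))  # amber
# Detection tool alert volume as a multiple of baseline: red >3x
print(kri_status(3.5, amber_at=2, red_at=3))                          # red
```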

Deepfake Incident Response Playbook

When a deepfake attack is detected or suspected, your team needs a documented response plan. This should be integrated into your existing incident management and business continuity planning. Here is a five-phase playbook:

Phase 1 — Detection and Initial Triage (0-30 minutes)

Confirm the suspected deepfake by independently verifying the identity of the person impersonated. Contact them directly through a pre-established secure channel. If a financial transaction was authorized, immediately instruct the bank to freeze or reverse the transfer. Every minute matters: banks can sometimes recall wire transfers within the first 24-72 hours. Preserve all evidence: call recordings, video conference recordings, email headers, system logs, and screenshots.

Phase 2 — Containment (30 minutes to 4 hours)

Alert the CISO, general counsel, CFO, and communications team. Issue an internal alert to all finance and treasury personnel about the specific attack vector used. If voice biometrics were compromised, temporarily disable voice-only authentication and require multi-factor verification. If the attack used a specific conferencing platform, review all recent high-value decisions made through that platform.

Phase 3 — Investigation (4-72 hours)

Engage forensic specialists or your deepfake detection vendor to analyze the synthetic media used in the attack. Determine: what source material the attacker used to create the deepfake, how they obtained it, what communication channels were compromised, and whether this was an isolated attack or part of a broader campaign. Submit the deepfake media to your detection platform to improve future detection accuracy.

Phase 4 — Notification and Reporting (24-72 hours)

Depending on the nature and impact: file a report with the FBI Internet Crime Complaint Center (IC3), notify your cyber insurance carrier (deepfake losses may fall under crime or cyber policies, check your policy wording carefully), assess SEC disclosure obligations if the incident is material, and consider proactive communication if the deepfake was public-facing to get ahead of the narrative.

Phase 5 — Lessons Learned and Control Improvement (1-2 weeks)

Conduct a structured debrief. Update your deepfake risk register with the actual attack data. Revise controls based on how the attack succeeded or was detected.

Run a tabletop exercise simulating the same scenario to test whether revised controls work. Update employee training materials with the real-world example. For guidance on structuring post-incident reviews, see our approach to auditing business continuity plans.

Integrating Deepfake Risk into Your Enterprise Risk Management Program

Deepfake risk is not a standalone program. It belongs inside your existing risk management framework, reported alongside operational, financial, and cyber risks. Here is how to integrate it:

  • Risk Register: Add deepfake scenarios to your operational risk register under the cyber/technology risk category. Use the template above as your starting point. Map each risk to your existing risk appetite thresholds and escalation criteria.
  • Three Lines Model: First line (business units) owns procedural controls like code word protocols and transaction verification. Second line (risk and compliance) owns the deepfake risk assessment methodology, KRI monitoring, and detection tool oversight. Third line (internal audit) tests control effectiveness through deepfake simulation exercises.
  • Board Reporting: Include deepfake risk in your quarterly risk dashboard. Do not bury it in a technology appendix. It is a financial fraud risk and a reputational risk, and boards understand those categories. Report KRIs, incident counts, and control maturity alongside your other top risks.
  • BCM Integration: Include deepfake attack scenarios in your business impact analysis. What is your RTO if voice biometric authentication is compromised? What is your communication strategy if a fabricated executive video goes viral? Build deepfake scenarios into your next BCP exercise program.
  • Cyber Risk Alignment: Deepfake risk overlaps significantly with your cybersecurity risk management program. Voice cloning attacks are social engineering. Biometric bypass is identity fraud. Content provenance is information security. Ensure your deepfake controls are coordinated with your CISO rather than running as a parallel initiative.

The Bottom Line

The deepfake threat is no longer theoretical, emerging, or future-state. It is here, it is measured in hundreds of millions of dollars in losses, and it is growing at triple-digit percentages year over year.

Every risk manager reading this should ask three questions today: Do we have deepfake scenarios in our risk register? Do our payment authorization controls survive a voice clone or video deepfake? And when was the last time we tested?

The good news is that effective deepfake defense does not require massive technology investment to start.

The procedural controls described in this guide (code word protocols, multi-channel verification, cooling-off periods, and enhanced dual authorization) can be implemented within weeks at minimal cost. Layer detection technology on top as your budget allows, and build deepfake awareness into your next security training cycle.

As the ISACA analysis of 2025 AI incidents concluded, the biggest AI failures were not technical; they were organizational: weak controls, unclear ownership, and misplaced trust. A structured deepfake risk assessment addresses all three. And now you have the framework to build one.

Strengthening your risk management program? Explore our full library at riskpublishing.com, including guides on enterprise risk management technology practices, ERM in cloud computing environments, key risk indicators and monitoring frameworks, and ISO 22301 business continuity implementation.

Sources and Further Reading