AI Deepfake Fraud: How Fake Video Calls Are Stealing Millions from Businesses

Reza

A finance manager at Arup, a global engineering firm, joined what looked like a routine video conference in early 2024. The CFO was there. Several colleagues from the London office were on camera. Everyone looked right. Everyone sounded right. The CFO explained they needed to execute wire transfers for a confidential acquisition - large amounts, but not unusual for a company that size.

The finance manager authorized transfers totaling $25 million.

Every other person on that video call was AI-generated. The CFO was fake. The colleagues were fake. The whole meeting was a fabrication built from publicly available photos, LinkedIn profiles, and conference videos.

This isn't a story about a sophisticated nation-state attack or a vulnerability in some obscure enterprise software. It's about tools that cost the attackers a few hundred dollars in commercial AI subscriptions - and a technique that's now being turned on businesses of every size.

What AI Deepfake Fraud Actually Looks Like in 2026

Most people picture deepfakes as manipulated celebrity videos or political disinformation. The reality hitting businesses right now is different: targeted, business-specific attacks designed to trick employees into transferring money, sharing credentials, or bypassing normal approval processes.

There are three main variations showing up in 2026:

Voice Cloning Calls

An attacker records a few minutes of audio - from a YouTube interview, a podcast, or even a recorded earnings call - and feeds it into a voice cloning tool. The output is a synthetic voice that sounds nearly identical to the real person. They then call an employee, impersonating the CEO or a senior manager, and request an urgent wire transfer or a password reset.

According to a recent Hiya report, one in four Americans say they've received a deepfake voice call in the past 12 months. In a business context, that number will keep climbing as the tools get cheaper and easier to use.

Deepfake Video Calls

This is the category the Arup attack falls into. Real-time video synthesis has gotten good enough that a fake "executive" can appear on a video call, respond naturally to questions, and maintain the illusion long enough to get what the attacker wants. The CFO blinks. The background shows a real-looking office. The voice matches exactly. There's no obvious tell.

AI-Generated Phishing Emails

This is the most common and most scalable version. Large language models now write phishing emails without the grammatical errors and awkward phrasing that used to flag fraud. Attackers research targets on LinkedIn, learn their internal terminology, reference real projects, and craft messages that read exactly like internal communications.

The World Economic Forum's Global Cybersecurity Outlook 2026 found that AI vulnerabilities now concern 87% of cybersecurity professionals worldwide - and AI-enabled phishing is a top driver of that concern.

Why Small and Mid-Size Businesses Are Getting Hit

Arup is a large company with a full security team. If the playbook works there, it works against a 50-person manufacturing company or a regional accounting firm. Smaller businesses actually offer attackers several advantages:

  • Fewer approval layers - one phone call from the "CEO" to an office manager may be all it takes
  • Less security training - employees at smaller companies rarely practice identifying AI-generated content
  • Leaner finance processes - urgent wire requests don't always trigger the same verification steps as they would at a large enterprise
  • More public information than you'd expect - a business owner's LinkedIn, local news coverage, and a company website provide enough voice and video samples to clone them

The Auxis 2026 Cybersecurity Trends report notes that one in three small and mid-size businesses experienced a cyberattack in the preceding year, with costs running as high as $7 million per incident. AI-enabled social engineering is becoming a primary driver of those losses.

How These Attacks Are Built - Step by Step

Understanding how attackers prepare may be the most useful part of this article, because the reconnaissance phase is where prevention starts.

Phase 1: Intelligence Gathering (All Public Sources)

Attackers spend days researching before they make a move. They pull from:

  • LinkedIn - organizational hierarchy, who reports to whom, recent job changes
  • Company website - executive bios, headshots, videos
  • YouTube and podcast appearances - voice samples and speaking patterns
  • Local business news, press releases, and Chamber of Commerce profiles
  • Social media - personal accounts that reveal communication style

None of this requires any hacking. It's all public. A determined attacker can build a detailed dossier on your leadership team in a few days - often less.

Phase 2: Deepfake Generation

With a handful of photos and a few minutes of audio, commercial tools can produce a convincing synthetic voice and video. The cost is low. The technical skill required is moderate - not expert-level. What would have taken a nation-state team six months in 2020 now takes an organized criminal group a few days.

Phase 3: Social Engineering the Target

The attackers don't lead with the wire transfer request. They establish normalcy first - maybe an initial call about a routine business matter, or a few emails referencing real internal projects. They build credibility before making the unusual request. By the time the "CEO" asks for the wire transfer, the employee has already had two or three normal-seeming interactions with what they believe is a real person.

What You Can Actually Do About It

This is where most coverage falls flat - long on explaining the threat, short on practical steps. Here's what actually works.

Establish a Verbal Code Word for Financial Requests

The simplest and most effective control: pick a word or phrase that only your leadership team and finance staff know. Any request for a wire transfer, payroll change, or significant expenditure - regardless of how it arrives - requires the requestor to provide the code word during a separate, confirmed phone call.

This takes about five minutes to set up and defeats most deepfake video and voice call attacks outright. The attacker can clone your CEO's face and voice, but they can't know a code word that was never made public.

Build a Verification Protocol for Unusual Requests

Define "unusual" in writing: any wire transfer over a set dollar amount, any changes to payment methods for vendors, any request to bypass normal approval steps. When a request meets that threshold, the receiving employee must verify it through a completely separate channel - calling a known phone number (not one provided in the request) or walking down the hall.

This sounds obvious, but it needs to be a written policy, not just common sense. Common sense fails under pressure. A written policy gives employees cover to say "I have to follow our verification procedure" even when an "executive" is pushing for speed.
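To make that concrete, here's an illustrative core for such a policy. The wording and the $10,000 threshold are placeholders, not a template to copy verbatim - set them to fit your business:

```
Any wire transfer over $10,000, any change to a vendor's payment method,
and any request to bypass a normal approval step is an "unusual request."

An unusual request must be verified by calling the requestor at a number
already stored in the company directory - never at a number supplied in
the request itself - before any money moves. No exceptions, including
requests that appear to come from the CEO or CFO.
```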

Train Employees on What AI Fraud Looks Like

General security awareness training is good. Training specifically focused on AI-generated content is better. Your team should know:

  • Video calls can be faked, even in real time
  • Voice calls claiming to be from executives require callback verification
  • Urgency and secrecy are the two biggest red flags - legitimate executives don't usually ask employees to keep financial transfers confidential from the rest of the team
  • Email that references real internal details (a project name, a colleague's nickname) is not automatically trustworthy

Our security awareness training program covers AI-specific attack scenarios alongside traditional phishing - something that wasn't part of most training programs even two years ago.

Tighten Your Financial Controls

Most businesses have approval processes. The issue is that those processes can be pressured or bypassed when someone believes they're getting a direct order from the CEO. A few structural fixes help:

  • Require dual authorization for any transfer over a set threshold (two separate approvals, not just two steps by one person)
  • Set a mandatory waiting period for new payees - no same-day wires to a vendor who's never been paid before
  • Require all payment method change requests to come through the actual vendor's known contact, never through an inbound email or call
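If your payment workflow is automated, these rules are simple enough to encode directly. Here's a minimal sketch - the function name, thresholds, and waiting period are illustrative assumptions, not a reference implementation:

```python
from datetime import date, timedelta

# Illustrative values - tune these to your business (hypothetical numbers)
DUAL_AUTH_THRESHOLD = 10_000        # dollars; above this, two approvers required
NEW_PAYEE_WAIT = timedelta(days=3)  # no same-day wires to a first-time payee

def transfer_allowed(amount, approver_ids, payee_first_added, today=None):
    """Return (allowed, reason) for a proposed wire transfer.

    approver_ids: IDs of the employees who approved the request.
    payee_first_added: date the payee was first entered in the system.
    """
    today = today or date.today()

    # Dual authorization: two *different* people, not two steps by one person
    if amount > DUAL_AUTH_THRESHOLD and len(set(approver_ids)) < 2:
        return False, "needs a second, independent approver"

    # Mandatory waiting period before paying a brand-new payee
    if today - payee_first_added < NEW_PAYEE_WAIT:
        return False, "new payee is still inside the waiting period"

    return True, "ok"

# A $50,000 wire to a vendor added yesterday, approved only by the "CFO":
print(transfer_allowed(50_000, ["cfo"], date.today() - timedelta(days=1)))
# -> (False, 'needs a second, independent approver')
```

The point of encoding the rules isn't automation for its own sake - it's that a system enforcing the threshold can't be talked out of it the way a person under pressure can.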

Review What's Publicly Visible About Your Leadership

You can't make executives invisible, but you can be deliberate about what's available. Check:

  • Do executive LinkedIn profiles include personal phone numbers or email addresses? Remove them.
  • Are there long video recordings of your leadership team available publicly? Be aware that these double as training data for voice and video cloning.
  • Does your website include individual cell phones or direct lines for senior staff? Consider routing all external inquiries through a main number.

Consider Email Authentication and Filtering

On the technical side, properly configured email authentication - SPF, DKIM, and DMARC - stops attackers from sending email that appears to come from your own domain, at least at any mail provider that enforces those checks. Many businesses have these partially configured but not fully enforced.

Your email security configuration should include DMARC in "reject" mode, which blocks spoofed emails from reaching inboxes at all. This doesn't stop AI-generated phishing from external domains, but it closes a common attack vector.
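For reference, here's roughly what the relevant DNS records look like. `example.com` and the `include:` value are placeholders - use your own domain and your mail provider's documented values (DKIM records are provider-specific, so they're omitted here):

```
; SPF: list the servers allowed to send mail for your domain; "-all" rejects the rest
example.com.        IN TXT "v=spf1 include:_spf.google.com -all"

; DMARC in enforcement ("reject") mode, with aggregate reports mailed back to you
_dmarc.example.com. IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

A common rollout path is to start with `p=none` to collect reports, move to `p=quarantine`, and switch to `p=reject` once you've confirmed your legitimate mail passes.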

What to Look for in Your Existing Policies

Most businesses have policies written before deepfake fraud was a practical threat. Check yours against these questions:

  • Does your wire transfer approval process specify how the request must be verified - or does it just say "get approval from the CFO"?
  • Does your acceptable use policy cover what employees should do if they receive a suspicious call from someone claiming to be a company executive?
  • Does your vendor payment policy include controls for new payees and payment method changes?
  • Has your team practiced any of these scenarios, or just read about them in a document?

Written policies that nobody's practiced are weak. Running a tabletop exercise - even just talking through "what would we do if the CFO called and asked for a $50,000 wire transfer" - builds the muscle memory that protects you when it actually happens.

The Bigger Picture: Verifying Identity Is Getting Harder

The Arup case revealed something important: "I saw them on video" is no longer sufficient verification. Face and voice used to be things only the real person possessed. Now they can be generated from public information in hours.

This shifts identity verification toward something harder to fake: shared secrets (code words), out-of-band confirmation (calling a known number you already have stored), and documented process (requiring specific steps that can't be skipped based on an executive's verbal request).

Businesses that rely on "we know what our CEO sounds like" for verification are operating with a method that worked fine in 2022 and fails in 2026. The update is procedural, not technical - and it costs almost nothing to implement.

If you want to dig into your current vulnerability management posture more broadly, our vulnerability management program covers both technical gaps and process-level risks like these.

FAQ: AI Deepfake Fraud and Business Protection

How realistic are deepfake voice calls in 2026?

Very realistic. Current voice cloning tools can produce synthetic audio that most people cannot distinguish from the real person, especially over a phone call where audio quality is compressed. The tell-tale "robotic" artifacts from earlier generations of the technology are largely gone. Your team should assume that a voice call alone is not sufficient verification for any high-stakes financial request.

What's the simplest protection a small business can put in place today?

A code word system for financial requests, combined with a callback verification process. If anyone - regardless of who they appear to be - requests a wire transfer or payroll change, your finance staff calls back a pre-stored number and asks for the code word. The attacker cannot provide it. This one control defeats most deepfake phone and video call attacks.

Does AI-powered email filtering catch AI-generated phishing?

Partially. Modern email security tools use behavioral signals - unusual sending patterns, link analysis, header analysis - that catch some AI-generated phishing. But they're not perfect, especially when the phishing email comes from a legitimate external account that's been compromised. Layered controls (authentication, filtering, and employee training) together are more reliable than any single tool.

How did attackers build deepfakes of the Arup executives?

From entirely public sources: LinkedIn profiles, corporate website bios, conference presentations, and other publicly available video and audio. No hacking was required. The lesson for businesses is that the attack preparation phase is invisible - by the time they call, they've been researching your team for days.

Should we stop using video calls for internal meetings?

No - that's not practical and the risk doesn't warrant it. The issue is using video calls as the only verification method for sensitive financial or credential-related requests. Video calls are fine for their normal purpose. Add a separate verification step for any action that would normally require seeing someone in person to authorize.

---

If you're not sure how your current policies and technical controls would hold up against this type of attack, Burgi Technologies works with Orange County businesses to review and close these gaps. Reach out through our contact page or call us at (949) 381-1010 - we're happy to take a look.
