Defence Playbook • 2026

The AI Deepfake
Defence Playbook

How NZ businesses can detect, prevent, and respond to AI-generated voice cloning, video deepfakes, and social engineering attacks.

$1.1B
Lost to deepfakes in 2025
3 sec
To clone any voice
680%
Voice fraud increase YoY
Section 01
The 3-Second Problem
AI can now clone anyone's voice from a single audio clip. Here's how it works and why it matters.
3 seconds

That's all the audio an attacker needs to clone your voice at 85% accuracy. A LinkedIn video, a podcast appearance, a voicemail greeting, or even a conference recording. Source: DeepStrike 2025

How Voice Cloning Works

  1. Capture: 3 seconds of your audio
  2. Clone: a neural network models the voice
  3. Impersonate: an 85%-accurate voice clone
  4. Attack: fraudulent calls and wire transfers
70%
of people can't tell a cloned voice from the real one
DeepStrike
77%
of voice clone victims suffered financial loss
DeepStrike
1 in 4
adults have experienced an AI voice scam
McAfee 2024

Where Attackers Get Your Voice

LinkedIn videos, podcast appearances, voicemail greetings, webinars, conference recordings: any public audio of you is potential cloning material.

The CEO Fraud Epidemic

400 companies per day are targeted by CEO deepfake fraud. Attackers clone the CEO's voice, call the finance team, and request urgent wire transfers. It works because employees are trained to follow their boss's instructions quickly. Source: DeepStrike

Section 02
Real Attacks, Real Losses
These aren't hypothetical scenarios. They happened between 2024 and 2026.
Case Study 1 — Arup, 2024

$25 Million Lost in a Single Deepfake Video Call

An employee in the Hong Kong office of international engineering firm Arup was invited to a video conference with what appeared to be senior management. Every person on the screen was an AI-generated deepfake. The employee was instructed to transfer funds across multiple transactions. Total loss: US$25 million.

$25M
Total loss
1
Video call
100%
AI-generated
0
Real humans on screen
Case Study 2 — Swiss Executive, January 2026

"Several Million Francs" Lost to Cloned Business Partner Voice

A Swiss businessman received a phone call from what sounded exactly like his long-time business partner. The cloned voice referenced specific details of their ongoing deals. The businessman authorised a transfer of several million Swiss francs before realising the call was fraudulent.

~$3M+
Estimated loss
1
Phone call
AI
Voice clone used

Deepfake Fraud Losses — US Market

  • 2024: $360M
  • 2025: $1.1B
  • 2025 (Jan–Sep): $3B+
3× year-on-year growth
$500K
average loss per deepfake incident for businesses
DeepStrike
1,600%
increase in deepfake vishing attacks Q1 2025
Programs.com
$40B
projected deepfake fraud losses by 2027
DeepStrike

The Financial Services Target

Over 10% of financial institutions have already suffered a deepfake vishing attack costing more than $1 million. The average loss at financial institutions is approximately $600,000 per incident. Attackers target finance teams because they're authorised to move money quickly. Source: Group-IB

Section 03
Can You Actually Spot a Deepfake?
The honest answer: probably not. Here's why detection alone isn't enough.

Human Deepfake Detection Rates

  • 24.5% of people can detect video deepfakes
  • 30% can detect voice clones
  • 3 out of 4 people are fooled

False Confidence

  • 60% of people believe they can spot deepfakes
  • Only 24.5% actually can (for video)
  • Only 30% can detect cloned voices
  • 80% of companies have zero deepfake response protocols
  • 50%+ of business leaders report zero employee training

The Reality Check

  • Detection software exists but isn't 100% reliable
  • AI deepfakes are improving faster than detection tools
  • Prevention > detection is the correct strategy
  • Process-based defences (code words, callbacks) work regardless of quality
  • Human training + verification protocols = best defence

AI-Enhanced Phishing: The Multiplier Effect

Deepfakes are only part of the picture. AI is supercharging every form of social engineering:

The Scale of the Problem

8 million deepfake files are projected to be circulating online in 2025, with a 900% annual growth rate in deepfake video volume. ChatGPT is mentioned 550% more than any other AI model in criminal forums. Over 300,000 ChatGPT credentials have been exposed via infostealer malware. Sources: DeepStrike, CrowdStrike, IBM X-Force

Section 04
The 8-Step Defence Playbook
Process beats technology. These defences work regardless of how good deepfakes become.
1

Establish Code Words for Financial Requests

Create a shared code word between executives, finance, and anyone who can authorise payments. Change it monthly. A deepfake can clone a voice, but it can't know your secret phrase.
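In code, a code-word check can be as simple as a constant-time string comparison. A minimal Python sketch, assuming the real phrase is distributed out of band (the phrase below is a placeholder, never hard-code a live one):

```python
import hmac

# Placeholder phrase: the real code word should be shared out of band
# (in person or via a secure channel) and rotated monthly.
CURRENT_CODE_WORD = "harbour-bridge-march"

def code_word_matches(spoken: str) -> bool:
    # Constant-time comparison, so the check itself leaks nothing
    # about how close a wrong guess was.
    return hmac.compare_digest(spoken.strip().lower(), CURRENT_CODE_WORD)
```

The point is the process, not the code: a caller who cannot produce the phrase fails verification no matter how convincing the voice is.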

2

Mandatory Callback Verification

Any request to transfer money, change bank details, or share sensitive data must be verified by calling back on a pre-registered number (not the number from the email/call). No exceptions.
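The callback rule can be sketched as a lookup against a directory that IT maintains independently of any incoming message. The directory entries below are hypothetical:

```python
# Pre-registered callback directory, maintained by IT.
# Names and numbers here are hypothetical examples.
PREREGISTERED = {
    "jane.cfo@example.co.nz": "+64 21 123 4567",
}

# Request types that always trigger a callback.
SENSITIVE = {"transfer", "bank_detail_change", "data_share"}

def callback_number(requester_email: str) -> str:
    # Never use a number supplied in the email or call itself.
    # A KeyError means "no registered number": escalate, don't trust.
    return PREREGISTERED[requester_email]

def must_verify(request_type: str) -> bool:
    return request_type in SENSITIVE
```

The critical property is that the number comes from your own records, so a spoofed caller ID or an attacker-supplied "direct line" never enters the loop.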

3

Dual-Approval for Transfers Over $1,000

No single person should be able to authorise a payment alone. Two people verify, two people approve. This simple control would have prevented the Arup attack.
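A dual-approval gate is a one-condition rule. A minimal sketch (the threshold and role names are illustrative):

```python
# Payments above this amount need two distinct approvers.
THRESHOLD_NZD = 1_000

def can_release(amount_nzd: float, approvers: set) -> bool:
    # Using a set means the same person approving twice still counts once.
    needed = 2 if amount_nzd > THRESHOLD_NZD else 1
    return len(approvers) >= needed
```

A single approver, even one who sounds exactly like the CEO, is never enough above the threshold.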

4

Train Staff on Deepfake Awareness (15 Minutes)

Run a 15-minute session: show examples of AI cloned voices, demonstrate how easy it is, explain the code word system. Do this quarterly. Security training reduces phishing success by 86% after one year.

5

Create an "Urgent Request" Red Flag Policy

Attackers create urgency to bypass rational thinking. Policy: the more urgent a request, the more verification is required. "We need this done NOW" = slow down and verify harder.
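The policy inverts the attacker's script: urgency adds verification steps instead of removing them. A sketch, with illustrative levels and steps (not a standard):

```python
# Verification steps, in escalating order. Illustrative only.
STEPS = ["callback", "code_word", "second_approver", "in_person"]

def required_steps(urgency: int) -> list:
    # urgency 0 (routine) to 3 (drop-everything): a more urgent
    # request triggers MORE checks, never fewer.
    return STEPS[: min(urgency + 1, len(STEPS))]
```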

6

Limit Voice and Video Exposure

Audit what executive audio/video is publicly available. Consider whether conference recordings, podcast appearances, or LinkedIn videos need to be public. Each one is potential cloning material.

7

Never Trust Video Calls Alone for Identity

If someone joins a video call requesting money or sensitive actions, verify their identity through a separate channel. A quick text to their known mobile: "Are you really on this call?"

8

Run Deepfake Phishing Simulations

Test your team with controlled deepfake scenarios. WeHack offers deepfake social engineering simulations to test whether your staff would fall for a cloned voice requesting a transfer.

The Golden Rule

Never rely solely on voice or video to authenticate a person's identity. Trust the process, not the voice. A well-designed verification procedure protects you even when the deepfake is indistinguishable from reality.

Section 05
Your Deepfake Defence Checklist
Print this page. Check off what you've done. Fix what you haven't.
Code word system established between executives and finance team. Changed monthly.
Callback verification policy in writing. All financial requests verified via pre-registered number.
Dual-approval required for all payments over $1,000 NZD.
Staff trained on deepfake awareness. Can demonstrate AI voice cloning. Understand red flags.
Urgent request policy documented. More urgency = more verification, not less.
Executive voice/video exposure audited. Unnecessary public audio removed or restricted.
Video call identity verification process in place. Second-channel confirmation for sensitive requests.
Deepfake phishing simulation conducted. Team tested with realistic scenarios.
Bank detail change process requires in-person or multi-channel verification. Not just email/phone.
Incident response plan updated to include deepfake scenarios. Team knows the escalation path.
Email security (SPF, DKIM, DMARC) configured to prevent domain spoofing alongside deepfake attacks.
MFA enabled on all accounts. A deepfake can fool a person, but it can't reproduce a hardware key or authenticator code.
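The email-security item above comes down to three DNS TXT records. An illustrative set, where the domain, DKIM selector, and key are placeholders:

```dns
; Illustrative TXT records; domain, selector, and key are placeholders.
example.co.nz.                       TXT "v=spf1 include:_spf.google.com -all"
selector1._domainkey.example.co.nz.  TXT "v=DKIM1; k=rsa; p=<your-public-key>"
_dmarc.example.co.nz.                TXT "v=DMARC1; p=reject; rua=mailto:dmarc@example.co.nz"
```

`p=reject` tells receiving servers to drop mail that fails SPF/DKIM alignment; it's common to start with `p=none` to monitor reports before enforcing.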

If You Can Check 10+

You're ahead of 90% of NZ businesses. A penetration test will validate your controls and find any remaining gaps.

If You Can Check Fewer Than 5

Your business is vulnerable to deepfake social engineering. The good news: most of these items cost nothing to implement today.

NZ Context: It's Already Happening Here

New Zealand lost $7.8 million to cybercrime in Q1 2025 alone. The NCSC handles approximately one nationally significant cyber incident per day. While deepfake-specific NZ statistics are still emerging, the global trends are clear: NZ businesses face the same threats as every other country, often with fewer resources to defend against them.

25% of NZ's nationally significant incidents were linked to state-sponsored actors who increasingly use AI-powered social engineering as their primary access method.

What's Next

Don't Wait for the Call

The businesses that survive deepfake attacks are the ones that prepared before the call came. Your defence starts with a conversation.

Test Your Team Before Attackers Do

Start with our free security assessment — 20 questions across 6 domains, with a personalised radar chart and risk exposure in NZD. Then let's talk about what to fix first.

Take Your Free Security Score →

Every business has different security challenges. Book a free 15-minute chat and we'll recommend the right approach — no obligation.

Mustafa Demirsoy
Founder & Hacker, WeHack

wehack.co.nz  |  info@wehack.co.nz  |  022 091 7242
148 Durham Street, Tauranga 3110