In early 2024, a finance employee at a multinational firm in Hong Kong received a video call from what appeared to be the company's CFO and several colleagues, all of whom were synthetic. The deepfakes instructed him to transfer US$25 million. He did. This was not the first such case, and it will not be the last. Generative AI has fundamentally changed the economics of social engineering: attacks that previously required days of preparation can now be assembled in hours, at a quality that bypasses the heuristics most people use to detect fraud.
The challenge for security teams is that user awareness training, long the bedrock of social engineering defense, was built for a threat landscape that no longer exists. Telling employees to look for spelling mistakes in phishing emails is advice from 2015. Today's AI-written phishing email has no spelling mistakes. It references your actual projects, quotes your colleagues, and arrives from a spoofed address indistinguishable from the real one.
What GenAI Has Changed — and What It Has Not
GenAI has changed scale, quality, and personalization. Attackers can now generate thousands of individually personalized phishing emails per hour, each drawing on data scraped from LinkedIn, company websites, social media, and previous data breaches. Voice cloning tools can replicate a CEO's voice from 30 seconds of audio pulled from a podcast or earnings call. Deepfake video has crossed from 'obviously fake' to 'plausible under time pressure.'
What has not changed is the fundamental social engineering playbook: create urgency, bypass critical thinking, exploit authority, make it hard to verify. GenAI makes the delivery mechanism more convincing. The underlying manipulation dynamic is the same as it was in the 1990s. That matters for how you train people.
The Attack Vectors You Need to Brief Your Teams On
- AI-generated spear phishing — highly personalized emails referencing real colleagues, projects, and terminology
- Voice cloning fraud — phone calls from synthesized voices of executives or IT support requesting credentials or transfers
- Deepfake video calls — video meetings using real-time face and voice synthesis of known individuals
- Synthetic identity fraud — AI-generated personas with fabricated histories used to gain access or trust
- AI-powered chatbot impersonation — fake IT helpdesk or HR chatbots harvesting credentials
- Context-aware vishing — phone calls in which attackers use AI to research the target in real time, answering questions convincingly
One thing that still works: out-of-band verification. If you receive an unexpected request for money, credentials, or access — even from someone you recognize — verify through a separate channel you already trust. Call back on a number you have stored, not one given in the message. This is low-tech and it genuinely stops most attacks. The $25 million deepfake fraud succeeded partly because the target did not call back to verify.
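To make the rule concrete, here is a minimal sketch of how an internal tool or runbook might encode it. Everything here is illustrative: the action names, the dollar threshold, and the directory of stored callback numbers are assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass

# Illustrative categories and threshold; tune these to your own risk policy.
SENSITIVE_ACTIONS = {"wire_transfer", "credential_reset", "access_grant"}
CALLBACK_DIRECTORY = {
    # Numbers sourced from HR/identity records in advance,
    # never from the message that triggered the request.
    "cfo@example.com": "+1-555-0100",
    "it-helpdesk@example.com": "+1-555-0101",
}

@dataclass
class Request:
    requester: str       # claimed identity, e.g. an email address
    action: str          # what the requester wants done
    channel: str         # "email", "phone", "video", "chat"
    amount_usd: float = 0.0

def requires_out_of_band_verification(req: Request) -> bool:
    """Any sensitive or high-value request must be verified on a
    separate, pre-trusted channel before anyone acts on it."""
    return req.action in SENSITIVE_ACTIONS or req.amount_usd >= 1_000

def callback_number(req: Request) -> str | None:
    """Return the stored number for the claimed requester. A number
    supplied in the request itself may belong to the attacker."""
    return CALLBACK_DIRECTORY.get(req.requester)

req = Request(requester="cfo@example.com", action="wire_transfer",
              channel="video", amount_usd=25_000_000)
if requires_out_of_band_verification(req):
    print(f"Hold the request; verify via callback to {callback_number(req)}")
```

The design choice that matters is the directory: the callback number comes from records that existed before the request arrived, so a convincing fake on one channel cannot also control the verification channel.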
What Actually Helps: Controls That Work in 2025
Update your awareness training to explicitly address GenAI threats: show examples of AI-generated phishing and voice clones so employees know what to look for. More importantly, train on the verification behavior you want, not just threat recognition. People will not always identify a sophisticated fake correctly; they can always be trained to verify before acting.
On the technical side: enforce multi-factor authentication that is not SMS-based (SIM swapping remains trivial). Implement email authentication standards (DMARC, DKIM, SPF) properly; most organizations have them partially configured but not enforced, and a quick way to check your own domain is sketched below. Use anomaly detection on financial transaction patterns. Establish explicit internal protocols for any request involving money movement or credential changes: a second approver, a callback to a known number, a time delay. These process controls are boring, and they are effective.
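For the DMARC point specifically, here is a small self-audit sketch. It assumes the dnspython package, and the domain is a placeholder. A policy of p=none means DMARC is in monitor-only mode: reports are collected, but spoofed mail is not quarantined or rejected.

```python
import dns.resolver  # pip install dnspython

def dmarc_policy(domain: str) -> str:
    """Return the domain's DMARC policy tag ('none', 'quarantine',
    'reject'), or 'missing' if no valid DMARC record is published."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return "missing"
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            # Parse "v=DMARC1; p=reject; rua=..." into tag/value pairs.
            tags = dict(t.strip().split("=", 1)
                        for t in record.split(";") if "=" in t)
            return tags.get("p", "none").strip()
    return "missing"

policy = dmarc_policy("example.com")
if policy in ("missing", "none"):
    print(f"DMARC not enforced (policy: {policy}); spoofed mail can still land in inboxes")
```

If the check reports missing or none, attackers can spoof your domain in phishing mail even though DMARC is "configured" on paper; enforcement means moving to p=quarantine or p=reject once legitimate mail flows are verified.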
The hard truth is that no amount of technical control fully addresses a threat that operates at the human layer. Social engineering works because it exploits trust, authority, and time pressure — things that are features of human organizations, not bugs. The goal is not to make attacks impossible but to raise the cost and effort required so high that attackers move to easier targets.