Phishing & Social Engineering
What Is CEO Fraud?
CEO fraud is a targeted variant of Business Email Compromise in which adversaries impersonate a company's CEO or other C-suite executive to manipulate lower-level employees—typically in finance, HR, or accounts payable—into making unauthorized wire transfers, sharing sensitive data, or executing other fraudulent actions. The attack exploits organizational power dynamics and the natural tendency of subordinates to obey perceived authority, bypassing normal verification procedures through manufactured urgency and psychological pressure. The FBI classifies CEO fraud as a subset of BEC, defining the broader category as attacks conducted "by compromising legitimate business email accounts through social engineering or computer intrusion techniques" against "businesses working with foreign suppliers and/or businesses that regularly perform wire transfer payments."
How does CEO fraud work?
CEO fraud operates through multiple attack vectors that have evolved significantly with the emergence of artificial intelligence. Traditional CEO fraud follows an intelligence-driven methodology: the attacker researches the target organization to identify the CEO and their communication style, gathers information from LinkedIn profiles, company websites, SEC filings, earnings calls, and social media. The attacker then either compromises the CEO's actual email account through credential phishing or credential stuffing, or creates a look-alike domain (e.g., "companyname.net" instead of "companyname.com") and may spoof the display name or manipulate email header fields to appear legitimate.
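The look-alike domain trick described above can be screened for programmatically. The following is a minimal, illustrative Python sketch, not a production detector: the domain names and the edit-distance threshold of 2 are assumptions chosen for the example.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def is_lookalike(candidate: str, real: str, max_distance: int = 2) -> bool:
    """Flag a sender domain that closely mimics the real company domain."""
    candidate, real = candidate.lower(), real.lower()
    if candidate == real:
        return False  # the genuine domain is not a look-alike
    cand_sld = candidate.rsplit(".", 1)[0]
    real_sld = real.rsplit(".", 1)[0]
    # Same name under a swapped TLD: "companyname.net" vs "companyname.com"
    if cand_sld == real_sld:
        return True
    # Near-identical name: e.g. "cornpanyname.com" ("rn" mimicking "m")
    return levenshtein(cand_sld, real_sld) <= max_distance

print(is_lookalike("companyname.net", "companyname.com"))   # True
print(is_lookalike("cornpanyname.com", "companyname.com"))  # True
```

In practice this kind of check would run against newly registered domains or inbound sender domains, alongside homoglyph detection, rather than in isolation.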
The attacker then identifies a target employee with financial authority—typically in accounts payable or finance—who processes wire transfers without requiring direct executive verification. A message is crafted impersonating the CEO, emphasizing urgency and confidentiality: "urgent wire transfer," "new account information," or "code to admin expenses." Messages are often sent before weekends, holidays, or during executive travel when normal verification procedures are disrupted. According to KnowBe4, victims in traditional CEO fraud are pressured to complete transactions quickly, with the attacker communicating that the matter is sensitive and must not be discussed with other employees. Once the victim executes the fraudulent transaction, funds are rapidly moved through intermediary accounts and converted to cryptocurrency or withdrawn. The FBI IC3 found that funds are primarily routed to China and Hong Kong.
AI-enhanced CEO fraud represents a significant escalation. According to Brightside AI, commercial voice synthesis tools can clone voices from just 3 seconds of clear audio, with services like ElevenLabs and Resemble AI offering real-time voice cloning. An attacker collects audio and video samples of the target executive from public sources—YouTube conference recordings, webinars, earnings calls, investor presentations, social media videos—and feeds these samples to AI speech synthesis models. The attacker then combines email with vishing (voice phishing) or video conference calls using the cloned voice or video. The victim receives an email warning of an urgent matter, followed by a phone call from the "CEO" confirming the request. Advanced variants use real-time voice synthesis, enabling natural back-and-forth conversation with the victim as the attacker responds immediately to unexpected questions.
According to Group-IB's 2025 research, multiple real-world incidents have deployed deepfake attacks. In 2019, a UK energy firm lost £243,000 (approximately $300,000) in the first documented AI voice deepfake CEO fraud incident. In 2024, a finance worker at Arup (Hong Kong) was tricked into wiring $25.6 million after attending a video conference call with the company's CFO and colleagues—later determined to be deepfaked participants. The psychological impact of deepfake attacks is significant: a voice that sounds exactly like the CEO or video showing the CFO's face creates overwhelming authority pressure that few employees feel empowered to question.
How does CEO fraud differ from whaling and general Business Email Compromise?
| Dimension | CEO Fraud | Whaling | General BEC | Spear Phishing |
|---|---|---|---|---|
| Who is impersonated | CEO or C-suite executive | N/A (executives are the targets, not impersonators) | Any trusted party (vendor, lawyer, colleague) | Any trusted individual |
| Who is targeted as victim | Lower-level employees (finance, HR, accounts payable) | C-suite executives themselves | Any employee with financial/data authority | Specific, researched individuals |
| Primary goal | Unauthorized wire transfer, data theft through authority pressure | Credential theft, data exfiltration, account compromise | Wire fraud, invoice fraud, payroll diversion, data theft | Credential theft, malware, data theft |
| Payload type | No malware (pure social engineering) | May include malicious links/attachments | No malware (social engineering only) | Often includes malicious links/attachments |
| Authority exploitation | Extreme (CEO's authority pressures subordinates) | Moderate (impersonates trusted contacts) | Variable | Varies by target |
| AI/deepfake risk (2025) | Very high (executives are public figures; voices/videos publicly available) | High (executives are public figures) | Moderate | Lower |
| Average loss per incident | Very high (~$500,000 for deepfake incidents; $100M+ for major cases) | Very high | ~$50,000 median (FBI IC3) | Varies widely |
| Reporting/identification | Low (victims often embarrassed; hard to distinguish from legitimate requests) | Low | Moderate | Moderate |
| Ideal for | Organizations lacking dual authorization or callback verification; high-value targets | Targeting executive-level access and data | Organizations with weak financial controls | Any organization with security-aware targets |
Key differentiator: CEO fraud specifically exploits the power dynamic between executives and employees. Abnormal AI's 2025 research found that 89% of BEC attacks involve impersonation of authority figures, making CEO fraud the dominant BEC subcategory. Neither approach is universally more effective; each reflects an attacker's strategy given the target's organizational structure and victim psychology. Whaling targets security-conscious executives directly, while CEO fraud targets less security-trained finance staff. Spear phishing often uses technical payloads, while CEO fraud relies entirely on psychological manipulation.
Why has CEO fraud gained traction?
CEO fraud has become increasingly prevalent due to a combination of organizational vulnerability, technological enablement, and economic incentive. According to Trustpair's 2025 research, 90% of U.S. companies experienced cyber fraud in 2024 (up from 79% in 2023), with imposter/BEC email scams representing 63% of fraud tactics—a 103% year-over-year increase. Within BEC, 44% of fraud cases involve CEO or CFO impersonation. FBI IC3 data shows cumulative global BEC losses (which includes CEO fraud as the highest-value subset) reached $55.5 billion from 2013-2023, with 2024 alone accounting for $2.77 billion. However, CEO fraud represents a disproportionate share of those losses; notable single incidents have exceeded $100 million in losses.
The emergence of AI deepfakes has dramatically accelerated CEO fraud effectiveness. Businesses lost an average of nearly $500,000 per deepfake-related incident in 2024, with large enterprise losses reaching $680,000. Global losses from deepfake-enabled fraud exceeded $200 million in Q1 2025 alone. By Q2 2024, approximately 40% of BEC phishing emails were flagged as AI-generated content. Most critically, AI-generated CEO fraud emails achieve 54% click-through rates versus 12% for human-written CEO fraud emails—a 4.5x improvement for attackers. Trustpair research indicates a 118% year-over-year increase in AI-driven fraud tactics such as deepfakes. According to Hoxhunt's 2026 report, BEC grew from 1% of all cyberattacks in 2022 to 18.6% following generative AI proliferation. Deloitte projects AI-facilitated U.S. fraud losses will climb from $12.3 billion in 2023 to $40 billion by 2027, representing 32% compound annual growth.
However, critical limitations constrain CEO fraud's ultimate scale. The technique requires extensive reconnaissance, making it labor-intensive and not scalable to mass campaigns. Additionally, wire transfers create audit trails, and the FBI IC3 Recovery Asset Team achieved a 66% success rate freezing fraudulent transfers in 2024, demonstrating recoverability through rapid intervention. Voice deepfakes degrade in quality against non-native accents or disguised voices, and reliable real-time deepfake detection tools are still emerging. Most organizations still lack the financial controls (dual authorization, callback verification) that would defeat CEO fraud entirely, indicating vulnerability exists primarily due to procedural gaps rather than technical inevitability.
What are the limitations of CEO fraud?
CEO fraud faces significant operational and technical constraints that limit its effectiveness and enable defenses. First, the technique is fundamentally vulnerable to out-of-band verification—a single phone call or in-person confirmation to the real executive can immediately defeat the attack. Organizations with mandatory callback protocols (using independently verified contact numbers from internal records, not from the suspicious email) are significantly harder to defraud. Applied consistently, this single control defeats most CEO fraud attempts against routine financial transactions.
Second, CEO fraud requires extensive reconnaissance and customization. Attackers must research organizational structure, identify high-value targets, learn communication patterns, and craft plausible pretexts. Each attack is labor-intensive and not scalable to mass campaigns, limiting attack volume to dozens or hundreds per attacker rather than thousands. Third, wire transfers create audit trails and can be partially recovered if detected rapidly. The FBI's 66% success rate freezing fraudulent BEC transfers demonstrates recoverability; organizations that contact their bank within 24-48 hours of detecting fraud have the highest recovery rates.
Fourth, executive communication patterns are distinctive. Employees who work closely with their CEO often recognize anomalies in writing style, vocabulary, formatting, or request context that deviate from normal patterns. Fifth, AI deepfake voice quality degrades with non-native accents, disguised voices, or insufficient training data samples. Deepfakes require high-quality audio/video samples to be effective, and some executives may have limited public media exposure. Sixth, look-alike domain registration creates traceable records that law enforcement can investigate and subpoena. Seventh, emerging deepfake detection tools using audio forensics and video analysis are improving, though no tool is yet highly reliable.
Defense gaps persist: only 47% of companies implement dual approval processes (meaning 53% lack secondary authorization for wire transfers); only 40% of finance teams use segregation of duties; most organizations still rely on verification methods that deepfakes have rendered obsolete (voice recognition fails against voice clones); and employees feel psychological pressure to obey leadership, making them reluctant to question or verify requests from the CEO. Additionally, attackers deliberately time attacks before weekends, holidays, or during executive travel when normal verification procedures are disrupted.
How can organizations defend against CEO fraud?
Organizations should implement a defense strategy combining email authentication, financial controls, behavioral training, and incident response procedures. Technically, deploy DMARC at p=reject to prevent domain spoofing, and monitor for look-alike domain registrations. Enforce Multi-Factor Authentication (MFA) on all email accounts—especially executive accounts—to prevent account compromise that enables impersonation from legitimate addresses. Deploy email security gateways configured to flag external emails claiming to be from internal executives, and implement AI/ML-based anomaly detection (such as Abnormal Security, Proofpoint, or Microsoft Defender) that can identify unusual communication patterns, unexpected recipient lists, or behavioral deviations from baseline sender behavior.
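To make the DMARC advice concrete, the records below sketch what publishing SPF and a p=reject DMARC policy might look like as DNS TXT entries. These are hypothetical zone-file fragments: the domain, the mail provider's SPF include, and the reporting mailbox are placeholders, not a recommended configuration.

```
; Illustrative DNS TXT records (hypothetical domain, provider, and mailbox)
companyname.com.         IN TXT "v=spf1 include:_spf.example-mailhost.com -all"
_dmarc.companyname.com.  IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@companyname.com; adkim=s; aspf=s"
```

The `rua` address receives aggregate reports, which is how defenders typically monitor for spoofing attempts before and after tightening the policy to `p=reject`.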
From a process perspective, implement mandatory out-of-band verification as the most effective control: require verification of all wire transfer requests, payment changes, and sensitive data requests via a separate communication channel (phone call to a known number, NOT a number from the suspicious email). Establish dual authorization requirements for wire transfers and ACH changes above a threshold, requiring two-person approval. According to Trustpair 2025 data, only 56% of finance teams have implemented double-checking and 47% use dual approval, indicating significant opportunity for improvement. Implement segregation of duties so no single employee can authorize large transactions independently. Verify invoice changes through secondary contact channels using established vendor contact information.
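The dual-authorization and segregation-of-duties rules above can be sketched as a small approval model. This is a minimal illustration, not a real payments workflow: the $10,000 threshold and the role names are assumptions chosen for the example.

```python
from dataclasses import dataclass, field

DUAL_APPROVAL_THRESHOLD = 10_000  # assumed policy threshold in USD

@dataclass
class WireRequest:
    amount: float
    requester: str
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # Segregation of duties: the requester may never approve their own wire
        if approver == self.requester:
            raise ValueError("requester cannot approve their own transfer")
        self.approvals.add(approver)

    def may_execute(self) -> bool:
        # Two independent approvals required at or above the threshold
        required = 2 if self.amount >= DUAL_APPROVAL_THRESHOLD else 1
        return len(self.approvals) >= required

req = WireRequest(amount=250_000, requester="ap.clerk")
req.approve("finance.manager")
print(req.may_execute())  # False: a second, independent approval is required
req.approve("controller")
print(req.may_execute())  # True
```

The point of the model is that no message from a "CEO", however convincing, can satisfy the second approval on its own.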
Block automatic email forwarding to external domains, which attackers use to exfiltrate data and monitor correspondence. Monitor for mailbox rule changes and anomalous login behavior using SIEM or security analytics platforms. Conduct regular security awareness training focused on CEO fraud-specific scenarios—urgency-driven social engineering, impersonation techniques, authority manipulation, and callback verification procedures. Use CEO fraud simulations to reinforce learning. Train employees that even emails from the CEO warrant callback verification for sensitive transactions.
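One gateway heuristic mentioned earlier—flagging external mail whose display name matches an internal executive—can be sketched in a few lines. The executive roster and internal domain below are assumptions for illustration; a real gateway would also check authentication results and known-contact history.

```python
import email
from email.utils import parseaddr

EXEC_NAMES = {"jane doe", "john smith"}   # assumed executive roster
INTERNAL_DOMAIN = "companyname.com"       # assumed internal domain

def flags_exec_impersonation(raw_message: str) -> bool:
    """Flag messages whose From display name matches an executive
    while the actual sending address is outside the internal domain."""
    msg = email.message_from_string(raw_message)
    display, addr = parseaddr(msg.get("From", ""))
    domain = addr.rsplit("@", 1)[-1].lower()
    return display.lower() in EXEC_NAMES and domain != INTERNAL_DOMAIN

raw = ("From: Jane Doe <jane.doe@companyname.net>\n"
       "Subject: Urgent request\n\nPlease call me before noon.")
print(flags_exec_impersonation(raw))  # True: exec name, external domain
```

A flagged message would typically get an external-sender banner or be quarantined for review rather than silently dropped.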
For deepfake defense specifically, establish code words or passphrases for verifying high-value transactions that cannot be spoofed by AI voice synthesis. For example, a senior executive might establish a pre-agreed phrase that they would use in verbal confirmations, providing a secondary verification mechanism that AI cannot reliably replicate without prior training. Reduce public audio/video content of executives (limit conference recording distribution, restrict webinar recordings) to reduce the training data available for voice/video cloning. Implement incident response procedures: upon suspecting CEO fraud, immediately contact the bank to attempt transfer freezing, file an IC3 complaint at ic3.gov, and conduct email account forensics to determine compromise timeline.
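A code-word scheme like the one described can be implemented so the phrase itself is never stored in plaintext. The sketch below (the phrase and iteration count are illustrative) keeps only a salted hash and compares in constant time.

```python
import hashlib
import hmac
import secrets

def enroll(passphrase: str) -> tuple[bytes, bytes]:
    """Store only a salted hash of the agreed code phrase, never the phrase."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return salt, digest

def verify(spoken: str, salt: bytes, digest: bytes) -> bool:
    """Check a verbally relayed phrase against the stored hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", spoken.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll("blue heron at dawn")   # hypothetical code phrase
print(verify("blue heron at dawn", salt, digest))  # True
print(verify("blue heron", salt, digest))          # False
```

Because the phrase never appears in email or recorded calls, a voice clone trained on public media has no way to reproduce it.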
FAQs
Q: What is the difference between CEO fraud and whaling?
CEO fraud involves impersonating an executive (typically the CEO) to trick lower-level employees into unauthorized actions like wire transfers or data sharing. Whaling targets the executives themselves as phishing victims, aiming to compromise their credentials or devices. In CEO fraud, the attacker pretends to BE the CEO; in whaling, the attacker targets the CEO as the victim. They overlap when an attacker compromises a CEO's account (whaling) and then uses it to defraud employees (CEO fraud), creating a cascading impact. Whaling requires more sophisticated social engineering because executives typically receive more security awareness training. (IBM, "What is Whale Phishing?"; IRONSCALES)
Q: How are deepfakes changing CEO fraud?
AI-powered voice cloning and deepfake video have dramatically escalated CEO fraud impact and effectiveness. In 2024, a finance worker at Arup was tricked into wiring $25.6 million after attending a deepfaked video call with the company's CFO and colleagues. Commercial voice cloning tools can clone voices from just 3 seconds of clear audio. By Q2 2024, approximately 40% of BEC emails were flagged as AI-generated. AI-generated CEO fraud emails achieve 54% click-through rates versus 12% for human-written CEO fraud—a 4.5x improvement. Businesses lost an average of ~$500,000 per deepfake incident in 2024, with global deepfake fraud losses exceeding $200 million in Q1 2025 alone. These statistics indicate deepfakes have fundamentally altered the threat landscape. (Eftsure, 2024; DeepStrike, 2025; Hoxhunt, 2026; Trustpair, 2025)
Q: How much money has been lost to CEO fraud?
CEO fraud is a subset of BEC, which has caused $55.5 billion in cumulative global losses from 2013-2023, and $2.77 billion in 2024 alone (FBI IC3). Within BEC, CEO fraud represents the highest-value subset. Notable single incidents include $100 million (Facebook and Google combined, 2013-2015), $70 million (Crelan Bank, Belgium, 2016), $47 million (FACC AG, Austria, 2016), and $25.6 million (Arup, Hong Kong, 2024). Deepfake-enabled CEO fraud averaged $500,000 per incident in 2024. The FBI IC3 Recovery Asset Team successfully froze 66% of fraudulent transfers in 2024, demonstrating that recovery is possible if victims act rapidly. (FBI IC3, 2024/2025; KnowBe4)
Q: What are the warning signs of a CEO fraud email?
Red flags include: (1) Unusual urgency—"this must be done today" or "act immediately," (2) Requests for secrecy—"keep this confidential," (3) Change in payment details—new bank account for a wire transfer, (4) Subtle email address variations from the real domain, (5) Communication outside normal patterns—a CEO emailing a junior employee directly about financial matters, (6) Pressure to bypass normal procedures, (7) Request sent before weekends or holidays when verification is harder. Common phrases include "code to admin expenses," "urgent wire transfer," "new account information," and "confidential matter." (KnowBe4; Microsoft, 2024)
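The common phrases listed above lend themselves to a simple keyword screen. This sketch is a supplement to, never a replacement for, callback verification; the phrase list is drawn from the red flags above, and a real filter would combine it with sender and behavioral signals.

```python
# Known CEO-fraud phrasing, taken from the red flags listed above
RED_FLAG_PHRASES = [
    "urgent wire transfer",
    "new account information",
    "code to admin expenses",
    "keep this confidential",
    "must be done today",
]

def red_flags(body: str) -> list[str]:
    """Return which known CEO-fraud phrases appear in an email body."""
    lowered = body.lower()
    return [p for p in RED_FLAG_PHRASES if p in lowered]

msg = ("I need an urgent wire transfer processed before I board. "
       "Keep this confidential until I'm back.")
print(red_flags(msg))  # ['urgent wire transfer', 'keep this confidential']
```

A hit would not prove fraud on its own, but it is a cheap trigger for routing the message to mandatory callback verification.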
Q: Why do CEO fraud emails target finance and HR teams instead of the CEO directly?
Frontline finance and HR employees are easier targets for social engineering than security-conscious executives. These teams routinely process wire transfers, payroll, and tax documents, often without requiring direct executive oversight for each transaction. Attackers recognize that an email appearing to come from the CEO creates maximum authority pressure on subordinates, who feel psychologically obligated to obey. Additionally, finance and HR teams often receive less security awareness training than executives, creating a knowledge gap. According to FBI IC3 data and Trustpair research, 44% of fraud cases involve CEO/CFO impersonation targeting these teams specifically. (Proofpoint; FBI IC3; Trustpair, 2025)



