In today’s digital landscape, video and audio content are everywhere, and we instinctively trust what we see and hear, especially when it features a familiar face or voice. However, the rapid rise of “deepfakes” — highly convincing media created or altered by artificial intelligence (AI) — has disrupted that assumption. Deepfakes can make people appear to say or do something they never did, raising a critical question: “Can you trust what you see online?”
What Is a Deepfake?
Deepfakes use AI to mimic a person’s face, voice, and gestures, creating media that is almost impossible to distinguish from reality. The technology is now widely accessible, making it easier than ever for bad actors to create convincing fakes.
Fraud prevention and forensic analysis teams must now contend with this new frontier of manipulated media.
Deepfakes Have Become a Critical Fraud and Forensics Challenge
Deepfakes have evolved from a fringe novelty into a sophisticated tool for deception, creating risks that span fraud, reputational harm, misinformation, and operational disruption. As AI-generated audio and video become increasingly realistic and accessible to anyone, the line between authentic and fabricated content has blurred in ways that challenge individuals, businesses, and forensic professionals alike. This is not a niche technical issue anymore; it’s a growing threat with real‑world consequences that demand new vigilance, new controls, and new detection capabilities.
- Fraud and Impersonation Attacks: Deepfakes can create convincing impersonations of executives, public figures, or trusted contacts, making it easier for attackers to get fraudulent transactions authorized, solicit sensitive information, or manipulate employees into acting on false instructions. For instance, a deepfake video of a company’s chief financial officer (CFO) instructing a wire transfer can trick victims into handing over money.
- Misinformation and Reputational Harm: AI-generated video or audio of political figures, executives, or private individuals can spread rapidly, damaging credibility, fueling misinformation, or prompting harmful public reactions before the truth can catch up.
- Legal/Regulatory Exposure: Organizations may unwittingly be caught up in deepfake-driven fraud, or face scrutiny if their platforms host deepfake content, triggering compliance obligations and potential liability.
- New Forensics Challenges: Authenticating digital media is now significantly more complex. Forensic teams must verify what is real, identify signs of manipulation, and trace origins while generative tools continue to evolve.
Recent Deepfake Scams
Corporate Fraud: The British engineering firm Arup was the target of a sophisticated deepfake scam in which an employee in its Hong Kong office transferred approximately HK$200 million (around £20 million) after a video call in which a digital version of the company’s CFO appeared to instruct the transfer.
https://www.theguardian.com/technology/article/2024/may/17/uk-engineering-arup-deepfake-scam-hong-kong-ai-video
Fake Endorsements: Scammers used deepfake videos of Martin Lewis, the UK financial advice broadcaster, in fake adverts promoting investment apps. Lewis said he felt “sick” seeing the scam ad circulating on Facebook.
https://www.bbc.com/news/uk-66130785
Public Awareness: In the UK, research by Finder found that less than 4% of British citizens could correctly identify all deepfake videos in a test, demonstrating how convincing the fakes have become.
https://www.finder.com/uk/media/deepfake-scams
How Are Deepfakes Made — And Detected?
- Creation: Deepfakes are built from source images or video of the target and samples of their voice, which are used to train generative models that map the target’s expressions and voice onto new content.
- Detection: Forensic analysts look for inconsistencies: unnatural lighting/shadows, irregular blinking/eye movement, audio-video sync mismatches, pixel-level artifacts, metadata anomalies, and “blood-flow” cues under the skin. For example, an awareness campaign by Santander UK used purposely created deepfakes to teach viewers how to spot them, e.g., “look for odd mouth movement, background reflections, unnatural blinking.” A simple illustration of one pixel-level check appears below.
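To make the “pixel-level artifacts” cue concrete, the sketch below shows error level analysis (ELA), a classical image-forensics heuristic: re-save a JPEG at a known quality and look at where the compression error differs from the rest of the image. This is a minimal sketch, not a deepfake detector; it assumes the Pillow imaging library is installed, and the file names are hypothetical.

```python
# Minimal sketch of error level analysis (ELA), one classical pixel-level
# heuristic: re-save a JPEG at a known quality and compare it with the
# original. Edited or synthesized regions often recompress differently and
# stand out in the difference image. Assumes Pillow is installed
# (pip install Pillow); "suspect.jpg" is a hypothetical input file.

import io

from PIL import Image, ImageChops, ImageEnhance


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Re-save the image at a fixed JPEG quality, entirely in memory.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Per-pixel difference highlights areas with unusual compression error.
    diff = ImageChops.difference(original, resaved)

    # Brighten the (usually faint) differences so they are easy to review.
    max_diff = max(channel_max for _, channel_max in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)


if __name__ == "__main__":
    error_level_analysis("suspect.jpg").save("suspect_ela.png")
```

Production detection tools combine many such signals (temporal, audio, physiological) with trained models; a single heuristic like this is only a triage aid for a human analyst.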
What This Means for Individuals and for Organizations
For Individuals
- Do not assume a video or audio message is genuine just because it shows a known person or bears an official look.
- Be cautious when an urgent request arises (e.g., “send money now,” “this will expire,” “you have only a few minutes”) combined with a familiar face or voice.
- Check the source: Has the media been posted by a verified account? Are there independent confirmations?
- Spot the red flags: unnatural lip movement or blinking, lighting that shifts oddly between frames, mismatched voice tone, or a background that does not match the context.
For Organizations and Forensic Practitioners
- Recognize that deepfake-driven fraud is now a serious risk and must be incorporated into risk assessments and incident response planning.
- Forensics must evolve: The challenge is not only retrieving data but verifying authenticity, tracing origin, and providing expert opinion on manipulated media.
- Training and awareness matter: Employees, management, and board members should understand this threat and know how to respond (e.g., a verification handshake, secondary confirmation, or multi-factor verification rather than relying on seeing the CFO on video).
Expert Insights: Strengthening Organizational Defenses Against Deepfakes
As deepfake-enabled attacks grow more sophisticated, organizations need a multilayered strategy that extends well beyond traditional cybersecurity controls. The following recommendations outline practical steps that help companies reduce exposure, harden verification processes, and respond quickly when synthetic media is used maliciously.
1. Implement Strong, Multi‑Layer Verification Protocols
Deepfakes succeed when organizations rely on visual confirmation, familiar voices, or the perceived authority of an executive on video. To counter this, build verification steps that cannot be bypassed by synthetic media. Require secondary authorization for all sensitive actions, such as financial approvals or data-access requests, and mandate callback procedures or unique authentication codes for urgent instructions, regardless of who appears in the video or audio.
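As one illustration of how a code-based check can be kept independent of the video channel itself, the sketch below uses only Python’s standard library to issue a short-lived one-time code for a sensitive request and verify it before the action proceeds. The function names, the in-memory storage, and the 10-minute validity window are illustrative assumptions, not a prescribed control.

```python
# Minimal sketch of an out-of-band verification code, assuming the code is
# delivered over a channel separate from the video call (e.g., SMS or an
# internal app). Names, expiry window, and storage are illustrative only.

import hmac
import secrets
import time

CODE_TTL_SECONDS = 600  # hypothetical 10-minute validity window
_pending: dict[str, tuple[str, float]] = {}  # request_id -> (code, issued_at)


def issue_code(request_id: str) -> str:
    """Generate a one-time code for a sensitive request (e.g., a wire transfer)."""
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit random code
    _pending[request_id] = (code, time.time())
    return code  # deliver via a second, pre-agreed channel, never the video call


def verify_code(request_id: str, submitted: str) -> bool:
    """Approve the action only if the code matches and is still fresh."""
    entry = _pending.pop(request_id, None)
    if entry is None:
        return False
    code, issued_at = entry
    if time.time() - issued_at > CODE_TTL_SECONDS:
        return False
    # Constant-time comparison to avoid leaking information via timing.
    return hmac.compare_digest(code, submitted)


if __name__ == "__main__":
    sent = issue_code("wire-2024-001")
    print("Code sent via secondary channel:", sent)
    print("Approved:", verify_code("wire-2024-001", sent))
```

The design point is that approval depends on something the attacker cannot obtain from the video or audio channel they control, which is why the code must travel over a separately established channel.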
2. Integrate Deepfake Scenarios Into Incident Response Plans
Most incident response plans assume a conventional cyber intrusion, not an AI‑generated impersonation. Updating these plans to include deepfake‑specific scenarios ensures teams know how to contain, verify, and escalate suspicious media. This includes planning for executive impersonation attempts, synthetic extortion threats, and AI‑amplified social engineering campaigns.
3. Strengthen AI and Synthetic Media Governance
Deepfake risk is as much a governance issue as it is a technical one. Establish policies that define acceptable use of generative AI tools, set requirements for vendor transparency, and implement periodic AI risk assessments aligned with frameworks such as the National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework (RMF). Clear governance helps organizations detect Shadow AI practices, ensure accountability, and prepare for evolving regulatory requirements.
4. Prioritize High‑Risk Groups in Training and Awareness
Executives, human resources (HR) and finance teams, and anyone with authority over money movement or sensitive access are prime targets. Provide them with targeted training that includes real examples of deepfake fraud, walkthroughs of common manipulation cues, and simulation exercises that test their ability to verify unusual or urgent requests. Awareness at the leadership level disproportionately strengthens the organization’s overall resilience.
5. Use Tools and Technical Controls to Detect Synthetic Media
Organizations should begin augmenting traditional cybersecurity controls with technologies capable of spotting manipulated media. These tools can analyze audio, video, and image assets for anomalies such as irregular blinking, mismatched shadows, or pixel‑level inconsistencies while metadata analysis and forensic workflows help validate authenticity. Because detection tools evolve alongside attacker capabilities, regular updates and tuning are essential.
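As a small illustration of the metadata side of this work, the sketch below reads EXIF tags from an image with the Pillow library and flags fields that often warrant a closer look, such as missing camera data or software tags that suggest editing or generation. The keyword list, tag choices, and file name are illustrative assumptions; real workflows rely on dedicated forensic tooling and treat these flags as triage signals, not proof.

```python
# Minimal sketch of a metadata sanity check on an image, assuming Pillow is
# installed. The keyword list and the decision to flag missing camera fields
# are illustrative heuristics, not a definitive authenticity test.

from PIL import Image
from PIL.ExifTags import TAGS

SUSPICIOUS_SOFTWARE_KEYWORDS = ("photoshop", "gimp", "generat", "diffusion", "midjourney")


def metadata_flags(path: str) -> list[str]:
    flags = []
    exif = Image.open(path).getexif()
    named = {TAGS.get(tag_id, str(tag_id)): value for tag_id, value in exif.items()}

    # Absence of basic camera metadata is not proof of manipulation,
    # but it is a common reason to escalate for deeper analysis.
    if not named:
        flags.append("no EXIF metadata present")
    elif "Make" not in named and "Model" not in named:
        flags.append("no camera make/model recorded")

    software = str(named.get("Software", "")).lower()
    if any(keyword in software for keyword in SUSPICIOUS_SOFTWARE_KEYWORDS):
        flags.append(f"editing/generation software tag: {named['Software']}")

    return flags


if __name__ == "__main__":
    for flag in metadata_flags("incoming_image.jpg"):  # hypothetical file
        print("Review:", flag)
```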
6. Respond Rapidly to Deepfake‑Driven Reputational Threats
If a deepfake targets an organization’s leadership or is used to spread misinformation, speed matters. Establish communication protocols that enable quick verification, public correction, and coordinated messaging across owned channels. When appropriate, pair public statements with forensic validation to maintain trust and credibility. Early clarity can significantly limit reputational fallout.
The Bottom Line
In a world where you can no longer always trust what you see or hear, skepticism and verification are your first line of defense. Organizations can no longer afford passive awareness. Deepfakes have created an era that requires active vigilance, intentional governance, and the ability to verify what used to be taken for granted. For forensic, risk, and security teams, the challenge is to stay ahead by leveraging new tools, new instincts, and a commitment to continuous education and verification. Now is the time to take stock of your exposure, strengthen your controls, and prepare your teams before the next deepfake targets you.
© Copyright 2026. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.
