In today’s digital landscape, video and audio content are everywhere, and we instinctively trust what we see and hear, especially when it features a familiar face or voice. However, the rapid rise of “deepfakes”, highly convincing media created or altered by artificial intelligence (AI), has disrupted that assumption. Deepfakes can make anyone appear to say or do something they never did, raising a critical question: “Can you trust what you see online?”
What Is a Deepfake?
Deepfakes use AI to mimic a person’s face, voice, and gestures, creating media that is almost impossible to distinguish from reality. The technology is now widely accessible, making it easier than ever for bad actors to create convincing fakes.
Fraud prevention and forensic analysis must now contend with this new frontier of manipulated media.
Why Deepfakes Matter
- Fraud and Impersonation: A deepfake video of a company chief financial officer (CFO) instructing a transfer, or a celebrity endorsement of a fake investment, can trick victims into handing over money.
- Misinformation and Reputational Harm: Fake voice or video of political figures, public personalities, executives, or private citizens can damage credibility, spread false claims, or incite unrest.
- Legal/Regulatory Risk: Organizations may unwittingly be caught up in deepfake-driven fraud, or face liability if their platforms host deepfake content.
- Forensics Challenge: Detecting whether a file is genuine, identifying origin, and verifying manipulation are far more complex when generative AI is involved.
Recent Deepfake Scams
Corporate Fraud: The British engineering firm Arup was the target of a sophisticated deepfake scam: an employee in its Hong Kong office transferred approximately HK$200 million (around £20 million) after a video call in which a digitally recreated version of the company’s CFO appeared to instruct the transfer.
https://www.theguardian.com/technology/article/2024/may/17/uk-engineering-arup-deepfake-scam-hong-kong-ai-video
Fake Endorsements: Scammers used deepfake videos of Martin Lewis, the UK financial advice broadcaster, in fake adverts promoting investment apps; Lewis said he felt “sick” seeing the deepfake scam ad on Facebook.
https://www.bbc.com/news/uk-66130785
Public Awareness: In the UK, research by Finder found that less than 4% of British citizens could correctly identify all deepfake videos in a test, demonstrating how convincing the fakes have become.
https://www.finder.com/uk/media/deepfake-scams
How Are Deepfakes Made — And Detected?
- Creation: Deepfakes are built from source images or videos of the target and samples of their voice, which are used to train generative models that map the target’s expressions and voice onto new content.
- Detection: Forensic analysts look for inconsistencies: unnatural lighting and shadows, irregular blinking or eye movement, audio-video sync mismatches, pixel-level artifacts, metadata anomalies, and “blood-flow” cues under the skin. For example, an awareness campaign by Santander UK used purposely created deepfakes to teach people how to spot them, advising viewers to “look for odd mouth movement, background reflections, unnatural blinking.” One of those cues is made concrete in the sketch below.
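To illustrate the “irregular blinking” cue, here is a minimal Python sketch that counts how often a detected face shows two open eyes across video frames, a crude blink-rate proxy. The input file name is a hypothetical placeholder, and this single heuristic is only a sketch; real forensic tools combine many signals and trained detection models.

```python
# Crude blink-rate proxy: the share of frames in which a face shows two
# visibly open eyes. Assumes opencv-python is installed; "suspect.mp4" is
# a hypothetical placeholder for the file under examination.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture("suspect.mp4")
frames = eyes_open = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Look for open eyes only inside the first detected face region.
        roi = gray[y:y + h, x:x + w]
        if len(eye_cascade.detectMultiScale(roi)) >= 2:
            eyes_open += 1
        break  # one face per frame is enough for this rough signal

cap.release()
if frames:
    # People blink roughly every 2-10 seconds, so a ratio very close to
    # 1.0 (eyes open in nearly every frame) can be one weak warning sign.
    print(f"Eyes open in {eyes_open / frames:.1%} of {frames} frames")
```

On its own this proves nothing; early deepfake generators under-produced blinks and newer ones often do not, so treat it as one signal among the many listed above.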
What This Means for You (the Public) and for Organizations
For Individuals
- Do not assume a video or audio message is genuine just because it shows a known person or bears an official look.
- Be especially cautious when an urgent request arises (e.g., “send money now,” “this will expire,” “you have only a few minutes”) combined with a familiar face or voice.
- Check the Source: Has the media been posted by a verified account? Are there independent confirmations?
- Spot the Red Flags: uncanny lip movement or blinking, strangely stable lighting across frames, mismatched voice tone, background that does not match context.
For Organizations and Forensic Practitioners
- Recognize that deepfake-driven fraud is now a serious risk and must be incorporated into risk assessments and incident-response planning.
- Forensics Must Evolve: The challenge is not only retrieving data but verifying authenticity, tracing origin, and providing expert opinion on manipulated media.
- Training and Awareness Matter: Employees, management, and board members should understand this threat and know how to respond (e.g., a pre-agreed verification handshake, secondary confirmation over a known channel, and multi-factor verification rather than relying on seeing “Mr. CFO” on the video); a minimal sketch of such out-of-band confirmation follows this list.
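As one illustration of secondary confirmation, the sketch below generates a one-time challenge code to be confirmed over a channel the organization already trusts, such as a directory phone number, before any transfer is actioned. The function names and workflow are hypothetical assumptions, not a prescribed procedure.

```python
# Minimal out-of-band verification sketch: never act on a video or voice
# request alone; confirm a one-time code over a separately trusted channel.
import secrets

def new_challenge() -> str:
    """Return a short one-time code to read back over a trusted channel."""
    return secrets.token_hex(3)  # e.g. "a3f09c"

def verify(expected: str, supplied: str) -> bool:
    """Timing-safe comparison of the code the requester read back."""
    return secrets.compare_digest(expected, supplied.strip().lower())

code = new_challenge()
print(f"Call the requester on their directory number and ask for code {code}.")
# ...after the requester reads the code back over that trusted channel:
if verify(code, input("Code read back: ")):
    print("Identity confirmed via second channel; proceed with approvals.")
else:
    print("Verification failed; escalate per incident-response plan.")
```

The point is the design rather than the code: the confirmation must travel over a channel the attacker does not control, which a convincing face or voice on the original call cannot satisfy.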
The Bottom Line
In a world where you can no longer always trust what you see or hear, scepticism and verification are your first line of defence. Deepfakes represent a new era of deceptive media for individuals and organizations alike. For the forensic technology practitioner, the challenge is to stay ahead of generative tools, to develop detection and authentication capabilities, and to educate clients and the public. Because in the age of deepfakes, truth must be defended.
© Copyright 2026. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.
