How AI is Breaking Trust, Identity, and Financial Controls
Fraudsters have always been slick, working constantly to keep their tactics a step ahead of financial institutions. Recently, though, advancements in technology and the wide availability of consumer artificial intelligence (AI) products have helped these fraudsters bring their cons into the 21st century. With these advancements has come a shift not just in scale, but in the very nature of fraud itself.
AI’s impact on fraud is already profound, and it is accelerating. With the right tools, fraudsters have unprecedented capabilities — and those tools are becoming ubiquitous.
AI can help fraudsters scale their operations at minimal cost. AI chatbots can run message-based scams automatically and at larger scale, with the added benefit of avoiding the tell-tale syntax and typographical errors that once gave scams away. Call-based scams can be run in any language. Fraud can now be personalized, even on the fly, giving fraudsters the ability to tailor messaging so that it aligns more closely with the victim’s profile.[1]
But these capabilities are only the baseline.
What is emerging now represents a more fundamental shift. Fraud no longer simply imitates legitimate activity from afar; it all but becomes it. As a result, fraud today does not resemble what it did five years ago, sometimes not even to the naked eye.
Advancements in commercially available AI have effectively put movie-grade special effects into the hands of fraudsters. The result is a new generation of fraud engineered to exploit trust, identity, and perception in ways that were previously infeasible.
The Dangers of Deepfakes
In early 2024, news broke that a multinational engineering firm, later identified as Arup, had fallen victim to an AI-powered deepfake scam that cost it $25.6 million.[2] An employee was instructed, via an email purporting to come from the firm’s chief financial officer (CFO), to release several transactions. The “CFO” and several of his “colleagues” even joined a video call with the employee to confirm the request. The problem: neither the CFO nor any other executive was actually on the call.
Deepfakes themselves are not new, but only recently has the technology to create them become readily available to anyone. Essentially, existing video and/or audio clips of a real person are analyzed and used by AI to generate a high-tech forgery of that person’s likeness. The more publicly available media there is to pull from, the more convincing the deepfake may be. When run through a generative adversarial network (GAN), two neural networks are harnessed to make the deepfake as realistic as possible: one network generates the synthetic data while the second works to detect any hints that the data is generated, eventually making the deepfake nearly indistinguishable from authentic data.[3]
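To make the two-network dynamic concrete, the sketch below trains a toy GAN in PyTorch on one-dimensional data. The architecture, hyperparameters, and data are illustrative assumptions only; real deepfake pipelines operate on images and audio at vastly greater scale.

```python
# A minimal sketch of the generator/discriminator dynamic described above,
# on toy 1-D data. All settings are illustrative, not from any real system.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Authentic" data: samples from a Gaussian the generator must learn to mimic.
def real_batch(n=64):
    return torch.randn(n, 1) * 1.5 + 4.0

# Generator: turns random noise into synthetic samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how likely a sample is to be authentic.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator to separate real from generated samples.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()  # detach: do not update G here
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))  # generator wants D to say "real"
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, generated samples should approach the real distribution.
print("real mean/std:", real_batch(1000).mean().item(), real_batch(1000).std().item())
samples = G(torch.randn(1000, 8))
print("fake mean/std:", samples.mean().item(), samples.std().item())
```

The adversarial loop is the essential point: every improvement in the detector’s scrutiny forces the generator to produce more convincing forgeries, which is precisely why mature deepfakes become so hard to spot.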
As a result, traditional trust signals such as voice authentication, video calls, and even familiar faces can no longer be relied upon as definitive proof of identity. As technology improves, the gap between real and synthetic will continue to narrow.
Upticks in Synthetic Identity Fraud
While deepfakes convincingly replicate real people, synthetic identity fraud removes the need for a real person altogether.
Synthetic identity fraud (SIF) combines real information, such as Social Security numbers, checking account numbers, and driver’s license numbers (including those of the deceased), with fake names, addresses, and identification. These identities, sometimes referred to as “Frankenstein IDs” because of the way information is stitched together to form them, can be developed over time and used to build fraudulent lines of credit that are then maxed out before the identity is abandoned.[4]
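Because a synthetic identity stitches real attributes onto fake ones, one simple red flag defenders commonly screen for is a single identifier surfacing under multiple names. Below is a minimal, hypothetical illustration of that check using pandas; the field names and records are invented for demonstration.

```python
# A minimal illustration of one synthetic-identity red flag: a single SSN
# appearing under multiple names. Field names and records are hypothetical.
import pandas as pd

applications = pd.DataFrame([
    {"ssn": "123-45-6789", "name": "Jane Doe",    "address": "1 Elm St"},
    {"ssn": "123-45-6789", "name": "Janet Dough", "address": "9 Oak Ave"},
    {"ssn": "987-65-4321", "name": "John Smith",  "address": "2 Pine Rd"},
])

# Group by SSN and flag any identifier linked to more than one distinct name.
names_per_ssn = applications.groupby("ssn")["name"].nunique()
flagged = names_per_ssn[names_per_ssn > 1]

print("SSNs linked to multiple names:")
print(applications[applications["ssn"].isin(flagged.index)])
```

Real-world screening is far more involved, but the principle is the same: the stitched-together nature of a Frankenstein ID leaves seams that cross-referencing can expose.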
With AI readily available, SIF schemes are not only becoming more sophisticated in their engineering; they are also increasing in number. By the end of 2024, TransUnion determined there was “USD$3.3 billion in lender exposure to synthetic identities in the US for auto loans, bank credit cards, retail credit cards and unsecured personal loans.” While this figure was an all-time high, continued consumer access to advanced technical tools suggests the exposure is likely understated today.
Influenced to Commit Fraud
Social media is not exactly new, but it has created a problem that has snowballed in recent years. You have probably seen them on LinkedIn or TikTok: social media influencers giving advice on financial topics such as loans, paying off debt, and investing. They are also called “finfluencers,” a portmanteau of “finance” and “influencers.”
While some finfluencers may hold professional credentials, many do not, and their content is often consumed by audiences to whom traditional financial institutions or education seem inaccessible.
The trend of bringing influencers into the financial ecosystem is not necessarily new. For years, social media platforms have worked to ensure that sponsored messages and advice are properly disclosed as such. However, disclosure requirements offer only limited protection to generations who rely heavily on social media as their primary source of information.
According to a 2023 study conducted by the Financial Industry Regulatory Authority (FINRA),[5] 48% of Gen Z investors look to social media for advice on their financial decisions, with YouTube, Instagram, TikTok, and X (formerly Twitter) being four of the top five places Gen Z seeks advice. The figures for older cohorts are not quite as high, but still notable: 42% of Millennial and 26% of Gen X investors consult similar sources for financial advice.
The concern here is twofold: consumers of this content tend to be under-educated in financial matters, and they may struggle to distinguish sound guidance from poor advice. While the price of entry remains a barrier for many would-be investors, a lack of information is still cited by 56% of Gen Z as a key obstacle. Finfluencers appear to lower this barrier, but they do not always do so safely.
The risk emerges when audiences treat influencer guidance as authoritative, basing entire strategies on advice that at best might be flawed, but at worst may actively encourage fraudulent behavior.
In June 2025, a finfluencer pleaded guilty to wire fraud and aiding in a false tax filing after promoting, via Facebook and YouTube, a real estate investment program boasting rates of return of over 30%.[6] This case is just one example of a broader pattern; major reporting as early as 2021 highlighted step-by-step fraud guides circulating openly on social media platforms.[7]
How AI Makes Fraud Easier
While the full extent of AI’s impact on fraud is still unfolding, the trajectory is clear.
According to Feedzai’s report, “The State of Anti-Money Laundering 2025,” 45% of the 300 anti-money laundering (AML) professionals surveyed believe their AML systems will be even further challenged by AI, yet 38% reported running AML programs without any AI support.[8]
But as fraudsters continue to bolster their toolkits, financial institutions face a narrowing window in which to modernize their defenses. The same technologies lowering the barrier to fraud will likely prove indispensable to detecting and preventing it.
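As a simple illustration of AI on the defensive side, the sketch below applies an unsupervised anomaly detector (scikit-learn’s IsolationForest) to synthetic transaction data to flag outliers. The features, thresholds, and data are assumptions for demonstration; production AML systems combine far richer signals and tuning.

```python
# A minimal sketch of AI-assisted fraud detection: an unsupervised
# IsolationForest flags transactions that look unlike the normal pattern.
# The features and data here are synthetic stand-ins, not a real AML model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated legitimate activity: modest amounts at typical daytime hours.
normal = np.column_stack([
    rng.normal(80, 25, 1000),   # transaction amount (USD)
    rng.normal(14, 3, 1000),    # hour of day
])
# A handful of suspicious transfers: large amounts in the middle of the night.
suspicious = np.array([[9500, 3], [12000, 2], [8700, 4]])

X = np.vstack([normal, suspicious])

# contamination is the assumed share of anomalies; tune it on real data.
model = IsolationForest(contamination=0.002, random_state=0).fit(X)
labels = model.predict(X)  # -1 = anomaly, 1 = normal

print("Flagged transactions (amount, hour):")
print(X[labels == -1])
```

The appeal of unsupervised approaches like this is that they do not require labeled fraud cases up front, which matters when fraud patterns are evolving as quickly as AI now allows.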
[2] See: https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk.
See also: https://coverlink.com/case-study/case-study-25-million-deepfake-scam/.
[3] See: https://www.ibm.com/think/topics/generative-adversarial-networks.
[7] See: https://www.bbc.com/news/uk-58223499.
[8] See: https://www.feedzai.com/wp-content/uploads/2024/12/Feedzai_Report_AMLSurvey.pdf.
© Copyright 2026. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC, its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.
