Last year’s TikTok-fueled spate of check fraud, which allegedly took advantage of a glitch at Chase Bank, was among the most widely publicized fraud stories of the year. The scheme involved individuals depositing fraudulent checks and withdrawing the funds before the bank could verify their validity.
Most of these participants may not have realized they were committing a crime. A study from Javelin Strategy & Research, 2025 Fraud Management Trends, looks at the TikTok scheme within the larger context of friendly fraud and explores what banks can do to fight it. The report also delves into other emerging trends, including the growing use of passkeys and digital wallets.
Defining Friendly Fraud
Friendly fraud, also known as first-party fraud, happens when consumers dispute legitimate charges, often resulting in a refund. The dispute may involve the consumer claiming an unauthorized purchase was made using their account or that a purchase was not received or turned out to be defective.
As mentioned, many individuals don’t realize they are committing a crime when engaging in friendly fraud. For example, someone might claim a product they received was defective and request a refund, even though they simply changed their mind about wanting the product. If the purchase is small enough, the financial institution or merchant may decide the dispute isn’t worth investigating.
Even when someone knowingly commits fraud because they feel a giant corporation owes them something, they may not perceive it as a crime. Suzanne Sando, Senior Analyst of Fraud and Security at Javelin Strategy & Research and author of the report, uses the phrase “morally ambiguous” to describe these disputes.
“There needs to be an explanation from financial institutions and from merchants about what constitutes friendly fraud,” said Sando. “You need to be explicitly clear about what kinds of fraud threats are out there. Younger generations don’t feel the same brand loyalty—they are just out here making purchases and moving on. They don’t feel guilty committing this crime.”
Finding the Right Tone
Financial institutions need to be delicate in how they share this information.
“The number one thing that we hear from consumers when we’re doing our survey data is that victims are sensitive about whether they were made to feel like they were the criminal, like they weren’t trusted,” said Sando.
There’s a way to communicate with consumers without directly accusing them of anything. At the same time, it’s important for consumers to understand that their bank is aware of the prevalence of fraud and is actively monitoring for suspicious activity.
The method of communication is key. Younger consumers may feel completely comfortable receiving text messages or email alerts about trending crimes such as check fraud, impersonation scams, and account takeover. Older consumers, however, may prefer to hear about these issues in person at their branch or through an article on the bank’s website.
“You can’t just give somebody a four-page paper to say, ‘Here’s friendly fraud, don’t do it,’” said Sando. “It needs to be a quick-hitting popup, maybe when you’re filing your charge or planning your chargeback. Maybe when you’re logged in to make a Zelle payment, a popup can say, ‘Do you know this person that you’re sending your money to?’”
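To make the idea concrete, here is a minimal sketch in Python of the kind of quick, in-flow prompt Sando describes, shown before a person-to-person payment is submitted. The function name, wording, and flow are invented for illustration and do not reflect any bank’s or payment network’s actual implementation.

```python
# Illustrative only: a hypothetical in-flow prompt of the kind described above,
# shown before a person-to-person payment goes out. All names are invented;
# no specific bank or payment network API is implied.

def confirm_p2p_payment(recipient_name: str, amount: float) -> bool:
    """Ask the sender a quick, plain-language question before the payment is submitted."""
    print(f"You are about to send ${amount:.2f} to {recipient_name}.")
    answer = input("Do you know this person, and did you initiate this payment? (yes/no) ")
    return answer.strip().lower() == "yes"

if __name__ == "__main__":
    if confirm_p2p_payment("J. Smith", 250.00):
        print("Payment submitted.")  # hand off to the real payment flow here
    else:
        print("Payment canceled. Contact your bank if something seems off.")
```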
In many instances, financial institutions (FIs) themselves are partly to blame. Billing descriptors that make sense to back-end processing systems can be completely opaque to consumers, often resulting in disputes or fraud claims for transactions that are, in fact, legitimate.
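As a simplified illustration of how an FI might make those descriptions clearer, the sketch below maps raw processor descriptors to cardholder-friendly labels before they are displayed. The descriptor strings, mapping, and function name are invented assumptions, not any processor’s real format.

```python
# Hypothetical example: raw processor descriptors (invented here) are often cryptic,
# so an FI might enrich them into names cardholders recognize before display.
DESCRIPTOR_LABELS = {
    "SQ *CRNRCFE 8885551234": "Corner Cafe (coffee shop), via Square",
    "PAYPAL *DIGSRV LLC": "Digital Services LLC, via PayPal",
    "AMZN MKTP US*2X4Y": "Amazon Marketplace purchase",
}

def display_descriptor(raw: str) -> str:
    """Return a cardholder-friendly label, falling back to the raw descriptor."""
    return DESCRIPTOR_LABELS.get(raw, raw)

if __name__ == "__main__":
    for raw in ["SQ *CRNRCFE 8885551234", "UNKNOWN MERCHANT 42"]:
        print(f"{raw!r} -> {display_descriptor(raw)!r}")
```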
The Promise—and Threat—of AI
Artificial intelligence has a role to play as well. FIs already collect ample data on their customers, which can be leveraged in the fight against friendly fraud. Signals such as behavioral biometrics and device information can be combined with account and transaction history, recent activity, and typical spending habits to build a robust customer profile. This helps verify both the identity of the consumer and the legitimacy of disputed transactions.
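A minimal sketch of how such a profile might be combined into a single signal is shown below, assuming hypothetical data fields and hand-picked weights; a production system would learn its scoring from labeled dispute outcomes rather than rely on fixed rules like these.

```python
from dataclasses import dataclass

# Hypothetical, simplified signals an FI might already hold about a customer.
@dataclass
class CustomerProfile:
    typical_monthly_spend: float   # average spend from transaction history
    known_device_ids: set          # devices previously seen on the account
    biometric_similarity: float    # 0..1 behavioral-biometrics match score

@dataclass
class DisputedTransaction:
    amount: float
    device_id: str
    merchant_seen_before: bool     # has the customer shopped here previously?

def dispute_risk_score(profile: CustomerProfile, txn: DisputedTransaction) -> float:
    """Return a 0..1 score; higher means the dispute looks more like first-party fraud.

    Illustrative weighting only: a real system would learn these weights from
    labeled dispute outcomes instead of hard-coding them.
    """
    score = 0.0

    # A purchase made from a known device at a familiar merchant is more likely
    # to be a legitimate transaction that is now being disputed.
    if txn.device_id in profile.known_device_ids:
        score += 0.35
    if txn.merchant_seen_before:
        score += 0.25

    # Behavioral biometrics: a strong match suggests the account holder was present.
    score += 0.25 * profile.biometric_similarity

    # Amounts in line with typical daily spending look less like third-party fraud.
    if txn.amount <= 1.5 * profile.typical_monthly_spend / 30:
        score += 0.15

    return min(score, 1.0)

if __name__ == "__main__":
    profile = CustomerProfile(typical_monthly_spend=1800.0,
                              known_device_ids={"dev-123"},
                              biometric_similarity=0.9)
    txn = DisputedTransaction(amount=42.50, device_id="dev-123", merchant_seen_before=True)
    print(f"First-party-fraud likelihood: {dispute_risk_score(profile, txn):.2f}")
```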
However, there is skepticism around AI. Many have heard about deepfakes and worry that the technology could be used against them.
“AI that’s being used by criminals is not necessarily as sophisticated as some of the technologies out there,” Sando said. “The info sharing being used by banks can protect you from any other AI threat that’s out there.”
Privacy Concerns
Consumers are justifiably concerned about privacy issues surrounding data. Sando admits she does not know the extent of the information collected about her.
“As I was doing this report, I was researching some of the companies out there that do behavioral biometrics and have information-sharing consortiums,” Sando said. “I was looking at lists of data points they collect and use. I don’t feel as though I’m being told that this information is being collected about me. The main takeaway for me has always been if you just tell me what it is that you’re monitoring and what it is that you’re collecting about me, I will likely be OK with it.”
“What happens if it gets out?” she said. “FIs have to be transparent about what we’re collecting, why we’re collecting it, how it might be used, and how long we are keeping it. As we move forward with sharing information across the industry, and using it in AI, you need to be really clear with your customers about what’s happening on the back end.”
Impersonation scams are shaping up to be another significant issue in 2025, especially with the added threat of AI. Real-time payments add to the complexity: these scams involve transactions the consumer authorized, so it is no longer a matter of disputing an unauthorized payment as fraud and easily recovering the money.
“We have to be using this information to try and stop these scams, because otherwise they will keep growing out of control,” said Sando. “That’s why we need AI. We need info sharing across the industry to better tackle these scams.”