Financial automation systems are prime targets for intentional attacks, misuse, and manipulation by bad actors. The threat is escalating for financial companies that depend on bank automation systems: just as Software-as-a-Service (SaaS) transformed legitimate business, 2022 saw financial criminals embrace Fraud-as-a-Service (FaaS), selling ready-made tools and services to other cybercriminals online for fraudulent activity.
Fintechs deploying SaaS to run and grow their business now confront fraudsters who use web-based FaaS tactics to commit fraud at an unprecedented scale, and with shockingly little risk.
Is this the future of fraud for fintechs, and is there a way they can combat this new generation of fintech-focused cybercriminals who are determined to attack automated systems for their own gain?
The fact is that the level and type of crime fintechs currently face is a far cry from what the industry has confronted in the past. Now software is fighting software, and a fintech's own automation systems can be wielded against it. With AI-based onboarding systems on one side and robotic identities powered by scripted behaviors or AI on the other, every automated step in the onboarding process is a potential entry point: once criminals find a hole in any process, they can leverage FaaS to attack fast.
Upscaling Financial Crime
FaaS has become a widespread financial crime that enables fraudsters to quickly and easily gain online access to the very data, automation tools, and analytics that countless fintechs rely upon.
During a recent webinar, Levi Gundert, Senior Vice President of Recorded Future, noted that the fraudsters involved in FaaS are “very clever” and are “looking for weak spots to exploit.” Bank automation systems are certainly one such weak link. As Gundert stated: “Whether it is COVID-19 relief funds, or cryptocurrency exchange thefts of millions of dollars, there is a real incentive for cybercriminals to find new methodologies that work.”
Easy Exploits of Fraud-as-a-Service
One of the hottest areas of Fraud-as-a-Service is the automation of social engineering scams, which allow criminals to steal whole or partial identities, payment card or bank details, and other useful data, then complete fraudulent transactions that overwhelm financial systems with bad traffic. This sensitive data becomes particularly vulnerable when any part of the data-collection process is automated. Card fraud has always been a source of significant losses, but more recent payment methods, notably instant payments, have given fraudsters a new focus for their criminal activities.
There has been a significant uptick in socially engineered authorized push payment (APP) scams, in which genuine customers are duped into making payments in their own name, often after FaaS techniques have given the fraudsters access to the necessary personal information about the consumer.
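Because APP payments are authorized by the genuine customer, detecting them relies on contextual signals rather than credential checks. A minimal sketch of that idea, using two common signals (a never-before-seen payee and an unusually large amount); the field names and thresholds here are illustrative assumptions, and real systems combine many more signals:

```python
def app_scam_risk(payment, known_payees, typical_max_amount):
    """Score a push payment for APP-scam risk.

    payment: dict with hypothetical keys "payee" and "amount".
    known_payees: set of payees this customer has paid before.
    typical_max_amount: customer's usual upper spending bound.
    Returns 0 (low risk) to 2 (hold for manual review).
    """
    score = 0
    if payment["payee"] not in known_payees:
        score += 1  # first payment to this recipient
    if payment["amount"] > typical_max_amount:
        score += 1  # amount out of line with customer history
    return score

# A large first-time payment scores highest and would be held.
risky = app_scam_risk(
    {"payee": "ACME-CRYPTO-LTD", "amount": 9500.0},
    known_payees={"LANDLORD", "UTILITY-CO"},
    typical_max_amount=2000.0,
)
print(risky)  # → 2
```

Even a crude score like this lets an instant-payment rail delay only the riskiest transfers for confirmation rather than slowing every payment.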
What’s more, there’s evidence of increasing use of robotic identities, which means you can end up onboarding a “person” who doesn’t exist. With around 200 different legal systems worldwide, it can be almost impossible to guarantee a completely secure onboarding process for a global service, opening up further possibilities for FaaS exploitation.
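One common signature of robotic identities is that scripted onboarding sessions complete form fields far faster than any human could. A minimal sketch of such a timing heuristic, with an assumed threshold that a real system would tune empirically:

```python
def looks_scripted(field_timings_ms, min_human_ms=300):
    """Flag an onboarding session as likely scripted when every
    form field was completed faster than a plausible human could
    type it. The 300 ms floor is an illustrative assumption.

    field_timings_ms: milliseconds spent on each form field.
    """
    return all(t < min_human_ms for t in field_timings_ms)

print(looks_scripted([45, 60, 38]))    # bot-like: every field instant
print(looks_scripted([45, 1200, 380])) # human-like pauses present
```

In practice this is just one signal among many (device fingerprinting, IP reputation, document checks), but it illustrates how a fintech's own automation can be turned around to screen out automated attackers.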
A Case for “FaaS-t” Action
FaaS is a new reality, and it may already be compromising your automation systems and draining your revenues. Regulators have no choice but to catch up with FaaS-based threats in the fintech sector, and fintechs themselves must act now to safeguard their automated systems.
To push back and beat cybercriminals at their own game, financial firms should leverage AI and machine learning to tackle these growing threats, boosting detection rates and reducing the amount of fraud that goes undetected, while keeping their automated systems from being compromised.
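At its core, machine-learning fraud detection learns what "normal" looks like for an account and flags deviations. A minimal statistical stand-in for that idea, flagging transactions far outside an account's historical norm; a production model would use many features and a trained classifier rather than a single z-score:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return transactions more than `threshold` standard deviations
    from the account's mean amount. A simplified illustration of
    anomaly detection, not a production fraud model.
    """
    if len(amounts) < 2:
        return []
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Six routine purchases and one outlier transfer.
history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 4900.0]
print(flag_anomalies(history))  # → [4900.0]
```

The advantage of learning from data rather than hand-written rules is exactly the point made above: as FaaS tooling evolves, models retrained on fresh traffic keep pace without every new scam pattern having to be anticipated in advance.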