Smaller financial institutions are increasingly vulnerable to AI-generated fraud, and the gap between them and their larger counterparts is widening. While larger institutions are busy developing their own AI systems, smaller ones lack the internal data required to build and train large models.
These findings stem from a Treasury Department report on the threat AI-based fraud poses to financial institutions. One of its key observations is that data sharing among firms has been insufficient.
As more firms deploy AI, the scarcity of data available to financial institutions for model training has become especially significant in fraud prevention. Large institutions, with far more historical data, have a marked advantage in detecting AI-based fraud. For example, Mastercard anticipates that its use of AI could help it analyze more than a trillion data points to determine the legitimacy of each transaction.
One large, unidentified firm surveyed by the Treasury reported an estimated 50% reduction in fraud activity, achieved with AI models built solely on the firm’s own historical data. The unfortunate upshot is that fraud blocked by such models will likely shift to smaller, more vulnerable institutions.
Collaboration Is Key
The Treasury report calls for more collaboration among banks of all sizes. “Except for certain efforts in banking, there is limited sharing of fraud information among financial firms,” it reads. “A clearinghouse for fraud data that allows rapid sharing of data and can support financial institutions of all sizes is currently not available.”
“At the moment, AI benefits the good guys more than the bad, but the pendulum will quickly shift if the financial sector does not quickly address existing and potential gaps in AI and money-laundering risks,” said Tracy Kitten, Director of Fraud and Security for Javelin Strategy & Research. “Financial institutions have been reluctant to share and rely on data from and with third parties – entities that often have enormous data about personas that can be used to identify and authenticate identities in a digital environment. That reluctance will continue to widen potential gaps for synthetic identity fraud, scams and account takeover fraud.”
The survey respondents largely agreed that managing these risks requires extensive collaboration. Data poisoning, data leakage, and data integrity attacks can occur at any stage of the AI development chain, which demands more communication among firms than currently takes place.
As a result, the report recommends that data supply chains be monitored more carefully to ensure that models are trained on accurate, reliable data.
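The report does not prescribe a mechanism for this monitoring, but one common building block is content hashing: recording a cryptographic digest of each dataset file at ingestion and re-checking it before every training run. The Python sketch below illustrates the idea only; the directory and file names are hypothetical, not anything specified by Treasury.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: Path) -> dict:
    """Record a digest for every dataset file at ingestion time."""
    return {str(p): sha256_of(p) for p in sorted(data_dir.rglob("*.csv"))}

def verify_manifest(manifest: dict) -> list:
    """Return files whose contents no longer match their recorded digests."""
    return [
        name
        for name, expected in manifest.items()
        if not Path(name).is_file() or sha256_of(Path(name)) != expected
    ]

if __name__ == "__main__":
    # "training_data" is a hypothetical directory of vendor-supplied files.
    manifest = build_manifest(Path("training_data"))
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))

    # Re-run before each training job: any mismatch flags possible
    # tampering or silent upstream changes to the data supply chain.
    print("modified files:", verify_manifest(manifest) or "none")
```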
Treasury suggests that “the financial sector would benefit from the development of best practices for data supply chain mapping. Additionally, the sector would benefit from a standardized description, similar to the food ‘nutrition label,’ for vendor-provided AI systems and data providers. These ‘nutrition labels’ would clearly identify what data was used to train the model, where the data originated, and how any data submitted to the model is being used.”
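The report leaves the label’s format open. Purely as an illustration, such a label could be captured as a small machine-readable record covering the three items the report names: training data, its origin, and how submitted data is used. Every field name and example value below is a hypothetical sketch, not a schema proposed by Treasury.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AINutritionLabel:
    """Hypothetical machine-readable 'nutrition label' for a vendor AI system."""
    system_name: str
    vendor: str
    training_data_sources: list   # what data the model was trained on
    data_provenance: list         # where that data originated
    submitted_data_usage: str     # how data submitted to the model is used

# Example values are invented for illustration only.
label = AINutritionLabel(
    system_name="TransactionRiskScorer",
    vendor="ExampleVendor Inc.",
    training_data_sources=["consortium card transactions, 2015-2023"],
    data_provenance=["member institutions under a data-sharing agreement"],
    submitted_data_usage="real-time scoring only; not retained for retraining",
)

print(json.dumps(asdict(label), indent=2))
```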
“Regulatory coordination could go a long way to help ease concerns about data and information sharing, especially where standardization comes into play,” Kitten said. “Even the very basics – such as how we as an industry define what constitutes AI and digital identities – have yet to be addressed in a meaningful way. This is where regulatory coordination could have the most immediate impact.”