AI Experts Claim Bank AI Vulnerable to Cyber Attack

The experts have described a few attack approaches that could cripple the processes AI manages, but the mitigations seem relatively clear. The experts argue that misleading inputs, such as fake trading data, would trip up the AI. Yet if fake data can be injected easily and without notice, it would likely fool a human as well. More important, assuming the model was trained in a controlled environment and tested before deployment, it can also be tested for how it behaves when fed fake data. Better yet, a model could be trained to detect fake data and shut the system down.
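To make that last point concrete, here is a minimal sketch, not taken from the report, of how an anomaly detector trained on vetted historical data could screen incoming trading records and halt automated processing when a batch looks fabricated. The feature names, values, and thresholds are illustrative assumptions, not anything a bank actually uses.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for carefully vetted historical trading data: the two columns are
# hypothetical price and volume features.
clean_history = np.column_stack([
    rng.normal(100.0, 5.0, 5000),
    rng.normal(10_000.0, 2_000.0, 5000),
])

# The detector is trained only on the vetted history, so it learns what
# "normal" input looks like before anything reaches the trading model.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(clean_history)

def batch_looks_genuine(batch, max_anomaly_rate=0.05):
    # predict() returns +1 for inliers and -1 for anomalies.
    flags = detector.predict(batch)
    return float(np.mean(flags == -1)) <= max_anomaly_rate

# Fabricated batch with prices far outside the historical range.
suspect_batch = np.column_stack([
    rng.normal(300.0, 5.0, 200),
    rng.normal(10_000.0, 2_000.0, 200),
])

if not batch_looks_genuine(suspect_batch):
    print("Anomalous inputs detected; halting automated trading pending review.")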

A specific example given is that the lending model might be flooded with phony loan applications that would alter the model in a negative way. In the 2017 report Bringing AI Into the Enterprise: A Machine Learning Primer, Mercator identified the importance of thoroughly vetting training data. A machine learning professional should never train a model on data that hasn't been carefully evaluated to ensure it reflects reality and does not include biased data. Even after the training data is validated, the model needs to be tested. Perhaps that testing should include the ingestion of malicious data, which shouldn't be difficult to implement.
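As an illustration of that vetting step, here is a minimal sketch, with hypothetical field names and thresholds, of screening incoming loan applications before they are allowed into a retraining set. Real vetting would also cross-check source systems and have analysts review whatever gets quarantined.

from dataclasses import dataclass

@dataclass
class LoanApplication:
    applicant_id: str
    annual_income: float
    loan_amount: float
    credit_score: int

def is_plausible(app):
    # Basic range and consistency checks; a phony or corrupted record fails fast.
    if not 0 < app.annual_income < 10_000_000:
        return False
    if not 500 <= app.loan_amount <= 5_000_000:
        return False
    if not 300 <= app.credit_score <= 850:
        return False
    return True

incoming = [
    LoanApplication("A-1001", 85_000, 250_000, 710),
    LoanApplication("A-1002", -1, 250_000, 990),  # phony record: impossible income and score
]

vetted = [app for app in incoming if is_plausible(app)]
quarantined = [app for app in incoming if not is_plausible(app)]
print(f"{len(vetted)} records accepted for retraining, {len(quarantined)} quarantined for review")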

To inflict systemic damage on the bank or the entire financial system, an attack would require that the system operate without human observation and execute either extremely high-value transactions or many relatively low-value transactions extremely quickly. Consider card authorizations, for example. Large banks process millions of transactions every day, but the input data is highly secure and forced to adhere to a strict ISO format, so altering the input on a large scale isn't likely. Perhaps more likely is that an AI-based fraud detection company is infiltrated and a trojan model is distributed to the endpoints. That could be catastrophic, but it isn't an AI-based attack; it's a traditional cyber-attack, which in my opinion is far more likely to be pursued because it uses existing capabilities and creates maximum damage downstream:

Machine-learning models vary in their levels of sophistication, from those that use relatively simple algorithms to complex black-box AI systems, so named because, like human brains, they can’t be simply opened up to see exactly how decisions are being made. And like human brains, AI platforms can be susceptible to being fed faulty information, including by attackers seeking to manipulate them.

Russian expertise in using the Internet and social media to disseminate disinformation could easily be turned against machine-learning models that, like other investors, turn to the Internet to try to gauge market sentiment.

"Misinformation about a takeover being imminent, or a public-relations debacle unfolding, could easily fool a financial institution's trading systems," Mr. Gupta said.

I expect this would also fool human traders.
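Returning to the trojan-model scenario above, one basic safeguard is to verify every distributed model artifact before an endpoint loads it. The sketch below assumes the vendor publishes a SHA-256 digest for each release; the function names are hypothetical.

import hashlib
from pathlib import Path

def sha256_of(path):
    # Hash the artifact in chunks so large model files don't need to fit in memory.
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_model_if_trusted(artifact_path, expected_digest):
    actual = sha256_of(artifact_path)
    if actual != expected_digest:
        raise RuntimeError(
            f"Model artifact rejected: digest {actual} does not match the published value"
        )
    # Only hand the bytes to the real model loader once the digest matches.
    return Path(artifact_path).read_bytes()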

Overview by Tim Sloane, VP, Payments Innovation at Mercator Advisory Group
