In his latest annual shareholder letter, JPMorgan Chase CEO Jamie Dimon sounds more like the founder of a fintech startup than the chief executive of one of the world’s largest banks, whose roots go back to 1799. But then again, a focus on innovation has been critical to the longevity of the iconic firm.
“Artificial intelligence (AI) is an extraordinary and groundbreaking technology. AI and the raw material that feeds it, data, will be critical to our company’s future success—the importance of implementing new technologies simply cannot be overstated,” Dimon noted in the letter.
JPMorgan Chase has more than 300 AI use cases in production, spanning marketing, customer experience, risk management, and fraud prevention.
Emerging technologies, including generative AI, large language models (LLMs), and ChatGPT, are also top of mind for the company. Dimon said: “We’re imagining new ways to augment and empower employees with AI through human-centered collaborative tools and workflow, leveraging tools like large language models, including ChatGPT.”
The launch of ChatGPT is reminiscent of the Netscape browser, which heralded the internet revolution in the mid-90s. However, the adoption of generative AI must be part of a well-thought-out strategy that accounts for security, responsible AI, and the needs of stakeholders. While the technology offers clear benefits, there are perils as well.
It may seem ironic, but earlier this year JPMorgan banned employees from using ChatGPT, and the firm wasn’t the only one. Major financial institutions, including Citi, Bank of America, Wells Fargo, and Goldman Sachs, also put restrictions on ChatGPT.
This should come as neither a surprise nor a disappointment. Banks must contend with onerous regulations, such as know-your-customer (KYC) and anti-money-laundering (AML) laws, so when new technology emerges, it makes sense to take a more conservative approach. Security and compliance are sacrosanct.
Generative AI tools such as ChatGPT and GPT-4 have already demonstrated clear risks. For example, the models are prone to hallucinations, generating content that is false or misleading.
It can also be nearly impossible to understand how generative AI models arrive at their responses. These systems are essentially “black boxes”: the largest models have hundreds of billions of parameters, making their inner workings all but impossible to decipher.
Then there are the nagging problems of bias and fairness, which arise because generative AI models are trained on vast amounts of publicly available content, such as Wikipedia and Reddit, and can absorb the biases embedded in that material.
Finally, generative AI models are primarily accessed through APIs, which means a bank must send information outside its own private data centers, posing compliance risks around privacy and data residency. Indeed, security breaches have already occurred. In March, OpenAI disclosed that payment information for its ChatGPT subscription service had been exposed: for about 1.2% of subscribers, usernames, emails, and payment addresses were visible, along with the last four digits of credit card numbers and their expiration dates. The breach was the result of a bug in an open-source library.
Given the challenges and risks associated with generative AI, banks and financial services firms need to take a cautious approach. That means it may be wise to avoid customer-facing applications, at least for now.
Instead, a better approach is to experiment with internal operations, especially where no personally identifiable information (PII) is involved. Marketing is a good place to start, since creativity is a key strength of generative AI. While the technology is not yet ready to produce final drafts, it can help spark ideas and improve the results of marketing campaigns.
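One practical safeguard for these internal experiments is to scrub prompts for PII before they ever reach a third-party model. The sketch below is purely illustrative, not any bank’s actual tooling: the regex patterns and the `scrub_pii` helper are hypothetical examples, and a production redactor would need far broader coverage (names, addresses, account numbers, and so on).

```python
import re

# Illustrative patterns only -- real PII detection is much harder.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace detected PII with labeled placeholders before a prompt
    is sent to an external generative AI API."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer jane.doe@example.com (SSN 123-45-6789) asked about fees."
print(scrub_pii(prompt))
# -> Customer [EMAIL] (SSN [SSN]) asked about fees.
```

A simple pre-processing step like this keeps the raw identifiers inside the bank’s own environment while still letting teams experiment with external models.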
Another area to focus on is service desk operations. Using natural language prompts, an employee can describe an issue and the generative AI can provide useful answers, and even help initiate a process to solve the problem. This can lower costs and improve effectiveness.
Generative AI can also be a useful tool for helping employees gain insights from internal proprietary content. This is what Morgan Stanley has done with a pilot program built on OpenAI’s GPT-4 model. The application, which is not trained on any customer information, lets financial advisors ask questions grounded in the firm’s own research reports and commentary.
As generative AI technology becomes more stable, it will be easier to take on more sophisticated projects.
The pace of innovation in generative AI has been breathtaking, but there are notable risks, such as hallucinations and security vulnerabilities. This is why banks need to take a thoughtful approach to this important technology. Rushing in would likely be a mistake. A better strategy is to start with internal applications of generative AI that do not touch sensitive data, gaining real benefits while allowing time for the technology to mature.