AI’s Reverberations Across Finance
Financial institutions are forecast to double their spending on AI by 2027
Artificial intelligence tools and the people to use them are the new must-haves for the world’s financial institutions and central banks.
In June 2023, JPMorgan Chase & Co. had 3,600 AI help-wanted postings, according to Evident Insights Ltd., a London-based start-up tracking AI capabilities across financial services companies.
“There’s a war for talent,” said Alexandra Mousavizadeh, the founder of Evident Insights. “Making sure you are ahead of it now is really life and death.”
Like other technological breakthroughs, AI offers fresh potential—accompanied by novel risks. The financial services industry may be among the biggest beneficiaries of the technology, which may enable firms to better protect assets and predict markets. Or the sector may have the most to lose if AI spurs theft, fraud, cybercrime, or even a financial crisis that investors can’t conceive of today.
The debut of OpenAI’s ChatGPT in November 2022 is rippling through finance and other industries. It quickly topped 100 million users to become the fastest-growing application in internet history.
In finance, the demand for people who know how to tap into AI is global. Three of the top 10 cities in Evident’s talent index are in India, said Mousavizadeh, an economist, mathematician, and former cohead of country risk at Morgan Stanley.
Financial embrace
The money flowing into AI from financial and other enterprises underscores the new priorities. Sales of software, hardware, and services for AI systems will climb 29 percent this year to $166 billion and top $400 billion in 2027, according to International Data Corp. Financial sector spending will more than double to $97 billion in 2027, with a 29 percent compound annual growth rate—the fastest of five major industries—according to the market researcher.
Hedge funds, long the pioneers of cutting-edge tech, are embracing generative AI. Nearly half of them use ChatGPT professionally, and more than two-thirds of those use it to write marketing text or summarize reports or documents, according to a BNP Paribas survey of funds with $250 billion in total assets.
Investment businesses are using AI and investigating its potential across various business lines. Europe’s largest investment company, Amundi SA, is building out its own AI infrastructure for research on macroeconomics and markets. It’s also using the technology for applications such as robo-advising tools for individual customers.
Paris-based Amundi, with €2 trillion ($2.1 trillion) under management, uses AI-based tools to customize portfolios for some of its more than 100 million clients by asking about their risk preferences. Responses help shape portfolios and provide a real-time sentiment gauge.
Aggregate view
“This kind of algorithm allows us to see the behavior of the clients,” said Monica Defend, chief strategist at the Amundi Investment Institute, the company’s research and strategy unit. “There’s a benefit to the customer, but you also have an aggregate view of how attitudes are changing across this user base.”
For other uses, such as making institutional decisions on investment and trading, AI can be limited by data that proves unreliable or by unprecedented high-impact situations, she said. It’s also a priority to avoid abuses and ensure that AI is used within a secure, ethical, and compliant framework.
“Artificial intelligence cannot replace the brain,” Defend said. A wholly AI-driven process could be dangerous, she said. “It’s equally important the interpretation, the understanding, and the check of what the algorithms are providing.”
JPMorgan, the largest US lender, spends more than $15 billion a year on technology and deploys almost a fifth of its roughly 300,000 employees in that area. An AI research group employs 200 people, and AI enables hundreds of uses ranging from prospecting and marketing to risk management and fraud prevention. It is also embedded in payment processing and money-movement systems worldwide.
“It is an absolute necessity,” Chief Executive Officer Jamie Dimon told shareholders in April.
Monetary world
Much more is at stake for policymakers safeguarding economies. Central banks, which by design are slower-moving and more risk-averse, are learning to use AI in a much different context—and weighing potential risks.
AI has shown promise in a range of central bank applications, such as supervision. Brazil’s central bank built a prototype robot to download consumer complaints about financial institutions and categorize them through machine learning. The Reserve Bank of India this year hired consulting firms McKinsey and Accenture to help deploy AI and related analytics in its supervision work.
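The Brazilian prototype itself is not public, but the general approach, training a classifier to route free-text complaints into categories, can be sketched in a few lines. In the sketch below, the category labels, example complaints, and choice of model are illustrative assumptions, not details of the central bank’s system.

```python
# Illustrative sketch only: a text classifier that sorts consumer complaints
# into categories, loosely in the spirit of the Brazilian central bank
# prototype described above. Labels and examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: complaint text -> category label.
complaints = [
    "The bank charged a fee that was never disclosed to me",
    "I was billed twice for the same loan installment",
    "My credit card was used for purchases I did not make",
    "Someone opened an account in my name without consent",
    "The mobile app has been offline for three days",
    "I cannot log in to internet banking to pay my bills",
]
labels = [
    "undisclosed_fees", "undisclosed_fees",
    "fraud", "fraud",
    "service_outage", "service_outage",
]

# TF-IDF features feeding a linear classifier: a simple, common baseline
# for categorizing short free-text complaints.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(complaints, labels)

new_complaint = ["An unexplained maintenance fee appeared on my statement"]
print(model.predict(new_complaint))  # likely 'undisclosed_fees' on this toy data
```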
The Basel Committee on Banking Supervision, the committee of central bankers and bank supervisors that acts as one of the world’s top standard-setters for regulation, found that AI can make lending more efficient in credit decisions and in thwarting money laundering. It also cited risks, ranging from the difficulty of interpreting outcomes from opaque models to the potential for bias and greater cyber risk.
“Supervisory processes for judging what is safe and sound, and being able to distinguish between responsible and irresponsible innovation, will no doubt improve,” Neil Esho, the panel’s secretary general, said last year. “For now, we still have some way to go.”
The Bank for International Settlements (BIS), the group of global central banks that hosts the committee secretariat in Basel, Switzerland, has tested a variety of potential uses. The BIS Innovation Hub’s Project Aurora, for example, showed that neural networks, a type of machine learning, can help detect money laundering by sniffing out patterns and anomalies in transactions that traditional methods can’t identify.
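Project Aurora’s models are far more sophisticated, but the basic idea, a neural network that learns what ordinary transactions look like and flags those it cannot reconstruct well, can be illustrated roughly as follows. The transaction features, network size, threshold logic, and synthetic data are assumptions made for the sketch, not details of the BIS work.

```python
# Rough illustration of neural-network anomaly detection on transactions,
# in the spirit of (but not the actual code of) the BIS Project Aurora work.
# Feature choices, network size, and data are invented for the sketch.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic "normal" transactions: [amount, hour_of_day, n_counterparties]
normal = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.5, size=2000),   # typical amounts
    rng.normal(loc=13, scale=3, size=2000),          # daytime activity
    rng.poisson(lam=2, size=2000),                   # few counterparties
])

scaler = StandardScaler().fit(normal)
X = scaler.transform(normal)

# A small autoencoder: the network is trained to reproduce its own input,
# so it learns the structure of ordinary transactions.
autoencoder = MLPRegressor(hidden_layer_sizes=(2,), max_iter=3000, random_state=0)
autoencoder.fit(X, X)

def anomaly_score(transactions):
    """Mean squared reconstruction error; higher means more unusual."""
    Z = scaler.transform(transactions)
    return ((autoencoder.predict(Z) - Z) ** 2).mean(axis=1)

# A suspicious pattern: very large amount, 3 a.m., many counterparties.
suspicious = np.array([[250_000.0, 3.0, 40.0]])
print(anomaly_score(normal[:5]))   # low scores for ordinary activity
print(anomaly_score(suspicious))   # much higher score -> flag for review
```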
Signal in noise
The Bank of Canada built a machine learning tool to detect anomalies in regulatory submissions. Data Science Director Maryam Haghighi said its automated daily runs catch things people wouldn’t—while freeing up staff to follow up on the analysis.
“This is an example of where AI can really shine for central banks,” Haghighi said. “It’s something rather tedious, and it’s something that you can train AI to do well and do better and faster than humans.”
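The Bank of Canada has not published its tool, but a daily screen of tabular regulatory data for anomalies can be approximated with an off-the-shelf method such as an isolation forest. The field names, figures, and contamination rate below are invented for illustration.

```python
# Illustrative only: screening daily regulatory submissions for anomalies
# with an isolation forest. Field names and numbers are invented; this is
# not the Bank of Canada's actual tool.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical history of filings: one row per institution per day.
history = pd.DataFrame({
    "total_assets":  rng.normal(50_000, 5_000, size=500),
    "tier1_ratio":   rng.normal(13.0, 1.0, size=500),
    "liquid_assets": rng.normal(8_000, 900, size=500),
})

# Fit on past filings; assume roughly 1% of rows merit a second look.
screen = IsolationForest(contamination=0.01, random_state=0).fit(history)

# Today's batch: one routine filing, one with an implausible capital ratio.
today = pd.DataFrame({
    "total_assets":  [51_200, 49_800],
    "tier1_ratio":   [12.8, 2.1],
    "liquid_assets": [8_100, 7_900],
})

flags = screen.predict(today)   # -1 = anomalous, 1 = normal
for i, flag in enumerate(flags):
    if flag == -1:
        print(f"Filing {i} flagged for analyst follow-up")
```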
The European Central Bank (ECB) is using AI for applications such as automating the classification of data from 10 million business and government entities and scraping websites to track product prices in real time. It is also using the technology to help bank supervisors find and parse news stories, supervisory reports, and corporate filings.
With the data universe growing exponentially, cleaning it up so that it is intelligible is a key issue, especially for unstructured data, said Myriam Moufakkir, the ECB’s chief services officer. AI can help humans make important distinctions. The ECB is also exploring large language models to help write code, test software, and even make public communications easier for people to understand.
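The ECB has not described its scrapers in detail; as a rough illustration of the price-tracking idea, the sketch below downloads a page and extracts a price that could be appended to a time series. The URL and the price pattern are hypothetical, and a production pipeline would be considerably more robust.

```python
# Illustrative sketch only: pulling a product price from a retailer page to
# feed a real-time price series, in the spirit of the web scraping the ECB
# describes. The URL and the price pattern are hypothetical.
import re
import urllib.request
from datetime import datetime, timezone
from typing import Optional

def fetch_price(url: str) -> Optional[float]:
    """Download a page and extract the first euro price found, if any."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            html = response.read().decode("utf-8", errors="replace")
    except OSError:
        return None  # network errors, 404s, and so on
    # Naive pattern for a price such as "€12.99"; a production scraper would
    # parse structured markup instead of relying on a regex.
    match = re.search(r"€\s*(\d+(?:[.,]\d{2})?)", html)
    return float(match.group(1).replace(",", ".")) if match else None

if __name__ == "__main__":
    # Hypothetical product page; substitute a real URL to try it out.
    price = fetch_price("https://example.com/products/espresso-machine")
    print(datetime.now(timezone.utc).isoformat(), price)
```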
Financial stability
London School of Economics researcher Jon Danielsson, who studies how AI affects the financial system, sees the technology’s capabilities on a continuum from basic to advanced. On the basic side, there’s chess, with pieces on a board and rules known to all. AI easily beats humans there, but its advantage diminishes with complexity. People in unexpected situations can draw on a range of knowledge to make better-informed decisions, from economics and history to ethics and philosophy. And this, he said, is where humans beat AI—for now.
AI is already making important financial decisions, such as handling credit card applications, and it’s making rapid inroads in the public and private sectors. The technology can help ensure that banks don’t misbehave by, for example, taking advantage of clients or allowing fraud or money laundering, he said. At the same time, such expanded uses may introduce danger, he said.
US Securities and Exchange Commission Chair Gary Gensler has issued a similar warning about the systemic risks AI could pose to financial stability. The warning reflected Gensler’s work as a professor of global economics and management at the Massachusetts Institute of Technology, where he published a 2020 paper with Lily Bailey on deep learning. That subset of AI offers “previously unseen predictive powers enabling significant opportunities for efficiency, financial inclusion, and risk mitigation,” they wrote. But they cautioned that financial regulations rooted in earlier eras “are likely to fall short in addressing the systemic risks posed by broad adoption of deep learning in finance.”
‘Polycrisis’ factor
Another danger is that AI tools may exacerbate a crisis, whatever the cause, because they are trained on past data that may not reflect reality in an unprecedented situation, according to Anselm Küsters, head of the digitalization and new technologies department at the Centre for European Policy in Berlin. Küsters has cited the term polycrisis, popularized by fellow economic historian Adam Tooze, which refers to different shocks interacting so that the whole is worse than the sum of the parts.
Increased use of opaque AI applications “creates new systemic risks,” as they can quickly amplify negative feedback loops, Küsters wrote, urging the European Parliament to “focus on the additional risks of algorithmic prediction arising in crises.”
Such questions posed by the rapidly evolving technology will confront central bankers and other policymakers in coming years as benefits and threats become clearer.
“We are not yet at the point where we know what makes sense for central bankers,” the ECB’s Moufakkir said. “We are at the beginning.”
JEFF KEARNS is on the staff of Finance & Development.
Opinions expressed in articles and other materials are those of the authors; they do not necessarily reflect IMF policy.