Rapid Adoption of AI Poses Risks for U.S. Financial System, Regulators Warn
WASHINGTON, Dec 14 (Reuters) – Rapid adoption of artificial intelligence (AI) could create new risks for the U.S. financial system if the technology is not properly supervised, a panel of regulators warned on Thursday.
The Financial Stability Oversight Council, which comprises top financial regulators and is chaired by Treasury Secretary Janet Yellen, flagged the risks posed by AI for the first time in its annual financial stability report.
“AI can introduce certain risks, including safety and soundness risks like cyber and model risks,” the group said in its annual report published Thursday, adding it recommended firms and their regulators “deepen expertise and capacity to monitor AI innovation and usage and identify emerging risks.”
The panel also flagged the growing role of nonbanks and private credit as meriting close attention, and said financial institutions and regulators should continue to try to better gauge risks stemming from climate change.
Some AI tools are highly technical and opaque, making it hard for institutions to explain them or monitor them properly for shortcomings. If companies and regulators do not fully understand AI tools, they could miss biased or inaccurate results, the report said.
It also noted that AI tools increasingly rely on large external datasets and third-party vendors, which pose their own privacy and cybersecurity risks.
Elsewhere in the report, the FSOC noted that the U.S. banking system remains resilient, despite large bank failures this year. But it urged regulators to keep a close eye on uninsured bank deposits, the rapid flight of which triggered the failures.
Reporting by Pete Schroeder; Editing by David Gregorio
The council's warning reflects growing concern that, while AI can drive innovation and efficiency at banks and other financial firms, its rapid advance demands careful oversight from both the companies deploying the technology and the regulators responsible for supervising them.
Some regulators, including the Securities and Exchange Commission, have already begun scrutinizing how firms use AI, while the White House has sought to address AI risk through an executive order.
As the financial landscape continues to evolve, the report suggests, regulators and institutions will need to work together to manage the risks posed by AI and other emerging technologies to safeguard the stability of the financial system.